Adema
Scanners, collectors and aggregators. On the underground movement of (pirated) theory text sharing
2009


# Scanners, collectors and aggregators. On the ‘underground movement’ of
(pirated) theory text sharing

_“But as I say, let’s play a game of science fiction and imagine for a moment:
what would it be like if it were possible to have an academic equivalent to
the peer-to-peer file sharing practices associated with Napster, eMule, and
BitTorrent, something dealing with written texts rather than music? What would
the consequences be for the way in which scholarly research is conceived,
communicated, acquired, exchanged, practiced, and understood?”_

Gary Hall – [Digitize this
book!](http://www.upress.umn.edu/Books/H/hall_digitize.html) (2008)

![ubuweb](https://openreflections.files.wordpress.com/2009/09/ubuweb.jpg?w=547)UbuWeb
was founded in 1996 by poet [Kenneth
Goldsmith](http://en.wikipedia.org/wiki/Kenneth_Goldsmith "Kenneth Goldsmith")
and has developed from ‘a repository for visual, concrete and (later) sound
poetry’ to a site that embraces ‘all forms of the avant-garde and beyond. Its
parameters continue to expand in all directions.’ As
[Wikipedia](http://en.wikipedia.org/wiki/UbuWeb) states, Ubu is non-commercial
and operates on a gift economy. All the same - by forming an amazing resource
and repository for the avant-garde movement, and by offering and hosting these
works on its platform, Ubu is violating copyright laws. As they state however:
‘ _should something return to print, we will remove it from our site
immediately. Also, should an artist find their material posted on UbuWeb
without permission and wants it removed, please let us know. However, most of
the time, we find artists are thrilled to find their work cared for and
displayed in a sympathetic context. As always, we welcome more work from
existing artists on site_.’

Whereas in the more affluent and popular media realms of blockbuster movies
and pop music, [The Pirate Bay](http://thepiratebay.org/) and other download
sites (or p2p networks) like [Mininova](http://www.mininova.org/) are being
sued and charged with copyright infringement, the powers that be seem to turn
a blind eye when it comes to Ubu and the many other resource sites online that
offer digital versions of hard-to-come-by materials ranging from books to
documentaries.

This has not always been the case, however: in 2002 [Sebastian
Lütgert](http://www.wizards-of-os.org/archiv/wos_3/sprecher/l_p/sebastian_luetgert.html)
from Berlin/New York was sued by the "Hamburger Stiftung zur Förderung von
Wissenschaft und Kultur" for putting online two downloadable texts by Theodor
W. Adorno on his website [textz.com](http://www.medienkunstnetz.de/artist/textz-com/biography/),
an underground archive for literature. According to
[this](http://de.indymedia.org/2004/03/76975.shtml) Indymedia interview with
Lütgert, textz.com was referred to as ‘the Napster for books’, offering about
700 titles, focusing on, as Lütgert states, _‘theory, novels, science
fiction, Situationists, cinema, the French, Douglas Adams, critical theory,
net critique, etc.’_

The interview becomes even more interesting when Lütgert remarks that one can
still easily download both Adorno texts without much ado if one wants to. This
leads to the bigger question of the real reasons underlying the charge against
textz.com: why was textz.com sued? As Lütgert says in the interview: _“You can
do that anyway_ [referring to the still available Adorno texts] _. But there
has long been a clear difference between open availability and the
underground. The free distribution of content cannot be suppressed, but they
seem to want to prevent it from happening all too openly and
matter-of-factly. That is what bothers them.”_

![I don't have any
secrets](https://openreflections.files.wordpress.com/2009/09/i-dont-have-any-
secrets.jpg?w=547)

But how can something be truly underground in an online environment whilst
still trying to spread or disseminate texts as widely as possible? This seems
to be the paradox of many - not quite legal and/or copyright protected -
resource sharing and collecting communities and platforms nowadays. However,
multiple scenarios are available to evade this dilemma: being frankly open
about the ‘status’ of the content on offer, as Ubu is; using little ‘tricks’
like an easy website registration; classifying oneself as a reading group; or
relieving oneself of responsibility by stating that one is only aggregating
sources from elsewhere (linking) and not hosting the content on one’s own
website or blog. One can also state that the offered texts or multimedia
files form a special issue or collection of resources, emphasizing their
educational and not-for-profit value.

Most of the ‘underground’ text and content sharing communities seem to follow
the concept of (the inevitability of) ‘[information wants to be
free](https://openreflections.wordpress.com/tag/information-wants-to-be-
free/)’, especially on the Internet. As Lütgert states: _“And above all, they
are unaware of Walter Benjamin, who faced and recognised the same problem of
the reproducibility of works of all kinds at the beginning of the last
century: the masses have the right to reappropriate all of it. They have the
right to copy, and the right to be copied. In any case, it is a rather
uncomfortable situation that his estate is now administered by such a
bureaucrat._ _A: Do you think it is legitimate at all to ‘own’ intellectual
content? Or to be its proprietor?_ _S: It is *impossible*. ‘Intellectual’
anything keeps on spreading. Reemtsma’s ancestors would never have come down
from the trees or crawled out of the swamp if ‘intellectual’ anything had not
spread.”_

![Book scanner](https://openreflections.files.wordpress.com/2009/09/646px-book_scanner_svg-jpg1.png?w=547)

What seems to be increasingly obvious, as the interview also states, is that
one can find virtually all the ebooks and texts one needs via p2p networks and
other file-sharing communities (the true
[Darknet](http://en.wikipedia.org/wiki/Darknet_\(file_sharing\)) in a way) –
more and more people are offering (and asking for!) selections of texts and
books (including the ones by Adorno) on openly available websites and blogs,
or they are scanning them and offering them for (educational) use on their
domains. Although the Internet is mostly known for the pirating and
dissemination of movies and music, copyright protected textual content has (of
course) always been spread too. But with the rise of ‘born digital’ text
content, massive digitization efforts like Google Books (and accompanying
Google Books [download
tools](http://www.codeplex.com/GoogleBookDownloader)), and the appearance of
better (and cheaper) scanning equipment, the movement of ‘openly’ spreading
(pirated) texts (whether or not focusing on education and ‘fair use’) seems to
be growing fast.

The direct harm (to both the producers and their publishers) of the free
online availability of in-copyright texts is also perhaps less clear than it
is with, for instance, music and films. Many feel that texts and books will
still preferably be read in print, making the free online availability of a
text little more than a marketing tool for sales of the printed version. Once
discovered, those truly interested will find and buy the print book. More than
with music and film, it is also felt to be essential to share information, as
a cultural good and right, to prevent censorship and to improve society.

![Piracy by Mikel Casal](https://openreflections.files.wordpress.com/2009/09
/piracy-by-mikel-casal.jpg?w=432&h=312)

This is one of the reasons the [Open
Access](http://en.wikipedia.org/wiki/Open_access_\(publishing\)) movement for
scientific research has been initiated. But where the number of people and
institutions supportive of this movement is gradually growing (especially
where it concerns articles and journals in the Sciences), the uptake of Open
Access (or even digital availability) for monographs in the Humanities and
Social Sciences (which make up the majority of the resources on offer in the
underground text sharing communities) has only just started.

This has led to a situation in which some have decided that change is not
coming fast enough. Instead of waiting for this utopian Open Access future to
gradually come about, they are actively spreading, copying, scanning and
pirating scholarly texts/monographs online. Although their efforts are often
accompanied by lengthy disclaimers about why they are violating copyright (to
make the content more widely accessible, for one), many state they will take
down the content if asked. Following the
[copyleft](http://en.wikipedia.org/wiki/Copyleft) movement, what has in a way
thus arisen is a more ‘progressive’ or radical branch of the Open Access
movement. The people who spread these texts deem it inevitable that the texts
will be online eventually; they are just speeding up the process. As Lütgert
states: ‘_The desire of an increasingly large section of the population for
100 percent of the information is irreversible. The way there can at worst be
slowed down, but not stopped._’

![scribd-logo](https://openreflections.files.wordpress.com/2009/09/scribd-
logo.jpg?w=547)

Still we have not yet answered the question of why publishers (and their
pirated authors) are not more upset about these kinds of websites and
platforms. It is not a simple question of them not being aware that these
kinds of textual dissemination are occurring. As mentioned before, the harm to
producers (scholars) and their publishers (in the Humanities and Social
Sciences mainly not-for-profit university presses) is less clear. First of
all, their main customers are libraries (compare this to the software business
model: free for the consumer, companies pay), who are still buying the legal
content and mostly follow the policy of buying either print or both print and
ebook, so there are no lost sales there for the publishers. Moreover, it is
not certain that the piracy is harming sales. Unlike in literary publishing,
the authors (academics) are already paid and do not lose money (except perhaps
a little in royalties) from the online availability. Perhaps some publishers
also see the Open Access movement as something that will inevitably grow, and
they thus do not feel the urge to step up or organize a collaborative effort
against scholarly text piracy (most of the presses also lack the scale to
initiate this). Whereas there has been more of an upsurge of worry about
_[textbook piracy](http://bookseller-association.blogspot.com/2008/07/textbook-
piracy.html)_ (since this is of course the area where individual consumers –
students – do directly buy the material) and about websites like
[Scribd](http://www.scribd.com/), this mostly has to do with the fact that
these kinds of platforms also host non-scholarly content and actively promote
the uploading of texts (where many of the text ‘sharing’ platforms merely
offer downloading facilities). In the case of Scribd, the size of the platform
(or the amount of content available on it) has also caused concern and much
[media coverage](http://labnol.blogspot.com/2007/04/scribd-youtube-
for-pirated-ebooks-but.html).

All of this gives a lot of potential power to text sharing communities, and I
guess they know this. Only authors might be directly upset (especially famous
ones gathering a lot of royalties on their work) or, as in the case of
Lütgert, their beneficiaries, who still do see a lot of money coming directly
from individual customers.

Still, it is not only the lack of fear of possible retaliation that is
feeding the upsurge of text sharing communities. There is a strong ideological
commitment to the inherent good of these developments, and a moral and
political striving towards institutional and societal change when it comes to
knowledge production and dissemination.

![Information Libre](https://openreflections.files.wordpress.com/2009/09
/information-libre.jpg?w=547)As Adrian Johns states in his
[article](http://www.culturemachine.net/index.php/cm/article/view/345/348)
_Piracy as a business force_, ‘today’s pirate philosophy is a moral
philosophy through and through’. As Jonas Andersson
[states](http://www.culturemachine.net/index.php/cm/article/view/346/359), the
idea of piracy has mostly lost its negative connotations in these communities
and is seen as a positive development, where these movements ‘have begun to
appear less as a reactive force (i.e. ‘breaking the rules’) and more as a
proactive one (‘setting the rules’). Rather than complain about the
conservatism of established forms of distribution they simply create new,
alternative ones.’ Although Andersson states that this kind of activism is
mostly _occasional_, it can be seen expressed clearly in the texts
accompanying the text sharing sites and blogs. However, copyright is perhaps
not so much _an issue_ on most of these sites (though it is on some of them),
as it is something that seems to be simply ignored for the larger good of
aggregating and sharing resources on the web. As is stated clearly, for
instance, in an
[interview](http://blog.sfmoma.org/2009/08/four-dialogues-2-on-aaaarg/) with
Sean Dockray, who maintains AAAARG:

_" The project wasn’t about criticizing institutions, copyright, authority,
and so on. It was simply about sharing knowledge. This wasn’t as general as it
sounds; I mean literally the sharing of knowledge between various individuals
and groups that I was in correspondence with at the time but who weren’t
necessarily in correspondence with each other."_

Back to Lütgert. The files from textz.com have been saved and are still
[accessible](http://web.archive.org/web/20031208043421/textz.gnutenberg.net/index.php3?enhanced_version=http://textz.com/index.php3)
via [The Internet Archive Wayback
Machine](http://web.archive.org/collections/web.html). In the case of
textz.com, these files contain ‘typed out’ text, so no scanned contents or
PDFs. Textz.com (or rather its shadow or mirror) offers an amazing
collection of texts, including artists’ statements/manifestos and screenplays
by, for instance, David Lynch.

The text sharing community has evolved and now has many players. Two other
large members of this kind of ‘pirate theory base network’ (although – and I
have to make that clear! – they offer many (and even mostly) legal and
out-of-copyright texts), still active today, are
[Monoskop/Burundi](http://burundi.sk/monoskop/log/) and
[AAAARG.ORG](http://a.aaaarg.org/). These kinds of platforms all seem to
disseminate (often even on a titular level) similar content, focusing mostly
on Continental Philosophy and Critical Theory, Cultural Studies and Literary
Theory, the Frankfurter Schule, Sociology/Social Theory, Psychology,
Anthropology and Ethnography, Media Art and Studies, Music Theory, and
critical and avant-garde writers like Kafka, Beckett, Burroughs, Joyce,
Baudrillard, etc.

[Monoskop](http://www.burundi.sk/monoskop/index.php/Main_Page) is, as they
state, a collaborative wiki researching the social history of media art, or a
‘living archive of writings on art, culture and media technology’. In the
sitemap of their log, or under the categories section, you can browse their
resources by genre: book, journal, e-zine, report, pamphlet, etc. As I found
[here](http://www.slovakia.culturalprofiles.net/?id=7958), Burundi originated
in 2003 as a (Slovakian) media lab working between the arts, science and
technology, which spread out into a European city-based cultural network; it
even functioned as a press, publishing the Anthology of New Media Literature
(in Slovak) in 2006, and it hosted media events and curated festivals. Burundi
dissolved in June 2005, although the
[Monoskop](http://www.slovakia.culturalprofiles.net/?id=7964) research wiki on
media art has continued to run since.

![AAAARG](https://openreflections.files.wordpress.com/2009/09/aaaarg.jpg?w=547)As
is stated on their website, AAAARG is a conversation platform, or
alternatively, a school, reading group or journal, maintained by Los Angeles
artist [Sean Dockray](http://www.design.ucla.edu/people/faculty.php?ID=64
"Sean Dockray"). In the true spirit of Critical Theory, its aim is to ‘develop
critical discourse outside of an institutional framework’. Or, put even more
beautifully, it operates in the spaces in between: ‘_But rather than
thinking of it like a new building, imagine scaffolding that attaches onto
existing buildings and creates new architectures between them_.’ To be able to
access the texts and resources that are being ‘discussed’ at AAAARG, you need
to register, after which you will be able to browse the
[library](http://a.aaaarg.org/library). From this library, you can download
resources, but you can also upload content. You can subscribe to their
[feed](http://aaaarg.org/feed) (RSS/XML), and [like
Monoskop](http://twitter.com/monoskop), AAAARG.ORG also maintains a [Twitter
account](http://twitter.com/aaaarg) on which updates are posted. The most
interesting part, though, is the ‘extra’ functions the platform offers: after
you have made an account, you can create your own collections, aggregations or
issues out of the texts in the library or the texts you add. This offers an
alternative (thematically ordered) way into the texts archived on the site.
You can also comment on the texts or start a discussion about them. See for
instance their elaborate [discussion
lists](http://a.aaaarg.org/discussions). The AAAARG community thus serves both
as a sharing and as a feedback community, and in this way operates in true p2p
fashion, the way p2p seemed originally intended. The difference is that AAAARG
is not based on a distributed network of computers but on a single platform,
to which registered users are able to upload files (which is not the case on
Monoskop, for instance, which only offers downloads).

Via
[mercerunionhall](http://mercerunionhall.blogspot.com/2009/06/aaaargorg.html),
I found the image below, which depicts AAAARG.ORG's article index
organized as a visual map, showing the connections between the different
texts. This map was created and posted by AAAARG user john, according to
mercerunionhall.

![Connections-v1 by
John](https://openreflections.files.wordpress.com/2009/09/connections-v1-by-
john.jpg?w=547)

Where AAAARG.ORG focuses again on the text itself - typed-out versions of
books - Monoskop works with more modern forms of textual distribution:
scanned versions or full ebooks/PDFs with all the possibilities they offer,
taking a lot of content from Google Books or (Open Access) publishers’
websites. Monoskop also links back to the publishers’ websites or Google
Books for information about the books or texts (which again suggests that the
publishers must know about their activities). To download a text, however,
Monoskop links to [Sharebee](http://www.sharebee.com/), keeping the actual
text and the real downloading activity away from its own platform.

Another part of the text sharing scene consists of platforms offering
documentaries and lectures (so, multimedia content) online. One example of the
latter is the [Discourse Notebook Archive](http://www.discoursenotebook.com/),
which describes itself as an effort whose main goal is ‘to make
available lectures in contemporary continental philosophy’ and is maintained
by Todd Kesselman, a PhD student at The New School for Social Research. Here
you can find lectures by Badiou, Kristeva and Zizek (both audio and video)
and lectures aggregated from the European Graduate School. Kesselman also
links to resources on the web dealing with contemporary continental
philosophy.

![Eule - Society of
Control](https://openreflections.files.wordpress.com/2009/09/eule-society-of-
control.gif?w=547)Society of Control is a website maintained by [Stephan
Dillemuth](http://www.kopenhagen.dk/fileadmin/oldsite/interviews/solmennesker.htm),
an artist living and working in Munich, Germany, offering, amongst other
things, an overview of his work and scientific research. According to
[this](http://www2.khib.no/~hovedfag/akademiet_05/tekster/interview.html)
interview conducted by Kristian Ø Dahl and Marit Flåtter, his work is a
response to the increased influence of the neo-liberal world order on
education, creating a culture industry that is more often than not driven by
commercial interests. He asks the question: ‘How can dissidence grow in the
blind spots of the “society of control” and articulate itself?’ His website,
the [Society of Control](http://www.societyofcontrol.com/disclaimer1.htm), is,
as he states, ‘an independent organization whose profits are entirely devoted
to research into truth and meaning.’

Society of Control has a [library
section](http://www.societyofcontrol.com/library/) which contains works by
some of the biggest thinkers of the twentieth century: Baudrillard, Adorno,
Debord, Bourdieu, Deleuze, Habermas, Sloterdijk, and so on, and much more,
a lot of it in German, and all of it ‘typed out’ texts. The library section
offers a direct search function, a category function and an a-z browse
function. Dillemuth states that he offers this material under fair use,
focusing on its not-for-profit character, freedom of information, the
maintenance of freedom of speech, and making information accessible to all:

_“The Societyofcontrol website site contains information gathered from many
different sources. We see the internet as public domain necessary for the free
flow and exchange of information. However, some of these materials contained
in this site may be claimed to be copyrighted by various unknown persons. They
will be removed at the copyright holder's request within a reasonable period
of time upon receipt of such a request at the email address below. It is not
the intent of the Societyofcontrol to have violated or infringed upon any
copyrights.”_

![Vilem Flusser, Andreas Strohl, Erik Eisel Writings
\(2002\)](https://openreflections.files.wordpress.com/2009/09/vilem-flusser-
andreas-strohl-erik-eisel-writings-2002.jpg?w=547)Important in this respect is
that he puts the responsibility for reading/using/downloading the texts on his
site with the viewers, and not with himself: _“Anyone reading or looking at
copyright material from this site does so at his/her own peril, we disclaim
any participation or liability in such actions.”_

Fark Yaraları = [Scars of Différance](http://farkyaralari.blogspot.com/) and
[Multitude of blogs](http://multitudeofblogs.blogspot.com/) are maintained by
the same author, Renc-u-ana, a philosophy and sociology student from Istanbul.
The first is his personal blog (which also contains many links to downloadable
texts), focusing on ‘creating an e-library for a Heideggerian philosophy and
Bourdieuan sociology’, on which he writes that ‘market-created inequalities
must be overthrown in order to close knowledge gap.’ The second site has a
clear aggregating function, with the aim ‘to give united feedback for e-book
publishing sites so that tracing and finding may become easier’, and it issues
a call for similar blogs or websites offering free ebook content. The blog is
accompanied by a nice picture of a woman warning us to keep quiet, very
paradoxically appropriate to the context. Here again, a statement from the
host on possible copyright infringement: ‘_None of the PDFs are my own
productions. I've collected them from web (e-mule, avax, libreremo, socialist
bros, cross-x, gigapedia..) What I did was thematizing._’ The same goes for
[pdflibrary](http://pdflibrary.wordpress.com/) (which seems to be from the
same author), offering texts by Derrida, Benjamin, Deleuze and the like:
_‘None of the PDFs you find here are productions of this blog. They are
collected from different places in the web (e-mule, avax, libreremo, all
socialist bros, cross-x, …). The only work done here is thematizing and
tagging.’_

[![GRUP_Z~1](https://openreflections.files.wordpress.com/2009/09/grup_z11.jpg?w=547)](http://multitudeofblogs.blogspot.com/)Our
student from Istanbul lists many text sharing sites on Multitude of blogs,
including [Inishark](http://danetch.blogspot.com/) (amongst others Badiou,
Zizek and Derrida), [Revelation](http://revelation-online.blogspot.com/2009/02
/keeping-ten-commandments.html) (a lot of history and bible study), [Museum of
accidents](http://museumofaccidents.blogspot.com/) (many resources relating,
again, to critical theory, political theory and continental philosophy) and
[Makeworlds](http://makeworlds.net/) (initiated from the [make world
festival](http://www.makeworlds.org/1/index.html) 2001).
[Mariborchan](http://mariborchan.wordpress.com/) is mainly a Zizek resource
site (also Badiou and Lacan) and offers, next to ebooks, also video and audio
(lectures and documentaries) and text files, all via links to file sharing
platforms.

What is clear is that the text sharing network described above (and I am sure
there are many more related to other fields and subjects) is also formed and
maintained by the fact that the blogs and resource sites link to each other in
their blogrolls; this is what in the end makes up the network of text sharing,
enhanced by RSS feeds and Twitter accounts that hold together direct
communication streams with the rest of the community. That there has not been
one major platform or aggregation site linking them together and uploading all
the texts is logical if we take into account the text sharing history
described before, and it can thus be seen as a clear tactic: it is fear – fear
of what happened to textz.com, fear of the issue of scale, and fear of no
longer operating at the borders, on the outside or at the fringes – because a
larger scale means they might really get noticed. The idea of secrecy and
exclusivity that makes up the idea of the underground is very practically
combined with the fact that, this way, the texts are available in a multitude
of places and thus cannot be withdrawn or disappear so easily.

This is the paradox of the underground: staying small means not being noticed
(widely), but it also means being able to exist for probably an extended
period of time. Becoming (too) big will mean reaching more people and
spreading the texts further into society; however, it will also probably mean
being noticed as a threat, as a ‘network of text-piracy’. The true strategy is
to retain this balance of openly dispersed subversiveness.

Update 25 November 2009: Another interesting resource site came to my
attention recently: [Bedeutung](http://www.bedeutung.co.uk/index.php),
a philosophical and artistic initiative consisting of three projects:
[Bedeutung
Magazine](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=1&Itemid=3),
[Bedeutung
Collective](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=67&Itemid=4)
and [Bedeutung Blog](http://bedeutung.wordpress.com/). It hosts a
[library](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=85&Itemid=45)
section which links to freely downloadable online e-books, articles, audio
recordings and videos.


### 17 comments on “Scanners, collectors and aggregators. On the
‘underground movement’ of (pirated) theory text sharing”

1. Pingback: [Humanism at the fringe « Snarkmarket](http://snarkmarket.com/2009/3428)

2. Pingback: [Scanners, collectors and aggregators. On the 'underground movement' of (pirated) theory text sharing « Mariborchan](http://mariborchan.wordpress.com/2009/09/20/scanners-collectors-and-aggregators-on-the-underground-movement-of-pirated-theory-text-sharing/)

3. Mariborchan

September 20, 2009


I took the liberty to pirate this article.

4. [jannekeadema1979](http://www.openreflections.wordpress.com)

September 20, 2009


Thanks, it's all about the sharing! Hope you liked it.

5. Pingback: [links for 2009-09-20 « Blarney Fellow](http://blarneyfellow.wordpress.com/2009/09/21/links-for-2009-09-20/)

6. [scars of différance](http://farkyaralari.blogspot.com)

September 30, 2009


hi there, I'm the owner of the Scars of Différance blog, I'm grateful for your
reading which nurtures self-reflexivity.

text-sharers phylum is a Tardean phenomenon, it works through imitation and
differences differentiate styles and archives. my question was inherited from
aby warburg who is perhaps the first kantian librarian (not books, but the
nomenclatura of books must be thought!), I shape up a library where books
speak to each other, each time fragmentary.

you are right about the "fear", that's why I don't reupload books that are
deleted from mediafire. blog is one of the ways, for ex there are e-mail
groups where chain-sharings happen and there are forums where people ask each
other from different parts of the world, to scan a book that can't be found in
their library/country. I understand publishers' qualms (I also work in a
turkish publishing house and make translations). but they miss a point, it was
the very movement which made book a medium that de-posits "book" (in the
Blanchotian sense): these blogs do indeed a very important service, they save
books from the databanks. I'm not going to make an easy rider argument and
decry technology. what I mean is this: these books are the very bricks which
make up resistance -they are not compost-, it is a sharing "partage" and these
fragmentary impartations (the act in which 'we' emancipate books from the
proper names they bear: author, editor, publisher, queen,…) make words blare.
our work: to disenfranchise.

to get larger, to expand: these are too ambitious terms, one must learn to
stay small, remain finite. a blog can not supplant the non-place of the
friendships we make up around books.

the epigraph at the top of my blog reads: "what/who exorbitates mutates into
its opposite" from a Turkish poet Cahit Zarifoğlu. and this logic is what
generates the slithering of the word. we must save books from its own ends.

thanks again, best.

p.s. I'm not the owner of pdf library.

7. Bedeutung

November 24, 2009


Here, an article that might interest:

sharing-free-piracy>

8. [jannekeadema1979](http://www.openreflections.wordpress.com)

November 24, 2009


Thanks for the link, good article, agree with the contents, especially like
the part 'Could, for instance, the considerable resources that might be
allocated to protecting, policing and, ultimately, sanctioning online file-
sharing not be used for rendering it less financially damaging for the
creative sector?'
I like this kind of pragmatic reasoning, and I know more people do.
By the way, checked Bedeutung, great journal, and love your
[library](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=86&Itemid=46)
section! Will add it to the main article.

9. Pingback: [Borderland › Critical Readings](http://borderland.northernattitude.org/2010/01/07/critical-readings/)

10. Pingback: [Mariborchan » Scanners, collectors and aggregators. On the 'underground movement' of (pirated) theory text sharing](http://mariborchan.com/scanners-collectors-and-aggregators-on-the-underground-movement-of-pirated-theory-text-sharing/)

11. Pingback: [Urgh! AAAARG dead? « transversalinflections](http://transversalinflections.wordpress.com/2010/05/29/urgh-aaaarg-dead/)

12. [nick knouf](http://turbulence.org/Works/JJPS)

June 18, 2010


This is Nick, the author of the JJPS project; thanks for the tweet! I actually
came across this blog post while doing background research for the project and
looking for discussions about AAAARG; found out about a lot of projects that I
didn't already know about. One thing that I haven't been able to articulate
very well is that I think there's an interesting relationship between, say,
Kenneth Goldsmith's own poetry and his founding of Ubu Web; a collation and
reconfiguration of the detritus of culture (forgotten works of the avant-
gardes locked up behind pay walls of their own, or daily minutiae destined to
be forgotten), which is something that I was trying to do, in a more
circumscribed space, in JJPS Radio. But the question of distribution of
digital works is something I find fascinating, as there are all sorts of
avenues that we could be investigating but we are not. The issue, as it often
is, is one of technical ability, and that's why one of the future directions
of JJPS is to make some of the techniques I used easier to use. Those who want
to can always look into the code, which is of course freely available, but
that cannot and should not be a prerequisite.

13. [jannekeadema1979](http://www.openreflections.wordpress.com)

June 18, 2010


Hi Nick, thanks for your comment. I love the JJPS and it would be great if the
technology you mention would be easily re-usable. What I find fascinating is
how you use another medium (radio) to translate/re-mediate and in a way also
unlock textual material. I see you also have an Open Access and a Cut-up hour.
I am very much interested in using different media to communicate scholarly
research and even more in remixing and re-mediating textual scholarship. I
think your project(s) is a very valuable exploration of these themes while at
the same time being a (performative) critique of the current system. I am in
awe.

14. Pingback: [Text-sharing "in the paradise of too many books" – SLOTHROP](http://slothrop.com/2012/11/16/text-sharing-in-the-paradise-of-too-many-books/)

15. [Jason Kennedy](http://www.facebook.com/903035234)

May 6, 2015


Some obvious fails suggest major knowledge gaps regarding sourcing texts
online (outside of legal channels).

And featuring Scribd doesn't help.

Q: What's the largest pirate book site on the net, with an inventory almost as
large as Amazon?

And it's not L_____ G_____

16. [Janneke Adema](http://www.openreflections.wordpress.com)

May 6, 2015


Do enlighten us Jason… And might I remind you that this post was written in
2009?

17. Mike Andrews

May 7, 2015


Interesting topic, but also odd in some respects. Not translating the German
quotes is very unthoughtful and maybe even arrogant. If you are interested in
open access accessibility needs to be your top priority. I can read German,
but many of my friends (and most of the world) can't. It takes a little effort
to just fix this, but you can do it.


Adema
The Ethics of Emergent Creativity: Can We Move Beyond Writing as Human Enterprise, Commodity and Innovation?
2019


# 3. The Ethics of Emergent Creativity: Can We Move Beyond Writing as Human
Enterprise, Commodity and Innovation?

Janneke Adema

© 2019 Janneke Adema, CC BY 4.0
[https://doi.org/10.11647/OBP.0159.03](https://doi.org/10.11647/OBP.0159.03)

In 2013, the Authors’ Licensing & Collecting Society
(ALCS)[1](ch3.xhtml#footnote-152) commissioned a survey of its members to
explore writers’ earnings and contractual issues in the UK. The survey, the
results of which were published in the summary booklet ‘What Are Words Worth
Now?’, was carried out by Queen Mary, University of London. Almost 2,500
writers — from literary authors to academics and screenwriters — responded.
‘What Are Words Worth Now?’ summarises the findings of a larger study titled
‘The Business Of Being An Author: A Survey Of Authors’ Earnings And
Contracts’, carried out by Johanna Gibson, Phillip Johnson and Gaetano Dimita
and published in April 2015 by Queen Mary University of
London.[2](ch3.xhtml#footnote-151) The ALCS press release that accompanies the
study states that this ‘shocking’ new research into authors’ earnings finds a
‘dramatic fall, both in incomes, and the number of those working full-time as
writers’.[3](ch3.xhtml#footnote-150) Indeed, two of the main findings of the
study are that, first of all, the income of a professional author (which the
research defines as those who dedicate the majority of their time to writing)
has dropped 29% between 2005 and 2013, from £12,330 (£15,450 in real terms) to
just £11,000. Furthermore, the research found that in 2005 40% of professional
authors earned their incomes solely from writing, whereas in 2013 this figure
had dropped to just 11.5%.[4](ch3.xhtml#footnote-149)
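
To make the arithmetic behind the headline figure explicit (a reading on my
part: the 29% only follows if the drop is measured against the real-terms,
inflation-adjusted 2005 income rather than the nominal figure):

$$\frac{15{,}450 - 11{,}000}{15{,}450} \approx 0.288 \approx 29\%, \qquad
\text{whereas} \quad \frac{12{,}330 - 11{,}000}{12{,}330} \approx 11\%.$$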

It seems that one of the primary reasons for the ALCS to conduct this survey
was to collect ‘accurate, independent data’ on writers’ earnings and
contractual issues, in order for the ALCS to ‘make the case for authors’
rights’ — at least, that is what the ALCS Chief Executive Owen Atkinson writes
in the introduction accompanying the survey, which was sent out to all ALCS
members.[5](ch3.xhtml#footnote-148) Yet although this research was conducted
independently and the researchers did not draw conclusions based on the data
collected — in the form of policy recommendations for example — the ALCS did
frame the data and findings in a very specific way, as I will outline in what
follows; this framing includes both the introduction to the survey and the
press release that accompanies the survey’s findings. Yet to some extent this
framing, as I will argue, is already apparent in the methodology used to
produce the data underlying the research report.

First of all, let me provide an example of how the research findings have been
framed in a specific way. Chief Executive Atkinson mentions in his
introduction to the survey that the ALCS ‘exists to ensure that writers are
treated fairly and remunerated appropriately’. He continues that the ALCS
commissioned the survey to collect ‘accurate, independent data,’ in order to
‘make the case for writers’ rights’.[6](ch3.xhtml#footnote-147) Now this focus
on rights in combination with remuneration is all the more noteworthy if we
look at an earlier ALCS funded report from 2007, ‘Authors’ Earnings from
Copyright and Non-Copyright Sources: a Survey of 25,000 British and German
Writers’. This report is based on the findings of a 2006 writers’ survey,
which the 2013 survey updates. The 2007 report argues conclusively that
current copyright law has empirically failed to ensure that authors receive
appropriate reward or remuneration for the use of their
work.[7](ch3.xhtml#footnote-146) The data from the subsequent 2013 survey show
an even bleaker picture as regards the earnings of writers. Yet Atkinson
argues in the press release accompanying the findings of the 2013 survey that
‘if writers are to continue making their irreplaceable contribution to the UK
economy, they need to be paid fairly for their work. This means ensuring
clear, fair contracts with equitable terms and a copyright regime that support
creators and their ability to earn a living from their
creations’.[8](ch3.xhtml#footnote-145) Atkinson does not outline what this
copyright regime should be, nor does he draw attention to how this model could
be improved. More importantly, the fact that a copyright model is needed to
ensure fair pay stands uncontested for Atkinson and the ALCS — not surprising
perhaps, as protecting and promoting the rights of authors is the primary
mission of this member society. If there is any culprit to be held responsible
for the study’s ‘shocking’ findings, it is the elusive and further undefined
notion of ‘the digital’. According to Atkinson, digital technology is
increasingly challenging the mission of the ALCS to ensure fair remuneration
for writers, since it is ‘driving new markets and leading the copyright
debate’.[9](ch3.xhtml#footnote-144) The 2013 study is therefore, as Atkinson
states, ‘the first to capture the impact of the digital revolution on writers’
working lives’.[10](ch3.xhtml#footnote-143) This statement is all the more
striking if we take into consideration that none of the questions in the 2013
survey focus specifically on digital publishing.[11](ch3.xhtml#footnote-142)
It therefore seems that — despite earlier findings — the ALCS has already
decided in advance what ‘the digital’ is and that a copyright regime is the
only way to ensure fair remuneration for writers in a digital context.

## Creative Industries

This strong uncontested link between copyright and remuneration can be traced
back to various other aspects of the 2015 report and its release. For example,
the press release draws a strong connection between the findings of the report
and the development of the creative industries in the UK. Again, Atkinson
states in the press release:

These are concerning times for writers. This rapid decline in both author
incomes and in the numbers of those writing full-time could have serious
implications for the economic success of the creative industries in the
UK.[12](ch3.xhtml#footnote-141)

This connection to the creative industries — ‘which are now worth £71.4
billion per year to the UK economy’,[13](ch3.xhtml#footnote-140) Atkinson
points out — is not surprising where the discourse around creative industries
maintains a clear bond between intellectual property rights and creative
labour. As Geert Lovink and Ned Rossiter state in their MyCreativity Reader,
the creative industries consist of ‘the generation and exploitation of
intellectual property’.[14](ch3.xhtml#footnote-139) Here they refer to a
definition created as part of the UK Government’s Creative Industries Mapping
Document,[15](ch3.xhtml#footnote-138) which states that the creative
industries are ‘those industries which have their origin in individual
creativity, skill and talent and which have a potential for wealth and job
creation through the generation and exploitation of intellectual property’.
Lovink and Rossiter point out that the relationship between IP and creative
labour lies at the basis of the definition of the creative industries where,
as they argue, this model of creativity assumes people only create to produce
economic value. This is part of a larger trend Wendy Brown has described as
being quintessentially neoliberal, where ‘neoliberal rationality disseminates
the model of the market to all domains and activities’ — and this includes the
realm of politics and rights.[16](ch3.xhtml#footnote-137) In this sense the
economization of culture and the concept of creativity is something that has
become increasingly embedded and naturalised. The exploitation of intellectual
property stands at the basis of the creative industries model, in which
cultural value — which can be seen as intricate, complex and manifold —
becomes subordinated to the model of the market; it becomes economic
value.[17](ch3.xhtml#footnote-136)

This direct association of cultural value and creativity with economic value
is apparent in various other facets of the ALCS commissioned research and
report. Obviously, the title of the initial summary booklet, as a form of
wordplay, asks ‘What are words worth?’. It becomes clear from the context of
the survey that the ‘worth’ of words will only be measured in a monetary
sense, i.e. as economic value. Perhaps even more important to understand in
this context, however, is how this economic worth of words is measured and
determined by focusing on two fixed and predetermined entities in advance.
First of all, the study focuses on individual human agents of creativity (i.e.
creators contributing economic value): the value of writing is established by
collecting data and making measurements at the level of individual authorship,
addressing authors/writers as singular individuals throughout the survey.
Secondly, economic worth is further determined by focusing on the fixed and
stable creative objects authors produce, in other words the study establishes
from the outset a clear link between the worth and value of writing and
economic remuneration based on individual works of
writing.[18](ch3.xhtml#footnote-135) Therefore in this process of determining
the economic worth of words, ‘writers’ and/or ‘authors’ are described and
positioned in a certain way in this study (i.e. as the central agents and
originators of creative objects), as is the form their creativity takes in the
shape of quantifiable outputs or commodities. The value of both these units of
measurement (the creator and the creative objects) is then set off against
the growth of the creative industries in the press release.

The ALCS commissioned survey provides some important insights into how
authorship, cultural works and remuneration — and ultimately, creativity — are
currently valued, specifically in the context of the creative industries
discourse in the UK. What I have tried to point out — without wanting to
downplay the importance either of writers receiving fair remuneration for
their work or of issues related to the sustainability of creative processes —
is that the findings from this survey have both been extracted and
subsequently framed based on a very specific economic model of creativity (and
authorship). According to this model, writing and creativity are sustained
most clearly by an individual original creator (an author) who extracts value
from the work s/he creates and distributes, aided by an intellectual property
rights regime. As I will outline in more depth in what follows, the enduring
liberal and humanist presumptions that underlie this survey continuously
reinforce the links between the value of writing and established IP and
remuneration regimes, and support a vision in which authorship and creativity
are dependent on economic incentives and ownership of works. By working within
this framework and with these predetermined concepts of authorship and
creativity (and ‘the digital’) the ALCS is strongly committed to the upkeep of
a specific model and discourse of creativity connected to the creative
industries. The ALCS does not attempt to complicate this model, nor does it
search for alternatives even when, as the 2007 report already implies, the
existing IP model has empirically failed to support the remuneration of
writers appropriately.

I want to use this ALCS survey as a reference point to start problematising
existing constructions of creativity, authorship, ownership, and
sustainability in relation to the ethics of publishing. To explore what ‘words
are worth’ and to challenge the hegemonic liberal humanist model of creativity
— to which the ALCS adheres — I will examine a selection of theoretical and
practical publishing and writing alternatives, from relational and posthuman
authorship to radical open access and uncreative writing. These alternatives
do not deny the importance of fair remuneration and sustainability for the
creative process; however, they want to foreground and explore creative
relationalities that move beyond the individual author and her ownership of
creative objects as the only model to support creativity and cultural
exchange. By looking at alternatives while at the same time complicating the
values and assumptions underlying the dominant narrative for IP expansion, I
want to start imagining what more ethical, fair and emergent forms of
creativity might entail. Forms that take into consideration the various
distributed and entangled agencies involved in the creation of cultural
content — which are presently not being included in the ALCS survey on fair
remuneration, for example. As I will argue, a reconsideration of the liberal
and humanist model of creativity might actually create new possibilities to
consider the value of words, and with that perhaps new solutions to the
problems pointed out in the ALCS study.

## Relational and Distributed Authorship

One of the main critiques of the liberal humanist model of authorship concerns
how it privileges the author as the sole source and origin of creativity. Yet
the argument has been made, both from a historical perspective and in relation
to today’s networked digital environment, that authorship and creativity, and
with that the value and worth of that creativity, are heavily
distributed.[19](ch3.xhtml#footnote-134) Should we therefore think about how
we can distribute notions of authorship and creativity more ethically when
defining the worth and value of words too? Would this perhaps mean a more
thorough investigation of what and who the specific agencies involved in
creative production are? This seems all the more important given that, today,
‘the value of words’ is arguably connected not to (distributed) authors or
creative agencies, but to rights holders (or their intermediaries such as
agents).[20](ch3.xhtml#footnote-133) From this perspective, the problem with
the copyright model as it currently functions is that the creators of
copyrighted works don’t necessarily end up benefiting from it — a point that
was also
implied by the authors of the 2007 ALCS commissioned report. Copyright
benefits rights holders, and rights holders are not necessarily, and often not
at all, involved in the production of creative work.

Yet copyright and the work as object are knit tightly to the authorship
construct. In this respect, the above criticism notwithstanding, in a liberal
vision of creativity and ownership the typical unit remains either the author
or the work. This ‘solid and fundamental unit of the author and the work’ as
Foucault has qualified it, albeit challenged, still retains a privileged
position.[21](ch3.xhtml#footnote-132) As Mark Rose argues, authorship — as a
relatively recent cultural formation — can be directly connected to the
commodification of writing and to proprietorship. Even more, it developed in
tandem with the societal principle of possessive individualism, in which
individual property rights are protected by the social
order.[22](ch3.xhtml#footnote-131)

Some of the more interesting recent critiques of these constructs of
authorship and proprietorship have come from critical and feminist legal
studies, where scholars such as Carys Craig have started to question these
connections further. As Craig, Turcotte and Coombe argue, IP and copyright are
premised on liberal and neoliberal assumptions and constructs, such as
ownership, private rights, self-interest and
individualism.[23](ch3.xhtml#footnote-130) In this sense copyright,
authorship, the work as object, and related discourses around creativity
continuously re-establish and strengthen each other as part of a self-
sustaining system. We have seen this with the discourse around creative
industries, as part of which economic value comes to stand in for the creative
process itself, which, according to this narrative, can only be sustained
through an IP regime. Furthermore, from a feminist new materialist position,
the current discourse on creativity is very much a material expression of
creativity rather than merely its representation, where this discourse has
been classifying, constructing, and situating creativity (and with that,
authorship) within a neoliberal framework of creative industries.

Moving away from an individual construct of creativity therefore immediately
affects the question of the value of words. In our current copyright model
emphasis lies on the individual original author, but in a more distributed
vision the value of words and of creative production can be connected to a
broader context of creative agencies. Historically there has been a great
discursive shift from a valuing of imitation or derivation to a valuing of
originality in determining what counts as creativity or creative output.
Similar to Rose, Craig, Turcotte and Coombe argue that the individuality and
originality of authorship in its modern form established a simple route
towards individual ownership and the propertisation of creative achievement:
the original work is the author’s ownership whereas the imitator or pirate is
a trespasser of thief. In this sense original authorship is
‘disproportionately valued against other forms of cultural expression and
creative play’, where copyright upholds, maintains and strengthens the binary
between imitator and creator — defined by Craig, Turcotte and Coombe as a
‘moral divide’.[24](ch3.xhtml#footnote-129) This also presupposes a notion of
creativity that sees individuals as autonomous, living in isolation from each
other, ignoring their relationality. Yet as Craig, Turcotte and Coombe argue,
‘the act of writing involves not origination, but rather the adaptation,
derivation, translation and recombination of “raw material” taken from
previously existing texts’.[25](ch3.xhtml#footnote-128) This position has also
been explored extensively from within remix studies and fan culture, where the
adaptation and remixing of cultural content stands at the basis of creativity
(what Lawrence Lessig has called Read/Write culture, opposed to Read/Only
culture).[26](ch3.xhtml#footnote-127) From the perspective of access to
culture — instead of ownership of cultural goods or objects — one could also
argue that its value would increase when we are able to freely distribute it
and with that to adapt and remix it to create new cultural content and with
that cultural and social value — this within a context in which, as Craig,
Turcotte and Coombe point out, ‘the continuous expansion of intellectual
property rights has produced legal regimes that restrict access and downstream
use of information resources far beyond what is required to encourage their
creation’.[27](ch3.xhtml#footnote-126)

To move beyond Enlightenment ideals of individuation, detachment and unity of
author and work, which determine the author-owner in the copyright model,
Craig puts forward a post-structuralist vision of relational authorship. This
sees the individual as socially situated and constituted — based also on
feminist scholarship into the socially situated self — where authorship in
this vision is situated within the communities in which it exists, but also in
relation to the texts and discourses that constitute it. Here creativity takes
place from within a network of social relations and the social dimensions of
authorship are recognised, as connectivity goes hand in hand with individual
autonomy. Craig argues that copyright should not be defined out of clashing
rights and interests but should instead focus on the kinds of relationships
this right would structure; it should be understood in relational terms: ‘it
structures relationships between authors and users, allocating powers and
responsibilities amongst members of cultural communities, and establishing the
rules of communication and exchange’.[28](ch3.xhtml#footnote-125) Cultural
value is then defined within these relationships.

## Open Access and the Ethics of Care

Craig, Turcotte and Coombe draw a clear connection between relational
authorship, feminism and (the ideals of) the open access movement, where as
they state, ‘rather than adhering to the individuated form of authorship that
intellectual property laws presuppose, open access initiatives take into
account varying forms of collaboration, creativity and
development’.[29](ch3.xhtml#footnote-124) Yet as I and others have argued
elsewhere,[30](ch3.xhtml#footnote-123) open access or open access publishing
is not a solid ideological block or model; it is made up of disparate groups,
visions and ethics. In this sense there is nothing intrinsically political or
democratic about open access; practitioners of open access can just as well be
seen to support and encourage open access in connection with the neoliberal
knowledge economy, with possessive individualism — even with CC licenses,
which can be seen as strengthening individualism —[31](ch3.xhtml#footnote-122)
and with the unity of author and work.[32](ch3.xhtml#footnote-121)

Nevertheless, there are those within the loosely defined and connected
'radical open access community' who do envision their publishing outlook and
relationship towards copyright, openness and authorship within and as part of
a relational ethics of care.[33](ch3.xhtml#footnote-120) For example,
Mattering Press, a scholar-led open access book publishing initiative founded
in 2012 and launched in 2016, publishes in the field of Science and
Technology Studies (STS) and works with a production model based on
cooperation and shared scholarship. As part of its publishing politics, ethos
and ideology, Mattering Press is keen to include the various agencies
involved in the production of scholarship, including 'authors, reviewers,
editors, copy editors, proof readers, typesetters, distributers, designers,
web developers and readers'.[34](ch3.xhtml#footnote-119) They work with two
interrelated feminist
(new materialist) and STS concepts to structure and perform this ethos:
mattering[35](ch3.xhtml#footnote-118) and care.[36](ch3.xhtml#footnote-117)
Where mattering is concerned, Mattering Press is conscious of how their
experiment in knowledge production, being inherently situated, puts new
relationships and configurations into the world. What therefore matters for
them are not so much the ‘author’ or the ‘outcome’ (the object), but the
process and the relationships that make up publishing:

[…] the way academic texts are produced matters — both analytically and
politically. Dominant publishing practices work with assumptions about the
conditions of academic knowledge production that rarely reflect what goes on
in laboratories, field sites, university offices, libraries, and various
workshops and conferences. They tend to deal with almost complete manuscripts
and a small number of authors, who are greatly dependent on the politics of
the publishing industry.[37](ch3.xhtml#footnote-116)

For Mattering Press, care is something that extends not only to authors but to
the many other actants involved in knowledge production, who often provide
free volunteer labour within a gift economy context. As Mattering Press
emphasises, the ethics of care ‘mark vital relations and practices whose value
cannot be calculated and thus often goes unacknowledged where logics of
calculation are dominant’.[38](ch3.xhtml#footnote-115) For Mattering Press,
care can help offset and engage with the calculative logic that permeates
academic publishing:

[…] the concept of care can help to engage with calculative logics, such as
those of costs, without granting them dominance. How do we calculate so that
calculations do not dominate our considerations? What would it be to care for
rather than to calculate the cost of a book? This is but one and arguably a
relatively conservative strategy for allowing other logics than those of
calculation to take centre stage in publishing.[39](ch3.xhtml#footnote-114)

This logic of care refers, in part, to making visible the 'unseen others', as
Joe Deville (one of Mattering Press’s editors) calls them, who exemplify the
plethora of hidden labour that goes unnoticed within this object and author-
focused (academic) publishing model. As Endre Danyi, another Mattering Press
editor, remarks, quoting Susan Leigh Star: ‘This is, in the end, a profoundly
political process, since so many forms of social control rely on the erasure
or silencing of various workers, on deleting their work from representations
of the work’.[40](ch3.xhtml#footnote-113)

## Posthuman Authorship

Authorship is also being reconsidered as a polyvocal and collaborative
endeavour by reflecting on the agentic role of technology in authoring
content. Within digital literature, hypertext and computer-generated poetry,
media studies scholars have explored the role played by technology and the
materiality of text in the creation process, where in many ways writing can be
seen as a shared act between reader, writer and computer. Lori Emerson
emphasises that machines, media or technology are not neutral in this respect,
which complicates the idea of human subjectivity. Emerson explores this
through the notion of ‘cyborg authorship’, which examines the relation between
machine and human with a focus on the potentiality of in-
betweenness.[41](ch3.xhtml#footnote-112) Dani Spinosa talks about
‘collaboration with an external force (the computer, MacProse, technology in
general)’.[42](ch3.xhtml#footnote-111) Extending from the author, the text
itself, and the reader as meaning-writer (and hence playing a part in the
author function), technology, she states, is a fourth term in this
collaborative meaning-making. As Spinosa argues, in computer-generated texts
the computer is more than a technological tool and becomes a co-producer,
where it can occur that ‘the poet herself merges with the machine in order to
place her own subjectivity in flux’.[43](ch3.xhtml#footnote-110) Emerson calls
this a ‘break from the model of the poet/writer as divinely inspired human
exemplar’, which is exemplified for her in hypertext, computer-generated
poetry, and digital poetry.[44](ch3.xhtml#footnote-109)

Yet in many ways, as Emerson and Spinosa also note, these forms of posthuman
authorship should be seen as part of a larger trend, what Rolf Hughes calls an
‘anti-authorship’ tradition focused on auto-poesis (self-making), generative
systems and automatic writing. As Hughes argues, we see this tradition in
print forms such as Oulipo and in Dada experiments and surrealist games
too.[45](ch3.xhtml#footnote-108) But there are connections here with broader
theories that focus on distributed agency, especially concerning
the influence of the materiality of the text. Media theorists such as N.
Katherine Hayles and Johanna Drucker have extensively argued that the
materiality of the page is entangled with the intentionality of the author as
a further agency; Drucker conceptualises this through a focus on ‘conditional
texts’ and ‘performative materiality’ with respect to the agency of the
material medium (be it the printed page or the digital
screen).[46](ch3.xhtml#footnote-107)

Where, however, does the redistribution of value creation end in these
narratives? As Nick Montfort states with respect to the agency of technology,
‘should other important and inspirational mechanisms — my CD player, for
instance, and my bookshelves — get cut in on the action as
well?’[47](ch3.xhtml#footnote-106) These distributed forms of authorship do
not solve issues related to authorship or remuneration but further complicate
them. Nevertheless, Montfort is interested in describing the processes involved
in these types of (posthuman) co-authorship, to explore the (previously
unexplored) relationships and processes involved in the authoring of texts
more clearly. As he states, this ‘can help us understand the role of the
different participants more fully’.[48](ch3.xhtml#footnote-105) In this
respect a focus on posthuman authorship and on the various distributed
agencies that play a part in creative processes is not only a means to disrupt
the hegemonic focus on a romantic model of single and original authorship; it
also fosters a sensibility to (machinic) co-authorship and to the different
agencies that are involved in the creation of art and that play a role in
creativity itself. As Emerson remarks in this respect: 'we must be wary of granting a
(romantic) specialness to human intentionality — after all, the point of
dividing the responsibility for the creation of the poems between human and
machine is to disrupt the singularity of human identity, to force human
identity to intermingle with machine identity’.[49](ch3.xhtml#footnote-104)

## Emergent Creativity

This more relational notion of rights, together with the wider appreciation of
the various (posthuman) agencies involved in creative processes based on an
ethics of care, challenges the vision of the single individualised and
original author/owner who stands at the basis of our copyright and IP
regime — a vision that, it is worth emphasising, can be seen as a historical
(and Western) anomaly, given that collaborative, anonymous, and more polyvocal
models of authorship have historically prevailed.[50](ch3.xhtml#footnote-103) The other
side of the Foucauldian double bind, i.e. the fixed cultural object that
functions as a commodity, has however been similarly critiqued from several
angles. As stated before, and as also apparent from the way the ALCS report
has been framed, currently our copyright and remuneration regime is based on
ownership of cultural objects. Yet as many have already made clear, this
regime and discourse are very much based on physical objects and on a print-
based context.[51](ch3.xhtml#footnote-102) As such the idea of ‘text’ (be it
print or digital) has not been sufficiently problematised as versioned,
processual and materially changing within an IP context. In other words, text
and works are mostly perceived as fixed and stable objects and commodities
instead of material and creative processes and entangled relationalities. As
Craig et al. state, ‘the copyright system is unfortunately employed to
reinforce the norms of the analog world’.[52](ch3.xhtml#footnote-101) In
contrast to a more relational perspective, the current copyright regime views
culture through a proprietary lens. And it is very much this discursive
positioning, or as Craig et al. argue ‘the language of “ownership,”
“property,” and “commodity”’, which ‘obfuscates the nature of copyright’s
subject matter, and cloaks the social and cultural conditions of its
production and the implications of its
protection’.[53](ch3.xhtml#footnote-100) How can we approach creativity in
context, as socially and culturally situated, and not as the free-standing,
stable product of a transcendent author, which is very much how it is being
positioned within an economic and copyright framework? This hegemonic
conception of creativity as property fails to acknowledge or take into
consideration the manifold, distributed, derivative and messy realities of
culture and creativity.

It is therefore important to put forward and promote another more emergent
vision of creativity, where creativity is seen as both processual and only
ever temporarily fixed, and where the work itself is seen as being the product
of a variety of (posthuman) agencies. Interestingly, someone who has written
very elaborately about a different form of creativity relevant to this context
is one of the authors of the ALCS-commissioned report, Johanna Gibson. Similar
to Craig, who focuses on the relationality of copyright, Gibson wants to pay
more attention to the networking of creativity, moving it beyond a focus on
traditional models of producers and consumers in exchange for a ‘many-to-many’
model of creativity. For Gibson, IP as a system aligns with a corporate model
of creativity, one which oversimplifies what it means to be creative and
measures it against economic parameters alone.[54](ch3.xhtml#footnote-099) In
many ways, in policy-driven visions, IP has come to stand in for the creative
process itself, Gibson argues, and is assimilated within corporate models of
innovation. It has thus become a synonym for creativity, as we have seen in
the creative industries discourse. As Gibson explains, this simplified model
of creativity is very much a ‘discursive strategy’ in which the creator is
mythologised and output comes in the form of commodified
objects.[55](ch3.xhtml#footnote-098) In this sense we need to re-appropriate
creativity as an inherently fluid and uncertain concept and practice.

Yet this mimicry of creativity by IP and innovation at the same time means
that any re-appropriation of creativity from the stance of access and reuse is
targeted as anti-IP and thus as standing outside of formal creativity. Other,
more emergent forms of creativity have trouble existing within this self-
defining and self-sustaining hegemonic system. This is similar to what Craig
remarked with respect to remixed, counterfeit and pirated, and unoriginal
works, which are seen as standing outside the system. Gibson uses actor
network theory (ANT) as a framework to construct her network-based model of
creativity, where for her ANT allows for a vision that does not fix creativity
within a product, but focuses more on the material relationships and
interactions between users and producers. In this sense, she argues, a network
model allows for plural agencies to be attributed to creativity, including
those of users.[56](ch3.xhtml#footnote-097)

An interesting example of how the hegemonic object-based discourse of
creativity can be re-appropriated comes from the conceptual poet Kenneth
Goldsmith, who, in what could be seen as a direct response to this dominant
narrative, tries to emphasise that exactly what this discourse classifies as
‘uncreative’, should be seen as valuable in itself. Goldsmith points out that
appropriating is creative and that he uses it as a pedagogical method in his
classes on ‘Uncreative Writing’ (which he defines as ‘the art of managing
information and representing it as writing’[57](ch3.xhtml#footnote-096)). Here
‘uncreative writing’ is something to strive for and stealing, copying, and
patchwriting are elevated as important and valuable tools for writing. For
Goldsmith the digital environment has fostered new skills and notions of
writing beyond the print-based concepts of originality and authorship: alongside
copying, editing, reusing and remixing texts, the management and manipulation
of information becomes an essential aspect of
creativity.[58](ch3.xhtml#footnote-095) Uncreative writing involves a
repurposing and appropriation of existing texts and works, which then become
materials or building blocks for further works. In this sense Goldsmith
critiques the idea of texts or works as being fixed when asking, ‘if artefacts
are always in flux, when is a historical work determined to be
“finished”?’[59](ch3.xhtml#footnote-094) At the same time, he argues, our
identities are also in flux and ever shifting, turning creative writing into a
post-identity literature.[60](ch3.xhtml#footnote-093) Machines play important
roles in uncreative writing, as active agents in the ‘managing of
information’, which is then again represented as writing, and is seen by
Goldsmith as a bridge between human-centred writing and full-blown
‘robopoetics’ (literature written by machines, for machines). Yet Goldsmith is
keen to emphasise that these forms of uncreative writing are not bound to
the digital medium, and that pre-digital examples are plentiful in conceptual
literature and poetry. He points out — again by a discursive re-appropriation
of what creativity is or can be — that sampling, remixing and appropriation
have been the norm in other artistic and creative media for decades. The
literary world lags behind in this respect: despite the experiments of
modernist writers, it continues neatly to delineate the avant-garde from more
general forms of writing. Yet as Goldsmith argues, the digital has started to
disrupt this distinction, moving beyond 'analogue' notions of writing and
fuelling the idea that there might be alternative notions of writing: those
currently perceived as
uncreative.[61](ch3.xhtml#footnote-092)

## Conclusion

There are two addenda to the argument I have outlined above that I would
like to include here. First of all, I would like to complicate and further
critique some of the preconceptions still inherent in the relational and
networked copyright models as put forward by Craig et al. and Gibson. Both are
in many ways reformist and ‘responsive’ models. Gibson, for example, does not
want to do away with IP rights; rather, she wants them to develop and adapt to mirror
society more accurately according to a networked model of creativity. For her,
the law is out of tune with its public, and she wants to promote a more
inclusive networked (copy) rights model.[62](ch3.xhtml#footnote-091) For Craig
too, relationalities are established and structured by rights first and
foremost. Yet from a posthuman perspective we need to be conscious of how the
other actants involved in creativity would fall outside such a humanist and
subjective rights model.[63](ch3.xhtml#footnote-090) From texts and
technologies themselves to the wider environmental context and to other
nonhuman entities and objects: in what sense will a copyright model be able to
extend such a network beyond an individualised liberal humanist human subject?
What do these models exclude in this respect and in what sense are they still
limited by their adherence to a rights model that continues to rely on
humanist nodes in a networked or relational model? As Anna Munster has argued
in a talk about the case of the monkey selfie, copyright is based on a logic
of exclusion that does not line up with the assemblages of agentic processes
that make up creativity and creative expression.[64](ch3.xhtml#footnote-089)
How can we appreciate the relational and processual aspects of identity, which
both Craig and Gibson seem to want to promote, if we hold on to an inherently
humanist concept of subjectification, rights and creativity?

Secondly, I want to highlight that we need to remain cautious of a movement
away from copyright and the copyright industries to a context of free culture
in which free content — and the often free labour it is based upon — ends up
servicing the content industries (e.g. Facebook, Google, Amazon). We must be
wary when access or the narrative around (open) access becomes dominated by
access to or for big business, benefitting the creative industries and the
knowledge economy. The danger of updating and adapting IP law to fit a
changing digital context and to new technologies, of making it more inclusive
in this sense — which is something both Craig and Gibson want to do as part of
their reformist models — is that this tends to be based on a very simplified
and deterministic vision of technology, as something requiring access and an
open market to foster innovation. As Sarah Kember argues, this technocratic
rationale, which is what unites pro- and anti-copyright activists in this
sense, essentially de-politicises the debate around IP; it is still a question
of determining the value of creativity through an economic perspective, based
on a calculative lobby.[65](ch3.xhtml#footnote-088) The challenge here is to
redefine the discourse in such a way that our focus moves away from a dominant
market vision and — as Gibson and Craig have also tried to do — emphasises a
non-calculative ethics of relations, processes and care instead.

I would like to return at this point to the ALCS report and the way its
results have been framed within a creative industries discourse.
Notwithstanding the fact that fair remuneration and incentives for literary
production and creativity in general are of the utmost importance, what I have
tried to argue here is that the ‘solution’ proposed by the ALCS does not do
justice to the complexities of creativity. When discussing remuneration of
authors, the ALCS seems to prefer a simple solution in which copyright is seen
as a given, the digital is singled out as a generalised scapegoat, and
binaries between print and digital are maintained and strengthened.
Furthermore, fair remuneration is encapsulated by the ALCS within an economic
calculative logic and rhetoric, sustained by and connected to a creative
industries discourse, which continuously recreates the idea that creativity
and innovation are one. Instead I have tried to put forward various
alternative visions and practices, from radical open access to posthuman
authorship and uncreative writing, based on vital relationships and on an
ethics of care and responsibility. These alternatives highlight distributed
and relational authorship and/or showcase a sensibility that embraces
posthuman agencies and processual publishing as part of a more complex,
emergent vision of creativity, open to different ideas of what creativity is
and can become. In this vision creativity is thus seen as relational, fluid
and processual and only ever temporarily fixed as part of our ethical
decision-making: a decision-making process that is contingent on the contexts and
relationships with which we find ourselves entangled. This involves asking
questions about what writing is and does, and how creativity expands beyond
our established, static, or given concepts, which include copyright and a
focus on the author as a ‘homo economicus’, writing as inherently an
enterprise, and culture as commodified. As I have argued, the value of words,
indeed the economic worth and sustainability of words and of the ‘creative
industries’, can and should be defined within a different narrative. Opening
up from the hegemonic creative industries discourse and the way we perform it
through our writing practices might therefore enable us to explore extended
relationalities of emergent creativity, open-ended publishing processes, and a
feminist ethics of care and responsibility.

This contribution has showcased examples of experimental, hybrid and posthuman
writing and publishing practices that are intervening in this established
discourse on creativity. How, through them, can we start to performatively
explore a new discourse and reconfigure the relationships that underlie our
writing processes? How can the worth of writing be reflected in different
ways?

## Works Cited

(2014) ‘New Research into Authors’ Earnings Released’, Authors’ Licensing and
Collecting Society,
Us/News/News/What-are-words-worth-now-not-much.aspx>

Abrahamsson, Sebastian, Uli Beisel, Endre Danyi, Joe Deville, Julien McHardy,
and Michaela Spencer (2013) ‘Mattering Press: New Forms of Care for STS
Books’, The EASST Review 32.4, volume-32-4-december-2013/mattering-press-new-forms-of-care-for-sts-books/>

Adema, Janneke (2017) ‘Cut-Up’, in Eduardo Navas (ed.), Keywords in Remix
Studies (New York and London: Routledge), pp. 104–14,


— (2014) ‘Embracing Messiness’, LSE Impact of Social Sciences,
adema-pdsc14/>

— (2015) ‘Knowledge Production Beyond The Book? Performing the Scholarly
Monograph in Contemporary Digital Culture’ (PhD dissertation, Coventry
University), f4c62c77ac86/1/ademacomb.pdf>

— (2014) ‘Open Access’, in Critical Keywords for the Digital Humanities
(Lueneburg: Centre for Digital Cultures (CDC)),


— and Gary Hall (2013) ‘The Political Nature of the Book: On Artists’ Books
and Radical Open Access’, New Formations 78.1, 138–56,


— and Samuel Moore (2018) ‘Collectivity and Collaboration: Imagining New Forms
of Communality to Create Resilience in Scholar-Led Publishing’, Insights 31.3,


ALCS, Press Release (8 July 2014) ‘What Are Words Worth Now? Not Enough’,


Barad, Karen (2007) Meeting the Universe Halfway: Quantum Physics and the
Entanglement of Matter and Meaning (Durham, N.C., and London: Duke University
Press).

Boon, Marcus (2010) In Praise of Copying (Cambridge, MA: Harvard University
Press).

Brown, Wendy (2015) Undoing the Demos: Neoliberalism’s Stealth Revolution
(Cambridge, MA: MIT Press).

Chartier, Roger (1994) The Order of Books: Readers, Authors, and Libraries in
Europe Between the 14th and 18th Centuries, 1st ed. (Stanford, CA: Stanford
University Press).

Craig, Carys J. (2011) Copyright, Communication and Culture: Towards a
Relational Theory of Copyright Law (Cheltenham, UK, and Northampton, MA:
Edward Elgar Publishing).

— Joseph F. Turcotte, and Rosemary J. Coombe (2011) ‘What’s Feminist About
Open Access? A Relational Approach to Copyright in the Academy’, Feminists@law
1.1,

Cramer, Florian (2013) Anti-Media: Ephemera on Speculative Arts (Rotterdam and
New York, NY: nai010 publishers).

Drucker, Johanna (2015) ‘Humanist Computing at the End of the Individual Voice
and the Authoritative Text’, in Patrik Svensson and David Theo Goldberg
(eds.), Between Humanities and the Digital (Cambridge, MA: MIT Press), pp.
83–94.

— (2014) ‘Distributed and Conditional Documents: Conceptualizing
Bibliographical Alterities’, MATLIT: Revista do Programa de Doutoramento em
Materialidades da Literatura 2.1, 11–29.

— (2013) ‘Performative Materiality and Theoretical Approaches to Interface’,
Digital Humanities Quarterly 7.1 [n.p.],


Ede, Lisa, and Andrea A. Lunsford (2001) ‘Collaboration and Concepts of
Authorship’, PMLA 116.2, 354–69.

Emerson, Lori (2008) ‘Materiality, Intentionality, and the Computer-Generated
Poem: Reading Walter Benn Michaels with Erin Mouré's Pillage Land', ESC:
English Studies in Canada 34, 45–69.

— (2003) ‘Digital Poetry as Reflexive Embodiment’, in Markku Eskelinen, Raine
Koskimaa, Loss Pequeño Glazier and John Cayley (eds.), CyberText Yearbook
2002–2003, 88–106,

Foucault, Michel, ‘What Is an Author?’ (1998) in James D. Faubion (ed.),
Essential Works of Foucault, 1954–1984, Volume Two: Aesthetics, Method, and
Epistemology (New York: The New Press).

Gibson, Johanna (2007) Creating Selves: Intellectual Property and the
Narration of Culture (Aldershot, England and Burlington, VT: Routledge).

— Phillip Johnson and Gaetano Dimita (2015) The Business of Being an Author: A
Survey of Author’s Earnings and Contracts (London: Queen Mary University of
London), [https://orca.cf.ac.uk/72431/1/Final Report - For Web
Publication.pdf](https://orca.cf.ac.uk/72431/1/Final%20Report%20-%20For%20Web%20Publication.pdf)

Goldsmith, Kenneth (2011) Uncreative Writing: Managing Language in the Digital
Age (New York: Columbia University Press).

Hall, Gary (2010) ‘Radical Open Access in the Humanities’ (presented at the
Research Without Borders, Columbia University),
humanities/>

— (2008) Digitize This Book!: The Politics of New Media, or Why We Need Open
Access Now (Minneapolis, MN: University of Minnesota Press).

Hayles, N. Katherine (2004) ‘Print Is Flat, Code Is Deep: The Importance of
Media-Specific Analysis’, Poetics Today 25.1, 67–90,


Hughes, Rolf (2005) ‘Orderly Disorder: Post-Human Creativity’, in Proceedings
of the Linköping Electronic Conference (Linköpings universitet: University
Electronic Press).

Jenkins, Henry, and Owen Gallagher (2008) ‘“What Is Remix Culture?”: An
Interview with Total Recut’s Owen Gallagher’, Confessions of an Aca-Fan,


Johns, Adrian (1998) The Nature of the Book: Print and Knowledge in the Making
(Chicago, IL: University of Chicago Press).

Kember, Sarah (2016) ‘Why Publish?’, Learned Publishing 29, 348–53,


— (2014) ‘Why Write?: Feminism, Publishing and the Politics of Communication’,
New Formations: A Journal of Culture/Theory/Politics 83.1, 99–116.

Kretschmer, M., and P. Hardwick (2007) Authors’ Earnings from Copyright and
Non-Copyright Sources: A Survey of 25,000 British and German Writers (Poole,
UK: CIPPM/ALCS Bournemouth University),
[https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ALCS-Full-
report.pdf](https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ACLS-
Full-report.pdf)

Lessig, Lawrence (2008) Remix: Making Art and Commerce Thrive in the Hybrid
Economy (New York: Penguin Press).

Lovink, Geert, and Ned Rossiter (eds.) (2007) MyCreativity Reader: A Critique
of Creative Industries (Amsterdam: Institute of Network Cultures),


McGann, Jerome J. (1992) A Critique of Modern Textual Criticism
(Charlottesville, VA: University of Virginia Press).

McHardy, Julien (2014) ‘Why Books Matter: There Is Value in What Cannot Be
Evaluated.’, Impact of Social Sciences [n.p.],


Mol, Annemarie (2008) The Logic of Care: Health and the Problem of Patient
Choice, 1st ed. (London and New York: Routledge).

Montfort, Nick (2003) ‘The Coding and Execution of the Author’, in Markku
Eskelinen, Raine Koskimaa, Loss Pequeño Glazier and John Cayley (eds.),
CyberText Yearbook 2002–2003, 201–17,

Moore, Samuel A. (2017) ‘A Genealogy of Open Access: Negotiations between
Openness and Access to Research’, Revue Française des Sciences de
l’information et de la Communication 11,

Munster, Anna (2016) ‘Techno-Animalities — the Case of the Monkey Selfie’
(presented at Goldsmiths, University of London),


Navas, Eduardo (2012) Remix Theory: The Aesthetics of Sampling (Vienna and New
York: Springer).

Parikka, Jussi, and Mercedes Bunz (11 July 2014) ‘A Mini-Interview: Mercedes
Bunz Explains Meson Press’, Machinology,
meson-press/>

Richards, Victoria (7 January 2016) ‘Monkey Selfie: Judge Rules Macaque Who
Took Grinning Photograph of Himself “Cannot Own Copyright”’, The Independent,
macaque-who-took-grinning-photograph-of-himself-cannot-own-
copyright-a6800471.html>

Robbins, Sarah (2003) ‘Distributed Authorship: A Feminist Case-Study Framework
for Studying Intellectual Property’, College English 66.2, 155–71,


Rose, Mark (1993) Authors and Owners: The Invention of Copyright (Cambridge,
MA: Harvard University Press).

Spinosa, Dani (14 May 2014) ‘“My Line (Article) Has Sighed”: Authorial
Subjectivity and Technology’, Generic Pronoun,


Star, Susan Leigh (1991) ‘The Sociology of the Invisible: The Primacy of Work
in the Writings of Anselm Strauss’, in Anselm Leonard Strauss and David R.
Maines (eds.), Social Organization and Social Process: Essays in Honor of
Anselm Strauss (New York: A. de Gruyter).

* * *

[1](ch3.xhtml#footnote-152-backlink) The Authors' Licensing and Collecting
Society is a British membership organisation for writers, established in 1977
with over 87,000
members, focused on protecting and promoting authors’ rights. ALCS collects
and pays out money due to members for secondary uses of their work (copying,
broadcasting, recording etc.).

[2](ch3.xhtml#footnote-151-backlink) This survey was an update of an earlier
survey conducted in 2006 by the Centre for Intellectual Property Policy and
Management (CIPPM) at Bournemouth University.

[3](ch3.xhtml#footnote-150-backlink) ‘New Research into Authors’ Earnings
Released’, Authors’ Licensing and Collecting Society, 2014,
Us/News/News/What-are-words-worth-now-not-much.aspx>

[4](ch3.xhtml#footnote-149-backlink) Johanna Gibson, Phillip Johnson, and
Gaetano Dimita, The Business of Being an Author: A Survey of Author’s Earnings
and Contracts (London: Queen Mary University of London, 2015), p. 9,
[https://orca.cf.ac.uk/72431/1/Final Report - For Web Publication.pdf
](https://orca.cf.ac.uk/72431/1/Final%20Report%20-%20For%20Web%20Publication.pdf)

[5](ch3.xhtml#footnote-148-backlink) ALCS, Press Release, 'What Are Words Worth
Now? Not Enough', 8 July 2014, worth-now-not-enough>

[6](ch3.xhtml#footnote-147-backlink) Gibson, Johnson, and Dimita, The Business
of Being an Author, p. 35.

[7](ch3.xhtml#footnote-146-backlink) M. Kretschmer and P. Hardwick, Authors’
Earnings from Copyright and Non-Copyright Sources: A Survey of 25,000 British
and German Writers (Poole: CIPPM/ALCS Bournemouth University, 2007), p. 3,
[https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ALCS-Full-
report.pdf](https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ACLS-
Full-report.pdf)

[8](ch3.xhtml#footnote-145-backlink) ALCS, Press Release, 8 July 2014,
https://www.alcs.co.uk/news/what-are-words-worth-now-not-enough

[9](ch3.xhtml#footnote-144-backlink) Gibson, Johnson, and Dimita, The Business
of Being an Author, p. 35.

[10](ch3.xhtml#footnote-143-backlink) Ibid.

[11](ch3.xhtml#footnote-142-backlink) In the survey, three questions that
focus on various sources of remuneration do list digital publishing and/or
online uses as an option (questions 8, 11, and 15). Yet the data tables
provided in the appendix to the report do not provide the findings for
questions 11 and 15 nor do they differentiate according to type of media for
other tables related to remuneration. The only data table we find in the
report related to digital publishing is table 3.3, which lists ‘Earnings
ranked (1 to 7) in relation to categories of work’, where digital publishing
ranks third after books and magazines/periodicals, but before newspapers,
audio/audio-visual productions and theatre. This lack of focus on the effect
of digital publishing on writers’ incomes, for a survey that is ‘the first to
capture the impact of the digital revolution on writers’ working lives’, is
quite remarkable. Gibson, Johnson, and Dimita, The Business of Being an
Author, Appendix 2.

[12](ch3.xhtml#footnote-141-backlink) Ibid., p. 35.

[13](ch3.xhtml#footnote-140-backlink) Ibid.

[14](ch3.xhtml#footnote-139-backlink) Geert Lovink and Ned Rossiter (eds.),
MyCreativity Reader: A Critique of Creative Industries (Amsterdam: Institute
of Network Cultures, 2007), p. 14,


[15](ch3.xhtml#footnote-138-backlink) See:
estimates-january-2015/creative-industries-economic-estimates-january-2015-key-findings>

[16](ch3.xhtml#footnote-137-backlink) Wendy Brown, Undoing the Demos:
Neoliberalism’s Stealth Revolution (Cambridge, MA: MIT Press, 2015), p. 31.

[17](ch3.xhtml#footnote-136-backlink) Therefore Lovink and Rossiter make a
plea to, ‘redefine creative industries outside of IP generation’. Lovink and
Rossiter, MyCreativity Reader, p. 14.

[18](ch3.xhtml#footnote-135-backlink) In addition to earnings made from
writing in general, the survey on various occasions asks questions about
earnings arising from specific categories of works and about the number of
works exploited (published/broadcast) during certain periods. Gibson, Johnson, and
Dimita, The Business of Being an Author, Appendix 2.

[19](ch3.xhtml#footnote-134-backlink) Roger Chartier, The Order of Books:
Readers, Authors, and Libraries in Europe Between the 14th and 18th Centuries,
1st ed. (Stanford: Stanford University Press, 1994); Lisa Ede and Andrea A.
Lunsford, ‘Collaboration and Concepts of Authorship’, PMLA 116.2 (2001),
354–69; Adrian Johns, The Nature of the Book: Print and Knowledge in the
Making (Chicago, IL: University of Chicago Press, 1998); Jerome J. McGann, A
Critique of Modern Textual Criticism (Charlottesville, VA: University of
Virginia Press, 1992); Sarah Robbins, ‘Distributed Authorship: A Feminist
Case-Study Framework for Studying Intellectual Property’, College English 66.2
(2003), 155–71,

[20](ch3.xhtml#footnote-133-backlink) The ALCS survey addresses this problem,
of course, and tries to lobby on behalf of its authors for fair contracts with
publishers and intermediaries. That said, the survey findings show that only
42% of writers always retain their copyright. Gibson, Johnson, and Dimita, The
Business of Being an Author, p. 12.

[21](ch3.xhtml#footnote-132-backlink) Michel Foucault, ‘What Is an Author?’,
in James D. Faubion (ed.), Essential Works of Foucault, 1954–1984, Volume Two:
Aesthetics, Method, and Epistemology (New York: The New Press, 1998), p. 205.

[22](ch3.xhtml#footnote-131-backlink) Mark Rose, Authors and Owners: The
Invention of Copyright (Cambridge, MA: Harvard University Press, 1993).

[23](ch3.xhtml#footnote-130-backlink) Carys J. Craig, Joseph F. Turcotte, and
Rosemary J. Coombe, ‘What’s Feminist About Open Access? A Relational Approach
to Copyright in the Academy’, Feminists@law 1.1 (2011),


[24](ch3.xhtml#footnote-129-backlink) Ibid., p. 8.

[25](ch3.xhtml#footnote-128-backlink) Ibid., p. 9.

[26](ch3.xhtml#footnote-127-backlink) Lawrence Lessig, Remix: Making Art and
Commerce Thrive in the Hybrid Economy (New York: Penguin Press, 2008); Eduardo
Navas, Remix Theory: The Aesthetics of Sampling (Vienna and New York:
Springer, 2012); Henry Jenkins and Owen Gallagher, ‘“What Is Remix Culture?”:
An Interview with Total Recut’s Owen Gallagher’, Confessions of an Aca-Fan,
2008,

[27](ch3.xhtml#footnote-126-backlink) Craig, Turcotte, and Coombe, ‘What’s
Feminist About Open Access?, p. 27.

[28](ch3.xhtml#footnote-125-backlink) Ibid., p. 14.

[29](ch3.xhtml#footnote-124-backlink) Ibid., p. 26.

[30](ch3.xhtml#footnote-123-backlink) Janneke Adema, ‘Open Access’, in
Critical Keywords for the Digital Humanities (Lueneburg: Centre for Digital
Cultures (CDC), 2014); Janneke Adema,
‘Embracing Messiness’, LSE Impact of Social Sciences, 2014,
adema-pdsc14/>; Gary Hall, Digitize This Book!: The Politics of New Media, or
Why We Need Open Access Now (Minneapolis, MN: University of Minnesota Press,
2008), p. 197; Sarah Kember, ‘Why Write?: Feminism, Publishing and the
Politics of Communication’, New Formations: A Journal of
Culture/Theory/Politics 83.1 (2014), 99–116; Samuel A. Moore, ‘A Genealogy of
Open Access: Negotiations between Openness and Access to Research’, Revue
Française des Sciences de l’information et de la Communication, 2017,


[31](ch3.xhtml#footnote-122-backlink) Florian Cramer, Anti-Media: Ephemera on
Speculative Arts (Rotterdam and New York: nai010 publishers, 2013).

[32](ch3.xhtml#footnote-121-backlink) Especially within humanities publishing
there is a reluctance to allow derivative uses of one’s work in an open access
setting.

[33](ch3.xhtml#footnote-120-backlink) In 2015 the Radical Open Access
Conference took place at Coventry University, which brought together a large
array of presses and publishing initiatives (often academic-led) in support of
an ‘alternative’ vision of open access and scholarly communication.
Participants in this conference subsequently formed the loosely allied Radical
Open Access Collective: [radicaloa.co.uk](https://radicaloa.co.uk/). As the
conference concept outlines, radical open access entails ‘a vision of open
access that is characterised by a spirit of on-going creative experimentation,
and a willingness to subject some of our most established scholarly
communication and publishing practices, together with the institutions that
sustain them (the library, publishing house etc.), to rigorous critique.
Included in the latter will be the asking of important questions about our
notions of authorship, authority, originality, quality, credibility,
sustainability, intellectual property, fixity and the book — questions that
lie at the heart of what scholarship is and what the university can be in the
21st century’. Janneke Adema and Gary Hall, ‘The Political Nature of the Book:
On Artists’ Books and Radical Open Access’, New Formations 78.1 (2013),
138–56; Janneke Adema and Samuel
Moore, ‘Collectivity and Collaboration: Imagining New Forms of Communality to
Create Resilience in Scholar-Led Publishing', Insights 31.3 (2018); Gary
Hall, 'Radical Open Access in the
Humanities’ (presented at the Research Without Borders, Columbia University,
2010), humanities/>; Janneke Adema, ‘Knowledge Production Beyond The Book? Performing
the Scholarly Monograph in Contemporary Digital Culture’ (PhD dissertation,
Coventry University, 2015),
f4c62c77ac86/1/ademacomb.pdf>

[34](ch3.xhtml#footnote-119-backlink) Julien McHardy, ‘Why Books Matter: There
Is Value in What Cannot Be Evaluated’, Impact of Social Sciences, 2014, n.p.,
http://blogs.lse.ac.uk/impactofsocialsciences/2014/09/30/why-books-matter/

[35](ch3.xhtml#footnote-118-backlink) Karen Barad, Meeting the Universe
Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham,
N.C. and London: Duke University Press, 2007).

[36](ch3.xhtml#footnote-117-backlink) Annemarie Mol, The Logic of Care: Health
and the Problem of Patient Choice, 1st ed. (London and New York: Routledge,
2008).

[37](ch3.xhtml#footnote-116-backlink) Sebastian Abrahamsson and others,
‘Mattering Press: New Forms of Care for STS Books’, The EASST Review 32.4
(2013), press-new-forms-of-care-for-sts-books/>

[38](ch3.xhtml#footnote-115-backlink) McHardy, ‘Why Books Matter’.

[39](ch3.xhtml#footnote-114-backlink) Ibid.

[40](ch3.xhtml#footnote-113-backlink) Susan Leigh Star, ‘The Sociology of the
Invisible: The Primacy of Work in the Writings of Anselm Strauss’, in Anselm
Leonard Strauss and David R. Maines (eds.), Social Organization and Social
Process: Essays in Honor of Anselm Strauss (New York: A. de Gruyter, 1991).
Mattering Press is not alone in exploring an ethics of care in relation to
(academic) publishing. Sarah Kember, director of Goldsmiths Press, is also
adamant in her desire to make the underlying processes of publishing (i.e.
peer review, citation practices) more transparent and accountable. See Sarah
Kember, 'Why Publish?', Learned Publishing 29 (2016), 348–53. Mercedes Bunz,
one of the editors running
Meson Press, argues that a sociology of the invisible would incorporate
‘infrastructure work’, the work of accounting for, and literally crediting
everybody involved in producing a book: ‘A book isn’t just a product that
starts a dialogue between author and reader. It is accompanied by lots of
other academic conversations — peer review, co-authors, copy editors — and
these conversations deserve to be taken more serious’. Jussi Parikka and
Mercedes Bunz, ‘A Mini-Interview: Mercedes Bunz Explains Meson Press’,
Machinology, 2014, mercedes-bunz-explains-meson-press/>. For Open Humanities Press, authorship is
collaborative and even often anonymous: for example, they are experimenting
with research published in wikis to further complicate the focus on single
authorship and a static marketable book object within academia (see their
living and liquid books series).

[41](ch3.xhtml#footnote-112-backlink) Lori Emerson, ‘Digital Poetry as
Reflexive Embodiment’, in Markku Eskelinen, Raine Koskimaa, Loss Pequeño
Glazier and John Cayley (eds.), CyberText Yearbook 2002–2003, 2003, 88–106,


[42](ch3.xhtml#footnote-111-backlink) Dani Spinosa, ‘“My Line (Article) Has
Sighed”: Authorial Subjectivity and Technology’, Generic Pronoun, 2014,


[43](ch3.xhtml#footnote-110-backlink) Spinosa, ‘My Line (Article) Has Sighed’.

[44](ch3.xhtml#footnote-109-backlink) Emerson, ‘Digital Poetry as Reflexive
Embodiment’, p. 89.

[45](ch3.xhtml#footnote-108-backlink) Rolf Hughes, ‘Orderly Disorder: Post-
Human Creativity’, in Proceedings of the Linköping Electronic Conference
(Linköpings universitet: University Electronic Press, 2005).

[46](ch3.xhtml#footnote-107-backlink) N. Katherine Hayles, ‘Print Is Flat,
Code Is Deep: The Importance of Media-Specific Analysis’, Poetics Today 25.1
(2004), 67–90; Johanna Drucker,
‘Performative Materiality and Theoretical Approaches to Interface’, Digital
Humanities Quarterly 7.1 (2013); Johanna
Drucker, ‘Distributed and Conditional Documents: Conceptualizing
Bibliographical Alterities’, MATLIT: Revista do Programa de Doutoramento em
Materialidades da Literatura 2.1 (2014), 11–29.

[47](ch3.xhtml#footnote-106-backlink) Nick Montfort, ‘The Coding and Execution
of the Author’, in Markku Eskelinen, Raine Kosimaa, Loss Pequeño Glazier and
John Cayley (eds.), CyberText Yearbook 2002–2003, 2003, 201–17 (p. 201),


[48](ch3.xhtml#footnote-105-backlink) Montfort, ‘The Coding and Execution of
the Author’, p. 202.

[49](ch3.xhtml#footnote-104-backlink) Lori Emerson, ‘Materiality,
Intentionality, and the Computer-Generated Poem: Reading Walter Benn Michaels
with Erin Moureacute’s Pillage Land’, ESC: English Studies in Canada 34
(2008), 66.

[50](ch3.xhtml#footnote-103-backlink) Marcus Boon, In Praise of Copying
(Cambridge, MA: Harvard University Press, 2010); Johanna Drucker, ‘Humanist
Computing at the End of the Individual Voice and the Authoritative Text’, in
Patrik Svensson and David Theo Goldberg (eds.), Between Humanities and the
Digital (Cambridge, MA: MIT Press, 2015), pp. 83–94.

[51](ch3.xhtml#footnote-102-backlink) We have to take into consideration here
that print-based cultural products were never fixed or static; the dominant
discourses constructed around them just perceive them to be so.

[52](ch3.xhtml#footnote-101-backlink) Craig, Turcotte, and Coombe, ‘What’s
Feminist About Open Access?’, p. 2.

[53](ch3.xhtml#footnote-100-backlink) Ibid.

[54](ch3.xhtml#footnote-099-backlink) Johanna Gibson, Creating Selves:
Intellectual Property and the Narration of Culture (Aldershot, UK, and
Burlington: Routledge, 2007), p. 7.

[55](ch3.xhtml#footnote-098-backlink) Gibson, Creating Selves, p. 7.

[56](ch3.xhtml#footnote-097-backlink) Ibid.

[57](ch3.xhtml#footnote-096-backlink) Kenneth Goldsmith, Uncreative Writing:
Managing Language in the Digital Age (New York: Columbia University Press,
2011), p. 227.

[58](ch3.xhtml#footnote-095-backlink) Ibid., p. 15.

[59](ch3.xhtml#footnote-094-backlink) Goldsmith, Uncreative Writing, p. 81.

[60](ch3.xhtml#footnote-093-backlink) Ibid.

[61](ch3.xhtml#footnote-092-backlink) It is worth emphasising that what
Goldsmith perceives as ‘uncreative’ notions of writing (including
appropriation, pastiche, and copying), have a prehistory that can be traced
back to antiquity (thanks go out to this chapter’s reviewer for pointing this
out). One example of this, which uses the method of cutting and pasting —
something I have outlined more in depth elsewhere — concerns the early modern
commonplace book. Commonplacing as ‘a method or approach to reading and
writing involved the gathering and repurposing of meaningful quotes, passages
or other clippings from published books by copying and/or pasting them into a
blank book.’ Janneke Adema, ‘Cut-Up’, in Eduardo Navas (ed.), Keywords in
Remix Studies (New York and London: Routledge, 2017), pp. 104–14,


[62](ch3.xhtml#footnote-091-backlink) Gibson, Creating Selves, p. 27.

[63](ch3.xhtml#footnote-090-backlink) For example, animals cannot own
copyright. See the case of Naruto, the macaque monkey that took a ‘selfie’
photograph of itself. Victoria Richards, ‘Monkey Selfie: Judge Rules Macaque
Who Took Grinning Photograph of Himself “Cannot Own Copyright”’, The
Independent, 7 January 2016, /monkey-selfie-judge-rules-macaque-who-took-grinning-photograph-of-himself-
cannot-own-copyright-a6800471.html>

[64](ch3.xhtml#footnote-089-backlink) Anna Munster, ‘Techno-Animalities — the
Case of the Monkey Selfie’ (presented at the Goldsmiths University, London,
2016),

[65](ch3.xhtml#footnote-088-backlink) Sarah Kember, ‘Why Write?: Feminism,
Publishing and the Politics of Communication’, New Formations: A Journal of
Culture/Theory/Politics 83.1 (2014), 99–116.

Adema & Hall
The political nature of the book: on artists' books and radical open access
2013


Original citation: Adema, J. and Hall, G. (2013). The political nature of the book: on artists' books and radical open access. New Formations, volume 78 (1): 138-156, http://dx.doi.org/10.3898/NewF.78.07.2013

Abstract
In this article we argue that the medium of the book can be a material and
conceptual means, both of criticising capitalism’s commodification of knowledge (for
example, in the form of the commercial incorporation of open access by feral and
predatory publishers), and of opening up a space for thinking about politics. The
book, then, is a political medium. As the history of the artist’s book shows, it can be
used to question, intervene in and disturb existing practices and institutions, and even
offer radical, counter-institutional alternatives. If the book’s potential to question and
disturb existing practices and institutions includes those associated with liberal
democracy and the neoliberal knowledge economy (as is apparent from some of the
more radical interventions occurring today under the name of open access), it also
includes politics and with it the very idea of democracy. In other words, the book is a
medium that can (and should) be ‘rethought to serve new ends’; a medium through
which politics itself can be rethought in an ongoing manner.

Keywords: Artists’ books, Academic Publishing, Radical Open Access, Politics,
Democracy, Materiality

Janneke Adema is a PhD student at Coventry University, writing a dissertation on the
future of the scholarly monograph. She is the author of the OAPEN report Overview
of Open Access Models for eBooks in the Humanities and Social Sciences (2010) and
has published in The International Journal of Cultural Studies, New Media & Society,
New Review of Academic Librarianship; Krisis: Journal for Contemporary
Philosophy; Scholarly and Research Communication; and LOGOS; and co-edited a
living book on Symbiosis (Open Humanities Press, 2011). Her research can be
followed on www.openreflections.wordpress.com.

Gary Hall is Professor of Media and Performing Arts and Director of the Centre for
Disruptive Media at Coventry University, UK. He is author of Culture in Bits
(Continuum, 2002) and Digitize This Book! (Minnesota UP, 2008). His work has
appeared in numerous journals, including Angelaki, Cultural Studies, The Oxford
Literary Review, Parallax and Radical Philosophy. He is also founding co-editor of
the open access journal Culture Machine (http://www.culturemachine.net), and
co-founder of Open Humanities Press (http://www.openhumanitiespress.org). More
details are available on his website http://www.garyhall.info.

THE POLITICAL NATURE OF THE BOOK: ON ARTISTS’ BOOKS AND
RADICAL OPEN ACCESS

Janneke Adema and Gary Hall

INTRODUCTION

The medium of the book plays a double role in art and academia, functioning not only
as a material object but also as a concept-laden metaphor. Since it is a medium
through which an alternative future for art, academia and even society can be enacted
and imagined, materially and conceptually, we can even go so far as to say that, in its
ontological instability with regard to what it is and what it conveys, the book serves a
political function. In short, the book can be ‘rethought to serve new ends’. 1 At the
same time, the medium of the book remains subject to a number of constraints: in
terms of its material form, structure, characteristics and dimensions; and also in terms
of the political economies, institutions and practices in which it is historically
embedded. Consequently, if it is to continue to be able to serve ‘new ends’ as a
medium through which politics itself can be rethought – although this is still a big if –
then the material and cultural constitution of the book needs to be continually
reviewed, reevaluated and reconceived. In order to explore critically this 'political
nature of the book’, as we propose to think of it, along with many of the fundamental
ideas on which the book as both a concept and a material object is based, this essay
endeavours to demonstrate how developments undergone by the artist’s book in the
1960s and 1970s can help us to understand some of the changes the scholarly
monograph is experiencing now, at a time when its mode of production, distribution,
organisation and consumption is shifting from analogue to digital and from codex to
net. In what follows we will thus argue that a reading of the history of the artist’s
book can be generative for reimagining the future of the scholarly monograph, both
with respect to the latter’s potential form and materiality in the digital age, and with
respect to its relation to the economic system in which book production, distribution,
organisation and consumption takes place. Issues of access and experimentation are
crucial to any such future, we will suggest, if the critical potentiality of the book is to
remain open to new political, economic and intellectual contingencies.

1 Johanna Drucker, The Century of Artists' Books, 2nd ed., Granary Books, New York, 2004, p49.

THE HISTORY OF THE ARTIST’S BOOK

With the rise to prominence of digital publishing today, the material conditions of
book production, distribution, organisation and consumption are undergoing a rapid
and potentially profound transformation. The academic world is one arena in which
digital publishing is having a particularly strong impact. Here, the transition from
print to digital, along with the rise of self-publishing (Blurb, Scribd) and the use of
social media and social networks (Facebook, Twitter, Academia.edu) to communicate
and share scholarly research, has led to the development of a whole host of
alternative publication and circulation systems for academic thought and knowledge.

Nowhere have such changes to the material conditions of the academic book been
rendered more powerfully apparent than in the emergence and continuing rise to
prominence of the open access movement. With its exploration of different ways of
publishing, circulating and consuming academic work (specifically, more open,
Gratis, Libre ways of doing so), and of different systems for governing, reviewing,
accrediting and legitimising that work, open access is frequently held as offering a
radical challenge to the more established academic publishing industry. Witness the
recent positioning in the mainstream media, in terms of an 'Academic Spring', of the
boycott of those publishers of scholarly journals – Elsevier in particular – who charge
extremely high subscription prices and who refuse to allow authors to make their work
freely available online on an open access basis. Yet more potentially radical
still is the occupation of the new material conditions of academic book production,
distribution, organisation and consumption by those open access advocates who are
currently experimenting with the form and concept of the book, with a view to both
circumventing and placing in question the very print-based system of scholarly
communication – complete with its ideas of quality, stability and authority – on
which so much of the academic institution rests.

In the light of the above, our argument in this essay is that some of these more
potentially radical, experimental developments in open access book publishing can be
related on the level of political and cultural significance to transformations undergone
in a previous era by the artist’s book. As a consequence, the history of the latter can
help us to explore in more depth and detail than would otherwise be possible the
relation in open access between experimenting with the medium of the book on a
material and conceptual level on the one hand, and enacting political alternatives in a
broader sense on the other. Within the specific context of 1960s and 1970s
counterculture, the artist’s book was arguably able to fill a certain political void,
providing a means of democratising and subverting existing institutions by
distributing an increasingly cheap and accessible medium (the book), and in the
process using this medium in order to reimagine what art is and how it can be
accessed and viewed. While artists grasped and worked through that relation between
the political, conceptual and material aspects of the book several decades ago, thanks
to the emergence of open access online journals, archives, blogs, wikis and free text-sharing networks, one of the main places in which this relation is being explored today
is indeed in the realm of academic publishing. 2

In order to begin thinking through some of the developments in publishing that are
currently being delved into under the banner of open access, then, let us pause for a
moment to reflect on some of the general characteristics of those earlier experiments
with the medium of the book that were performed by artists. Listed below are six key
areas in which artists’ books can be said to offer guidance for academic publishing in
the digital age, not just on a pragmatic level but on a conceptual and political level
too.

2 The relation in academic publishing between the political, conceptual and material aspects of the book has of course been investigated at certain points in the past, albeit to varying degrees and extents. For one example, see the 'Working Papers' and other forms of stencilled gray literature that were produced and distributed by the Birmingham Centre for Contemporary Cultural Studies in the 1960s and 1970s, as discussed by Ted Striphas and Mark Hayward in their contribution to this issue.

1) The Circumvention of Established Institutions

According to the art theorist Lucy Lippard, the main reason the book has proved to be
so attractive as an artistic medium has to do with the fact that artists’ books are
'considered by many the easiest way out of the art world and into the heart of a
broader audience.’ 3 Books certainly became an increasingly popular medium of
artistic expression in Europe and the United States in the 1960s and 1970s. This was
largely due to their perceived potential to subvert the (commercial, profit-driven)
gallery system and to politicise artistic practice – to briefly introduce some of the
different yet, as we can see, clearly related arguments that follow – with the book
becoming a ‘democratic multiple’ that breached the walls held to be separating
so-called high and low culture. Many artist-led and artist-controlled initiatives, such as
US-based Franklin Furnace, Printed Matter and Something Else Press, were
established during this period to provide a forum for artists excluded from the
traditional institutions of the gallery and the museum. Artists’ books played an
extremely important part in the rise of these independent art structures and publishing
ventures. 4 Indeed, for many artists such books embodied the ideal of being able to
control all aspects of their work.

Yet this movement toward liberating themselves from the gallery system by
publishing and exhibiting in artists’ books was by no means an easy transition for
many artists to make. In particular, it required them to come to terms with the idea
that publishing their own work did not amount to mere vanity self-publishing. Moore
and Hendricks describe this state of affairs in terms of the power and potential of ‘the
page as an alternative space’. 5 From this perspective, producing, publishing and
distributing one’s own artist’s book was a sign of autonomy and independence; it was
nothing less than a way of being able to affect society directly. 6 The political potential
associated with the book by artists should therefore not be underestimated.
Accordingly, many artists created their own publishing imprints or worked together
with newly founded artist’s book publishers and printers (just as some academics are
today challenging the increasingly profit-driven publishing industry by establishing
not-for-profit, scholar-led, open access journals and presses). The main goal of these
independent (and often non-commercial) publisher-printer-artist collectives was to
make experimental, innovative work (rather than generate a profit), and to promote
ephemeral art works, which were often ignored by mainstream, mostly
market-orientated institutions. 7 Artists’ books thus fitted in well with the mythology
Johanna Drucker describes as surrounding ‘activist artists’, and especially with the
idea of the book as a tool of independent activist thought. 8

3. Lucy R. Lippard, ‘The Artist’s Book Goes Public’, in Joan Lyons (ed), Artists’ Books: A
Critical Anthology and Sourcebook, Rochester, New York: Visual Studies Workshop Press,
1993, p45.
4. Joan Lyons, ‘Introduction’, in Lyons (ed), Artists’ Books, p7.
5. Hendricks and Moore, ‘The Page as Alternative Space: 1950 to 1969’, in Lyons (ed),
Artists’ Books, p87.
6. Pavel Büchler, ‘Books as Books’, in Jane Rolo and Ian Hunt (eds), Book Works: A Partial
History and Sourcebook, London: Book Works, 1996.
7. Clive Phillpot, ‘Some Contemporary Artists and Their Books’, in Cornelia Lauf and Clive
Phillpot (eds), Artist/Author: Contemporary Artists’ Books, New York: Distributed Art
Publishers, 1998, pp128-9.
8. Drucker, The Century of Artists’ Books, pp7-8.

2) The Relationship with Conceptual and Processual Art
In the context of this history of the artist’s book, one particularly significant
conceptual challenge to the gallery system came with the use of the book as a
platform for exhibiting original work (itself an extension of André Malraux’s idea of
the museum without walls). Curator Seth Siegelaub was among the first to publish his
artists – as opposed to exhibiting them – thus becoming, according to Germano
Celant, ‘the first to allow complete operative and informative liberty to artists’. 9 The
Xerox Book and March 1-31, 1969, featuring work by Sol LeWitt, Robert Barry,
Douglas Huebler, Joseph Kosuth, Lawrence Weiner and other international artists, are
both examples of artists’ books where the book (or the catalogue) itself is the
exhibition. As Moore and Hendricks point out, this offered all kinds of benefits when
compared with traditional exhibitions: ‘This book is the exhibition, easily
transportable without the need for expensive physical space, insurance, endless
technical problems or other impediments. In this form it is relatively permanent and,
fifteen years later, is still being seen by the public.’ 10 Artists’ books thus served here
as an alternative space in themselves and at the same time functioned within a
network of alternative spaces, such as the above-mentioned Franklin Furnace
and Printed Matter. In addition to publishing and supporting artists’ books, such
venues offered a space for staging often highly politicised, critical, experimental and
performance art. 11 It is important to emphasise this aspect of artist book publishing, as
it shows that the book was used as a specific medium to exhibit works that could not
otherwise readily find a place within mainstream exhibition venues (a situation which,
as we will show, has been one of the main driving forces behind open access book
publishing). This focus on the book as a place for continual experimentation – be it on
the level of content or form – can thus be seen as underpinning what we are referring
to here as the ‘political nature of the book’ (playing on the title of Adrian Johns’
classic work of book history). 12

9. Germano Celant, Book as Artwork 1960-1972, New York: 6 Decades Books, 2011, p40.
10. Hendricks and Moore, ‘The Page as Alternative Space: 1950 to 1969’, p94.
11. Brian Wallis, ‘The Artist’s Book and Postmodernism’, in Cornelia Lauf and Clive
Phillpot (eds), Artist/Author, 1998.
12. Adrian Johns, The Nature of the Book: Print and Knowledge in the Making, Chicago:
University of Chicago Press, 1998.

3) The Use of Accessible Technologies
As is the case with the current changes to the scholarly monograph, the rise of artists’
books can be perceived to have been underpinned (though by no means determined)
by developments in technology, with the revolution in mimeograph and offset
printing helping to take artists’ books out of the realm of expensive and rare
commodities by providing direct access to quick and inexpensive printing
methods. 13 Due to its unique characteristics – low production costs, portability,
accessibility and endurance – the artist’s book was regarded as having the potential to
communicate with a wider audience beyond the traditional art world. In particular, it
was seen as having the power to break down the barriers between so-called high and
low culture, using the techniques of mass media to enable artists to argue for their
own, alternative goals, something that presented all kinds of political possibilities. 14
The artist’s book thus conveyed a high degree of artistic autonomy,
while also offering a far greater role to the reader or viewer, who was now able to
interact with the art object directly (eluding the intermediaries of the gallery and
museum system). Indeed, Lippard even went so far as to envision a future where
artists’ books would be readily available as part of mass consumer culture, at
‘supermarkets, drugstores and airports’. 15

13. Hendricks and Moore, ‘The Page as Alternative Space’, pp94-95.
14. Joan Lyons, ‘Introduction’, in Lyons (ed), Artists’ Books, p7.
15. Lippard, ‘The Artist’s Book Goes Public’, p48; Lippard, ‘Conspicuous Consumption:
New Artists’ Books’, in Lyons (ed), Artists’ Books, p100. Is there a contradiction here
between a politics of artists’ books that is directed against commercial profit-driven galleries
and institutions, but which nevertheless uses the tools of mass consumer culture to reach a
wider audience (see also the critique Lippard offers in the next section)? And can a similar
point be made with respect to the politics of some open access initiatives and their use of
social media and (commercial, profit-driven) platforms such as Google Books and Amazon?

4) The Politics of the Democratic Multiple
The idea of the book as a real democratic multiple came into being only after 1945, a
state of affairs that was facilitated by a number of technological innovations,
including those detailed above. Yet the concept of the democratic multiple itself
developed in what was already a climate of political activism and social
consciousness. In this respect, the democratic multiple was part of both the overall
trend toward the dematerialization of art and the newly emergent emphasis on cultural
and artistic processes rather than ready-made objects. 16 Artists’ desire for
independence from established institutions and for the wider availability of their
works thus resonated with the democratising and anti-institutional potential of the
book as a medium. What is more, the book offered artists a space in which they were
able to experiment with the materiality of the medium itself and with the practices
that comprised it, and thus ultimately with the question of what constituted art and an
art object. This reflexivity of the book with regard to its own nature is one of the key
characteristics that make a book an artist’s book, and enable it to have political
potential in that it can be ‘rethought to serve new ends’. Much the same can be said
with respect to the relation between the book and scholarly communication: witness
the way reflection on the material nature of the book in the digital age has led to
questions being raised regarding how we structure scholarly communication and
practice scholarship more generally.

16. Drucker, The Century of Artists’ Books, p72.

5) Conceptual Experimentation: Problematising the Concept and Form of the Book
Another key to understanding artists’ books and their history lies with the way the
radical change in printing technologies after World War II led to the reassessment of
the book form itself, and in particular, of the specific nature of the book’s materiality,
of the very idea of the book, and of the notions and practices underlying the book’s
various uses.

When it came to reevaluating the materiality of the book, many experiments with
artists’ books tried to escape the linearity brought about by the codex form’s
(sequential) constraints, something which had long conditioned both writing and
reading practices. Undoubtedly, one of the most important theorists as far as
rethinking the materiality of the book in the period after 1945 is concerned is Ulises
Carrión. He defines the book as a specific set of conditions that should be (or need to
be) responded to. 17 Instead of seeing it as just a text, Carrión positions the book as an
object, a container and a sequence of spaces. For him, the codex is a form that needs
to be responded to in what he prefers to call ‘bookworks’. These are ‘books in which
the book form, as a coherent sequence of pages, determines conditions of reading that
are intrinsic to the work.’ 18 From this perspective, artists’ books interrogate the
structure and the meaning of the book’s form. 19

Yet the book is also a metaphor, a symbol and an icon to be responded to. 20 Indeed, it
is difficult to establish a precise definition or set of characteristics for artists’ books as
their very nature keeps changing. As Sowden and Bodman put it, ‘What a book is can
be challenged’. 21 Drucker, meanwhile, is at pains to point out that the book is open
for innovation, although the latter has its limits: ‘The convention of the book is both
its constrained meanings (as literacy, the law, text and so forth) and the space of new
work (the blank page, the void, the empty place).’ Books here ‘mutate, expand,
transform’. Accordingly, Drucker regards the transformed book as an intervention,
something that reflects the inherent critique that book experiments embody with
respect to their own constitution. 22 One way of examining reflexively the structures
that make up the book is precisely by disturbing those structures. In certain respects
the page can be thought of as being finite (e.g. physically, materially), but it can also
be understood to be infinite, not least as a result of being potentially different on each
respective viewing/reading. This allows the book to be perceived as a self-reflexive
medium that is extremely well-suited to formal experiments. At the same time, it
allows it to be positioned as a potentially political medium, in the sense that it can be
used to intervene in and disturb existing practices and institutions.

17. James Langdon (ed), Book, Birmingham: Eastside Projects, 2010.
18. Ulises Carrión, ‘Bookworks Revisited’, in James Langdon (ed), Book, Birmingham:
Eastside Projects, 2010.
19. Drucker, The Century of Artists’ Books, pp3-4.
20. Ibid., p360.
21. Tom Sowden and Sarah Bodman, A Manifesto for the Book, Impact Press, 2010, p9.

6) The Problematisation of Reading and Authorship
As part of their constitution, artists’ books can be said to have brought into question
certain notions and practices relating to the book that had previously been taken too
much for granted – and perhaps still are. For instance, Brian Wallis shows how, ‘in
place of the omnipotent author’, postmodern artists’ books ‘acknowledge a
collectivity of voices and active participation of the reader’. 23 Carrión, for one, was
very concerned with the thought that readers might consume books passively, while
being unaware of their specificity as a medium. 24 The relationship between the book
and reading, and the way in which the physical aspect of the book can change how we
read, was certainly an important topic for artists throughout this period. Many
experiments with artists’ books focused on the interaction between author, reader and
book, offering an alternative, and not necessarily linear, reading experience. 25 Such
readerly interventions often represented a critical engagement with ideas of the author
as original creative genius derived from the cultural tradition of European
Romanticism. Joan Lyons describes this potential of the artist’s book very clearly:
‘The best of the bookworks are multinotational. Within them, words, images, colors,
marks, and silences become plastic organisms that play across the pages in variable
linear sequence. Their importance lies in the formulation of a new perceptual
literature whose content alters the concept of authorship and challenges the reader to a
new discourse with the printed page.’ 26 Carrión thus writes about how in the books of
the new art, as he calls them, words no longer transmit an author’s intention. Instead,
authors can use other people’s words as an element of the book as a whole – so much
so that he positions plagiarism as lying at the very basis of creativity. As far as artists’
books are concerned, it is not the artist’s intention that is at stake, according to
Carrión, but rather the process of testing the meaning of language. It is the reader who
creates the meaning and understanding of a book for Carrión, through his or her
specific meaning-extraction. Every book requires a different reading and opens up
possibilities to the reader. 27

22. Drucker, The Century of Artists’ Books.
23. Lucy Lippard and John Chandler, ‘The Dematerialization of Art’, Art International,
12, 2 (1968).
24. Langdon, Book.

THE INHIBITIONS OF MEDIATIC CHANGE

We can thus see that the very ‘nature’ of the book is particularly well suited to
experimentation and to reading against the grain. As a medium, the book has the
potential to raise questions for some of the established practices and institutions
surrounding the production, distribution and consumption of printed matter. This
potential notwithstanding, it gradually became apparent (for some this realisation
occurred during the 1960s and 1970s, for others it only came about later) that the
ability of artists’ books to bring about institutional change in the art world, and to
question both the concept of the book and that of art as the singular aesthetic artefact
bolstered by institutional structures, was not particularly long-lasting. With respect to
the democratization of the artist’s book, for example, Lippard notes that, in losing its
distance, the book also ran the risk of losing its critical function. Here, says
Lippard, the ‘danger is that, with an expanding audience and an increased popularity
with collectors, the artist’s book will fall back into its edition de luxe or coffee table
origin … transformed into glossy, pricey products.’ For Lippard there is a discrepancy
between the characteristics of the medium which had the potential to break down
walls, and the actual content and form of most artists’ books which was highly
experimental and avant-garde, and thus inaccessible to readers/consumers outside of
the art world. 28

25. This has been one of the focal points of the books published and commissioned by UK
artist book publisher Book Works, for instance. Jane Rolo and Ian Hunt, ‘Introduction’, in
Book Works: A Partial History and Sourcebook, op. cit.
26. Joan Lyons, ‘Introduction’, p7.
27. Ulises Carrión, ‘The New Art of Making Books’, in James Langdon (ed), Book,
Birmingham: Eastside Projects, 2010.
28. Lippard, ‘The Artist’s Book Goes Public’, pp47-48.

PROCESSES OF INCORPORATION AND COMMERCIALISATION

Interestingly, Carrión was one of the sharpest critics of the idea that artists’ books
should be somehow able to subvert the gallery system. In his ‘Bookworks Revisited’,
he showed how the hope surrounding this supposedly revolutionary potential of the
book as a medium was based on a gross misunderstanding of the mechanisms
underlying the art world. In particular, Carrión attacked the idea that the artist’s book
could do without any intermediaries. Instead of circumventing the gallery system, he
saw book artists as merely adopting an alternative set of intermediaries, namely book
publishers and critics. 29

Ten years later Stewart Cauley updated Carrión’s criticisms, arguing that as an art
form and medium, the artist’s book had not been able to avoid market mechanisms
and the celebrity cult of the art system. In fact, by the end of the 1980s the field of
artists’ publications had lost most of its experimental impetus and had become
something of an institution itself, imitating the gallery and museum system it was
initially designed to subvert. 30 Those interested in artists’ books initially found it
difficult to set up an alternative system, as they had to manage without organized
distribution, review mechanisms or funding schemes. When they were eventually able
to do so in the 1970s, the resulting structures in many ways mirrored the very
institutions they were supposed to be criticizing and providing an alternative to. 31
Cauley points the finger of blame at the book community itself, especially at the fact
that artists at the time focused more on the concept and structure of the book than on
using the book form to make any kind of critical political statement. The idea that
artists’ books were disconnected from mainstream institutional systems has also been
debunked as a myth. As Drucker makes clear, many artists’ books were developed in
cooperation with museums or galleries, where they were perceived not as subversive
artefacts but rather as low-cost tools for gathering additional publicity for those
institutions and their activities. 32

29. Carrión, ‘Bookworks Revisited’; Johanna Drucker, ‘Artists’ Books and the Cultural
Status of the Book’, Journal of Communication, 44 (1994).
30. Stewart Cauley, ‘Bookworks for the ’90s’, Afterimage, 25, 6, May/June (1998).
31. Stefan Klima, Artists Books: A Critical Survey of the Literature, New York: Granary
Books, 1998, pp54-60.
32. Drucker, The Century of Artists’ Books, p78.

Following Abigail Solomon-Godeau, this process of commercialisation and
incorporation – or, as she calls it, ‘the near-total assimilation’ of art practice
(Solomon-Godeau focuses specifically on postmodern photography) and critique into
the discourses it professed to challenge – can be positioned as part of a general
tendency in conceptual and postmodern ‘critical art practices’. It is a development that
can be connected to the changing art markets of the time and viewed in terms of a
broader social and cultural shift to Reaganomics. For Solomon-Godeau, however, the
problem lay not only in changes to the art market, but in critical art practices and art
critique too, which in many ways were not robust enough to keep on reinventing
themselves. Nonetheless, even if they have become incorporated into the art market
and the commodity system, Solomon-Godeau argues that it is still possible for art
practices and institutional critiques to develop some (new) forms of sustainable
challenge from within these systems. As far as she is concerned, ‘a position of
resistance can never be established once and for all, but must be perpetually
refashioned and renewed to address adequately those shifting conditions and
circumstances that are its ground.’ 33

33. Abigail Solomon-Godeau, ‘Living with Contradictions: Critical Practices in the Age of
Supply-Side Aesthetics’, Social Text, 21 (1989).

THE PROMISE OF OPEN ACCESS

At first sight many of the changes that have occurred recently in the world of
academic book publishing seem to resemble those charted above with respect to the
artist’s book. As was the case with the publishing of artists’ books, digital publishing
has provided interested parties with an opportunity to counter the existing
(publishing) system and its institutions, to experiment with using contemporary and
emergent media to publish (in this case academic) books in new ways and forms, and
in the process to challenge established ideas of the printed codex book, together with
the material practices of production, distribution and consumption that surround it.
This has resulted in a new wave of scholar-led publishing initiatives in academia, both
formal (with scholars either becoming publishers themselves, or setting up
cross-institutional publishing infrastructures with libraries, IT departments and research
groups) and informal (using self-publishing and social media platforms such as blogs
and wikis). 34 The phenomenon of open access book publishing can be located within
this broader context – a context which, it is worth noting, also includes the closing of
many book shops due to fierce rivalry from the large supermarkets at one end of the
market, and online e-book traders such as Amazon at the other; the fact that the major
high-street book chains are increasingly loath to take academic titles - not just
journals but books too; and the handing over (either in part or in whole) to for-profit
corporations of many publishing organisations designed to serve charitable aims and
the public good: scholarly associations, learned societies, university presses,
non-profit and not-for-profit publishers.

34. See, for example, Janneke Adema and Birgit Schmidt, ‘From Service Providers to
Content Producers: New Opportunities For Libraries in Collaborative Open Access Book
Publishing’, New Review of Academic Librarianship, 16 (2010).

From the early 1990s onwards, open access was pioneered and developed most
extensively in the science, technology, engineering and mathematics (STEM) fields,
where much of the attention was focused on the online self-archiving by scholars of
pre-publication (i.e. pre-print) versions of their research papers in central, subject or
institutionally-based repositories. This is known as the Green Road to open access, as
distinct from the Gold Road, which refers to the publishing of articles in online, open
access journals. Of particular interest in this respect is the philosophy that lies behind
the rise of the open access movement, as it can be seen to share a number of
characteristics with the thinking behind artists’ books discussed earlier. The former
was primarily an initiative established by academic researchers, librarians, managers
and administrators, who had concluded that the traditional publishing system – thanks
in no small part to the rapid (and, as we shall see, ongoing) process of aggressive
for-profit commercialisation it was experiencing – was no longer willing or able to meet
all of their communication needs. Accordingly, those behind this initiative wanted to
take advantage of the opportunities they saw as being presented by the new digital
publishing and distribution mechanisms to make research more widely and easily
available in a far faster, cheaper and more efficient manner than was offered by
conventional print-on-paper academic publishing. They had various motivations for
doing so. These include wanting to extend the circulation of research to all those who
were interested in it, rather than restricting access to merely those who could afford to
pay for it in the form of journal subscriptions, etc.; 35 and a desire to promote the
emergence of a global information commons, and, through this, help to produce a
renewed democratic public sphere of the kind Jürgen Habermas propounds. From the
latter point of view (as distinct from the more radical democratic philosophy we
proceed to develop in what follows), open access was seen as working toward the
creation of a healthy liberal democracy, through its alleged breaking down of the
barriers between the academic community and the rest of society, and its perceived
consequent ability to supply the public with the information they need to make
knowledgeable decisions and actively contribute to political debate. Without doubt,
though, another motivating factor behind the development of open access was a desire
on the part of some of those involved to enhance the transparency, accountability,
discoverability, usability, efficiency and (cost) effectivity not just of scholarship and
research but of higher education itself. From the latter perspective (and as can again
be distinguished from the radical open access philosophy advocated below), making
research available on an open access basis was regarded by many as a means of
promoting and stimulating the neoliberal knowledge economy both nationally and
internationally. Open access is supposed to achieve these goals by making it easier for
business and industry to capitalise on academic knowledge - companies can build new
businesses based on its use and exploitation, for example - thus increasing the impact
of higher education on society and helping the UK, Europe and the West (and North)
to be more competitive globally. 36

35. John Willinsky, The Access Principle: The Case for Open Access to Research and
Scholarship, Cambridge, Mass.: The MIT Press, 2009, p5.
36. Gary Hall, Digitize This Book! The Politics of New Media, or Why We Need Open
Access Now, Minneapolis: University of Minnesota Press, 2008; Janneke Adema, Open
Access Business Models for Books in the Humanities and Social Sciences: An Overview of
Initiatives and Experiments, OAPEN Project Report, Amsterdam, 2010. David Willetts, the
UK Science Minister, is currently promoting ‘author-pays’ open access for just these
reasons. See David Willetts, ‘Public Access to Publicly-Funded Research’, BIS: Department
for Business, Innovation and Skills, May 2, 2012:
https://www.gov.uk/government/speeches/public-access-to-publicly-funded-research--2

To date, the open access movement has progressed much further toward its goal of
making all journal articles available open access than it has toward making all
academic books available in this fashion. There are a number of reasons why this is
the case. First, since the open access movement was developed and promoted most
extensively in the STEMs, it has tended to concentrate on the most valued mode of
publication in those fields: the peer-reviewed journal article. Interestingly, the recent
arguments around the ‘Academic Spring’ and ‘feral’ publishers such as Informa plc
are no exception to this general rule. 37

37. David Harvie, Geoff Lightfoot, Simon Lilley and Kenneth Weir, ‘What Are We To Do
With Feral Publishers?’, submitted for publication in Organization, and accessible through
the Leicester Research Archive: http://hdl.handle.net/2381/9689.

Second, restrictions to making research available open access associated with
publishers’ copyright and licensing agreements can in most cases be legally
circumvented when it comes to journal articles. If all other options fail, authors can
self-archive a pre-refereed pre-print of their article in a central, subject or
institutionally-based repository such as PubMed Central. However, it is not so easy to
elude such restrictions when it comes to the publication of academic books. In the
latter case, since the author is often paid royalties in exchange for their text, copyright
tends to be transferred by the author to the publisher. The text remains the intellectual
property of the author, but the exclusive right to put copies of that text up for sale, or
give them away for free, then rests with the publisher. 38

38. See the Budapest Open Access Initiative, ‘Self-Archiving FAQ, written for the Budapest
Open Access Initiative (BOAI)’, 2002-4: http://www.eprints.org/self-faq/.

Another reason the open access movement has focused on journal articles is because
of the expense involved in publishing books in this fashion, since one of the main
models of funding open access in the STEMs, author-side fees, 39 is not easily
transferable either to book publishing or to the Humanities and Social Sciences
(HSS). In contrast to the STEMs, the HSS feature a large number of disciplines in
which it is books (monographs in particular) published with esteemed international
presses, rather than articles in high-ranking journals, that are considered as the most
significant and valued means of scholarly communication. Authors in many fields in
the HSS are simply not accustomed to paying to have their work published. What is
more, many authors associate doing so with vanity publishing. 40 They are also less
likely to acquire the grants from either funding bodies or their institutions that are
needed to cover the cost of publishing ‘author-pays’. That the HSS in many Western
countries receive only a fraction of the amount of government funding the STEMs do
only compounds the problem, 41 as does the fact that higher rejection rates in the HSS,
as compared to the STEMs, mean that any grants would have to be significantly
larger, as the time spent on reviewing articles, and hence the amount of human labour
used, makes it a much more intensive process. 42 And that is just to publish journal
articles. Publishing books on an author-pays basis would be more expensive still.

39. Although ‘author-pays’ is often positioned as the main model of funding open access
publication in the STEMs, a lot of research has disputed this fact. See, for example, Stuart
Shieber, ‘What Percentage of Open-Access Journals Charge Publication Fees’, The
Occasional Pamphlet on Scholarly Publishing, May 29, 2009:
http://blogs.law.harvard.edu/pamphlet/2009/05/29/what-percentage-of-open-access-journals-charge-publication-fees/.
40. Maria Bonn, ‘Free Exchange of Ideas: Experimenting with the Open Access
Monograph’, College and Research Libraries News, 71, 8, September (2010) pp436-439:
http://crln.acrl.org/content/71/8/436.full.
41. Patrick Alexander, director of the Pennsylvania State University Press, provides the
following example: ‘Open Access STEM publishing is often funded with tax-payer dollars,
with publication costs built into researchers’ grant request… the proposed NIH budget for
2013 is $31 billion. NSF’s request for 2013 is around $7.3 billion. Compare those amounts
to the NEH ($154 million) and NEA ($154 million) and you can get a feel for why
researchers in the arts and humanities face challenges in funding their publication costs.’
(Adeline Koh, ‘Is Open Access a Moral or a Business Issue? A Conversation with The
Pennsylvania State University Press’, The Chronicle of Higher Education, July 10, 2012:
http://chronicle.com/blogs/profhacker/is-open-access-a-moral-or-a-business-issue-a-conversation-with-the-pennsylvania-state-university-press/41267)
42. See Mary Waltham’s 2009 report for the National Humanities Alliance, ‘The Future of
Scholarly Journals Publishing among Social Sciences and Humanities Associations’:
http://www.nhalliance.org/research/scholarly_communication/index.shtml; and Peter Suber,
‘Promoting Open Access in the Humanities’, 2004:
http://www.earlham.edu/~peters/writing/apa.htm. ‘On average, humanities journals have
higher rejection rates (70-90%) than STEM journals (20-40%)’, Suber writes.

Yet even though the open access movement initially focused more on journal articles
than on monographs, things have begun to change in this respect in recent years.
Undoubtedly, one of the major factors behind this change has been the fact that the

publication of books on an open access basis has been perceived as one possible
answer to the ‘monograph crisis’. This phrase refers to the way in which the already
feeble sustainability of the print monograph is being endangered even further by the
ever-declining sales of academic books. 43 It is a situation that has in turn been brought
about by ‘the so-called “serials crisis”, a term used to designate the vertiginous rise of
the subscription to STEM journals since the mid-80s which… strangled libraries and
led to fewer and fewer purchases of books/monographs.’ 44 This drop in library
demand for monographs has led many presses to produce smaller print runs; focus on
more commercial, marketable titles; or even move away from monographs to
concentrate on text books, readers, and reference works instead. In short, conventional
academic publishers are now having to make decisions about what to publish more on
the basis of the market and a given text’s potential value as a commodity, and less on
the basis of its quality as a piece of scholarship. This last factor is making it difficult
for early career academics to publish the kind of research-led monographs that are
often needed to acquire that all-important first full-time position. This in turn means
the HSS is, in effect, allowing publishers to make decisions on its future and on who
gets to have a long-term career on an economic basis, according to the needs of the
market – or what they believe those needs to be. But it is also making it hard for
authors in the HSS generally to publish monographs that are perceived as being
difficult, advanced, specialized, obscure, radical, experimental or avant-garde - a
situation reminiscent of the earlier state of affairs which led to the rise of artists’
books, with the latter emerging in the context of a perceived lack of exhibition space
for experimental and critical (conceptual) work within mainstream commercial
galleries.

43. Greco and Wharton estimate that the average number of library purchases of
monographs has dropped from 1500 in the 1970s to 200-300 at present. Thompson
estimates that print runs and sales have declined from 2000-3000 in the 1970s to print runs
of between 600-1000 and sales of between 400-500 nowadays. Albert N. Greco and Robert
Michael Wharton, ‘Should University Presses Adopt an Open Access [electronic publishing]
Business Model for all of their Scholarly Books?’, ELPUB. Open Scholarship: Authority,
Community, and Sustainability in the Age of Web 2.0 – Proceedings of the 12th
International Conference on Electronic Publishing held in Toronto, Canada 25-27 June
2008; John B. Thompson, Books in the Digital Age: The Transformation of Academic and
Higher Education Publishing in Britain and the United States, Cambridge: Polity Press, 2005.
44. Jean Kempf, ‘Social Sciences and Humanities Publishing and the Digital “Revolution”’,
unpublished manuscript, 2010: http://perso.univ-lyon2.fr/~jkempf/Digital_SHS_Publishing.pdf;
Thompson, Books in the Digital Age, pp93-94.

Partly in response to this ‘monograph crisis’, a steadily increasing number of
initiatives have now been set up to enable authors in the HSS in particular to bring out
books open access – not just introductions, reference works and text books, but
research monographs and edited collections too. These initiatives include scholar-led
presses such as Open Humanities Press, re.press, and Open Book Publishers;
commercial presses such as Bloomsbury Academic; university presses, including
ANU E Press and Firenze University Press; and presses established by or working
with libraries, such as Athabasca University’s AU Press. 45

45. A list of publishers experimenting with business models for OA books is available at:
http://oad.simmons.edu/oadwiki/Publishers_of_OA_books. See also Adema, Open Access
Business Models.

Yet important though the widespread aspiration amongst academics, librarians and
presses to find a solution to the monograph crisis has been, the reasons behind the
development of open access book publishing in the HSS are actually a lot more
diverse than is often suggested. For instance, to the previously detailed motivating
factors that inspired the rise of the open access movement can be added the desire,
shared by many scholars, to increase accessibility to (specialized) HSS research, with
a view to heightening its reputation, influence, impact and esteem. This is seen as
being especially significant at a time when the UK government, to take just one
example, is emphasizing the importance of the STEMs while withdrawing support
and funding for the HSS. Many scholars in the HSS are thus now willing to stand up
against, and even offer a counter-institutional alternative to, the large, established,
profit-led, commercial firms that have come to dominate academic publishing – and,
in so doing, liberate the long-form argument from market constraints through the
ability to publish books that often lack a clear commercial market.

TWO STRATEGIES: ACCESSIBILITY AND EXPERIMENTATION

That said, all of these reasons and motivating factors behind the recent changes in
publishing models are still very much focused on making more scholarly research
more accessible. Yet for at least some of those involved in the creation and
dissemination of open access books, doing so also constitutes an important stage in
the development of what might be considered more ‘experimental’ forms of research
and publication; forms for which commercial and heavily print-based systems of
production and distribution have barely provided space. Such academic experiments
are thus perhaps capable of adopting a role akin to, if not the exact equivalent of, that which
we identified artists’ books as having played in the countercultural context of the
1960s and 1970s: in terms of questioning the concept and material form of the book;
promoting alternative ways of reading and communicating via books; and
interrogating modern, romantic notions of authorship. We are thinking in particular of
projects that employ open peer-review procedures (such as Kathleen Fitzpatrick’s
Planned Obsolescence, which uses the CommentPress Wordpress plugin to enable
comments to appear alongside the main body of the text), wikis (e.g. Open
Humanities Press’ two series of Liquid and Living Books) and blogs (such as those
created using the Anthologize app developed at George Mason University). 46 These
enable varying degrees of what Peter Suber calls ‘author-side openness’ when it
comes to reviewing, editing, changing, updating and re-using content, including
creating derivative works. Such practices pose a conceptual challenge to some of the
more limited interpretations of open access (what has at times been dubbed ‘weak
open access’), 47 and can on occasion even constitute a radical test of the integrity and
identity of a given work, not least by enabling different versions to exist
simultaneously. In an academic context this raises questions of both a practical and
theoretical nature that have the potential to open up a space for reimagining what
counts as scholarship and research, and of how it can be responded to and accessed:
not just which version of a work is to be cited and preserved, and who is to have
ultimate responsibility for the text and its content; but also what an author, a text, and
a work actually is, and where any authority and stability that might be associated with
such concepts can now be said to reside.

46. See http://mediacommons.futureofthebook.org/mcpress/plannedobsolescence;
http://liquidbooks.pbwiki.com/; http://www.livingbooksaboutlife.org/; http://anthologize.org/.
47. See Peter Suber, SPARC OA newsletter, issue 155, March 2, 2011:
http://www.earlham.edu/~peters/fos/newsletter/03-02-11.htm

It is interesting, then, that two of the major driving forces behind the recent upsurge
of interest in open access book publishing often seem, as ‘projects’, to be rather
disconnected: on the one hand, the at times more obviously or overtly ‘political’ (be it
liberal-democratic, neoliberal or otherwise) project of using digital media and the
Internet to create wider access to book-based research; on the other, the more
conceptual, experimental project of experimenting with the form of the book and with
the way our dominant system of scholarly communication currently operates (a
combination we identified above as being essential to the experimental and political
potential of artists’ books). Again, a useful comparison can be made to the situation
described by Lippard, where more (conceptually or materially) experimental artists’
books were seen as being less accessible to a broader public and, in some cases, as
going against the strategy of democratic multiples, promoting exclusivity instead.

It is certainly the case that, in order to further the promotion of open access and
achieve higher rates of adoption and compliance among the academic community, a
number of strategic alliances have been forged between the various proponents of the
open access movement. Some of these alliances (those associated with Green open
access, for instance) have taken making the majority if not indeed all of the research
accessible online without a paywall (Gratis open access) 48 as their priority, perhaps
with the intention of moving on to the exploration of other possibilities, including
those concerned with experimenting with the form of the book, once critical mass has
been attained – but perhaps not. Hence Stevan Harnad’s insistence that ‘it’s time to
stop letting the best get in the way of the better: Let’s forget about Libre and Gold OA
until we have managed to mandate Green Gratis OA universally.’ 49 Although they
cannot be simply contrasted and opposed to the former (often featuring many of the
same participants), other strategic alliances have focused more on gaining the trust of
the academic community. Accordingly, they have prioritized allaying many of the
anxieties with regard to open access publications – including concerns regarding their
quality, stability, authority, sustainability and status with regard to publishers’
copyright licenses and agreements – that have been generated as a result of the
transition toward the digital mode of reproduction and distribution. More often than
not, such alliances have endeavoured to do so by replicating in an online context
many of the scholarly practices associated with the world of print-on-paper
publishing. Witness the way in which the majority of open access book publishers
continue to employ more or less the same quality control procedures, preservation
structures and textual forms as their print counterparts: pre-publication peer review
conducted by scholars who have already established their reputations in the paper
world; preservation carried out by academic libraries; monographs consisting of
numbered pages and chapters arranged in a linear, sequential order and narrative, and
so on. As Sigi Jöttkandt puts it with regard to the strategy of Open Humanities Press
in this respect:

We’re intending OHP as a tangible demonstration to our still generally
sceptical colleagues in the humanities that there is no reason why OA
publishing cannot have the same professional standards as print. We aim to
show that OA is not only academically credible but is in fact being actively
advanced by leading figures in our fields, as evidenced by our editorial
advisory board. Our hope is that OHP will contribute to OA rapidly becoming
standard practice for scholarly publishing in the humanities. 50

48. For an overview of the development of these terms, see:
http://www.arl.org/sparc/publications/articles/gratisandlibre.shtml
49. Stevan Harnad, ‘Open Access: Gratis and Libre’, Open Access Archivangelism, May 3,
2012.
50. Sigi Jöttkandt, ‘No-fee OA Journals in the Humanities, Three Case Studies: A
Presentation by Open Humanities Press’, presented at the Berlin 5 Open Access Conference:
From Practice to Impact: Consequences of Knowledge Dissemination, Padua, September 19,
2007: http://openhumanitiespress.org/Jottkandt-Berlin5.pdf

Relatively few open access publishers, however, have displayed much interest in
combining such an emphasis on achieving universal, free, online access to research
and/or the gaining of trust, with a rigorous critical exploration of the form of the book
itself. 51 And this despite the fact that the ability to re-use material is actually an
essential feature of what has become known as the Budapest-Bethesda-Berlin (BBB)
definition of open access, which is one of the major agreements underlying the
movement. 52 It therefore seems significant that, of the books presently available open
access, only a minority have a license where price and permission barriers to research
are removed, with the result that the research is available under both Gratis and Libre
(re-use) conditions. 53

51. Open Humanities Press (http://openhumanitiespress.org/) and Media Commons Press
(http://mediacommons.futureofthebook.org/mcpress/) remain the most notable exceptions on
the formal side of the publishing scale, the majority of experiments with the form of the
book taking place in the informal sphere (e.g. blogbooks self-published by Anthologize, and
crowd-sourced, ‘sprint’ generated books such as Dan Cohen and Tom Scheinfeldt’s Hacking
the Academy: http://hackingtheacademy.org/).
52. See Peter Suber on the BBB definition here:
http://www.earlham.edu/~peters/fos/newsletter/09-02-04.htm, where he also states that two
of the three BBB component definitions (the Bethesda and Berlin statements) require
removing barriers to derivative works.
53. An examination of the licenses used on two of the largest open access book publishing
platforms or directories to date, the OAPEN (Open Access Publishing in Academic
Networks) platform and the DOAB (Directory of Open Access Books), reveals that on the
OAPEN platform (accessed May 6th 2012) 2 of the 966 books are licensed with a CC-BY
license, and 153 with a CC-BY-NC license (which still restricts commercial re-use). On the
DOAB (accessed May 6th 2012) 5 of the 778 books are licensed with a CC-BY license, 215
with CC-BY-NC.

REIMAGINING THE BOOK, OR RADICAL OPEN ACCESS

Admittedly, there are many in the open access community who regard the more
radical experiments conducted with and on books as highly detrimental to the
strategies of large-scale accessibility and trust respectively. From this perspective,
efforts designed to make open access material available for others to (re)use, copy,
reproduce and distribute in any medium, as well as make and distribute derivative
works, coupled with experiments with the form of the book, are seen as being very
much secondary objectives (and even by some as unnecessarily complicating and
diluting open access’s primary goal of making all of the research accessible online
without a paywall). 54 And, indeed, although in many of the more formal open access
definitions (including the important Bethesda and Berlin definitions of open access,
which require removing barriers to derivative works), the right to re-use and
re-appropriate a scholarly work is acknowledged and recommended, in both theory and
practice a difference between ‘author-side openness’ and ‘reader-side openness’ tends
to be upheld—leaving not much space for the ‘readerly interventions’ that were so
important in opening up the kind of possibilities for ‘reading against the grain’ that
the artist’s book promoted, something we feel (open access) scholarly works should
also strive to encourage and support. 55 This is especially the case with regard to the
publication of books, where a more conservative vision frequently holds sway. For
instance, it is intriguing that in an era in which online texts are generally connected to
a network of other information, data and mobile media environments, the open access
book should for the most part still find itself presented as having definite limits and a
clear, distinct materiality.

54. See, for example, Stevan Harnad, ‘Open Access: Gratis and Libre’, Open Access
Archivangelism, May 3, 2012.
55. For more on author-side and reader-side openness respectively, see Peter Suber, SPARC
OA newsletter: http://www.earlham.edu/~peters/fos/newsletter/03-02-11.htm

But if the ability to re-use material is an essential feature of open access – as, let us
repeat, it is according to the Budapest-Bethesda-Berlin and many other influential
definitions of the term – then is working toward making all of the research accessible
online on a Gratis basis and/or gaining the trust of the academic community the best
way for the open access movement (including open access book publishing) to
proceed, always and everywhere? If we do indeed wait until we have gained a critical
mass of open access content before taking advantage of the chance the shift from
analogue to digital creates, might it not by then be too late? Does this shift not offer
us the opportunity, through its loosening of much of the stability, authority, and
‘fixity’ of texts, to rethink scholarly publishing, and in the process raise the kind of
fundamental questions for our ideas of authorship, authority, legitimacy, originality,
permanence, copyright, and with them the text and the book, that we really should
have been raising all along? If we miss this opportunity, might we not find ourselves
in a similar situation to that many book artists and publishers have been in since the
1970s, namely, that of merely reiterating and reinforcing established structures and
practices?

Granted, following a Libre open access strategy may on occasion risk coming into
conflict with those more commonly accepted and approved open access strategies (i.e.
those concerned with achieving accessibility and the gaining of trust on a large-scale).
Nevertheless, should open access advocates on occasion not be more open to adopting
and promoting forms of open access that are designed to make material available for
others to (re)use, copy, reproduce, distribute, transmit, translate, modify, remix and
build upon? In particular, should they not be more open to doing so right here, right
now, before things begin to settle down and solidify again and we arrive at a situation
where we have succeeded merely in pushing the movement even further toward rather
weak, watered-down and commercial versions of open access?

CONCLUSION

We began by looking at how, in an art world context, the idea and form of the book
have been used to critically engage many of the established cultural institutions, along
with some of the underlying philosophies that inform them. Of particular interest in
this respect is the way in which, with the rise of offset printing and cheaper
production methods and printing techniques in the 1960s, there was a corresponding
increase in access to the means of production and distribution of books. This in turn
led to the emergence of new possibilities and roles for the book in an
art context, which included democratizing art and critiquing the status quo of the
gallery system. But these changes to the materiality and distribution of the codex
book in particular – as an artistic product as well as a medium – were integrally linked
with questions concerning the nature of both art and the book as such. Book artists
and theorists thus became more and more engaged in the conceptual and practical
exploration of the materiality of the book. In the end, however, the promise of
technological innovation which underpinned the changes with respect to the
production and distribution of artists’ books in the 1960s and 1970s was not enough
to generate any kind of sustainable (albeit repeatedly reviewed, refashioned and
renewed) challenge within the art world over the longer term.

The artist’s book of the 1960s and 1970s therefore clearly had the potential to bring
about a degree of transformation, yet it was unable to elude the cultural practices,
institutions and the market mechanisms that enveloped it for long (including those
developments in financialisation and the art market Solomon-Godeau connects to the
shift to Reaganomics). Consequently, instead of criticising or subverting the
established systems of publication and distribution, the artist’s book ended up being
largely integrated into them. 56 Throughout the course of this article we have argued
that its conceptual and material promise notwithstanding, there is a danger of
something similar happening to open access publishing today. Take the way open
access has increasingly come to be adopted by commercial publishers. If one of the
motivating factors behind at least some aspects of the open access movement – not
just the aforementioned open access book publishers in the HSS, but the likes of
PLoS, too – has been to stand up against, and even offer an alternative to, the large,
profit-led firms that have come to dominate the field of academic publishing, recent
years have seen many such commercial publishers experimenting with open access
themselves, even if such experiments have so far been confined largely to journals. 57
Most commonly, this situation has resulted in the trialling of ‘author-side’ fees for the
open access publishing of journals, a strategy seen as protecting the interests of the
established publishers, and one which has recently found support in the Finch Report
from a group of representatives of the research, library and publishing communities
convened by David Willetts, the UK Science Minister. 58 But the idea that open access
may represent a commercially viable publishing model has attracted a large number of
so-called predatory publishers, too, 59 who (like Finch and Willetts) have propagated a
number of misleading and often quite mistaken accounts of open access. 60 The
question is thus raised as to whether the desire to offer a counter-institutional
alternative to the large, established, commercial firms is likely to become somewhat
marginalised and neutralised as a result of open access publishing being seen more
and more by such commercial publishers as just another means of generating a profit.
Will the economic as well as material practices transferred from the printing press
continue to inform and shape our communication systems? As Nick Knouf argues, to
raise this question, ‘is not to damn open access publishing by any means; rather, it is
to say that open access publishing, without a concurrent interrogation of the economic
underpinnings of the scholarly communication system, will only reform the situation
rather than provide a radical alternative.’ 61

With this idea of providing a radical challenge to the current scholarly communication
system in mind, and drawing once again on the brief history of artists’ books as
presented above, might it not be helpful to think of open access less as a project and
model to be implemented, and more as a process of continuous struggle and critical
resistance? Here an analogy can be drawn with the idea of democracy as a process. In
‘Historical Dilemmas of Democracy and Their Contemporary Relevance for
Citizenship’, the political philosopher Etiènne Balibar develops an interesting analysis
of democracy based on a concept of the ‘democratisation of democracy’ he derives
59

For a list of predatory OA publishers see: http://scholarlyoa.com/publishers/
The list has grown from 23 predatory publishers in 2011 to 225 in 2012.
60
See the reference to the research of Peter Murray-Rust in Sigi Jöttkandt, ‘No-fee OA
Journals in the Humanities’.
61
Nicholas Knouf, ‘The JJPS Extension: Presenting Academic Performance Information’,
Journal of Journal Performance Studies, 1 (2010).


from a reading of Hannah Arendt and Jacques Rancière. For Balibar, the problem
with much of the discourse surrounding democracy is that it perceives the latter as a
model that can be implemented in different contexts (in China or the Middle East, for
instance). He sees discourses of this kind as running two risks in particular. First of
all, in conceptualizing democracy as a model there is a danger of it becoming a
homogenizing force, masking differences and inequalities. Second, when positioned
as a model or a project, democracy also runs the risk of becoming a dominating force
– yet another political regime that takes control and power. According to Balibar, a
more interesting and radical notion of democracy involves focusing on the process of
the democratisation of democracy itself, thus turning democracy into a form of
continuous struggle (or struggles) – or, perhaps better, continuous critical self-reflection. Democracy here is not an established reality, then, nor is it a mere ideal; it
is rather a permanent struggle for democratisation. 62

Can open access be understood in similar terms: less as a homogeneous project
striving to become a dominating model or force, and more as an ongoing critical
struggle, or series of struggles? And can we perhaps attribute what some perceive as the
failure of artists’ books to contribute significantly to such a critical struggle after the
1970s to the fact that ultimately they became (incorporated in) dominant institutional
settings themselves – a state of affairs brought about in part by their inability to
address issues of access, experimentation and self-reflexivity in an ongoing critical
manner?

62

Etienne Balibar, ‘Historical Dilemmas of Democracy and Their Contemporary Relevance
for Citizenship’, Rethinking Marxism, 20 (2008).


Certainly, one of the advantages of conceptualizing open access as a process of
struggle rather than as a model to be implemented would be that doing so would
create more space for radically different, conflicting, even incommensurable positions
within the larger movement, including those that are concerned with experimenting
critically with the form of the book and the way our system of scholarly
communication currently operates. As we have shown, such radical differences are
often played down in the interests of strategy. To be sure, open access has experienced
what Richard Poynder refers to as ‘bad-tempered wrangles’ over relatively ‘minor
issues’ such as ‘metadata, copyright, and distributed versus central archives’. 63 Still,
much of the emphasis has been on the importance of trying to maintain a more or less
unified front (within certain limits, of course) in the face of criticisms from
publishers, governments, lobbyists and so forth, lest its opponents be provided with
further ammunition with which to attack the open access movement, and dilute or
misinterpret its message, or otherwise distract advocates from what they are all
supposed to agree are the main tasks at hand (e.g. achieving universal, free, online
access to research and/or the gaining of trust). Yet it is important not to see the
presence of such differences and conflicts within the open access movement in purely
negative terms – the way they are often perceived by those working in the liberal
tradition, with its ‘rationalist belief in the availability of a universal consensus based
on reason’. 64 (This emphasis on the ‘universal’ is also apparent in fantasies of having
not just universal open access, but one single, fully integrated and indexed global
archive.) In fact if, as we have seen, one of the impulses behind open access is to
make knowledge and research – and with it society – more open and democratic, it

63

Richard Poynder, ‘Time to Walk the Walk’, Open and Shut?, 17 March, 2005:
http://poynder.blogspot.com/2005/03/time-to-walk-talk.html.
64
Chantal Mouffe, On the Political, London, Routledge, 2005, p11.


can be argued that the existence of such dissensus will help achieve this ambition.
After all, and as we know from another political philosopher, Chantal Mouffe, far
from placing democracy at risk, a certain degree of conflict and antagonism actually
constitutes the very possibility of democracy. 65 It seems to us that such a critical, self-reflexive, processual, non-goal-oriented way of thinking about academic publishing
shares much with the mode of working of the artist - which is why we have argued
that open access today can draw productively on the kind of conceptual openness and
political energy that characterised experimentation with the medium of the book in
the art world of the 1960s and 1970s.

65

Mouffe, On the Political, p30.



Barok
Communing Texts
2014


Communing Texts

_A talk given on the second day of the conference_ [Off the
Press](http://digitalpublishingtoolkit.org/22-23-may-2014/program/) _held at
WORM, Rotterdam, on May 23, 2014. Also available
in[PDF](/images/2/28/Barok_2014_Communing_Texts.pdf "Barok 2014 Communing
Texts.pdf")._

I am going to talk about publishing in the humanities, including scanning
culture, and its unrealised potentials online. For this I will treat the
internet not only as a platform for storage and distribution but also as a
medium with its own specific means for reading and writing, and consider the
relevance of plain text and its various rendering formats, such as HTML, XML,
markdown, wikitext and TeX.

One of the main reasons why books today are downloaded and bookmarked but
hardly read is the fact that they may contain something relevant but they
begin at the beginning and end at the end; or at least we are used to treating
them in this way. E-book readers and browsers are equipped with fulltext
search functionality, but a search for "how does the internet change the way
we read" yields nothing interesting, only a diversion of attention -- even
though there are dozens of books written on this issue. Being insistent, one
easily ends up with a folder of dozens of other books, stuck with the question
of how to read them. There is a plethora of books online, yet it is indeed
mostly machines that are reading them.

It is surely tempting to celebrate or to despise the age of artificial
intelligence, flat ontology and the narrowing of differences between humans
and machines, and to write books as if only for machines, or to return to the
analogue, but we may as well look back and reconsider the beauty of the simple
linear reading of the age of print, not out of nostalgia but for what we can
learn from it.

This perspective implies treating texts in their context, and particularly in
the way they commute, how they are brought into relations with one another,
into a community, by the mere act of writing, through a technique that has
developed over time into what we have come to call _referencing_. While in the
early days referring to texts was practised simply as a verbal description of
the referred writing, over millennia it evolved into a technique with
standardised practices and styles, and accordingly: it gained _precision_.
This precision is however nothing machinic, since referring to particular
passages in other texts instead of texts as wholes is an act of comradeship,
because it spares the reader time when locating the passage. It also makes
apparent that it is through contexts that the web of printed books has been
woven. But even though referencing in its precision has been meant to be very
concrete, the advent of the web in particular made apparent that it is instead
_virtual_. And, for the reader, laborious to follow. The web has shown and
taught us that a reference from one document to another can be plastic. To
follow a reference from a printed book the reader has to stand up, walk down
the street to a library, pick up the referred volume, flip through its pages
until the referred one is found and then follow the text until the passage
most probably implied in the text is identified, while on the web the reader,
_ideally_, merely moves her finger a few millimeters. To click or tap; the
difference between the long way and the short way is obviously the hyperlink.
Of course, in the absence of the short way, even scholars follow a reference
the long way only as an exception: an unwritten rule was established to write
for readers who are familiar with the literature in the respective field
(which in turn reproduces the disciplinarity of reader and writer), while in
the case of unfamiliarity with a referred passage the reader infers its
content from the writer's interpretation of it. The beauty of reading across
references was never fully realised. But now our question is, can we be so
certain that this practice is still necessary today?

The web silently brought about a way to _implement_ the plasticity of this
pointing, although it has not been realised as the legacy of referencing as we
know it from print. Today, when linking a text while having a particular
passage in mind, and even describing it in detail, the majority of links
physically point merely to the beginning of the text. Hyperlinks link
documents as wholes by default, and the use of anchors in texts has hardly
been thought of as a _requirement_ for enabling precise linking.
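
A minimal illustration of the mechanism the web already offers for this, the
fragment identifier (the id used here is hypothetical and has to exist in the
target document beforehand):

    <!-- in the target document -->
    <p id="passage-42">...the passage being referred to...</p>

    <!-- in the referring document: pointing into the text, not just at it -->
    <a href="essay.html#passage-42">as argued in the essay</a>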

If we look at popular online journalism and its use of hyperlinks within the
text body, we may claim that rarely can anyone afford to read all those linked
articles, to say nothing of reports hundreds of pages long and the like; and
if something is wrong, it would get corrected via comments anyway. On the
internet, the writer is meant to be in more immediate feedback with the
reader. But readers are not always keen to comment, and not always allowed to.
We may easily be driven to forget that quoting half of a sentence is never
quoting a full sentence, and that for there to be an entire quote, its source
text would need to be quoted in its whole length. Think of the quote
_information wants to be free_, which is rarely quoted with its wider context
taken into account. Even factoids, numbers, can be quoted verbatim, but if
taken out of their context their meaning can be shaped significantly. The
reason for the aversion to following a reference may well be that we are
usually pointed to begin reading another text from its beginning.

Yet this is exactly where the practices of linking as on the web and
referencing as in scholarly work may benefit from one another. The question is
_how_ to bring them closer together.

An approach I am going to propose requires a conceptual leap to something we
have not been taught.

For centuries, the primary format of the text has been the page, a vessel, a
medium, a frame containing text embedded between straight, more or less
explicit, horizontal and vertical borders. Even before the materials of the
page such as papyrus and paper appeared, the text was already contained in
lines and columns, a structure which we have learnt to perceive as a grid. The
idea of the grid allows us to view text as being structured in lines and
pages, which are in turn at hand if something is to be referred to. Pages are
counted as the distance from the beginning of the book, and lines as the
distance from the beginning of the page. This is not surprising, because it is
in accord with the inherent quality of its material medium -- a sheet of paper
has a shape, which in turn shapes the body of a text. This tradition goes back
as far as ancient times and the bookroll, in which we indeed find textual
grids.

![Papyrus of Plato's Phaedrus](/images/thumb/4/49/Papyrus_of_Plato_Phaedrus.jpg/700px-Papyrus_of_Plato_Phaedrus.jpg)


A crucial difference between print and digital is that text files, whether
HTML documents, markdown documents or database-driven texts, did not inherit
this quality. Their containers are simply not structured into pages, precisely
because of the nature of their materiality as media. Files are written on
memory drives in scattered chunks, beginning at point A and ending at point B
of a drive, continuing from C until D, and so on. Where each of these chunks
starts is ultimately independent of what it contains.

Forensic archaeologists would confirm that when a portion of a text survives,
in the case of ASCII documents it is not a page here and a page there, or the
first half of the book, but textual blocks from completely arbitrary places in
the document.

This may sound unrelated to how we, humans, structure our writing in HTML
documents, emails, Office documents, even computer code, but it is a reminder
that we structure them for habitual (interfaces are rectangular) and cultural
(human-readability) reasons rather than out of a technical necessity that
would stem from the material properties of the medium. This distinction is
apparent for example in HTML, XML, wikitext and TeX documents, whose content
is both stored on the physical drive and treated when rendered for reading
interfaces as a single flow of text; and the same goes for other texts when
treated with the automatic line-break setting turned off. Because line-breaks
and spaces and everything else are merely numbers corresponding to symbols in
a character set.

So how to address a section in this kind of document? An option offers itself
-- the way computers do it, or rather the way we made them do it -- as the
position of the beginning of the section in the array, in one long line. It
would mean treating the text document not in its grid-like format but as a
line, which merely adapts to the properties of its display when rendered. As
is nicely implied in the animated logo of this event, and as we know it from
EPUBs, for example.

The general format of a bibliographic record is:



Author. Title. Publisher. [Place.] Date. [Page.] URL.


In the case of 'reference-linking' we can refer to a passage by including
information about its beginning and length, determined by the character
position within the text (in analogy to the _pp._ operator used for printed
publications), as well as the text version information (in printed texts
served by the edition and date of publication). So what is common in printed
text as the page information is here replaced by the character position range
and version. Such a reference-link is more precise, addressing a particular
section of a particular version of a document regardless of how it is
rendered on an interface.
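
To illustrate, such a record might look like this (the version label and the
character range are hypothetical, added here only to show the principle):

    Author. Title. Publisher. Date. Version 2014-05-28. Chars 10450-10890. URL.

Given a plain-text copy of exactly that version, the passage can then be
recovered mechanically, for instance (a minimal sketch; byte offsets coincide
with character positions only in single-byte encodings such as ASCII):

    # print characters 10450 through 10890 of the referred version
    tail -c +10450 text.txt | head -c 441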

It is a relatively simple idea and its implementation does not seem to be
very hard, although I wonder why it has not been implemented already. I
discussed it with several people yesterday and found out that there have
indeed been attempts in this direction. Adam Hyde pointed me to a proposal for
_fuzzy anchors_ presented on the blog of the Hypothes.is initiative last year,
which, in order to overcome the need for versioning, employs diff algorithms
to locate the referred section, although it is too complicated to be explained
in this setting.[1] Aaaarg has recently implemented in its PDF reader an
option to generate URLs for a particular point in a scanned document, which is
itself a great improvement, although it treats texts as images, thus being
specific to a particular scan of a book, and the generated links are not
public URLs.

Using the character position in references requires an agreement on how to
count. There are at least two options. One is to include all source code in
positioning, which means measuring the distance from an anchor such as the
beginning of the text, the beginning of the chapter, or the beginning of the
paragraph. The second option is to make a distinction between operators and
operands, and count only in operands. Here there are further options as to
where to draw the line between them. We can consider as operands only
characters with phonetic properties -- letters, numbers and symbols --
stripping the text of the operators that are there to shape the sonic and
visual rendering of the text, such as whitespace, commas, periods, HTML and
markdown and other tags, so that we are left with the body of the text to
count in. This would mean rendering operators unreferrable and counting as in
_scriptio continua_.
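
Counting in operands only could be approximated as follows (a minimal sketch,
assuming an HTML source file text.html; the tag-stripping is deliberately
naive):

    # strip tags, then whitespace and punctuation, and count what remains --
    # the 'operands', as in scriptio continua
    sed 's/<[^>]*>//g' text.html | tr -d '[:space:][:punct:]' | wc -m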

_Scriptio continua_ is a very old example of the linear, one-dimensional
treatment of the text. Let's look again at the bookroll with Plato's writing.
Even though it is 'designed' into grids, on a closer look it reveals the lack
of any other structural elements -- there are no spaces, commas, periods or
line-breaks, the text is merely one flow, one long line.

_Phaedrus_ was written in the fourth century BC (this copy comes from the
second century AD). Word and paragraph separators were reintroduced much
later, between the second and sixth century AD when rolls were gradually
transcribed into codices that were bound as pages and numbered (a dramatic
change in publishing comparable to digital changes today).[2]

'Reference-linking' has not been prominent in discussions about sharing books
online, and I only came to realise its significance during my preparations for
this event. There is a tremendous amount of very old, recent and new texts
online, but we haven't done much in opening them up to contextual reading. In
this, publishers of all 'grounds' are in it together.

We are equipped to treat the internet not only as a repository and library
but to take into account its potentials for reading that have been hiding in
front of our very eyes. To expand the notion of the hyperlink by taking into
account techniques of referencing, and to expand the notion of referencing by
realising the plasticity it has always been imagined to have. To mesh texts
with public URLs to enable the entanglement of referencing and hyperlinks.
Here, open access gains its further relevance and importance.

Dušan Barok

_Written May 21-23, 2014, in Vienna and Rotterdam. Revised May 28, 2014._

Notes

1. ↑ Proposals for paragraph-based hyperlinking can be traced back to the work of Douglas Engelbart, and today there is a number of related ideas, some of which have been implemented on a small scale: fuzzy anchoring, 1(http://hypothes.is/blog/fuzzy-anchoring/); purple numbers, 2(http://project.cim3.net/wiki/PMWX_White_Paper_2008); robust anchors, 3(http://github.com/hypothesis/h/wiki/robust-anchors); _Emphasis_, 4(http://open.blogs.nytimes.com/2011/01/11/emphasis-update-and-source); and others 5(http://en.wikipedia.org/wiki/Fragment_identifier#Proposals). The dependence on structural elements such as paragraphs is one of their shortcomings, making them unsuitable for texts with longer paragraphs (e.g. Adorno's _Aesthetic Theory_), visual poetry or computer code; another is the requirement to store anchors along with the text.
2. ↑ Works which happened not to be of interest at the time ceased to be copied and mostly disappeared. On the book roll and its gradual replacement by the codex see William A. Johnson, "The Ancient Book", in _The Oxford Handbook of Papyrology_ , ed. Roger S. Bagnall, Oxford, 2009, pp 256-281, 6(http://google.com/books?id=6GRcLuc124oC&pg=PA256).

Addendum (June 9)

Arie Altena wrote a [report from the
panel](http://digitalpublishingtoolkit.org/2014/05/off-the-press-report-day-
ii/) published on the website of Digital Publishing Toolkit initiative,
followed by another [summary of the
talk](http://digitalpublishingtoolkit.org/2014/05/dusan-barok-digital-imprint-
the-motion-of-publishing/) by Irina Enache.

The online repository Aaaaarg [has
introduced](http://twitter.com/aaaarg/status/474717492808413184) the
reference-link function in its document viewer, see [an
example](http://aaaaarg.fail/ref/60090008362c07ed5a312cda7d26ecb8#0.102).


Barok
Poetics of Research
2014


_An unedited version of a talk given at the conference[Public
Library](http://www.wkv-stuttgart.de/en/program/2014/events/public-library/)
held at Württembergischer Kunstverein Stuttgart, 1 November 2014._

_Bracketed sequences are to be reformulated._

Poetics of Research

In this talk I'm going to attempt to identify [particular] cultural
algorithms, i.e. processes in which cultural practices and software meet. With
them a sphere is implied in which algorithms gather to form bodies of
practices and in which cultures gather around algorithms. I'm going to
approach them through the perspective of my practice as a cultural worker,
editor and artist, considering practice to be of the same rank as theory and
poetics, and where the theorization of practice can also lead to the
identification of poetical devices.

The primary motivation for this talk is an attempt to figure out where we
stand as operators, users [and communities] gathering around infrastructures
containing a massive body of text (among other things), and what sort of
things might be considered to make a difference [or to keep making a
difference].

The talk mainly [considers] the role of text and the word in research, by way
of several figures.

A

A reference, list, scheme, table, index; those things that intervene in the
flow of narrative, illustrating the point, perhaps in a more economic way than
linear text would do. Yet they don't function as pictures; they are primarily
texts, arranged in figures. Their forms have been standardised[normalised]
over centuries, withstood the transition to the digital without any
significant change, being completely intuitive to the modern reader. Compared
to the body of text they are secondary, running parallel to it. Their function
is however different to that of punctuation. They are there neither to shape
the narrative nor to aid structuring the argument into logical blocks. Nor is
their function spatial, as in visual poems. Their positions within a document
are determined according to the sequential order of the text, [standing as
attachments], and they are there to clarify the nature of relations among
elements of the subject-matter, or to establish relations with other
documents. The [premise] of my talk is that these _textual figures_ also came
to serve as the abstract[relational] models determining possible relations
among documents as such, and in consequence [to structure conditions [of
research]].

B

It can be said that research, as inquiry into a subject-matter, consists of
discrete queries. A query, such as a question about what something is, what
kinds, parts and properties it has, and so on, can be consulted in existing
documents, or can generate new documents based on the collection of data [in]
the field and through experiment, before proceeding to reasoning [arguments
and deductions]. The formulation of a query is determined by the protocols
providing access to documents, which means that there is a difference between
collecting data outside the archive (the undocumented, i.e. in the field and
through experiment), consulting with a person--an archivist (expert,
librarian, documentalist)--and consulting with a database storing documents.
Phenomena such as the [deepening] of specialization and thoroughgoing
digitization [have given] privilege to the database as [a|the] [fundamental]
means for research. Obviously, this is a very recent [phenomenon]. Queries
were once formulated in natural language; now, given the fact that databases
are queried [using] the SQL language, their interfaces are mere extensions of
it, and researchers pose their questions by manipulating dropdowns, checkboxes
and input boxes mashed together on a flat screen run by software that in turn
translates them into a long line of conditioned _SELECTs_ and _JOINs_
performed on tables of data.

Specialization, digitization and networking have changed the language of
questioning. Inquiry, once attached to flesh and paper, has been [entrusted]
to the digital and networked. Researchers are querying the black box.

C

Searching in a collection of [amassed/assembled] [tangible] documents (i.e. a
bookshelf) is different from searching in a systematically structured
repository (a library), and even more so from searching in a digital
repository (a digital library). Not that they are mutually exclusive. One can
devise structures and algorithms to search through a printed text, or read
books in a library one by one. They are rather [models] [embodying] various
[processes] associated with the query. These properties of the query might be
called [the sequence], the structure and the index. If they are present in the
ways of querying documents, and we will return to this issue, are they
persistent within the inquiry as such? [wait]

D

This question itself is a rupture in the sequence. It makes a demand to depart
from one narrative [a continuous flow of words] to another, to figure it out,
while remaining bound to the first [it would be even more so with a so-called
rhetorical question]. So there has been one sequence, or line, of the
inquiry--about the kinds of the query and its properties. That sequence itself
is a digression from within the sequence about what research is, describing
its parts (queries). We are thus returning to it and continue with the
question of whether the properties of the inquiry are the same as the
properties of the query.

E

But isn't it true that every single utterance occurring in a sequence yields a
query as well? Let's consider the word _utterance_. [wait] It can produce a
number of associations, for example with how Foucault employs the notion of
_énoncé_ in his _Archaeology of Knowledge_, giving a hard time to his English
translators wondering whether _utterance_ or _statement_ is more appropriate,
or whether they are interchangeable, and what impact each choice would have on
his reception in the Anglophone world. Limiting ourselves to textual forms for
now (and not translating his work but pursuing a different inquiry), let us
say the utterance is a word [or a phrase or an idiom] in a sequence such as a
sentence, a paragraph, or a document.

## (F) The structure

This distinction is as old as recorded Western thought, since both Plato and
Aristotle differentiate between a word on its own ("the said", a thing said)
and words in the company of other words. For example, Aristotle's _Categories_
rests on the [notion] of words on their own, and they are made the subject-
matter of that inquiry. [For him], the ambiguity of connotation that words
[produce] lies in their synonymity, understood differently from the moderns--
not as more words denoting a similar thing but rather one word denoting
various things. The categories were outlined as a device to differentiate
among words according to the kinds of these things. Every word as such
belonged to no less and no more than one of ten categories.

So it happens to the word _utterance_, as to any other word uttered in a
sequence, that it poses a question, a query about what share of the spectrum
of possibly denoted things might emerge as the most appropriate in a given
context. The more context, the more precise the share that comes to the fore.
When taken out of context, ambiguity prevails as the spectrum unveils itself
in its variety.

Thus single words [as any other utterances] are questions, queries, in
themselves, and by occurring in statements, in context, their [meanings] are
being singled out.

This process is _conditioned_ by what has been formalized as the techniques of
_regulating_ definitions of words.

### (G) The structure: words as words

* P.Oxy.XX 2260 i: Oxyrhynchus papyrus XX, 2260, column i, with quotation from
Philitas, early 2nd c. CE. 1(http://163.1.169.40/cgi-
bin/library?e=q-000-00---0POxy--00-0-0--0prompt-10---4------0-1l--1-en-50---
20-about-2260--
00031-001-0-0utfZz-8-00&a=d&c=POxy&cl=search&d=HASH13af60895d5e9b50907367)
2(http://en.wikipedia.org/wiki/File:POxy.XX.2260.i-Philitas-highlight.jpeg)

* Ephraim Chambers, _Cyclopaedia, or an Universal Dictionary of Arts and
Sciences_, 1728, p. 210. 3(http://digicoll.library.wisc.edu/cgi-
bin/HistSciTech/HistSciTech-
idx?type=turn&entity=HistSciTech.Cyclopaedia01.p0576&id=HistSciTech.Cyclopaedia01&isize=L)

* Detail from the Liddell-Scott Greek-English Lexicon, c1843.

Dictionaries have had a long life. The ancient Greek scholar and poet Philitas
of Cos, living in the 4th c. BCE, wrote a vocabulary explaining the meanings
of rare Homeric and other literary words, words from local dialects, and
technical terms. The vocabulary, called _Disorderly Words_ (Átaktoi glôssai),
has been lost, with a few fragments quoted by later authors. One example is
the word πέλλα (pélla), which meant "wine cup" in the ancient Greek region of
Boeotia, contrasted with the same word meaning "milk pail" in Homer's _Iliad_.

Not much has changed in the way dictionaries constitute order. Selected
archives of statements are queried to yield occurrences of particular words,
various _criteria[indicators]_ are applied to filtering and sorting them, and
in turn the spectrum of [denoted] things allocated in this way is structured
into groups and subgroups, which are then given, according to another set of
rules, shorter or longer names. These constitute facets of the [potential]
meanings of a word.

So there are at least _four_ sets of conditions [structuring] dictionaries.
One is required to delimit an archive[corpus of texts], one to select and give
preference[weights] to occurrences of a word, another to cluster them, and yet
another to abstract[generalize] the subject-matter of each of these clusters.
Needless to say, this is a craft of a few, and these criteria are rarely
disclosed, despite their impact on research and, more generally, their
influence as conditions for the production[making] of so-called _common sense_.

It doesn't take that much to reimagine what a dictionary is and what it could
be, especially having large specialized corpora of texts at hand. These can
also serve as aids in the production of new words and new meanings.

### (H) The structure: words as knowledge and the world

* Boethius's rendering of a classification tree described in Porphyry's
Isagoge (3rd c.), [6th c.] 10th c.
4(http://www.e-codices.unifr.ch/en/sbe/0315/53/medium)

* Ephraim Chambers, _Cyclopaedia, or an Universal Dictionary of Arts and
Sciences_, London, 1728, p. II. 5(http://digicoll.library.wisc.edu/cgi-
bin/HistSciTech/HistSciTech-
idx?type=turn&entity=HistSciTech.Cyclopaedia01.p0015&id=HistSciTech.Cyclopaedia01&isize=L)

* Système figuré des connaissances humaines, _Encyclopédie ou Dictionnaire
raisonné des sciences, des arts et des métiers_, 1751.
6(http://encyclopedie.uchicago.edu/content/syst%C3%A8me-figur%C3%A9-des-
connaissances-humaines)

* Haeckel - Darwin's tree.

Another _formalized_ and [internalized] process at play when figuring out a
word is its [containment]. A word is structured not only by way of the things
it potentially denotes but also by the words it is potentially part of and
those it contains.

The fuzz around the categorization of knowledge _and_ the world in Western
thought can be traced back to Porphyry, if not further. In his introduction to
Aristotle's _Categories_, this 3rd-century AD Neoplatonist began expanding the
notions of genus and species into their hypothetical consequences. Aristotle's
brief work outlines ten categories of 'things that are said' (legomena,
λεγόμενα), namely substance (or substantive, {not the same as matter!},
οὐσία), quantity (ποσόν), qualification (ποιόν), a relation (πρός), where
(ποῦ), when (πότε), being-in-a-position (κεῖσθαι), having (or state,
condition, ἔχειν), doing (ποιεῖν), and being-affected (πάσχειν). In a
different work, _Topics_, Aristotle outlines four kinds of subjects/materials
indicated in propositions/problems from which arguments/deductions start.
These are a definition (όρος), a genus (γένος), a property (ἴδιος), and an
accident (συμβεβηϰόϛ). Porphyry does not explicitly refer to _Topics_, and
says he omits speaking "about genera and species, as to whether they subsist
(in the nature of things) or in mere conceptions only"
8(http://www.ccel.org/ccel/pearse/morefathers/files/porphyry_isagogue_02_translation.htm#C1),
which means he avoids explicating whether he talks about kinds of concepts or
kinds of things in the sensible world. However, the work sparked confusion, as
the following passage [suggests]:

> "[I]n each category there are certain things most generic, and again, others
most special, and between the most generic and the most special, others which
are alike called both genera and species, but the most generic is that above
which there cannot be another superior genus, and the most special that below
which there cannot be another inferior species. Between the most generic and
the most special, there are others which are alike both genera and species,
referred, nevertheless, to different things, but what is stated may become
clear in one category. Substance indeed, is itself genus, under this is body,
under body animated body, under which is animal, under animal rational animal,
under which is man, under man Socrates, Plato, and men particularly." (Owen
1853,
9(http://www.ccel.org/ccel/pearse/morefathers/files/porphyry_isagogue_02_translation.htm#C2))

Porphyry took one of Aristotle's ten categories of the word, substance, and
dissected it using one of his four rhetorical devices, genus. Employing
Aristotle's categories, genera and species as means for logical operations,
for dialectic, Porphyry's interpretation came to bear more resemblance to the
perceived _structures_ of the world. So they began to bloom.

There were earlier examples, but Porphyry was the most influential in
injecting the _universalist_ version of classification, [implying] the figure
of a tree, into the [locus] of Aristotle's thought. Knowledge became
monotheistic.

Classification schemes [growing from one point] play a major role in
untangling the format of the modern encyclopedia from that of the dictionary
governed by the alphabet. Two of the most influential encyclopedias of the
18th century are cases in point. Although still keeping 'dictionary' in their
titles, they are conceived to represent not words but knowledge. The [upper-
most] genus of the body was set as the body of knowledge. The English
_Cyclopaedia, or an Universal Dictionary of Arts and Sciences_ (1728) splits
into two main branches: "natural and scientifical" and "artificial and
technical"; these further split down into 47 classes in total, each carrying a
structured list (on the following pages) of thematic articles, serving as a
table of contents. The French _Encyclopedia: or a Systematic Dictionary of the
Sciences, Arts, and Crafts_ (1751) [unwinds] from judgement (_entendement_),
branching into memory as history, reason as philosophy, and imagination as
poetry. The logic of containers was employed as an aid not only to deal with
the enormous task of naming and not omitting anything from what is known, but
also for the management of the labour of hundreds of writers and researchers,
to create a mechanism for delegating work and distributing responsibilities.
Flesh was also more present, in field research, with researchers attending
workshops and sites of everyday life to annotate them.

The world came forward to outshine the word in other schemes. Darwin's tree of
evolution and some of the modern document classification systems such as
Charles A. Cutter's _Expansive Classification_ (1882) set out to classify the
world itself, and set the field for what has come to be known as the authority
lists structuring metadata in today's computing.

### The structure (summary)

Facetization of meaning and branching of knowledge are both the domain of the
unit of utterance.

While lexicographers[dictionarists] structure thought through multi-layered
processes of abstraction of the written record, knowledge growers dissect it
into hierarchies of [mutually] contained notions.

One seeks to describe the word as a faceted list of small worlds, the other to
describe the world as a structured list of words. One plays prime in the
domain of epistemology, in what is known, controlling the vocabulary; the
other in the domain of ontology, in what is, controlling reality.

Every [word] has its given things, every thing has its place, closer or
further from a single word.

The schism between classifying words and classifying the world implies that it
is not possible to construct a universal classification scheme[system]. On top
of that, any classification system of words is bound to the corpus of texts it
operates upon, and any classification system of the world again operates with
words which are bound to a vocabulary[lexicon], which is again bound to a
corpus [of texts]. This doesn't prevent people from trying, though.
Classifications function as descriptors of and 'inscriptors' upon the world,
imprinting their authority. They operate from [a locus of] their
corpus[context]-specificity. The larger the corpus, the more power it has over
shaping the world, as far as the word shapes it (yes, I do imply Google here,
for which this is a domain to be potentially exploited).

## (J) The sequence

The structure-yielding query [of] the single word narrows and sharpens with
the preceding and following words. Inquiry proceeds in a flow that establishes
another kind[mode] of relationality, chaining words into the sequence. While
the structuring property of the query sets words apart from each other, its
sequential property establishes continuity and brings these units into an
ordered set.

This is what is responsible for attaching the textual figures mentioned
earlier (lists, schemes, tables) to the body of the text. Associations can
also be stated explicitly, by indexing tables and then referring to them from
a particular point in the text. The same goes for explicit associations made
between blocks of the text by means of indexed paragraphs, chapters or pages.

From this it follows that all utterances point to the following utterance by
the nature of sequential order, and indexing provides the means for pointing
elsewhere in the document as well.

A lot can be said about references to other texts. Here, to spare time, I
would refer you to a talk I gave a few months ago and which is online
10(http://monoskop.org/Talks/Communing_Texts).

This is still the realm of print. What happens with document when it is
digitized?

Digitization breaks a document into units, each of which is assigned a
numbered position in the sequence of the document. From this perspective
digitization can be viewed as a total indexation of the document. It is
converted into units rendered for machine operations. This sequentiality is
made explicit by means of an underlying index.
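
This total indexation can be made visible with ordinary tools (a minimal
sketch; text.txt stands for any plain-text file):

    # render the first characters of a text as an explicitly indexed sequence,
    # one unit per line, each with its position
    head -c 20 text.txt | fold -w1 | cat -n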

Sequences and chains are orders of one dimension. Their one-dimensional
ordering allows the addressability of each element, and [random] access.
[Jumps] between [random] addresses are still sequential, processing elements
one at a time.

## (K) The index

* Summa confessorum [1297-98], 1310.
7(http://www.bl.uk/onlinegallery/onlineex/illmanus/roymanucoll/j/011roy000008g11u00002000.html)

[The] sequencing not only weaves words into statements but activates other
temporalities, and _presents occurrences of words from past statements_. As
now, when I am saying the word _utterance_, each time contexts surface in
which I have used it earlier.

A long quote from Frederick G. Kilgour, _The Evolution of the Book_, 1998, pp
76-77:

> "A century of invention of various types of indexes and reference tools
preceded the advent of the first subject index to a specific book, which
occurred in the last years of the thirteenth century. The first subject
indexes were "distinctions," collections of "various figurative or symbolic
meanings of a noun found in the scriptures" that "are the earliest of all
alphabetical tools aside from dictionaries." (Richard and Mary Rouse supply an
example: "Horse = Preacher. Job 39: 'Hast thou given the horse strength, or
encircled his neck with whinnying?'")

>

> [Concordance] By the end of the third decade of the thirteenth century Hugh
de Saint-Cher had produced the first word concordance. It was a simple word
index of the Bible, with every location of each word listed by [its position
in the Bible specified by book, chapter, and letter indicating part of the
chapter]. Hugh organized several dozen men, assigning to each man an initial
letter to search; for example, the man assigned M was to go through the entire
Bible, list each word beginning with M and give its location. As it was soon
perceived that this original reference work would be even more useful if words
were cited in context, a second concordance was produced, with each word in
lengthy context, but it proved to be unwieldy. [Soon] a third version was
produced, with words in contexts of four to seven words, the model for
biblical concordances ever since.

>

> [Subject index] The subject index, also an innovation of the thirteenth
century, evolved over the same period as did the concordance. Most of the
early topical indexes were designed for writing sermons; some were organized,
while others were apparently sequential without any arrangement. By midcentury
the entries were in alphabetical order, except for a few in some classified
arrangement. Until the end of the century these alphabetical reference works
indexed a small group of books. Finally John of Freiburg added an alphabetical
subject index to his own book, _Summa Confessorum_ (1297—1298). As the Rouses
have put it, 'By the end of the [13]th century the practical utility of the
subject index is taken for granted by the literate West, no longer solely as
an aid for preachers, but also in the disciplines of theology, philosophy, and
both kinds of law.'"

In one sense neither the subject-index nor the concordance are indexes; they
are words or groups of words selected according to given criteria from the
body of the text, each accompanied by a list of identifiers. These identifiers
are elements of an index, whether they represent a page, chapter, column, or
other [kind of] block of text. Every identifier is a unique _address_.

The index is thus an ordering of a sequence by means of associating its
elements with a set of symbols, where each element is given a unique
combination of symbols. Different sizes of sets yield different numbers of
variations. Symbol sets such as the alphabet, arabic numerals, roman numerals
and binary digits have different proportions between the length of a string of
symbols and the number of possible variations it can contain. Thus two symbols
of the English alphabet can store 26^2 various values, of arabic numerals
10^2, of roman numerals 7^2, and of binary digits 2^2.
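
In general (restating the arithmetic): a set of s symbols yields s^k distinct
addresses for strings of length k, so

    26^2 = 676    10^2 = 100    7^2 = 49    2^2 = 4

and to index a million positions, binary digits need a string of 20 symbols
(2^20 = 1,048,576) where the English alphabet needs only 5 (26^5 = 11,881,376).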

Indexation is segmentation, a breaking into segments. From as early as the
13th century, indexes such as those of sections have served as enablers of
search. The more [detailed] the indexation, the more precise the search
results it enables.

The subject-index and the concordance are tables of search results. There is a
direct lineage between the 13th-century biblical concordances and the birth of
computational linguistic analysis: both were initiated and realised by
priests.

During World War II, the Jesuit Father Roberto Busa began to look for machines
to automate the linguistic analysis of the 11-million-word Latin corpus of
Thomas Aquinas and related authors.

Working on his Ph.D. thesis on the concept of _praesens_ in Aquinas, he
realised two things:

> "I realized first that a philological and lexicographical inquiry into the
verbal system of an author has t o precede and prepare for a doctrinal
interpretation of his works. Each writer expresses his conceptual system in
and through his verbal system, with the consequence that the reader who
masters this verbal system, using his own conceptual system, has to get an
insight into the writer's conceptual system. The reader should not simply
attach t o the words he reads the significance they have in his mind, but
should try t o find out what significance they had in the writer's mind.
Second, I realized that all functional or grammatical words (which in my mind
are not 'empty' at all but philosophically rich) manifest the deepest logic of
being which generates the basic structures of human discourse. It is .this
basic logic that allows the transfer from what the words mean today t o what
they meant to the writer.

>

> In the works of every philosopher there are two philosophies: the one which
he consciously intends to express and the one he actually uses to express it.
The structure of each sentence implies in itself some philosophical
assumptions and truths. In this light, one can legitimately criticize a
philosopher only when these two philosophies are in contradiction."
11(http://www.alice.id.tue.nl/references/busa-1980.pdf)

Busa began collaborating with IBM in New York in 1949, and the work, a
concordance of all the words of Thomas Aquinas, was finally published in the
1970s in 56 printed volumes (a version has been online since 2005
12(http://www.corpusthomisticum.org/it/index.age)). Besides that, an
electronic lexicon for the automatic lemmatization of Latin words was created
by a team of ten priests in the space of two years (in two phases: grouping
all the forms of an inflected word under their lemma, and coding the
morphological categories of each form and lemma), containing 150,000 forms
13(http://www.alice.id.tue.nl/references/busa-1980.pdf#page=4). Father Busa
has been dubbed the father of humanities computing, and recently also of
digital humanities.

The subject-index has a crucial role in the printed book. It is the only means
of search the book offers. The subjects composing an index can be selected
according to a classification scheme (specific to the field of inquiry), for
example as elements of a certain degree (with a given minimum number of
subclasses).

Its role seemingly vanishes in the digital text. But it can be easily
transformed. Besides serving as a table of pre-searched results, the subject-
index also gives a distinct idea of the content of the book. Two patterns give
us a clue: the numbers of occurrences of selected words give subjects their
weights, while words that seem specific to the book outweigh others even if
they don't occur very often. A selection of these words then serves as a
descriptor of the whole text, and can be thought of as a specific kind of
'tags'.

This process was formalized in a mathematical function in the 1970s, thanks to
a formula by Karen Spärck Jones which she entitled 'inverse document
frequency' (IDF), or in other words, "term specificity". It is measured as the
proportion of the total number of texts in the corpus to the number of texts
where the word appears at least once, scaled logarithmically. When multiplied
by the frequency of the word _in_ the text (divided by the maximum frequency
of any word in the text), we get _term frequency-inverse document frequency_
(tf-idf). In this way we can get an automated list of subjects which are
particular to a text when compared to a group of texts.
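
Spelled out, in the variant used by the command-line example in the final
remarks below (other weighting schemes exist), for a word t, a text d, and a
corpus of N texts of which n contain t:

    tf(t,d)    = 0.5 + 0.5 * freq(t,d) / maxfreq(d)
    idf(t)     = 1 + ln(N / n)
    tfidf(t,d) = tf(t,d) * idf(t)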

We have come to learn it through the practice of searching the web. It is a
mechanism not dissimilar to the thought process involved in retrieving
particular information online. And search engines have it built into their
indexing algorithms as well.

There is a paper proposing attaching words generated by tf-idf to hyperlinks
when referring to websites 14(http://bscit.berkeley.edu/cgi-
bin/pl_dochome?query_src=&format=html&collection=Wilensky_papers&id=3&show_doc=yes).
This would enable finding the referred content even after the link is dead.
Hyperlinks in the references of the paper use this feature, and it can be
easily tested: 15(http://www.cs.berkeley.edu/~phelps/papers/dissertation-
abstract.html?lexical-
signature=notemarks+multivalent+semantically+franca+stylized).

There is another measure, cosine similarity, which takes tf-idf further and
can be applied to clustering texts according to similarities in their
specificity. This might be interesting as a feature for digital libraries, or
even as a way of organising a library bottom-up into novel categories, from
which new discourses could emerge. Or as an aid for researchers to sort
through texts, or even for editors as an aid in producing interesting
anthologies.
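
A minimal sketch, assuming the two-column 'word weight' files tfidf.1.txt and
tfidf.2.txt produced by step 2 of the command-line example in the final
remarks:

    # cosine similarity of two texts over their tf-idf vectors;
    # prints 1 for identical specificity profiles, near 0 for disjoint ones
    awk 'NR==FNR { a[$1]=$2; next } { b[$1]=$2 }
         END { for (w in a) if (w in b) dot += a[w]*b[w];
               for (w in a) na += a[w]^2; for (w in b) nb += b[w]^2;
               print dot / (sqrt(na)*sqrt(nb)) }' tfidf.1.txt tfidf.2.txt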

## Final remarks

1

New disciplines emerge all the time - most recently, for example, cultural
techniques, software studies, or media archaeology. It takes years, even
decades, before they gain dedicated shelves in libraries or a category in
interlibrary digital repositories. Not that it matters that much. They are not
only sites of academic opportunities but, firstly, frameworks of new
perspectives of looking at the world, new domains of knowledge. From the
perspective of researcher the partaking in a discipline involves negotiating
its vocabulary, classifications, corpus, reference field, and specific
terms[subjects]. Creating new fields involves all that, and more. Even when
one goes against all disciplines.

2

Google can still surprise us.

3

Knowledge has been in the making for millennia. Abstract mechanisms have been
established that govern its conditions. We now possess specialized corpora of
texts which are interesting enough to serve as a ground on which to discuss
and experiment with dictionaries, classifications, indexes, and tools for
reference retrieval. These all belong to the poetic devices of knowledge-
making.

4

Command-line example of tf-idf and concordance in 3 steps.

* 1\. Process the files text.1-5.txt and produce freq.1-5.txt with lists of (nonlemmatized) words (in respective texts), ordered by frequency:

> for i in {1..5}; do tr '[A-Z]' '[a-z]' < text.$i.txt | tr -c '[a-z]'
'[\012*]' | tr -d '[:punct:]' | sort | uniq -c | sort -k 1nr | sed '1,1d' >
temp.txt; max=$(awk -vvar=1 -F" " 'NR

1 {print $var}' temp.txt); awk
-vmaxx=$max -F' ' '{printf "%-7.7f %s\n", $1=0.5+($1/(maxx*2)), $2}' > freq.$i.txt; done && rm temp.txt

* 2\. Process the files freq.1-5.txt and produce tfidf.1-5.txt containing a list of words (out of 500 most frequent in respective lists), ordered by weight (specificity for each text):

> for j in {1..5}; do rm freq.$j.txt.temp; lines=$(wc -l freq.$j.txt) && for i
in {1..500}; do word=$(awk -vline="$i" -vfield=2 -F" " 'NR

line {print
$field}' freq.$j.txt); tf=$(awk -vline="$i" -vfield=1 -F" " 'NR

line {print
$field}' freq.$j.txt); count=$(egrep -lw $word freq.?.txt | wc -l); idf=$(echo
"1+l(5/$count)" | bc -l); tfidf=$(echo $tf*$idf | bc); echo $word $tfidf >>
freq.$j.txt.temp; done; sort -k 2nr < freq.$j.txt.temp > tfidf.$j.txt; done

* 3\. Process the files tfidf.1-5.txt and their source texts, text.1-5.txt, and produce occ.txt with a concordance of the top 3 words from each of them:

> rm occ.txt && for j in {1..5}; do echo "$j" >> occ.txt; ptx -f -w 150
text.txt.$j > occ.$j.txt; for i in {1..3}; do word=$(awk -vline="$i" -vfield=1
-F" " 'NR

line {print $field}' tfidf.$j.txt); egrep -i
"[alpha:](/index.php?title=Alpha:&action=edit&redlink=1 "Alpha: \(page does
not exist\)") $word" occ.$j.txt >> occ.txt; done; done

Dušan Barok

_Written 23 October - 1 November 2014 in Bratislava and Stuttgart._


Barok
Techniques of Publishing
2014


Techniques of Publishing

Draft translation of a talk given at the seminar Informace mezi komoditou a komunitou [The Information Between Commodity and Community] held at Tranzitdisplay in Prague, Czech Republic, on May 6, 2014

My contribution has three parts. I will begin by sketching the current
environment of publishing in general, move on to some of the specificities of
publishing in the humanities and art, and end with a brief introduction to the
Monoskop initiative, which I was asked to include in my talk.
I would like to thank Milos Vojtechovsky, Matej Strnad and CAS/FAMU for the
invitation, and Tranzitdisplay for hosting this seminar. It offers an
opportunity for reflection at a decent distance from a previous presentation
of Monoskop in Prague eight years ago, when I took part in a new media
education workshop prepared by Miloš and Denisa Kera. Many things have changed
since then, not only in new media but in the humanities in general, and I will
try to articulate some of these changes from today's perspective, primarily
the perspective of publishing.

I. The Environment of Publishing
One change, perhaps the most serious, and one which indeed relates to
publishing in the humanities as well: what just a year ago was treated as the
paranoia of a bunch of so-called technological enthusiasts is today a fact
with which the global public is well acquainted: we are all being surveilled.
Virtually every utterance on the internet, or rather every utterance made by
means of equipment connected to it through standard protocols, is recorded, in
encrypted or unencrypted form, on the servers of information agencies, besides
the copies of a striking share of these data on the servers of private
companies. We are only at the beginning of a civil mobilization towards the
reversal of this situation, and the future is open, yet so far nothing
suggests that there is any real alternative other than "to demand the
impossible." There are at least two certainties today: surveillance is a
feature of every communication technology controlled by third parties, from
post, telegraphy and telephony to the internet; and at the same time it is
also a feature of ruling power in all the variants humankind has come to know.
In this regard, democracy can also be understood as the involvement of its
participants in deciding on the scale and use of the information collected in
this way.
I mention this because it suggests that all publishing initiatives, from libraries
through archives and publishing houses to schools, have their online activities, backends,
shared documents and email communication recorded by public institutions,
which intelligence agencies are, or at least ought to be.
In regard to publishing houses, it is notable that books and other publications today are printed from digital files and delivered to print over email; it is thus
not surprising to claim that a significant share of electronically prepared publications is stored on servers in the public service. This means that besides being
required to send a number of printed copies to their national libraries, publishers in fact send their electronic versions to information agencies as well. Obviously, the agencies couldn’t care less about them, but that doesn’t change the likely fact that, whatever it means, the world’s largest electronic repository of
publications today is formed by the server farms of the NSA.
Information agencies archive publications without approval, perhaps without awareness, and indeed despite the disapproval of their authors and publishers, as an
“incidental” effect of their surveillance techniques. This situation is obviously
radically different from the totalitarianism we got to know. Even though secret
agencies in the Eastern Bloc blackmailed people into producing miserable literature as their agents, samizdat publications could at least theoretically escape their
attention.
This is not the only difference. While captured samizdats were read by agents of
flesh and blood, publications collected through internet surveillance are “read”
by software agents. Both scan texts for “signals”, i.e. terms and phrases
whose occurrences trigger interpretative mechanisms controlling the operative components of their organizations.
Today, publishing is similarly political, and from the point of view of power a potentially subversive activity, as it was in communist Czechoslovakia. The
difference lies in its scale, reach and technique.
One of the messages of the recent “revelations” is that while encrypting private communication is recommended, the internet is for its users also a medium of
direct contact with power. SEO, or search engine optimization, is now as relevant a technique for websites as for books and other publications, since all of them
are read by similar algorithms, and authors can read this situation as a political
dimension of their work, as a challenge to transform and model these algorithms
by texts.


II. Techniques of research in the humanities literature
Compiling the bibliography
Through this circuitry we arrive at the audience: the readers. Today, they also include
software and algorithms, such as those used for “reading” by information agencies
and corporations, and others facilitating reading for the so-called ordinary reader
searching for information online, but also for the “expert” reader, searching
primarily in library systems.
Libraries, as we said, are different from information agencies in that they are
funded by the public not to hide publications from it but to provide access to
them. A telling paradox of the age is that, on the one hand, information agencies
store almost all contemporary book production in its electronic version
while generally not caring about it at all, since the “signal” information lies elsewhere; and on the other hand, in order to provide electronic access, paid or
direct, libraries have to expensively scan even those publications that were prepared for print
electronically.
A more remarkable difference is, of course, that libraries select and catalogue
publications.
Their methods of selection are determined in the first place by their public institutional function as protector and projector of patriotic values, reflected
in a preference for domestic literature, i.e. literature written in official state languages. Methods of cataloguing, on the other hand, are characterized by sorting
by bibliographic records, particularly by categories of disciplines ordered in a
tree structure of knowledge. This results in libraries shaping research, including academic research, towards a discursivity that is national and disciplinary, or
focused on the oeuvre of a particular author.
Digitizing catalogue records and allowing readers to search library indexes by their
structural items, i.e. the author, publisher, place and year of publication, words in
the title, and disciplines, does not revert this tendency at all, but rather extends it to
the web as well.
I do not intend to underestimate the value and benefits of library work, nor the
importance of discipline-centered writing or of the recognition of an author’s
oeuvre. But consider an author working on an article who, in the early phase
of research, needs to prepare a bibliography on the activity of Fluxus in central Europe or on the use of documentary film in education. Such research cuts
through national boundaries and/or branches of disciplines, and the author is left to travel
not only to locate artefacts, protagonists and experts in the field but also to find
literature, which in turn makes even the mere compiling of a bibliography
a relatively demanding and costly activity.

In this sense, the digitization of publications and archival material, the provision of
free online access to them, and the enabling of fulltext search, in other words “open access”, catalyzes research across political-geographical and disciplinary configurations. While the index of a printed book contains only selected terms, and
in order to search the indexes of several books the researcher has to have
them all at hand, software-enabled search in digitized texts (with good OCR)
works with an index of every single term in all of them.
This kind of research also obviously benefits from online translation tools, multilingual case bibliographies online, as well as second-hand bookstores and small
specialized libraries that play a corrective role to public ones, and whose “open
access” potential has so far been explored only to a very small extent, but which
I won’t discuss further here for lack of time.
Writing
Disciplinarity and patriotism are “embedded” in texts themselves, and I repeat that I don’t mean this in a pejorative way.
Bibliographic records in the bodies of texts, notes, attributions of sources and appended references can be read as formatted addresses of other texts, making apparent a kind of intertextual structure well known from hypertext documents. For the reader, however, these references are still “virtual”. When following a reference
she is led back to a library, and if interested in more references, to more libraries.
Authors instead assume a certain general erudition of their readers, and following references to their very sources is perceived as an exception to the standard
self-limitation of reading only the body of the text. Techniques of writing with
virtual bibliographies thus affirm national-disciplinary discourses and form readers
and authors proficient in the field of references set by the collections of local libraries
and the so-called standard literature of the fields they became familiar with during their
studies.
When, in this regime of writing, someone in the Czech Republic wants to refer to
the work of Gilbert Simondon or Alexander Bogdanov, to give an example, the
effect of their work will be minimal, since practically nothing by these
authors has been translated into Czech. A closely reading colleague is left to try ordering the
books through a library and wait three to four weeks, to order them from an online
store, to travel to find them, or to search for them online. In the case of
these authors, this applies to readers in the vast majority of countries worldwide. And we can
say with certainty that this is the case not only for Simondon and Bogdanov but
for the vast majority of authors. Libraries, as nationally and pyramidally situated
institutions, face real challenges with regard to the needs of free research.
This is surely merely one aspect of techniques of writing.

Reading
Reading texts with “live” references and bibliographies on electronic devices is
today not only imaginable but realisable. This way of reading
allows following references to other texts, visual material and other related texts of
an author, but also working with occurrences of words in the text, etc., bringing
reading closer to textual analysis and other interesting levels. Due to the time
limit I will sketch only one example.
Linear reading proceeds from the beginning of a text to its end; ‘tree-like’ reading
moves through the content structure of the document and
through occurrences of indexed words. Techniques of close reading extend
yet another aspect: ‘moving’ through bibliographic references in one document to
particular pages or passages in another. They make the virtual reference plastic;
texts are separated from one another merely by a click or a tap.
We are well familiar with a similar movement through content on the web:
surfing, browsing, clicking through. This leads us to an interesting parallel: the standards of structuring and composing texts in the humanities have been
evolving for centuries, incomparably longer than the decades of the web. From
this stems one of the historical challenges the humanities face today:
how to attune to the existence of the web, and most importantly to the epistemological consequences of its irreversible social penetration. Uploading a PDF online is
only a taste of the changes in how we gain and make knowledge, and how we know.
This applies both ways: what is at stake is not only making the production of the
humanities “available” online, not only open access, but also the
ways in which the humanities realise the electronic and technical reality of their
own production, with regard to research, writing, reading and publishing.
Publishing
The analogy between information agencies and national libraries also points to
the fact that a large portion of publications, particularly those created in software,
is electronic. The exceptions, however, are significant. They include works made,
typeset, illustrated and copied manually, such as manuscripts written on paper
or other media, by hand or using a typewriter or other mechanical means, and
other pre-digital techniques such as lithography and offset, or various forms of
writing such as clay tablets, rolls and codices; in other words, the history of print and
publishing in its striking variety, all of which provide authors and publishers with
heterogeneous means of expression. Although this “segment” is today generally
perceived as artists’ books, interesting primarily to collectors, the current process
of massive digitization has triggered revivals, comebacks, transformations and
novel approaches to publishing. And it is these publications whose nature is closer
to the label ‘book’ than the automated electro-chemical version of offset
lithography of digital files on acid-free paper.
Despite that, it is remarkable to observe a view spreading among publishers that
books created in software are books with the attributes we have known for ages. On
top of that, there is a tendency to handle files such as PDFs, EPUBs and MOBIs
as if they were printed books, even subjecting them to the rules of the limited edition, a
consequence of which can be found in the rise of so-called electronic libraries that
“lend” PDF files: while someone reads one, other users are left to wait in
line.
Meanwhile, from the point of view of humanities research today, mass-printed books
are in the first place archives of cultural content, preserved in this way for the
time when we run out of electricity or have the internet ‘switched off’ in some other
way.

III. Monoskop
Finally, I am getting to Monoskop, and to begin with I will try to formulate
a brief definition of it, in three versions.
From the point of view of the humanities, Monoskop is a research, or an inquiry, whose object by its nature renders no answer definite, since that object includes
art and culture in their widest sense, from folk music through visual poetry to
experimental film, and namely their history as well as theory and techniques. The
research is framed by the means of its own recording, which makes it a practice whose
record is an expression with aesthetic qualities; this in turn means that the process of the research is subject to creative decisions whose outcomes are perceived
aesthetically as well.
In the language of cultural management, Monoskop is an independent research
project whose aim is subject to change according to its ongoing findings; which
has no legal body and thus, as an organisation, does not apply for funding; whose participants have no set roles; and which, notably, operates with no deadlines. It reaches
a global public about which, respecting the privacy of internet users, there
are no statistics other than the general statistics of its social network channels and a
count of the people and bots who have registered on its website and subscribed
to its newsletter.
At the same time, technically speaking, Monoskop is primarily a website,
and in this regard it is no different from any other communication medium whose
function is to complicate interpersonal communication, if only because
it is a medium with its own specific language, materiality, duration and access.

Contemporary media
Monoskop began ten years ago in the milieu of a group of people running
a cultural space where they organised events, workshops, discussions, a festival,
etc. Their expertise, if that is the word for the trace left by years spent in higher
education, varied widely, spanning from fine art, architecture and philosophy,
through art history and literary theory, to library studies, cognitive science and
information technology. Each of us was obviously interested in these and other
fields beyond his or her own, but in practice the substance whose
centripetal effects brought us into collaboration was named by the terms new media, media
culture and media art.
Notably, it was not contemporary art, because a constituent part of the praxis was
also non-visual expression, information media, etc., so the research began with the
essentially naive question ‘of what are we contemporary?’. Not much had
been written about media culture and art as such, a fact I perceived as a drawback
but also as a challenge.
Reflection, discussion and critique need to be grounded in reality, in the wider
context of the field, and thus the research began in the field. From the beginning, the
Monoskop website served to record this environment, including the people, groups,
organizations and events we had been in touch with and which were more or
less explicitly affiliated with media culture. The result is primarily a social
geography of live media culture and art, structured on the wiki into cities, with
a focus on the two most recent decades.
Cities and agents
The first aim was to compile an overview of the agents of this geography in their
wide variety, from small independent and short-lived initiatives to established
museums. The focus on the 1990s and 2000s is of course problematic. One of
its qualities is a parallel to the history of the World Wide Web, which goes back
precisely to the early 1990s and which is, on the one hand, the primary recording
medium of the Monoskop research and, on the other, a relevant self-archiving and,
stemming from its properties, presentation medium; in other words, a platform on
which agents not only meet but potentially influence one another
as well.
http://monoskop.org/Prague
The records are of diverse length and quality, and the priorities for what they
consist of can be generally summed up in several points, in the following order:


1. Inclusion of a person, organisation or event in the context of the structure.
So in the case of a festival or conference held in Prague, the most important thing is to
mention it in the events section of the page on Prague.
2. Links to their web presence from inside their wiki pages, which usually
implies their (self-)presentation.
http://monoskop.org/The_Media_Are_With_Us
3. Basic information, including name or title in the original language, dates
of birth, foundation or realization, and relations to other agents, ideally through
links inside the wiki. These are presented in narrative form and in English.
4. Literature or bibliography in as many languages as possible, with links to
versions of texts online if there are any.
5. Biographical and other information relevant to the object of the research,
with preference for materials appearing online for the first time.
6. Audiovisual material, works, especially those that cannot be found on linked
websites.
Even though the pages are structured in roughly the same way, input fields are not structured: when you create a wiki account and decide to edit or add an entry, the
wiki editor offers you merely one input box for continuous text, as is the case
on other wiki websites. A better way to describe their format is thus articles.
There are many related questions, about representation, research methodology,
openness and participation, formalization, etc., but I am not going to discuss them here
due to time constraints.
The first research layer thus consists of live and active agents, and of the relations
among and with them.
Countries
Another layer relates to the question of what the field of media culture
and art stems from; what it consciously, but also not fully
consciously, builds upon, comments on, relates to, negates; in other words, of what it may be
perceived as a post-, meta-, anti-, retro-, quasi- or neo-legacy.
The approach of national histories of 20th-century art proved relevant here. These entries are structured in the same way as the cities: people,
groups, events, literature, while building upon historical art forms and
periods as they are reflected in a range of literature.

http://monoskop.org/Czech_Republic
The overviews are purposely organised without any attempt to make relations
to the present more explicit, in order to leave open a wide range of interpretations
and connotations, and at the same time to encourage them.
The focus on the art of the 20th century was originally related, the researched
countries being mostly those of central and eastern Europe, to the foundations of modern
national states, formations that preserve this field in archives, museums and collections,
but also in publications, etc. Obviously I am not saying that contemporary media
culture is necessarily archived on the web while the art of the 20th century lies in
collections “offline”; it applies vice versa as well.
In this way, new articles began to appear about filmmakers, fine artists, theorists and other participants in the artistic life of the previous century.
Since then, the focus has expanded considerably, to more than a century of art and
new media across the whole continent. Still, this is merely another layer of the
research, one which as yet remains a collection of fragmentary data without much
context. Soon we also hit the limit of what is available online about this field. The next
question was how to work with printed sources in the internet environment.
Log
http://monoskop.org/log
When I installed this blog five years ago, I treated it as a side project, an offshoot which, by the fact of being online, could be not only an archive of selected
source literature for the Monoskop research but also a resource for others, mainly
students in the humanities. A few months later I found Aaaarg, then oriented
mainly towards critical theory and philosophy; there was also Gigapedia, with publications of no particular thematic orientation; and several other password-protected community library portals. These were the first sources where I found relevant literature
in electronic versions; later there were others too. I began to scan books and catalogues myself and to receive a large number of scans by email, and I soon came to
realise that every new entry is an event of its own, and not only for myself. Judging
by the response, the website has a wide usership across all continents.
At this point it is proper to mention copyright. When deciding whether or not
to include a publication, there are at least two considerations always present.
One brings me back to my local library on the outskirts of Bratislava in the early
1990s and asks: had I found this book there and then, could it have changed
my life? Because the books that did, I was given only later and elsewhere; and here I
think of people sitting behind computers in Belarus, China or Congo. And even

if not, the second consideration asks whether the text has the potential to open up
serious questions about disciplinarity or national discursivity in the humanities;
here I am reminded of a recent study claiming that more than half
of academic publications are read by no more than three people: their author,
reviewer and editor. This does not imply that they should be promoted
to more people, but rather that we should think about why this is so. It seems that the
consequences of combining high selectivity with open access resonate
with publishers and authors as well, from whom complaints are rather scarce; and
even if I sometimes do not understand the reasons of those I receive, I respect them.
Media technology
Over the years I have arrived, from the ontological perspective, at two main
findings about media and technology.
For a long time I tended to treat technologies as objects, things, while now
it seems much more productive to see them as processes, techniques. Just as
the biologist does not speak of the deer as biology, technology is
the science of techniques, including cultural techniques, which span from reading,
writing and counting to painting, programming and publishing.
Media in the humanities are a compound of two long-unrelated histories. One
treats media as means of communication: signals sent from point A to
point B, lacking context and meaning. The other speaks of media as artistic
means of expression, such as painting, sculpture, poetry, theatre, music or
film. The term “media art” is emblematic of this amalgam, and the historical
awareness of these two threads sheds new light on it.
Media technology in art and the humanities continues to be the primary object of
research of Monoskop.
I have attempted to comment on the political, aesthetic and technical aspects of publishing.
Let me finish by saying that Monoskop is an initiative open to people and to the future,
and you are more than welcome to take part in it.

Dušan Barok
Written May 1-7, 2014, in Bergen and Prague. Translated by the author on May 10-13,
2014. This version generated June 10, 2014.


Barok
Shadow Libraries
2018


_A talk given at the [Shadow Libraries](http://www.sgt.gr/eng/SPG2096/)
symposium held at the National Museum of Contemporary Art (EMST) in
[Athens](/Athens "Athens"), 17 March 2018. Moderated by [Kenneth
Goldsmith](/Kenneth_Goldsmith "Kenneth Goldsmith") (UbuWeb) and bringing
together [Dusan Barok](/Dusan_Barok "Dusan Barok") (Monoskop), [Marcell
Mars](/Marcell_Mars "Marcell Mars") (Public Library), [Peter
Sunde](/Peter_Sunde "Peter Sunde") (The Pirate Bay), [Vicki
Bennett](/Vicki_Bennett "Vicki Bennett") (People Like Us), [Cornelia
Sollfrank](/Cornelia_Sollfrank "Cornelia Sollfrank") (Giving What You Don't
Have), and Prodromos Tsiavos, the event was part of the _[Shadow Libraries:
UbuWeb in Athens](http://www.sgt.gr/eng/SPG2018/) _programme organised by [Ilan
Manouach](/Ilan_Manouach "Ilan Manouach"), Kenneth Goldsmith and the Onassis
Foundation._

[![Shadow Libraries.jpg](/images/thumb/8/8e/Shadow_Libraries.jpg/500px-
Shadow_Libraries.jpg)](/File:Shadow_Libraries.jpg)

This is the first time that I was asked to talk about Monoskop as a _shadow
library_.

What are shadow libraries?
[Lawrence Liang](/Lawrence_Liang "Lawrence Liang") wrote a think piece for _e-
flux_ a couple of years ago,
in response to the closure of Library.nu, a digital library that had operated
from 2004, first as Ebooksclub, later as Gigapedia.
He wrote that:

[![Liang Lawrence 2012 Shadow
Libraries.jpg](/images/thumb/5/53/Liang_Lawrence_2012_Shadow_Libraries.jpg
/500px-
Liang_Lawrence_2012_Shadow_Libraries.jpg)](http://www.e-flux.com/journal/37/61228
/shadow-libraries/)

In the essay, he moves between identifying Library.nu as a digital Alexandria
and as its shadow.
In this account, even large libraries exist in the shadows cast by their
monumental predecessors.
There’s a lineage, there’s a tradition.

Almost everyone and every institution has a library, small or large.
They’re not necessarily Alexandrias, but they strive to stay relevant.
Take the University of Amsterdam where I now work.
University libraries are large, but they’re hardly _large enough_.
The publishing market is so huge that you simply can’t keep up with all the
niche little disciplines.
So either you have to wait days or weeks for a missing book to be ordered
somewhere.
Or you have some EBSCO ebooks.
And most of the time if you’re searching for a book title in the catalogue,
all you get are its reviews in various journals the library subscribes to.

So my colleagues keep asking me:
Dušan, where do I find this or that book?
You need to scan through dozens of texts, check one page in this book, the table
of contents of another, read what that paper is about.

[![Arts humanities and social sciences digital libraries
2018.jpg](/images/thumb/8/81/Arts_humanities_and_social_sciences_digital_libraries_2018.jpg
/500px-
Arts_humanities_and_social_sciences_digital_libraries_2018.jpg)](/Digital_libraries#Libraries
"Digital libraries#Libraries")

Well, just look _online_.

So what do digital libraries do?

[![Hand writing.jpg](/images/thumb/a/a2/Hand_writing.jpg/500px-
Hand_writing.jpg)](/File:Hand_writing.jpg)

You write a manuscript, have it published,

[![Scanning hand.jpg](/images/thumb/4/48/Scanning_hand.jpg/500px-
Scanning_hand.jpg)](/File:Scanning_hand.jpg)

and someone scans it later.

[![Hands typing.jpg](/images/thumb/8/8a/Hands_typing.jpg/500px-
Hands_typing.jpg)](/File:Hands_typing.jpg)

Or scrapes it from somewhere, since most books today are born digital and live
their digital lives.
...
Digital libraries need to be creative.
They don’t just preserve and circulate books.

[![Hirsal Josef Groegerova Bohumila eds Slovo pismo akce hlas
frontmatter.jpg](/images/thumb/9/98/Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas_frontmatter.jpg
/500px-
Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas_frontmatter.jpg)](https://monoskop.org/log/?p=10262)

They engage in extending print runs, making new editions, readily
reproducible, unlimited editions.

[![Hirsal Josef Groegerova Bohumila eds Slovo pismo akce hlas
Reichardt.jpg](/images/thumb/e/ef/Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas_Reichardt.jpg
/500px-
Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas_Reichardt.jpg)](https://monoskop.org/images/d/de/Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas.pdf#page=87)

This one comes with something extra. Isn’t this beautiful? You can read along
with someone else.
In this case we know these annotations come from the Slovak avant-garde visual
poet and composer [Milan Adamciak](/Milan_Adamciak "Milan Adamciak").

[![Milan Adamciak John Cage in Bratislava 1992
.jpg](/images/thumb/7/70/Milan_Adamciak_John_Cage_in_Bratislava_1992_.jpg
/500px-Milan_Adamciak_John_Cage_in_Bratislava_1992_.jpg)](/Milan_Adamciak
"Milan Adamciak")

...standing in the middle.
A couple of pages later...

[![Hirsal Josef Groegerova Bohumila eds Slovo pismo akce hlas
pp232-233.jpg](/images/thumb/4/4b/Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas_pp232-233.jpg
/500px-
Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas_pp232-233.jpg)](https://monoskop.org/images/d/de/Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas.pdf#page=117)

...you can clearly see how he found out about a book containing one million
random digits [see note 24 on the image]. The strangest book.

[![Million Random Digits with 100000 Normal Deviates
p400.jpg](/images/thumb/d/d5/Million_Random_Digits_with_100000_Normal_Deviates_p400.jpg
/500px-
Million_Random_Digits_with_100000_Normal_Deviates_p400.jpg)](https://monoskop.org/log/?p=5780)

He was still alive when we put it up on Monoskop, and could experience it.
...
Digital libraries may seem like virtual, grey places, nonplaces.
But these little chance encounters happen all the time there.
There are touches. There are traces. There are many hands involved, visible
hands.
They join writers’ hands and help create new, unlimited editions.

[![Second-Hand Libraries.jpg](/images/thumb/d/d4/Second-Hand_Libraries.jpg
/500px-Second-Hand_Libraries.jpg)](/File:Second-Hand_Libraries.jpg)

They may be off Google, but for many, especially the younger generation, these are
the places to go to learn, to share.
Rather than in the shadows, they are out in the open, in plain sight.

[![Hawking 2017 on free access to
research.jpg](/images/thumb/e/ea/Hawking_2017_on_free_access_to_research.jpg
/500px-
Hawking_2017_on_free_access_to_research.jpg)](http://www.cam.ac.uk/research/news
/step-inside-the-mind-of-the-young-stephen-hawking-as-his-phd-thesis-goes-
online-for-first-time)

This made the rounds last year.
As scholars, as authors, we have reasons to want our works freely accessible
to everyone.
We do it for feedback, for invitations to lecture, for citations.
Sounds great.
So when, after two, three, four, five long years, I have my manuscript ready,
where will I go?
Will I go to an established publisher or an open access press?
Will I send it to MIT Press or Open Humanities Press?
Traditional publishers have better distribution, and they often have a strong
brand.
It’s often about career moves and bios, plan A’s and plan B’s.
There are no easy answers, but one can always be a little inventive.
In the end, one should not feel guilty for publishing with MIT Press.
But at the same time, one should neither feel guilty for scanning and sharing
such a book with others.
...
You know, there’s fighting, there are court cases.
[Aaaaarg](/Aaaaarg "Aaaaarg"), a digital library run by our dear friend [Sean
Dockray](/Sean_Dockray "Sean Dockray"), is facing a Canadian publisher.
Open Library is now facing the Authors Guild for lending scanned books
deaccessioned from libraries.
They need our help, our support.

[![Cabinet 2012 Monoskop
takedown.jpg](/images/thumb/2/28/Cabinet_2012_Monoskop_takedown.jpg/500px-
Cabinet_2012_Monoskop_takedown.jpg)](https://monoskop.org/log/?p=373#comment-16498)

But collisions of interests can be productive.
This is what our beloved _Cabinet_ magazine did when they found their PDFs
online.
They converted all their articles into HTML and put them online.
The most beautiful takedown request we have ever received.

[![The Writings of Swartz Aaron
2015.jpg](/images/thumb/7/77/The_Writings_of_Swartz_Aaron_2015.jpg/500px-
The_Writings_of_Swartz_Aaron_2015.jpg)](https://monoskop.org/log/?p=16598)

So what is at stake? What are these digital books?
They are poor versions of print books.
They come with no binding, no paper, no weight.
They come as PDFs, EPUBs, JPEGs in online readers, they come as HTML.
By the way, HTML is great: you can search it, copy it, save it, it’s lightweight,
it’s supported by all browsers, footnotes work too, and you can adapt its layout
easily.
That’s completely fine for a researcher.
As a researcher, you just need source code:
you need plain text, page numbers, images, working footnotes, relevant data
and code.
_Data and code_ as well:
this is where online companions to print books come in,
you want to publish your research material,
your interviews, spreadsheets, software you made.
...
Here we distinguish between researchers and readers.
As _readers_ we will always build our beautiful libraries at home, and
elsewhere,
filled with books and... and external harddrives.
...
There may be _no contradiction_ between the existence of a print book in
stores and the existence of its free digital version.

[![Unconditional Basic
Access.jpg](/images/thumb/f/f3/Unconditional_Basic_Access.jpg/500px-
Unconditional_Basic_Access.jpg)](/File:Unconditional_Basic_Access.jpg)

So what we’ve been asking for is access, basic access. The access to culture
and knowledge for research, educational, noncommercial purposes. A low budget,
poor bandwidth access. Access to badly OCR’d ebooks with grainy images. Access
to culture and knowledge _light_.

Thank you.

Dusan Barok

_Written on 16-17 March 2018 in Athens and Amsterdam. Published online on 21
March 2018._


Bodo
A Short History of the Russian Digital Shadow Libraries
2014


Draft Manuscript, 11/4/2014, DO NOT CITE!

A short history of the Russian digital shadow libraries
Balazs Bodo, Institute for Information Law, University of Amsterdam

“What I see as a consequence of the free educational book distribution: in decades generations of people
everywhere in the World will grow with the access to the best explained scientific texts of all times.
[…]The quality and accessibility of education to poors will drastically grow too. Frankly, I'm seeing this as
the only way to naturally improve mankind: by breeding people with all the information given to them at
any time.” – Anonymous admin of Aleph, explaining the raison d’être of the site

Abstract
RuNet, the Russian segment of the internet, is now the home of the most comprehensive scientific pirate
libraries on the net. These sites offer free access to hundreds of thousands of books and millions of
journal articles. In this contribution we try to understand the factors that led to the development of
these sites, and the sociocultural and legal conditions that enable them to operate under hostile legal
and political conditions. Through the reconstruction of the micro-histories of peer-produced online text
collections that played a central role in the history of RuNet, we are able to link the formal and informal
support for these sites to the specific conditions that developed under Soviet and post-Soviet times.

(pirate) libraries on the net
The digitization and collection of texts was one of the very first activities enabled by computers. Project
Gutenberg, the first in the line of digital libraries, was established as early as 1971. By the early nineties, a
number of online electronic text archives had emerged, all hoping to finally realize the dream that humans
had chased ever since the first library: the collection of everything (Battles, 2004), the Memex
(Bush, 1945), the Mundaneum (Rieusset-Lemarié, 1997), the Library of Babel (Borges, 1998). It did not
take long to realize that the dream was still beyond reach: the information storage and retrieval
technology might have been ready, but copyright law, for the foreseeable future, was not. Copyright
protection and enforcement slowly became one of the most crucial issues around digital technologies.

And as that happened, the texts which had been archived without authorization were purged from the
budding digital collections. Those that survived complete deletion were moved into the dark, locked-down
sections of digital libraries that sometimes still lurk behind the law-abiding public façades. Hopes
that a universal digital library could be built were lost in just a few short years, as those who tried it (such as
Google or HathiTrust) got bogged down in endless court battles.
There are unauthorized text collections circulating on channels less susceptible to enforcement, such as
DVDs, torrents, or IRC channels. But the technical conditions of these distribution channels do not enable
the development of a library. Two of the most essential attributes of any proper library, the catalogue
and the community, are hard to provide on such channels. The catalog doesn’t just organize the
knowledge stored in the collection; it is not just a tool for searching and browsing. It is a critical
component in the organization of the community of “librarians” who preserve and nourish the
collection. The catalog is what distinguishes an unstructured heap of computer files from a well-maintained
library, but it is the same catalog which makes shadow libraries, unauthorized text
collections, an easy target of law enforcement. Those few digital online libraries that dare to provide
unauthorized access to texts in an organized manner, such as textz.org, a*.org, monoskop or Gigapedia/
library.nu, have all had their bad experiences with law enforcement and rights-holder dismay.
Of these pirate libraries, Gigapedia, later called Library.nu, was the largest at the turn of the 2010s. At
its peak, it was several orders of magnitude bigger than its peers, offering access to nearly a million
English-language documents. It was not just size that made Gigapedia unique. Unlike most sites, it
moved beyond its initial specialization in scientific texts to incorporate a wide range of academic
disciplines. Compared to its peers, it also had a highly developed central metadata database, which
contained bibliographic details on the collection and also, significantly, on gaps in the collection, which
underpinned a process of actively soliciting contributions from users. With ubiquitous
scanner/copiers, the production of book scans was as easy as copying them, and thus the collection grew
rapidly.
Gigapedia’s massive catalog made the site popular, which in turn made it a target. In early 2012, a group
of 17 publishers was granted an injunction against the site (by then called Library.nu) and against iFile.it,
the hosting site that stored most of Library.nu’s content. Unlike the record and movie companies,
which had collaborated on dozens of lawsuits over the past decade, the Library.nu injunction and lawsuit
were the first coordinated publisher action against a major file-sharing site, and the first to involve
major university publishers in particular. Under the injunction, the Library.nu administrators closed the
site. The collection disappeared and the community around it dispersed. (Liang, 2012)
Gigapedia’s collection was integrated into Aleph’s predominantly Russian language collection before the
shutdown, making Aleph the natural successor of Gigapedia/library.nu.

Libraries in the RuNet

The search for a successor soon zeroed in on a number of sites with strong hints of Russian origins. Sites like Aleph,
[sc], [fi], [os] are open, completely free to use, and each offers access to a catalog comparable to the late
Gigapedia’s.
The similarity of these seemingly distinct services is no coincidence. These sites constitute a tightly knit
network, in which Aleph occupies the central position. Aleph, as its name suggests, is the source library:
it aims to be the seed of all scientific digital libraries on the net. Its mission is simple and straightforward. It
collects free-floating scientific texts and other collections from the internet and consolidates them (both
content and metadata) into a single, open database. Though ordinary users can search the catalog and
retrieve the texts, its main focus is the distribution of the catalog and the collection to anyone who
wants to build services upon them. Aleph has regularly updated links that point to its own neatly packed
source code, its database dump, and the terabytes’ worth of collection. It is a knowledge infrastructure
that can be freely accessed, used and built upon by anyone. This radical openness enables a number of
other pirate libraries to offer Aleph’s catalogue along with books coming from other sources. By
mirroring Aleph they take over tasks that the administrators of Aleph are unprepared or unwilling to do.
By handling much of the actual download traffic they relieve Aleph of the unavoidable investment in
servers and bandwidth, which in turn puts less pressure on Aleph to engage in commercial activities to
finance its operation. While Aleph stays in the background, the network of mirrors competes for
attention, users and advertising revenue, as their designs, business models and technical sophistication are fine-tuned to the profiles of their intended target audiences.
This strategy of creating an open infrastructure serves Aleph well. It ensures the widespread distribution
of books while minimizing (legal) exposure. By relinquishing control, Aleph also ensures its own long-term survival, as it is copied again and again. In fact, openness is the core element in the philosophy of
Aleph, which was summed up by one of its administrators as follows:
“- collect valuable science/technology/math/medical/humanities academic literature. That is,
collect humanity's valuable knowledge in digital form. Avoid junky books. Ignore "bestsellers".
- build a community of people who share knowledge, improve quality of books, find good and
valuable books, and correct errors.
- share the files freely, spreading the knowledge altruistically, not trying to make money, not
charging money for knowledge. Here people paid money for many books that they considered
valuable and then shared here on [Aleph], for free. […]
This is the true spirit of the [Aleph] project.”

Reading, publishing, censorship and libraries in Soviet-Russia
“[T]he library of the Big Lubyanka was unique. In all probability it had been assembled out of confiscated
private libraries. The bibliophiles who had collected those books had already rendered up their souls to
God. But the main thing was that while State Security had been busy censoring and emasculating all the
libraries of the nation for decades, it forgot to dig in its own bosom. Here, in its very den, one could read
Zamyatin, Pilnyak, Panteleimon Romanov, and any volume at all of the complete works of Merezhkovsky.
(Some people wisecracked that they allowed us to read forbidden books because they already regarded
us as dead. But I myself think that the Lubyanka librarians hadn't the faintest concept of what they were
giving us—they were simply lazy and ignorant.)”
(Solzhenitsyn, 1974)
In order to properly understand the factors that shaped Russian pirate librarians’ attitudes, and those of their wider
environments, towards bottom-up, collaborative, copyright-infringing, open-source digital
librarianship, we need to go back nearly a century and take a close look at the specific social and political
conditions of the Soviet times that shaped the contemporary Russian intelligentsia’s attitudes towards
knowledge.

The communist ideal of a reading nation
Russian culture has always had a reverence for the printed word, and the Soviet state, with its Leninist
program of mass education, further stressed the idea of the educated, reading public. As Stelmach (1993)
put it:
Reading almost transplanted religion as a sacred activity: in the secularized socialist state, where the
churches were closed, the free press stifled and schools and universities politicized, literature became the
unique source of moral truth for the population. Writers were considered teachers and prophets.
The Soviet Union was a reading culture: in the last days of the USSR, a quarter of the adult population
were considered active readers, and almost everyone else was categorized as an occasional reader. Book
prices were low, alternative forms of entertainment were scarce, and people were poor, making reading
one of the most attractive leisure activities.
The communist approach to intellectual property protection reflected the idea of the reading
nation. The Soviet Union inherited a lax and isolationist copyright system from tsarist Russia. Neither
the tsarist Russian state nor the Soviet state adhered to international copyright treaties, nor did they
enter into bilateral ones. Tsarist Russia’s refusal to grant protection to foreign authors and
translations had a primarily economic rationale. The Soviet regime added a strong ideological claim:
granting exclusive ownership to authors was against the interests of the reading public and “the cultural
development of the masses,” and only served the private interests of authors and heirs.
“If copyright had an economic function, that was only as a right of remuneration for his contribution to
the extension of the socialist art heritage. If copyright had a social role, this was not to protect the author
from the economically stronger exploiter, but was one of the instruments to get the author involved in
the great communist educational project.” (Elst, 2005, p 658)
The Soviet copyright system, even in its post-revolutionary phase, maintained two persistent features
that served as important instruments of knowledge dissemination. First, the statutorily granted
“freedom of translation” meant that translation was treated as an exception to copyright, which did not
require rights holder authorization. This measure dismantled a significant barrier to access in a
multicultural and multilingual empire. By the same token, the denial of protection to foreign authors and
rights holders eased the imports of foreign texts (after, of course the appropriate censorship review).
Due to these instruments:
“[s]oon after its founding, the Soviet Union became as well the world's leading literary pirate, not only
publishing in translation the creations of its own citizens but also publishing large numbers of copies of
the works of Western authors both in translation and in the original language.” (Newcity, 1980, p 6.)
Looking simply at the aggregate numbers of published books, the USSR had an impressive publishing
industry, on a scale appropriate to a reading nation. Between 1946 and 1970, more than 1 billion copies of
over 26 thousand different works, all by foreign authors, were published (Newcity, 1978). In 1976 alone,
more than 1.7 billion copies of 84,304 books were printed. (Friedberg, Watanabe, & Nakamoto, 1984, fn
4.)
Of course these impressive numbers reflected neither a healthy public sphere, nor a well-functioning
print ecology. The book-based public sphere was both heavily censored and plagued by the peculiar
economic conditions of the Soviet, and later the post-Soviet era.

Censorship
The totalitarian Soviet state had many instruments to control the circulation of literary and scientific
works.1 Some texts never entered official circulation in the first place: “A particularly harsh
prepublication censorship [affected] foreign literature, primarily in the humanities and socioeconomic
disciplines. Books on politics, international relations, sociology, philosophy, cybernetics, semiotics,
linguistics, and so on were hardly ever published.” (Stelmakh, 2001, p 145.)
Many ‘problematic’ texts were only put into severely limited circulation. Books were released in small
print runs; as in-house publications, or they were only circulated among the trustworthy few. As the
resolution of the Central Committee of the Communist Party of June 4, 1959, stated: “Writings by
bourgeois authors in the fields of philosophy, history, economics, diplomacy, and law […] are to be
published in limited quantities after the excision from them of passages of no scholarly or practical
interest. They are to be supplied with extensive introductions and detailed annotations." (quoted in
Friedberg et al., 1984)

1 We share Helen Freshwater’s (2003) approach that censorship is a more complex phenomenon than the state simply
blocking the circulation of certain texts. Censorship manifested itself in more than one way, and its dominant
modus operandi, institutions, extent, focus, reach and effectiveness showed extreme variation over time. This short
chapter, however, cannot go into the intricate details of the incredibly rich history of censorship in the Soviet Union.
Instead, through much simplification, we try to demonstrate that censorship did not only affect literary works, but
extended deep into scholarly publishing, including the natural science disciplines.
Truncation and mutilation of texts were also frequent. Literary works and texts from the humanities and
social sciences were obvious subjects of censorship, but the natural sciences and technical fields did not
escape either:
“In our film studios we received an American technical journal, something like Cinema, Radio and
Television. I saw it on the chief engineer's desk and noticed that it had been reprinted in Moscow.
Everything undesirable, including advertisements, had been removed, and only those technical articles
with which the engineer could be trusted were retained. Everything else, even whole pages, was missing.
This was done by a photo copying process, but the finished product appeared to be printed.” (Dewhirst &
Farrell, 1973, p. 127)
Mass cultural genres were also subject to censorship and control. Women's fiction, melodrama, comics,
detective stories, and science fiction were completely missing or heavily underrepresented in the mass
market. Instead, “a small group of officially approved authors […] were published in massive editions
every year, [and] blocked readers' access to other literature. […]Soviet literature did not fit the formula
of mass culture and was simply bad literature, but it was issued in huge print-runs.” (Stelmakh, 2001, p.
150)
Libraries were also important instruments of censorship. When not destroyed altogether, censored
works ended up in the spetskhrans, limited access special collections established in libraries to contain
censored works. Besides obvious candidates such as anti-Soviet works and western ‘bourgeois’
publications, many scientific works from the fields of biology, nuclear physics, psychology, sociology,
cybernetics, and genetics ended up in these closed collections (Ryzhak, 2005). Access to the spetskhrans
was limited to those with special permits issued by their employers. “Only university educated readers
were enrolled and only those holding positions of at least junior scientific workers were allowed to read
the publications kept by the spetskhran” (Ryzhak, 2005). In the last years of the USSR, the spetskhran of
the Russian State Library—the largest of them with more than 1 million items in the collection—had 43
seats for its roughly 4500 authorized readers. Yearly circulation was around 200,000 items, a figure that
included “the history and literature of other countries, international relations, science of law, technical
sciences and others.” (Ryzhak, 2005)
Librarians thus played a central role in the censorship machinery. They did more than guard the contents
of limited-access collections and purge the freely accessible stocks according to the latest Party
directives. As the intermediaries between the readers and the closed stacks, their task was to carefully
guide readers’ interests:
“In the 1970s, among the staff members of the service department of the Lenin State Library of the
U.S.S.R., there were specially appointed persons-"politcontrollers"-who, apart from their regular
professional functions, had to perform additional control over the literature lent from the general stocks
(not from the restricted access collections), thus exercising censorship over the percolation of avant-garde
aesthetics to the reader, the aesthetics that introduced new ways of thinking and a new outlook on life
and social behavior.” (Stelmakh, 2001)
Librarians also used library cards and lending histories to collect and report information on readers and
suspicious reading habits.
Soviet economic dysfunction also severely limited access to printed works. Acute and chronic shortages
of even censor-approved texts were common, both on the market and in libraries. When the USSR
joined the first international copyright treaty in its history in 1973 (the UNESCO-backed Universal
Copyright Convention), which granted protection to foreign authors and denied “freedom of
translation,” the access problems only got worse. The Soviet concern that granting protection to foreign
authors would result in significant royalty payments to western rightsholders proved valid. By 1976, the
yearly USSR trade deficit in publishing reached a million rubles (~5.5 million current USD) (Levin, 1983, p.
157). This imbalance not only affected the number of publications that could be imported into the cash-poor
country, but also raised the price of translated works to double that of Russian-authored books
(Levin, 1983, p. 158).

The literary and scientific underground in Soviet times
Various practices and informal institutions evolved to address the problems of access. Book black
markets flourished: “In the 1970s and 1980s the black market was an active part of society. Buying books
directly from other people was how 35 percent of Soviet adults acquired books for their own homes, and
68 percent of families living in major cities bought books only on the black market.” (Stelmakh, 2001, p
146). Book copying and hoarding was practiced to supplement the shortages:
“People hoarded books: complete works of Pushkin, Tolstoy or Chekhov. You could not buy such things.
So you had the idea that it is very important to hoard books. High-quality literary fiction, high quality
science textbooks and monographs, even biographies of famous people (writers, scientists, composers,
etc.) were difficult to buy. You could not, as far as I remember, just go to a bookstore and buy complete
works of Chekhov. It was published once and sold out and that's it. Dostoyevsky used to be prohibited in
the USSR, so that was even rarer. Lots of writers were prohibited, like Nabokov. Eventually Dostoyevsky
was printed in the USSR, but in very small numbers.
And also there were scientists who wanted scientific books and also could not get them. Mathematics
books, physics - only very few books were published every year, you can't compare this with the market in
the U.S. Russian translations of classical monographs in mathematics were difficult to find.
So, in the USSR, everyone who had a good education shared the idea that hoarding books is very, very
important, and did just that. If someone had free access to a Xerox machine, they were Xeroxing
everything in sight. A friend of mine had entire room full of Xeroxed books.”2
2 Anonymous source #1

From the 1960s onwards, the ever-growing Samizdat networks tried to counterbalance the effects of
censorship and provide access to both censored classics and information on the current state of Soviet
society. Reaching a readership of around 200,000, these networks operated in a networked, bottom-up
manner. Each node in the chain of distribution copied the texts it received, and distributed the copies.
The nodes also carried information backwards, towards the authors of the samizdat publications.
In the immediate post-Soviet political turmoil and economic calamity, access to print culture did not get
any easier. Censorship officially ended, but so too did much of the funding for the state-funded
publishing sector. Mass unemployment, falling wages, and the resulting loss of discretionary income did
not facilitate the shift toward market-based publishing models. The funding of libraries also dwindled,
limiting new acquisitions (Elst, 2005, p. 299-300). Economic constraints took the place of political ones.
But in the absence of political repression, self-organizing efforts to address these constraints acquired
greater scope of action. Slowly, the informal sphere began to deliver alternative modes of access to
otherwise hard-to-get literary and scientific works.
Russian pirate libraries emerged from these enmeshed contexts: communist ideologies of the reading
nation and mass education; the censorship of texts; the abused library system; economic hardships and
dysfunctional markets, and, most importantly, the informal practices that ensured the survival of
scholarship and literary traditions under hostile political and economic conditions. The prominent place
of Russian pirate libraries in the larger informal media economy—and of Russian piracy of music, film,
and other copyrighted work more generally—cannot be understood outside this history.

The emergence of DIY digital libraries in RuNet
The copying of censored and uncensored works (by hand, by typewriters, by photocopying or by
computers), the hoarding of copied texts, the buying and selling of books on the black market, and the
informal, peer-to-peer distribution of samizdat material were integral parts of the everyday experience
of much of educated Soviet and post-Soviet readers. The building and maintenance of individual
collections and the participation in the informal networks of exchange offered a sense of political,
economic and cultural agency—especially as the public institutions that supported the core professions
of the intelligentsia fell into sustained economic crisis.
Digital technologies were embraced by these practices as soon as they appeared:
"From late 1970s, when first computers became used in the USSR and printers became available,
people started to print forbidden books, or just books that were difficult to find, not necessarily
forbidden. I have seen myself a print-out on a mainframe computer of a science fiction novel,
printed in all caps! Samizdat was printed on typewriters, xeroxed, printed abroad and xeroxed, or
printed on computers. Only paper circulated, files could not circulate until people started to have
PCs at home. As late as 1992 most people did not have a PC at home. So the only reason to type
a big text into a computer was to print it on paper many times.” (Anonymous source #1)
People who worked in academic and research institutions were well positioned in this process: they had
access to computers, and many had access to the materials locked up in the spetskhrans. Many also had
the time and professional motivations to collect and share otherwise inaccessible texts. The core of
current digital collections was created in this late-Soviet/early post-Soviet period by such professionals.
Their home academic and scientific institutions continued to play an important role in the development
of digital text collections well into the era of home computing and the internet.
Digitized texts first circulated as printouts, and later on optical and magnetic storage media. With the emergence of digital networking, these texts quickly found their way to the early Internet as well. The first platform for digital text sharing was the Russian Fidonet, a network of BBS systems similar to Usenet, which enabled the mass distribution of plain text files. Boards such as the Holy Spirit BBS’s “SU.SF & F.FANDOM” group, whose main focus was Soviet-Russian science fiction and fantasy literature, connected fans around emerging collections of shared texts. As an anonymous interviewee described his experience in the early 1990s:
“Fidonet collected a large number of plaintext files in literature / fiction, mostly in Russian, of course.
Fidonet was almost all typed in by hand. […] Maybe several thousand of the most important books,
novels that "everyone must read" and such stuff. People typed in poetry, smaller prose pieces. I have
myself read a sci-fi novel printed on a mainframe, which was obviously typed in. This novel was by
Strugatski brothers. It was not prohibited or dissident, but just impossible to buy in the stores. These
were culturally important, cult novels, so people typed them in. […] At this point it became clear that
there was a lot of value in having a plaintext file with some novels, and the most popular novels were first
digitized in this way.”
The next stage of text digitization started around 1994. By that time, growing numbers of people had computers, scanners, and OCR software. Russian internet and PC penetration, while extremely low overall in the 1990s (0.1% of the population had internet access in 1994, growing to 8.3% by 2003), began to make inroads in educational and scientific institutions and among Moscow and St. Petersburg elites, who were often the critical players in these networks. As access to technology increased, a much wider array of people began to digitize their favorite texts, and these collections began to circulate, first via CD-ROMs, later via the internet.
One such collection belonged to Maxim Moshkov, who published his library under the name lib.ru in
1994. Moshkov was a graduate of the Moscow State University Department of Mechanics and
Mathematics, which played a large role in the digitization of scientific works. After graduation, he started
to work for the Scientific Research Institute of System Development, a computer science institute
associated with the Russian Academy of Sciences. He describes the early days of his collection as follows:
“I began to collect electronic texts in 1990, on a desktop computer. When I got on the Internet in 1994, I
found lots of sites with texts. It was like a dream come true: there they were, all the desired books. But
these collections were in a dreadful state! Incompatible formats, different encodings, missing content. I
had to spend hours scouring the different sites and directories to find something.
As a result, I decided to convert all the different file-formats into a single one, index the titles of the books
and put them in thematic directories. I organized the files on my work computer. I was the main user of
my collection. I perfected its structure, made a simple, fast and convenient search interface and developed many other useful functions and put it all on the Internet. Soon, people got into the habit of
visiting the site. […]
For about 2 years I have scoured the internet: I sought out and pulled texts from the network, which were
lying there freely accessible. Slowly the library grew, and the audience increased with it. People started
to send books to me, because they were easier to read in my collection. And the time came when I
stopped surfing the internet for books: regular readers are now sending me the books. Day after day I get
about 100 emails, and 10-30 of them contain books. So many books were sent in, that I did not have time
to process them. Authors, translators and publishers also started to send texts. They all needed the
library.”(Мошков, 1999)

In the second half of the 1990s, the Russian Internet—RuNet—was awash in book digitization projects.
With the advent of scanners, OCR technology, and the Internet, the work of digitization eased
considerably. Texts migrated from print to digital and sometimes back to print again. They circulated
through different collections, which, in turn, merged, fell apart, and re-formed. Digital libraries with the
mission to collect and consolidate these free-floating texts sprung up by the dozens.
Such digital librarianship was the antithesis of official Soviet book culture: it was free, bottom-up,
democratic, and uncensored. It also offered a partial remedy to problems created by the post-Soviet
collapse of the economy: the impoverishment of libraries, readers, and publishers. In this context, book
digitization and collecting also offered a sense of political, economic and cultural agency, with parallels
to the copying and distribution of texts in Soviet times. The capacity to scale up these practices coincided
with the moment when anti-totalitarian social sentiments were the strongest, and economic needs the
direst.
The unprecedented bloom of digital librarianship resulted from the superimposition of multiple waves of distinct transformations: technological, political, economic and social. “Maksim Moshkov’s Library” was ground zero for this convergence, and soon became a central point of exchange for the community engaged in text digitization and collection:
[At the outset] there were just a couple of people who started scanning books in large quantities. Literally
hundreds of books. Others started proofreading, etc. There was a huge hole in the market for books.
Science fiction, adventure, crime fiction, all of this was hugely in demand by the public. So lib.ru was to a
large part the response, and was filled by those books that people most desired and most valued.
For years, lib.ru integrated as much as it could of the different digital libraries flourishing in the RuNet. By
doing so, it preserved the collections of the many short-lived libraries.
This process of collection slowed in the early 2000s. By that time, lib.ru had all of the classics, resulting in a decrease in the flow of new digitized material. By the same token, the Russian book market was finally starting to offer works aimed at the popular mainstream, and was flooded by cheap romances, astrology, crime fiction, and other genres. Such texts started to appear in, and would soon flood, lib.ru. Many contributors, including Moshkov, were concerned that such ephemera would dilute the original
library. And so they began to disaggregate the collection. Self-published literature, “user generated content,” and fan fiction were separated into the aptly named samizdat.lib.ru, which housed original texts submitted by readers. Popular fiction (“low-brow literature”) was copied from the relevant subsections of lib.ru and split off. Sites specializing in those genres quickly formed their own ecosystem. [L], the first of its kind, now charges a monthly fee to provide access to its collection. The [f] community split off from [L] the same way that [L] split off from lib.ru, to provide free and unrestricted access to a fundamentally similar collection. Finally, some in the community felt the need to focus their efforts on a separate collection of scientific works. This became the Kolhoz collection.

The genesis of a million-book scientific library
A Kolhoz (Russian: колхоз) was one of the types of collective farm that emerged in the early Soviet
period. In the early days, it was a self-governing, community-owned collaborative enterprise, with many
of the features of a commons. For the Russian digital librarians, these historical resonances were
intentional.
The kolhoz group was initially a community that scanned and processed scientific materials: books and,
occasionally, articles. The ethos was free sharing. Academic institutes in Russia were in dire need of
scientific texts; they xeroxed and scanned whatever they could. Usually, the files were then stored on the
institute's ftp site and could be downloaded freely. There were at least three major research institutes
that did this, back in the early 2000s, unconnected to each other in any way, located in various faraway parts
of Russia. Most of these scans were appropriated by the kolhoz group and processed into DJVU.
The sources of files for kolhoz were, initially, several collections from academic institutes (downloaded
whenever the ftp servers were open for anonymous access; in one case, from one of the institutes of the
Chinese academy of sciences, but mostly from Russian academic institutes). At that time (around 2002),
there were also several commercialized collections of scanned books on sale in Russia (mostly, these were
college-level textbooks on math and physics); these files were also all copied to kolhoz and processed into
DJVU. The focus was on collecting the most important science textbooks and monographs of all time, in
all fields of natural science.
There was never any commercial support. The kolhoz group never had a web site with a database, like
most projects today. They had an ftp server with files, and the access to ftp was given by PM in a forum.
This ftp server was privately supported by one of the members (who was an academic researcher, like
most kolhoz members). The files were distributed directly by burning files on writable DVDs and giving the

4

DJVU is a file format that revolutionized online book distribution the way mp3 revolutionized the online music
distribution. For books that contain graphs, images and mathematical formulae scanning is the only digitization
option. However, the large number of resulting image files is difficult to handle. The DJVU file format allows for the
images of scanned book pages to be stored in the smallest possible file size, which makes it the perfect medium for
the distribution of scanned e-books.

11

Draft Manuscript, 11/4/2014, DO NOT CITE!
DVDs away. Later, the ftp access was closed to the public, and only a temporary file-swapping ftp server
remained. Today the kolhoz DVD releases are mostly spread via torrents.” 5
Kolhoz amassed around fifty thousand documents; the mexmat collection of the Moscow State University Department of Mechanics and Mathematics (Moshkov’s alma mater) was around the same size; the “world of books” collection (mirknig) had around thirty thousand files; and around a dozen other smaller archives held approximately ten thousand files each. The Kolhoz group dominated the science-minded ebook community in Russia well into the late 2000s.
Kolhoz, however, suffered from the same problems as the early Fidonet-based text collections: distributed on DVDs, via ftp servers and on torrents, it was hard to search, lacked a proper catalog, and was prone to fragmentation. Parallel solutions soon emerged: around 2006-7, an existing book site called Gigapedia copied the English-language books from Kolhoz, set up a catalog, and soon became the most influential pirate library on the English-speaking internet.
Similar cataloguing efforts soon emerged elsewhere. In 2007, someone on rutracker.ru, a Russian BBS focusing on file sharing, posted torrent links to 91 DVDs containing science and technology titles aggregated from various other Russian sources, including Kolhoz. This massive collection had no categorization or particular order. But it soon attracted an archivist: a user of the forum began the laborious task of organizing the texts into a usable, searchable format, first filtering duplicates and organizing the existing metadata in an Excel spreadsheet, and later moving to a more open, web-based database operating under the name Aleph.
Aleph inherited more than just books from Kolhoz and Moshkov’s lib.ru. It inherited their elitism with regard to canonical texts, and their understanding of librarianship as a community effort. Like the earlier sites, Aleph complements its collections with a stream of user submissions. As with the other sites, the number of submissions grew rapidly while the site’s visibility, reputation and trustworthiness were being established, and later fell as more and more of what was perceived as the canonical literature was uploaded:
“The number of mankind’s useful books is about what we already have. So growth is defined by newly
scanned or issued books. Also, the quality of the collection is represented not by the number of books but
by the amount of knowledge it contains. [ALEPH] does not need to grow more and I am not the only one
among us who thinks so. […]
We have absolutely no idea who sends books in. It is practically impossible to know, because there are a
million books. We gather huge collections which eliminate any traces of the original uploaders.
My expectation is that new arrivals will dry up. Not completely, as I described above, some books will
always be scanned or rescanned (it nowadays happens quite surprisingly often) and the overall process of
digitization cannot and should not be stopped. It is also hard to say when the slowdown will occur: I
expected it about a year ago, but then library.nu got shut down and things changed dramatically in many
respects. Now we are "in charge" (we had been the largest anyways, just now everyone thinks we are in
5

Anonymous source #1

12

Draft Manuscript, 11/4/2014, DO NOT CITE!
charge) and there has been a temporary rise in the book inflow. At the moment, relatively small or
previously unseen collections are being integrated into [ALEPH]. Perhaps in a year it will saturate.
However, intuition is not a good guide. There are dynamic processes responsible for eBook availability. If
publishers massively digitize old books, they'll obviously be harvested and that will change the whole
picture.” (Anonymous source #1)
Aleph’s ambitions to create a universal library are limited, at least in terms of scope: it does not want to have everything, nor just anything. What it wants is what the community thinks is relevant, as measured by the act of actively digitizing and sharing books. But it has created a very interesting strategy to establish a library that is universal in terms of its reach. The administrators of Aleph understand that Gigapedia’s downfall was due to its visibility, and they wish to avoid that trap:
“Well, our policy, which I control as strictly as I can, is to avoid fame. Gigapedia's policy was to gain as
much fame as possible. Books should be available to you, if you need them. But let the rest of the world
stay in its equilibrium. We are taking great care to hide ourselves and it pays off.” (Anonymous source #2)
They have solved the dilemma of providing access without jeopardizing their mission by open sourcing the collection, thus allowing others to create widely publicized services that interface with the public. They let others run the risk of getting famous.

Mirrors and communities
Aleph serves as a source archive for around half a dozen freely accessible pirate libraries on the net. The catalog database is downloadable, the content is downloadable, even the server code is downloadable. No passwords are required to download, and there are no gatekeepers. There is no obstacle to setting up a similar library with a wider catalog, an improved user interface, better services, a different audience or, in fact, a different business model.
This arrangement creates a two-layered community. The core group of Aleph admins maintains the current service, while a loose and ever-changing network of ‘mirror sites’ builds on the Aleph infrastructure.
“The unspoken agreement is that the mirrors support our ideas. Otherwise we simply do not interact with
them. If the mirrors do support this, they appear in the discussions, on the Web etc. in a positive context.
This is again about building a reputation: if they are reliable, we help with what we can, otherwise they
should prove the World they are good on their own. We do not request anything from them. They are free
to do anything they like. But if they do what we do not agree with, it'll be taken into account in future
relations. If you think for a while, there is no other democratic way of regulation: everyone expresses his
own views and if they conform with ours, we support them. If the ideology does not match, it breaks
down.” (Anonymous source #1)

The core Aleph team claims exclusive control over only two critical resources: the BBS that is the home of the community, and the book-uploading interface. That claim is, however, not entirely accurate. For the time being, the academic-minded e-book community indeed gathers on the BBS managed by Aleph, and though there is little incentive to move on, technically nothing stands in the way of alternatives springing up. As for the centralization of the book collection: many of the mirrors have their own upload pages where one can contribute to a mirror’s collection, and it is not clear how, or whether, books that land at one of the mirrors find their way back to the central database. Aleph also offers a desktop library management tool, which enables dedicated librarians to see the latest Aleph database on their desktop and to integrate their local collections with the central database. Nevertheless, it seems that nothing really stands in the way of the fragmentation of the collection, apart from the willingness of uploaders to contribute directly to Aleph rather than to one of its mirrors (or other sites).
Funding for Aleph comes from the administrators’ personal resources as well as occasional donations
when there is a need to buy or rent equipment or services:
“[W]e've been asking and getting support for this purpose for years. […] All our mirrors are supported
primarily from private pockets and inefficient donation schemes: they bring nothing unless a whole
campaign is arranged. I asked the community for donations 3 or 4 times, for a specific purpose only and
with all the budget spoken for. And after getting the requested amount of money we shut down the
donations.” (BBS comment posted on Jan 15, 2013)
Mirrors, however, do not need to be non-commercial to enjoy the support of the core Aleph community; they just have to provide free access. Ad-supported business models that do not charge for individual access are still acceptable to the community, but there has been serious fallout with another site, which used the Aleph stock to seed its own library but decided to follow a “collaborative piracy” business approach.
“To make it utmost clear: we collaborate with anyone who shares the ideology of free knowledge
distribution. No conditions. [But] we can't suddenly start supporting projects that earn money. […]
Moreover, we've been tricked by commercial projects in the past when they used the support of our
community for their own benefit.” (BBS comment posted on Jan 15, 2013)
The site in question, [e], is based on a simple idea: if a user cannot find a book in its collection, the administrators offer to purchase a digital or print copy, rip it, and sell it to the user for a fraction of the original price, typically under $1. Payments are made in Amazon gift cards, which make the purchases easy but the de-anonymization of users difficult. [e] recoups its investment, in principle, through resale. While clearly illegal, the logic is not that different from that of private subscription libraries, which purchase a resource and distribute the costs and benefits among club members.

Although from the rights holders’ perspective there is little difference between the two approaches,
many participants in the free access community draw a sharp line between the two, viewing the sales
model as a violation of community norms.
“[e] is a scam. They were banned in our forum. Yes, most of the books in [e] came from [ALEPH], because
[ALEPH] is open, but we have nothing to do with them... If you wish to buy a book, do it from legal
sources. Otherwise it must be free.[…]
What [e] wants:
- make money on ebook downloads, no matter what kind of ebooks.
- get books from all the easy sources - spend as little effort as possible on books - maximize profit.
- no need to build a community, no need to improve quality, no need to correct any errors - just put all
files in a big pile - maximize profit.
- files are kept in secret, never given away, there is no listing of files, there is no information about what
books are really there or what is being done.
There are very few similarities in common between [e] and [ALEPH], and these similarities are too
superficial to serve as a common ground for communication. […]
They run an illegal business, making a profit.” (BBS comments posted on Jul 02, 2013, and Aug 25, 2013)
Aleph administrators describe a set of values that differentiates the possible site models. They prioritize the curatorial mission and the provision of long-term free access to the collection, with all the costs such a position implies: open sourcing the collection, ignoring takedown requests, keeping a low profile, refraining from commercial activities and, as a result, operating on a reduced budget. [e] prioritizes the on-demand expansion of its catalogue, but that implies a commercial operation, a larger budget and the associated high legal risk. Sites carrying Aleph’s catalogue prioritize public visibility and carry ads to cover costs, but respond to takedown requests to avoid as much trouble as they can. From the perspective of expanding access, these are not easy or straightforward tradeoffs. In Aleph’s case, the strong commitment to the mission of providing free access comes with significant sacrifices, the most important of which is relinquishing control over its most valuable asset: its collection of 1.2 million scientific books. But the administrators believe that these costs are justified by the promise that, this way, the fate of free access is not tied to the fate of Aleph.
The fact that piratical file-sharing communities are willing to make substantial sacrifices (in terms of self-restraint) to ensure their long-term survival has been documented in a number of different cases (Bodó, 2013). Aleph is unique, however, in its radical open-source approach: no other piratical community has so entirely given up control over itself. This approach is rooted in the way it regards the legal status of its subject matter: scholarly publications. While norms of openness in the field of scientific knowledge production were first formed in the Enlightenment period, Aleph’s copynorms are as much shaped by the specificities of the post-Soviet era as by the age-old realization that in science we can see further if we are allowed to stand on “the shoulders of giants”.

Copyright and copynorms around Russian pirate libraries
The struggle to re-establish rightsholders’ control over digitized copyrighted works has defined the copyright policy arena since Napster emerged in 1999. Russia brought a unique history to this conflict. In Russia, digital libraries and the communities around them emerged in a period of double transformation: the post-Soviet copyright system had to adopt global norms, while the global norms struggled to adapt to the emergence of digital copying.
The first post-Soviet decade produced new copyright laws that conformed with some of the international
norms advocated by Western rightsholders, but little legal clarity or enforceability (Sezneva & Karaganis,
2011). Under such conditions, informally negotiated copynorms stepped in to fill the void left by non-existent, unreasonable, or unenforceable laws. The pirate libraries of the RuNet are as much regulated by such norms as by the actual laws themselves.
During most of the 1990s, user-driven digitization and archiving was legal or, to be more exact, wasn’t illegal. The first Russian copyright law, enacted in 1993, did not cover “internet rights” until a 2006 amendment (Budylin & Osipova, 2007; Elst, 2005, p. 425). As a result, many argued (including the Moscow prosecutor’s office) that the distribution of copyrighted works via the internet was not copyright infringement. Authors and publishers who saw their works appear in digital form and circulate via CD-ROMs and the internet had to rely on informal norms, still in development, to establish control over their texts vis-à-vis enthusiastic collectors and for-profit entrepreneurs.
The HARRYFAN CD was one of the early examples of a digital text collection in circulation before internet access was widespread. The CD contained around ten thousand texts, mostly Russian science fiction. It was compiled in 1997 by Igor Zagumenov, a book enthusiast, from the texts that circulated on the Holy Spirit BBS. The CD was a non-profit project, planned to be printed and sold in around 1,000 copies. Zagumenov did get in touch with some of the authors and publishers and got permission to release some of their texts, but the CD also included many other works that were uploaded to the BBS without authorization. The CD included the following copyright notice, alongside the names and contacts of Zagumenov and those who granted permission:
Texts on this CD are distributed in electronic format with the consent of the copyright holders or their literary agents. The disk is aimed at authors, editors, translators and fans of SF & F as a compact reference and information library. Copying or reproduction of this disc is not allowed. For the commercial use of the texts, please refer directly to the copyright owners at the following addresses.
The authors whose texts and unpublished manuscripts appeared in the collection without authorization
started to complain to those whose contact details were in the copyright notice. Some complained about the material damage the collection may have caused them, but most complaints focused on
moral rights: unauthorized publication of a manuscript, the mutilation of published works, lack of
attribution, or the removal of original copyright and contact notices. Some authors had no problem
appearing in non-commercially distributed collections but objected to the fact that the CDs were sold
(and later overproduced in spite of Zagumenov’s intentions).
The debate, which took place in the book-related fora of Fidonet, raised some important points. Participants again drew a significant distinction between the free access provided first by Fidonet (and later by lib.ru, which integrated some parts of the collection) and what was perceived as Zagumenov’s for-profit enterprise, despite the fact that the price of the CD only covered printing costs. The debate also drew authors’ and publishers’ attention to the digital book communities’ actions, which many saw as beneficial as long as they respected the wishes of the authors. Some authors did not want to appear online at all; others wanted only their published works to be circulated.
Lib.ru, of course, integrated parts of the HARRYFAN CD into its collection. Moshkov’s policy towards authors’ rights was to ask for permission if he could contact the author or publisher. He also honored takedown requests sent to him. In 1999 he wrote on copyright issues as follows:
“The author’s interests must be protected on the Internet: the opportunity to find the original copy, the
right of attribution, protection from distorting the work. Anyone who wants to protect his/her rights,
should be ready to address these problems, ranging from the ability to identify the offending party, to the
possibility of proving infringement.[…]
Meanwhile, it has become a pressing question how to protect authors-netizens' rights regarding their
work published on the Internet. It is known that there are a number of periodicals that reprint material
from the Internet without the permission of the author, without payment of a fee, without prior
arrangement. Such offenders need to be shamed via public outreach. The "Wall of shame" website is one
of the positive examples of effective instruments established by the networked public to protect their
rights. It manages to do the job without bringing legal action - polite warnings, an indication of potential
trouble and shaming of the infringer.
Do we need any laws for digital libraries? Probably we do, but until then we have to do without. Yes, of
course, it would be nice to have their status established as “cultural objects” and have the same rights as
a "real library" to collect information, but that might be in the distant future. It would also be nice to
have the e-library "legal deposits" of publications in electronic form, but when even Leninka [the Russian
State Library] cannot always afford that, what we really need are enthusiastic networkers. […]
The policy of the library is to take everything they give, otherwise they cease to send books. It is also to
listen to the authors and strictly comply with their requirements. And it is to grow and prosper. […] I
simply want the books to find their readers because I am afraid to live in a world where no one reads
books. This is already the case in America, and it is speeding up with us. I don’t just want to derail this
process, I would like to turn it around.”

Moshkov played a crucial role in consolidating copynorms in the Russian digital publishing domain. His reputation and place in the Russian literary domain are marked by a number of prizes (ROTOR, the International Union of Internet Professionals in Russia, voted lib.ru “literary site of the year” in 1999, 2001 and 2003, “electronic library of the year” in 2004, 2006, 2008, 2009 and 2010, “programmer of the year” in 1999, and “man of the year” in 2004 and 2005) and by the library’s continued existence. This place was secured by a number of closely intertwined factors:

- Framing and anchoring the digitization and distribution practice in the library tradition.
- The non-profit status of the enterprise.
- Respecting the wishes of the rights holders even if he was not legally obliged to do so.
- Maintaining active communication with the different stakeholders in the community, including authors and readers.
- Responding to a clear gap in affordable, legal access.
- Conservatism with regard to the book, anchored in the argument that digital texts are not substitutes for printed matter.

Many other digital libraries tried to follow Moshkov’s formula, but the times were changing. Internet and
computer access left the sub-cultural niches and became mainstream; commercialization became a
viable option and thus an issue for both the community and rightsholders; and the legal environment
was about to change.

Formalization of the IP regime in the 2000s
As soon as the 1993 copyright law passed, the US resumed pressure on the Russian government for
further reform. Throughout the period—and indeed to the present day—US Trade Representative
Special 301 reports cited inadequate protections and lack of enforcement of copyright. Russia’s plans to
join the WTO, over which the US had effective veto power, also became leverage to bring the Russian
copyright regime into compliance with US norms.
Book piracy was regularly mentioned in Special 301 reports in the 2000s, but the details, alleged losses, and analysis changed little from year to year. The estimated $40 million in annual losses throughout this period were dwarfed by claims from the studios and software vendors, and clearly were not among the top priorities of the USTR. For most of the decade, the electronic availability of bestsellers and academic textbooks was seen in the context of print substitution, rather than as damage to the non-existent electronic market. And though there is little direct indication of this, the Special 301 reports name sites which (unlike lib.ru) were serving audiences beyond the RuNet, indicating that the focus of enforcement was not to protect US interests in the Russian market, but to prevent sites based in Russia from catering to demand in the high-value Western European and US markets.
A 1998 amendment to the 1993 copyright law extended the legal framework to encompass digital rights, though in a fashion that continued to produce controversy. After 1998, digital services had to license content from collecting societies, but those societies needed no permission from rightsholders provided they paid royalties. The result was a proliferation of collective management organizations competing to license material to digital services (Sezneva & Karaganis, 2011). Services operating under this arrangement were compliant with Russian law, but were regarded as illegal by Western rights holders, who claimed that the Russian collecting societies did not represent them.
The best-known dispute from this time concerned the legality of Allofmp3.com, a site that sold music from western record labels at prices far below those of iTunes and other officially licensed vendors. AllofMP3.com claimed that it was licensed by ROMS, the Russian Society for Multimedia and Internet (Российское общество по мультимедиа и цифровым сетям (НП РОМС)), but despite that it became the focal point of US (and, behind them, major-label) pressure, leading to an unsuccessful criminal prosecution of the site owner and the eventual closure of the site in 2007. Although lib.ru had some direct agreements with authors, it also licensed much of its collection from ROMS, and thus was in the same legal situation as AllofMP3.com.
Lib.ru avoided the attention of foreign rightsholders and Russian state pressure, and even benefited from state support during the period, receiving a $30,000 grant from the Federal Agency for Press and Mass Communications to digitize the most important works of the 1930s. But the chaotic licensing environment that governed its legal status also came back to haunt it. In 2005, a lawsuit was brought against Moshkov by KM Online (KMO), an online vendor that sold digital texts for a small fee. Although the KMO collection, like every other collection, had been assembled from a wide range of sources on the Internet, KMO claimed to pay a 20% royalty on its income to authors. In 2004 KMO requested that lib.ru take down works by several authors with whom (or with whose heirs) KMO claimed to have exclusive contracts to distribute their texts online. KMO’s claims turned out to be only partly true. KMO had arranged contracts with a number of the heirs to classics of the Soviet period, who hoped to benefit from an obscure provision in the 1993 Russian copyright law that granted copyrights to the heirs of politically persecuted and later rehabilitated Soviet-era authors. Moshkov, in turn, claimed that he had written or oral agreements with many of the same authors and heirs, in addition to his agreement with ROMS.
The lawsuit was a true public event. It generated thousands of news items both online and in the mainstream press. Authors, members of the publishing industry, legal professionals, librarians and internet professionals publicly supported Moshkov, while KMO was seen as a rogue operator that would lie to make easy money on freely available digital resources.
Eventually, the court ruled that KMO indeed had one exclusive contract with Eduard Gevorgyan, and that
the publication of his texts by Moshkov infringed the moral (but not the economic) rights of the author.
Moshkov was ordered to pay 3000 Rubles (approximately $100) in compensation.
The lawsuit was a sign of a slow but significant transformation in the Russian print ecosystem. The idea of a viable market for electronic books began to find a foothold. Electronic versions of texts began to be regarded as potential substitutes for the printed versions, not as advertisements for them or supplements to them. More and more commercial services emerged which regarded the well-entrenched free digital libraries as competitors. As Russia continued to bring its laws into closer conformance with WTO requirements ahead of its admission in 2012, western rightsholders gained enough power to demand enforcement against RuNet pirate sites. The kinds of selective enforcement for political or business purposes, which had marked the Russian IP regime throughout the decade (Sezneva & Karaganis, 2011), slowly gave way to more uniform enforcement.

Closure of the Legal Regime
The legal, economic, and cultural conditions under which Aleph and its mirrors operate today are very
different from those of two decades earlier. The major legal loopholes are now closed, though Russian
authorities have shown little inclination to pursue Aleph so far:
I can't say whether it's the Russian copyright enforcement or the Western one that's most dangerous for
Aleph; I'd say that Russian enforcement is still likely to tolerate most of the things that Western
publishers won't allow. For example, lib.ru and [L] and other unofficial Russian e-libraries are tolerated
even though far from compliant with the law. These kinds of e-libraries could not survive at all in western
countries. (Anonymous source #1)
Western publishers have been slow to join record, film, and software companies in their aggressive online enforcement campaigns, and academic publishers have been slower still. But such efforts are increasing, as the market for digital texts grows and as publishers benefit from the enforcement precedents set or won by the more aggressive rightsholder groups. The domain name of [os], one of the sites mirroring the Aleph collection, was seized, apparently due to legal action taken by a US rightsholder, and the site also started to respond to DMCA notices, removing links to books reported to be infringing. Aleph responds to this with a number of tactical moves:
We want books to be available, but only for those who need them. We do not want [ALEPH] to be visible. If one knows where to get books, they are here for him or her. In this way we stay relatively invisible (in search engines, e.g.), but all the relevant communities in the academy know about us. Actually, if you question people at universities, the percentage of them [who know about us] is quite low. But what's important is that the news about [ALEPH] is spread mostly by face-to-face communication, where most of the unnecessary people do not know about it. (Unnecessary are those who aim for profit.) (Anonymous source #1)
The policy of invisibility is radically different from Moshkov’s policy of maximum visibility. Aleph hopes that it can recede into the shadows, where it will be protected by the omertà of academics who subscribe to the sharing ethos:
In Russian academia, [Aleph] is tacitly or actively supported. There are people that do not want to be
included, but it is hard to say who they are in most cases. Since there are DMCA complaints, of course
there are people who do not want stuff to appear here. But in our experience the complainers are only
from the non-scientific fellows. […] I haven't seen a single complaint from the authors who should
constitute our major problem: professors etc. No, they don't complain. Who complains are either of such
type I have mentioned or the ever-hungry publishers. (Anonymous source #1)

The protection the academic community has to offer may not be enough to fend off the publishers’ enforcement actions. One option Aleph has is to recede further into the darknets and hide behind the veil of privacy technologies: the first mirror on I2P, an anonymizing network designed to hide the whereabouts and identity of web services, is already operational. But
[i]f people are physically served court invitations, they will have to close the site. The idea is, however,
that the entire collection is copied throughout the world many times over, the database is open, the code
for the site is open, so other people can continue. (Anonymous source #1)

On methodology
We tried to reconstruct the story behind Aleph by conducting interviews and browsing through the BBS
of the community. Access to the site and community members was given under a strict condition of
anonymity. We thus removed any reference to the names and URLs of the services in question.
At one point we shared an early draft of this paper with interested members and asked for their feedback. Beyond providing access and feedback, community members helped the writing of this article by translating some Russian originals and by reviewing the translations made by the author. In return, we made a financial contribution to the community in the value of 100 USD.
We reproduced forum entries without any edits to the language; interviews conducted via IM services, however, were edited to reflect basic writing standards.

References

Abelson, H., Diamond, P. A., Grosso, A., & Pfeiffer, D. W. (2013). Report to the President: MIT and the Prosecution of Aaron Swartz. Cambridge, MA. Retrieved from http://swartz-report.mit.edu/docs/report-to-the-president.pdf
Alekseeva, L., Pearce, C., & Glad, J. (1985). Soviet dissent: Contemporary movements for national,
religious, and human rights. Wesleyan University Press.
Bodó, B. (2013). Set the fox to watch the geese: voluntary IP regimes in piratical file-sharing
communities. In M. Fredriksson & J. Arvanitakis (Eds.), Piracy: Leakages from Modernity.
Sacramento, CA: Litwin Books.
Borges, J. L. (1998). The library of Babel. In Collected fictions. New York: Penguin.
Bowers, S. L. (2006). Privacy and Library Records. The Journal of Academic Librarianship, 32(4), 377–383.
doi:http://dx.doi.org/10.1016/j.acalib.2006.03.005
Budylin, S., & Osipova, Y. (2007). Is AllOfMP3 Legal? Non-Contractual Licensing Under Russian Copyright
Law. Journal Of High Technology Law, 7(1).
Bush, V. (1945). As We May Think. Atlantic Monthly.
Dewhirst, M., & Farrell, R. (Eds.). (1973). The Soviet Censorship. Metuchen, NJ: The Scarecrow Press.
Elst, M. (2005). Copyright, freedom of speech, and cultural policy in the Russian Federation.
Leiden/Boston: Martinus Nijhoff.
Ermolaev, H. (1997). Censorship in Soviet Literature: 1917-1991. Rowman & Littlefield.
Foerstel, H. N. (1991). Surveillance in the stacks: The FBI’s library awareness program. New York:
Greenwood Press.
Friedberg, M., Watanabe, M., & Nakamoto, N. (1984). The Soviet Book Market: Supply and Demand.
Acta Slavica Iaponica, 2, 177–192. Retrieved from
http://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/7941/1/KJ00000034083.pdf
Interview with Dusan Barok. (2013). Neural, 10–11.
Interview with Marcell Mars. (2013). Neural, 6–8.
Komaromi, A. (2004). The Material Existence of Soviet Samizdat. Slavic Review, 63(3), 597–618.
doi:10.2307/1520346

Lessig, L. (2013). Aaron’s Laws - Law and Justice in a Digital Age. Cambridge, MA: Harvard Law School. Retrieved from http://www.youtube.com/watch?v=9HAw1i4gOU4
Levin, M. B. (1983). Soviet International Copyright: Dream or Nightmare. Journal of the Copyright Society
of the U.S.A., 31, 127.
Liang, L. (2012). Shadow Libraries. e-flux. Retrieved from http://www.e-flux.com/journal/shadow-libraries/
Newcity, M. A. (1978). Copyright law in the Soviet Union. Praeger.
Newcity, M. A. (1980). The Universal Copyright Convention as an Instrument of Repression: The Soviet Experiment. In Copyright L. Symp. (Vol. 24, p. 1). HeinOnline.
Patry, W. F. (2009). Moral panics and the copyright wars. New York: Oxford University Press.
Post, R. (1998). Censorship and Silencing: Practices of Cultural Regulation. Getty Research Institute for
the History of Art and the Humanities.
Rieusset-Lemarié, I. (1997). P. Otlet’s mundaneum and the international perspective in the history of
documentation and information science. Journal of the American Society for Information Science,
48(4), 301–309.
Ryzhak, N. (2005). Censorship in the USSR and the Russian State Library. IFLA/FAIFE Satellite meeting:
Documenting censorship – libraries linking past and present, and preparing for the future.
Sezneva, O., & Karaganis, J. (2011). Chapter 4: Russia. In J. Karaganis (Ed.), Media Piracy in Emerging
Economies. New York: Social Science Research Council.
Skilling, H. G. (1989). Samizdat and an Independent Society in Central and Eastern Europe. Palgrave Macmillan.
Solzhenitsyn, A. I. (1974). The Gulag Archipelago 1918-1956: An Experiment in Literary Investigation,
Parts I-II. Harper & Row.
Stelmach, V. D. (1993). Reading in Russia: findings of the sociology of reading and librarianship section of
the Russian state library. The International Information & Library Review, 25(4), 273–279.
Stelmakh, V. D. (2001). Reading in the Context of Censorship in the Soviet Union. Libraries & Culture,
36(1), 143–151. doi:10.2307/25548897
Suber, P. (2013). Open Access. Cambridge, MA: The MIT Press.
UHF. (2005). Где-где - на борде! Хакер, 86–90.

Гроер, И. (1926). Авторское право. In Большая Советская Энциклопедия. Retrieved from
http://ru.gse1.wikia.com/wiki/Авторское_право


Bodo
Libraries in the Post-Scarcity Era
2015


Libraries in the Post-Scarcity Era
Balazs Bodo

Abstract
In the digital era where, thanks to the ubiquity of electronic copies, the book is no longer a scarce resource, libraries find themselves in an extremely competitive environment. Several different actors are now in a position to provide low-cost access to knowledge. Among these competitors are shadow libraries: piratical text collections which have now amassed electronic copies of millions of copyrighted works and provide access to them, usually free of charge, to anyone around the globe. While such shadow libraries are far from being universal, they are able to offer certain services better, to more people and under more favorable terms than most public or research libraries. This contribution offers insights into the development and the inner workings of one of the biggest scientific shadow libraries on the internet, in order to understand what kind of library people create for themselves if they have the means and don’t have to abide by the legal, bureaucratic and economic constraints that libraries usually face. I argue that one of the many possible futures of the library is hidden in the shadows, and that those who think about the future of libraries can learn a lot from the book pirates of the 21st century about how users and readers expect texts in electronic form to be stored, organized and circulated.
“The library is society’s last non-commercial meeting place which the majority of the population uses.”
(Committee on the Public Libraries in the Knowledge Society, 2010)
“With books ready to be shared, meticulously cataloged, everyone is a librarian. When everyone is
librarian, library is everywhere.” – Marcell Mars, www.memoryoftheworld.org
I have spent the last few months in various libraries visiting - a library. I spent countless hours in the modest or grandiose buildings of the Harvard Libraries, the Boston and Cambridge Public Library systems, various branches of the Openbare Bibliotheek in Amsterdam, and the libraries of the University of Amsterdam, with a computer in front of me, on which another library was running: a library which is perfectly virtual, which has no monumental buildings, no multi-million euro budget, no miles of stacks, no hundreds of staff, but which has, despite lacking all that apparently makes a library, millions of literary works and millions of scientific books, all digitized, all available at the click of a mouse for everyone on earth, without any charge, library or university membership. As I was sitting in these
physical spaces where the past seemed to define the present, I was wondering where I should look to find the library of the future: down at my screen or up around me.
The library on my screen was Aleph, one of the biggest of the countless piratical text collections on the internet. It has more than a million scientific works and another million literary works to offer, all free to download, without any charge or fee, for anyone on the net. I’ve spent months among its virtual stacks, combing through the catalogue, talking to the librarians who maintain the collection, and watching the library patrons as they used the collection. I kept going back to Aleph both as a user and as a researcher. As a user, Aleph offered me books that the local libraries around me didn’t, in formats that were more convenient than print. As a researcher, I was interested in the origins of Aleph, its modus operandi, its future, and I was curious where the journey on which it has taken book-readers, authors, publishers and libraries would end.
In this short essay I will introduce some of the findings of a two-year research project conducted on Aleph. In the project I looked at several things. I reconstructed the pirate library’s genesis in order to understand the forces that called it to life and shaped its development. I looked at its catalogue to understand what it has to offer and how that piratical supply of books is related to the legal supply of books through libraries and online distributors. I also acquired data on its usage, and so was able to reconstruct some aspects of piratical demand. After a short introduction, in the first part of this essay I will outline some of the main findings, and in the second part I will situate them in the wider context of the future of libraries.

Book pirates and shadow librarians
Book piracy has a fascinating history, tightly woven into the history of the printing press (Judge, 1934), into the history of censorship (Wittmann, 2004), into the history of copyright (Bently, Davis, & Ginsburg, 2010; Bodó, 2011a) and into the history of European civilization (Johns, 2010). Book piracy, in the 21st as in the mid-17th century, is an activity of deep cultural significance, because ultimately it is a story about how knowledge circulates beyond and often against the structures of political and economic power (Bodó, 2011b), and thus it is a story about the changes this unofficial circulation of knowledge brings.
There are many different types of book pirates. Some just aim for easy money; others pursue highly ideological goals; but they are invariably powerful harbingers of change. The emergence of black markets, whether they be of culture, of drugs or of arms, is always a symptom, a warning sign of a friction between
supply and demand. Increased activity in the grey and black zones of legality marks the emergence of a
demand which legal suppliers are unwilling or unable to serve (Bodó, 2011a). That friction, more often
than not, leads to change. Earlier waves of book piracy foretold fundamental economic, political, societal
or technological shifts (Bodó, 2011b): changes in how the book publishing trade was organized (Judge,
1934; Pollard, 1916, 1920); the emergence of the new, bourgeois reading class (Patterson, 1968; Solly,
1885); the decline of pre-publication censorship (Rose, 1993); the advent of the Reformation and of the
Enlightenment (Darnton, 1982, 2003), or the rapid modernization of more than one nation (Khan &
Sokoloff, 2001; Khan, 2004; Yu, 2000).
The latest wave of piracy has coincided with the digital revolution, which has itself profoundly upset the economics of cultural production and distribution (Landes & Posner, 2003). However, technology is not the primary cause of the emergence of cultural black markets like Aleph. The proliferation of computers and the internet has merely revealed a more fundamental issue, which has to do with the uneven distribution of access to knowledge around the globe.
Sometimes book pirates do more than just forecast and react to changes that are independent of them. Under certain conditions, they themselves can be powerful agents of change (Bodó, 2011b). Their agency rests on their ability to challenge the status quo and resist cooptation or subjugation. In that respect, digital pirates seem to be quite resilient (Giblin, 2011; Patry, 2009). They have the technological upper hand, and so far they have been able to outsmart every copyright enforcement effort (Bodó, forthcoming). As long as it is not possible to completely eradicate file-sharing technologies, and as long as there is a substantial difference between what is legally available and what is in demand, cultural black markets will be here to compete with and outcompete the established and recognized cultural intermediaries. Under this constant existential threat, business models and institutions are forced to adapt, evolve or die.
After the music and audiovisual industries, the book industry now has to address the issue of piracy. Piratical book distribution services are in direct competition with the bookstore on the corner and the used book stall on the sidewalk; they compete with the Amazons of the world and, like it or not, they compete with libraries. There is, however, a significant difference between the book and the music industries. The reluctance of music rights holders to listen to the demands of their customers caused little damage beyond the markets of recorded music. Music rights holders controlled their own fates, and those who wanted to experiment with alternative forms of distribution had the chance to do so. But while the rapid proliferation of book black markets may signal that the book industry suffers from problems similar to those the music industry faced a decade ago, the actions of book publishers and the policies they pursue have an impact beyond the market of books and directly affect the domain of libraries.

The fate of libraries is tied to the fate of book markets in more than one way. One connection is structural:
libraries emerged to remedy the scarcity in books. This is true both for the pre-print era as well as in the
Gutenberg galaxy. In the era of widespread literacy and highly developed book markets, libraries offer
access to books under terms publishers and booksellers cannot or would not. Libraries, to a large extent,
are defined so as to complement the structure of the book trade. The other connection is legal. The core
activities of the library (namely lending, copying) are governed by the same copyright laws that govern
authors and publishers. Libraries are one of the users in the copyright system, and their existence depends
on the limitations of and exceptions to the exclusive rights of the rights holders. The space that has been
carved out of copyright to enable the existence of libraries has been intensely contested in the era of
postmodern copyright (Samuelson, 2002) and digital technologies. This heavy legal and structural
interdependence with the market means that libraries have only a limited control over their own fate in the
digital domain.
Book pirates compete with some of the core services of libraries. And as is usually the case with
innovation that has no economic or legal constraints, pirate libraries offer, at least for the moment,
significantly better services than most libraries. Pirate libraries offer far more electronic books, with far fewer restrictions and constraints, to far more people, far cheaper than anyone else in the library
domain. Libraries are thus directly affected by pirate libraries, and because of their structural
interdependence with book markets, they also have to adjust to how the commercial intermediaries react
to book piracy. Under such conditions libraries cannot simply count on their legacy to ensure their survival.
Book piracy must be taken seriously, not just as a threat, but also as an opportunity to learn how shadow
libraries operate and interact with their users. Pirate libraries are the products of readers (and sometimes
authors), academics and laypeople, all sharing a deep passion for the book, operating in a zone where
there is little to no obstacle to the development of the “ideal” library. As such, pirate libraries can teach
important lessons on what is expected of a library, how book consumption habits evolve, and how
knowledge flows around the globe.

Pirate libraries in the digital age
The collection of texts in digital formats was one of the first activities that computers enabled: the text file is the native medium of the computer; it is small, and thus easy to store and copy. It is also very easy to create, and as so many projects have since proved, there are more than enough volunteers who are willing to type whole books into the machine. No wonder that electronic libraries and digital text repositories were among the first “mainstream” applications of computers. Combing through large stacks of matrix-printer printouts of sci-fi classics downloaded from gopher servers is a shared experience of anyone who had access to computers and the internet before the advent of the World Wide Web.
Computers thus added fresh momentum to the efforts of realizing the age-old dream of the universal
library (Battles, 2004). Digital technologies offered a breakthrough in many of the issues that previously
posed serious obstacles to text collection: storage, search, preservation and access have all become cheaper and easier than ever before. On the other hand, a number of key issues remained unresolved: digitization was a slow and cumbersome process, the screen proved to be too inconvenient, and the printer too costly an interface between the text file and the reader. Ultimately, however, it wasn't these issues that put a brake on the proliferation of digital libraries. Rather, it was the realization that there are legal limits to the digitization, storage and distribution of copyrighted works on digital networks. That realization
soon rendered many text collections in the emerging digital library scene inaccessible.
Legal considerations did not destroy this chaotic, emergent digital librarianship and the collections that ad-hoc, accidental and professional librarians put together. The text collections were far too valuable to simply delete from the servers. Instead, most of these collections retreated from public view, back into the access-controlled shadows of darknets. Yesterday's gophers and anonymous ftp servers turned into closed, membership-only ftp servers, local shared libraries residing on the intranets of various academic and business institutions, and private archives stored on local hard drives.
The early digital libraries turned into book piracy sites and into the kernels of today’s shadow libraries.
Libraries and other major actors who decided to start large-scale digitization programs soon found out that if they wanted to avoid costly lawsuits, they had to limit their activities to works in the public domain. The public domain is riddled with mind-bogglingly complex and unresolved legal issues, but it is still significantly less complicated to deal with than copyrighted and orphan works. Legally more innovative (or, as some would say, adventurous) companies, such as Google and Microsoft, which thought they had sufficient resources to sort out the legal issues, soon had to abandon their programs or put them on hold until the legal issues were settled.
There was, however, a large group of disenfranchised readers, library patrons, authors and users who decided to ignore the legal problems and set out to build the best library that could possibly be built using digital technologies. Despite the increased awareness of rights holders of the issue of digital book piracy, more and more communities around text collections started to defy the legal constraints and to operate and use more or less public piratical shadow libraries.

Aleph[1]
Aleph[2] is a meta-library, and currently one of the biggest online piratical text collections on the internet. The project started around 2008 on a Russian bulletin board devoted to piracy, as an effort to integrate various free-floating text collections that circulated online, on optical media, on various public and private ftp servers and on hard drives. Its aim was to consolidate these separate text collections, many of which were created in various Russian academic institutions, into a single, unified catalogue, standardize the technical aspects, add and correct missing or incorrect metadata, and offer the resulting catalogue,
computer code and the collection of files as an open infrastructure.

From Russia with love
It is no accident that Aleph was born in Russia. In post-Soviet Russia the unique constellation
of several different factors created the necessary conditions for the digital librarianship movement that
ultimately led to the development of Aleph. A rich literary legacy, the Soviet heritage, the pace with
which various copying technologies penetrated the market, the shortcomings of the legal environment and
the informal norms that stood in for the non-existent digital copyrights all contributed to the emergence of
the biggest piratical library in the history of mankind.
Russia cherishes a rich literary tradition, which suffered and endured extreme economic hardships and
political censorship during the Soviet period (Ermolaev, 1997; Friedberg, Watanabe, & Nakamoto, 1984;
Stelmakh, 2001). The political transformation in the early 1990s liberated authors, publishers, librarians and readers from much of the political oppression, but it did not solve the economic issues that stood in the way of a healthy literary market. Disposable incomes were low, state subsidies were limited, and the dire economic situation created uncertainty in the book market. The previous decades, however, had taught authors and readers how to overcome political and economic obstacles to access to books. During Soviet times authors, editors and readers operated clandestine samizdat distribution networks, while informal book black markets, operating in semi-private spheres, made uncensored but hard-to-come-by books accessible (Stelmakh, 2001). This survivalist attitude and the skills that came with it came in handy in the post-Soviet turmoil, and were directly transferable to the then emerging digital technologies.

[1] I have conducted extensive research on the origins of Aleph, on its catalogue and on its users. The detailed findings, at the time of writing this contribution, are being prepared for publication. The following section is a brief summary of those findings and is based upon two forthcoming book chapters on Aleph in a report, edited by Joe Karaganis, on the role of shadow libraries in the higher education systems of multiple countries.
[2] Aleph is a pseudonym chosen to protect the identity of the shadow library in question.

Russia is not the only country with a significant informal media economy of books, but in most other
places it was the photocopy machine that emerged to serve such book grey/black markets. In pre-1990
Russia and in other Eastern European countries access to this technology was limited, and when
photocopiers finally became available, computers were close behind them in terms of accessibility. The
result of the parallel introduction of the photocopier and the computer was that the photocopy technology
did not have time to lock in the informal market of texts. In many countries where the photocopy machine
preceded the computer by decades, copy shops still capture the bulk of the informal production and
distribution of textbooks and other learning materials. In the Soviet bloc, PCs instantly offered a less costly
and more adaptive technology to copy and distribute texts.
Russian academic and research institutions were the first to have access to computers. They also had to
somehow deal with the frustrating lack of access to up-to-date and affordable Western works to be used in
education and research (Abramitzky & Sin, 2014). This may explain why the first batch of shadow
libraries started in a number of academic/research institutions such as the Department of Mechanics and
Mathematics (MexMat) at Moscow State University. The first digital librarians in Russia were
mathematicians, computer scientists and physicists, working in those institutions.
As PCs and internet access slowly penetrated Russian society, an extremely lively digital librarianship
movement emerged, mostly fuelled by enthusiastic readers, book fans and often authors, who spared no
effort to make their favorite books available on FIDOnet, a popular BBS system in Russia. One of the
central figures in these tumultuous years, when typed-in books appeared online by the thousands, was
Maxim Moshkov, a computer scientist, alumnus of the MexMat, and an avid collector of literary works.
His digital library, lib.ru, was at first mostly a private collection of literary texts, but soon evolved into the number one text repository, which everyone used to deposit the latest digital copy of a newly digitized book (Мошков, 1999). Eventually the library grew so big that it had to be broken up. Today it only hosts the Russian literary classics. User-generated texts, fan fiction and amateur production were spun off into the aptly named samizdat.lib.ru collection; lowbrow popular fiction, astrology and cheap romance found their way into separate collections; and so did the collection of academic/scientific books, which started an independent life under the name of Kolkhoz. Kolkhoz, which borrowed its name from the commons-based agricultural cooperative of the early Soviet era, was both a collection of scientific texts and a community of amateur librarians who curated, managed and expanded the collection.
Moshkov and his library introduced several important norms into the bottom-up, decentralized, often
anarchic digital library movement that swept through the Russian internet in the late 1990s and early 2000s.
First, lib.ru provided the technological blueprint for any future digital library. But more importantly,
Moshkov’s way of handling the texts, his way of responding to the claims, requests, questions, complaints
of authors and publishers paved the way for the development of copynorms (Schultz, 2007) that continue to define the Russian digital library scene to this day. Moshkov was instrumental in creating an enabling environment for digital librarianship that respected the claims of authors, at a time when the formal copyright framework and the enforcement environment were both unable and unwilling to protect works of authorship (Elst, 2005; Sezneva, 2012).

Guerilla Open Access
Around the late 2000s, when Aleph started to merge the Kolkhoz collection with other free-floating text collections, two other notable events took place. It was in 2008 that Aaron Swartz penned
his Guerilla Open Access Manifesto (Swartz, 2008), in which he called for the liberation and sharing of
scientific knowledge. Swartz forcefully argued that scientific knowledge, the production of which is
mostly funded by the public and by the voluntary labor of academics, cannot be locked up behind
corporate paywalls set up by publishers. He framed the unauthorized copying and transfer of scientific
works from closed-access text repositories to public archives as a moral act, and by doing so, he created an ideological framework that was more radical and promised to be more effective than either the Creative Commons (Lessig, 2004) or the open access (Suber, 2013) movements, which tried to address access-to-knowledge issues in a more copyright-friendly manner. During interviews, the administrators of
Aleph used the very same arguments to justify the raison d'être of their piratical library. While it seems
that Aleph is the practical realization of Swartz’s ideas, it is hard to tell which served as an inspiration for
the other.
It was also around the same time that another piratical library, gigapedia/library.nu, started its operation, focusing mostly on making English-language scientific works freely available (Liang, 2012). Until its legal troubles and subsequent shutdown in 2012, gigapedia/library.nu was the biggest English-language piratical scientific library on the internet, amassing several hundred thousand books, ranging from high-quality, print-ready proofs to low-resolution scans possibly prepared by a student or a lecturer. During 2012 the mostly Russian-language, natural-sciences-focused Aleph absorbed the English-language, social-sciences-rich gigapedia/library.nu, and with the latter's shutdown Aleph became the center of the scientific shadow library ecosystem and community.

Aleph by numbers

By adding pre-existing text collections to its catalogue, Aleph was able to grow at an astonishing rate. Aleph has added, on average, 17,500 books to its collection each month since 2009, and as a result, by April 2014 it had more than 1.15 million documents. Nearly two thirds of the collection is in English, one fifth of the documents are in Russian, while German works amount to the third largest group with 8.5% of the collection. The other major European languages, like French or Spanish, have fewer than 15,000 works each in the collection.
More than 50,000 publishers have works in the library, but most of the collection was published by mainstream Western academic publishers. Springer published more than 12% of the works in the collection, followed by Cambridge University Press, Wiley, Routledge and Oxford University Press, each with more than 9,000 works in the collection.
Most of the collection is relatively recent, with more than 70% of it published in 1990 or after. Despite the recentness of the collection, the electronic availability of its titles is limited. While around 80% of the books with an ISBN number registered in the catalogue[3] were available in print, either as a new copy or a second-hand one, only about one third of the titles were available in e-book formats. The mean price of the titles still in print was 62 USD, according to data gathered from Amazon.com.
The number of works accessed through Aleph is as impressive as its catalogue. In the three months between March and June 2012, on average 24,000 documents were downloaded every day from just one of its half a dozen mirrors.[4] Extrapolating from a single mirror to all of them, the number of documents downloaded daily from Aleph is probably in the 50,000 to 100,000 range. The library users come from more than 150 different countries. The
biggest users in terms of volume were the Russian Federation, Indonesia, USA, India, Iran, Egypt, China,
Germany and the UK. Meanwhile, many of the highest per-capita users are Central and Eastern European
countries.

[3] Market availability data is only available for the 40% of books in the Aleph catalogue that have an ISBN number on file. The titles without a valid ISBN number tend to be older, Russian-language titles, in general with low expected print and e-book availability.
[4] Download data is based on the logs provided by one of the shadow library services, which offers the books in Aleph's catalogue, as well as other works, free and without any restraints or limitations.

What Aleph is and what it is not
Aleph is an example of the library in the post-scarcity age. It is founded on the idea that books should no longer be a scarce resource. Aleph set out to remove both sources of scarcity: the natural scarcity of physical copies is overcome through distributed digitization, while the artificial scarcity created by copyright protection is overcome through infringement. The liberation from both constraints is necessary to create a truly scarcity-free environment and to release the potential of the library in the post-scarcity age.
Aleph is also an ongoing demonstration of the fact that under the condition of non-scarcity, the library can
be a decentralized, distributed, commons-based institution created and maintained through peer
production (Benkler, 2006). The message of Aleph is clear: users, left to their own devices, can produce a library by themselves, for themselves. In fact, users are the library. And when everyone has the means to digitize, collect, catalogue and share his or her own library, then the library suddenly is everywhere. Small individual and institutional collections are aggregated into Aleph, which, in turn, is constantly fragmented into smaller, local, individual collections as users download works from it. The library is breathing (Battles, 2004) books in and out, but for the first time this circulation of books is not a zero-sum game but a cumulative one: with every cycle the collection grows.
On the other hand, Aleph may have lots of books on offer, but it is clear that it is neither universal in its scope, nor does it fulfill all the critical functions of a library. Most importantly, Aleph is disembedded
from the local contexts and communities that usually define the focus of the library. While it relies on the
availability of local digital collections for its growth, it has no means to play an active role in its own
development. The guardians of Aleph can prevent books from entering the collection, but they cannot
pay, ask or force anyone to provide a title if it is missing. Aleph is reliant on the weak copy-protection
technologies of official e-text repositories and the goodwill of individual document submitters when it
comes to the expansion of the collection. This means that the Aleph collection is both fragmented and
biased, and it lacks the necessary safeguards to ensure that it stays either current or relevant.
Aleph, with all its strengths and weaknesses, carries important lessons for the discussions on the future of libraries. In the next section I will try to situate these lessons in the wider context of the library in the post-scarcity age.

The future of the library
There is hardly a week without a blog post, a conference, a workshop or an academic paper discussing the
future of libraries. While existing libraries are buzzing with activity, librarians are well aware that they
need to redefine themselves and their institutions, as the book collections around which libraries were organized slowly go the way the catalogue has gone: into the digital realm. It would be impossible to give a faithful summary of all the discussions on the future of libraries in such a short contribution. There are, however, a few threads to which the story of Aleph may contribute.

Competition
It is very rare to find the words “libraries” and “competition” in the same sentence. No wonder: libraries long enjoyed a near-perfect monopoly in their field of activity. Though there may have been many different local initiatives that provided free access to books, as a specialized institution for doing so, the library was unmatched and unchallenged. This monopoly position has been lost in a remarkably short period of time due to the internet and the rapid innovations in the legal e-book distribution markets. Textbooks can be rented, e-books can be lent, and a number of new startups and major sellers offer flat-rate access to huge collections. Expertise that helps navigate the domains of knowledge is abundant, and there are multiple authoritative sources of information and meta-information online. The search box of the library catalogue is only one, and not even the most usable, of all the different search boxes one can type a query into.[5]
Meanwhile there are plenty of physical spaces which offer good coffee, an AC plug, comfortable chairs and low levels of noise in which to meet, read and study, from local cafes via hacker- and makerspaces to co-working offices. Many library competitors have access to resources (human, financial, technological and legal) far beyond the possibilities of even the richest libraries. In addition, publishers control the copyrights in digital copies, which, absent well-fortified statutory limitations and exceptions, prevents libraries from keeping up with the changes in user habits and with the competing commercial services.
Libraries definitely feel the pressure. “Libraries’ offers of materials […] compete with many other offers
that aim to attract the attention of the public. […] It is no longer enough just to make a good collection
available to the public” (Committee on the Public Libraries in the Knowledge Society, 2010). As a
response, libraries have developed different strategies to cope with this challenge. The common thread in
the various strategy documents is that they try to redefine the library as a node in the vast network of
institutions that provide knowledge, enable learning, facilitate cooperation and initiate dialogues. Some of
the strategic plans redefine the library space as an “independent medium to be developed” (Committee on
the Public Libraries in the Knowledge Society, 2010), and advise libraries to transform themselves into
culture and community centers which establish partnerships with citizens, communities and with other
public and private institutions. Some librarians propose even more radical ways of keeping the library relevant by, for example, advocating more opening hours without staff and hosting more user-governed activities.

[5] ArXiv, SSRN, RePEc, PubMed Central, Google Scholar, Google Books, Amazon, Mendeley, Citavi, ResearchGate, Goodreads, LibraryThing, Wikipedia, Yahoo Answers, Khan Academy, and specialized Twitter and other social media accounts are just a few of the available discovery services.
In the research library sphere, the Commission on the Future of the Library, a task force set up by the University of California, Berkeley, defined the values the university research library will add in the digital age as “1) Human expertise; 2) Enabling infrastructure; and 3) Preservation and dissemination of knowledge for future generations” (Commission on the Future of the Library, 2013). This approach is among the more conservative ones, still relying on the hope that libraries can offer something unique that no one else is able to provide. Others, working at the Association of Research Libraries, are
more like their public library counterparts, defining the future role of the research libraries as a “convener
of ‘conversations’ for knowledge construction, an inspiring host; a boundless symposium; an incubator;
a 3rd space both physically and virtually; a scaffold for independence of mind; and a sanctuary for
freedom of expression, a global entrepreneurial engine” (Pendleton-Jullian, Lougee, Wilkin, & Hilton,
2014); in other words, as another important, but in no way unique, node in the wider network of institutions that create and distribute knowledge.
Despite the differences in priorities, all these recommendations carry the same basic message. The unique
position of libraries at the center of a book-based knowledge economy, at the top of the paper-bound knowledge hierarchy, is about to be lost. Libraries are losing their monopoly on giving low-cost, low-restriction access to books, which are scarce by nature, and they are losing their privileged and powerful position as the guardians of and guides to the knowledge stored in the stacks. If they want to survive, they
need to find their role and position in a network of institutions, where everyone else is engaged in
activities that overlap with the historic functions of the library. Just like the books themselves, the power
that came from the privileged access to books is in part dispersed among the countless nodes in the
knowledge and learning networks, and in part is being captured by those who control the digital rights to
digitize and distribute books in the digital era.
One of the main reasons why libraries are trying to redefine themselves as providers of ancillary services
is that the lack of digital lending rights prevents them from competing on their own traditional home turf: giving free access to knowledge. The traditional legal limitations and exceptions to copyright that
enabled libraries to fulfill their role in the analogue world do not apply in the digital realm. In the
European Union, the Infosoc Directive (“Directive 2001/29/EC on the harmonisation of certain aspects of
copyright and related rights in the information society,” 2001) allows for libraries to create digital copies
for preservation, indexing and similar purposes and allows for the display of digital copies on their
premises for research and personal study (Triaille et al., 2013). While in theory these rights provide for
the core library services in the digital domain, their practical usefulness is rather limited, as off-premises
e-lending of copyrighted works is in most cases[6] only possible through individual license agreements with
publishers.
Under such circumstances libraries complain that they cannot fulfill their public interest mission in the digital era. What libraries are allowed to do under the current limitations and exceptions is seen as inadequate for what is expected of them. But to do more requires the appropriate e-lending licenses from rights holders. In many cases, however, libraries simply cannot license digital works for e-lending. In those cases where licensing is possible, they see transaction costs as prohibitively high; they feel that their bargaining position vis-à-vis rights holders is unbalanced; they do not see the license terms as adapted to libraries' policies; and they fear that the licenses provide publishers excessive and undue influence over libraries (Report on the responses to the Public Consultation on the Review of the EU Copyright Rules, 2013).
What is more, libraries face substantial legal uncertainties even where there are more-or-less well defined
digital library exceptions. In the EU, questions such as whether the analogue lending rights of libraries
extend to e-books, whether an exhaustion of the distribution right is necessary to enjoy the lending
exception, and whether licensing an e-book would exhaust the distribution right are under consideration
by the Court of Justice of the European Union in a Dutch case (Rosati, 2014b). And while in another case
(Case C-117/13 Technische Universität Darmstadt v Eugen Ulmer KG) the CJEU reaffirmed the rights of
European libraries to digitize books in their collection if that is necessary to give access to them in digital
formats on their premises, it also created new uncertainties by stating that libraries may not digitize their
entire collections (Rosati, 2014a).
US libraries face a similar situation, both in terms of the narrowly defined exceptions in which libraries
can operate, and the huge uncertainty regarding the limits of fair use in the digital library context. US
rights holders challenged both Google's (Authors Guild v Google) and the libraries' (Authors Guild v HathiTrust) rights to digitize copyrighted works. While there seems to be a consensus among courts that the
mass digitization conducted by these institutions was fair use (Diaz, 2013; Rosati, 2014c; Samuelson,
2014), the accessibility of the scanned works is still heavily limited, subject to licenses from publishers,
the existence of print copies at the library and the institutional membership held by prospective readers.
While in the highly competitive US e-book market many commercial intermediaries offer e-lending licenses to e-book catalogues of various sizes, these arrangements also carry the danger of a commercial lock-in of the access to digital works, and render libraries dependent upon the services of commercial providers who may or may not be the best defenders of the public interest (OECD, 2012).

[6] The notable exception being orphan works, which are presumed to be still copyrighted but lack an identifiable rights owner. In the EU, Directive 2012/28/EU on certain permitted uses of orphan works in theory eases access to such works, but its practical impact is limited by the many constraints among its provisions. Lacking any orphan works legislation, and with the Google Book Settlement still in limbo, the US is even farther from making orphan works generally accessible to the public.
Shadow libraries like Aleph are called into existence by the vacuum that was left behind by the collapse
of libraries in the digital sphere and by the inability of the commercial arrangements to provide adequate
substitute services. Shadow libraries pool distributed resources and expertise over the internet, and use the lack of legal or technological barriers to innovation in the informal sphere to fill the void left behind by libraries.

What can Aleph teach us about the future of libraries?
The story of Aleph offers two closely interrelated considerations for the debate on the future of libraries:
a legal and an organizational one. Aleph operates beyond the limits of legality, as almost all of its
activities are copyright infringing, including the unauthorized digitization of books, the unauthorized
mass downloads from e-text repositories, the unauthorized acts of uploading books to the archive, the
unauthorized distribution of books, and, in most countries, the unauthorized act of users’ downloading
books from the archive. In the debates around copyright infringement, illegality is usually interpreted as a
necessary condition to access works for free. While this is undoubtedly true, the fact that Aleph provides
no-cost access to books seems to be less important than the fact that it provides access to them in the first place.
Aleph is a clear indicator of the volume of the demand for current books in digital formats in developed
and in developing countries. The legal digital availability, or rather unavailability, of its catalogue also demonstrates the limits of the current commercial and library-based arrangements that aim to provide low-cost access to books over the internet. As mentioned earlier, Aleph's catalogue consists mostly of recent books, meaning that 80% of the titles with a valid ISBN number are still in print and available as a new or used print copy through commercial retailers. What is also clear is that around 66% of these books are yet to be
made available in electronic format. While publishers in theory have a strong incentive to make their most
recent titles available as e-books, they lag behind in doing so.
This might explain why one third of all the e-book downloads in Aleph are from highly developed
Western countries, and two thirds of these downloads are of books without a Kindle version. Having access
to print copies either through libraries or through commercial retailers is simply not enough anymore.
Developing countries are a slightly different case. There, compared to developed countries, twice as many
of the downloads (17% compared to 8% in developed countries) are of titles that aren’t available in print
at all. Not having access to books in print seems to be a more pressing problem for developing countries
than not having access to electronic copies. Aleph thus fulfills at least two distinct types of demand: in
developed countries it provides access to missing electronic versions, in developing countries it provides
access to missing print copies.
The ability to fulfill an otherwise unfulfilled demand is not the only function of illegality. Copyright
infringement in the case of Aleph has a much more important role: it enables the peer production of the
library. Aleph is an open source library. This means that every resource it uses and every resource it
creates is freely accessible to anyone for use without any further restrictions. This includes the server
code, the database, the catalogue and the collection. The open source nature of Aleph rests on the
ideological claim that the scientific knowledge produced by humanity, mostly through public funds, should be open for anyone to access without any restrictions. Everything else in and around Aleph stems from this claim, which replicates the open access logic in all the other aspects of Aleph's operation. Aleph uses the peer-produced Open Library to fetch book metadata, it uses the BitTorrent and ed2k P2P networks
to store and make books accessible, it uses Linux and MySQL to run its code, and it allows its users to
upload books and edit book metadata. As a consequence of its open source nature, anyone can contribute
to the project, and everyone can enjoy its benefits.
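To make the open-infrastructure point concrete, here is a minimal, hypothetical sketch of the kind of metadata lookup a catalogue can build on top of the peer-produced Open Library, whose public API returns an edition record for a given ISBN. This is an illustration of the general technique, not Aleph's actual code; the function name and the example ISBN are arbitrary.

```python
# Illustrative sketch only: not Aleph's actual code.
# Fetches an edition record from the public Open Library API by ISBN,
# the kind of peer-produced metadata source described above.
import json
import urllib.request

def fetch_open_library_record(isbn: str) -> dict:
    """Return the Open Library edition record for the given ISBN."""
    url = f"https://openlibrary.org/isbn/{isbn}.json"
    with urllib.request.urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    record = fetch_open_library_record("9780140328721")  # arbitrary example ISBN
    print(record.get("title"), "-", record.get("publish_date"))
```

Any volunteer-run catalogue can correct or enrich such records locally and feed the corrections back, which is what makes the metadata layer, like the collection itself, a peer-produced resource.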
It is hard to quantify the impact of this piratical open access library on education, science and research in
various local contexts where Aleph is the prime source of otherwise inaccessible books. But it is
relatively easy to measure the consequences of openness at the level of Aleph, the library. The collection of Aleph was created mostly by those individuals and communities who decided to digitize books by themselves for their own use. While any single individual is capable of digitizing only a few books at most, the small contributions quickly add up. To digitize the 1.15 million documents in the Aleph collection would require an investment of several hundred million euros, plus a substantial subsequent investment in storage, collection management and access provision (Poole, 2010). Compared to these figures, the costs associated with running Aleph are infinitesimal, as it survives on the volunteer labor of a few individuals and on annual donations totaling a few thousand dollars. The hundreds
of thousands who use Aleph on a more or less regular basis have an immense amount of resources, and by
disregarding the copyright laws Aleph is able to tap into those resources and use them for the
development of the library. The value of these resources and of the peer produced library is the difference
between the actual costs associated with Aleph, and the investment that would be required to create
something remotely similar.

The decentralized, collaborative mass digitization and sharing of current, and thus most relevant, scientific works is at the moment only possible through massive copyright infringement. It is debatable
whether the copyrighted corpus of scientific works should be completely open, and whether the blatant
disregard of copyrights through which Aleph achieved this openness is the right path towards a more
openly accessible body of scientific knowledge. It is also yet to be measured what effects shadow libraries
may have on the commercial intermediaries and on the health of scientific publishing and science in
general. But Aleph, in any case, is a case study in the potential benefits of open sourcing the library.

Conclusion
If we can take Aleph as an expression of what users around the globe want from a library, then the answer
is that there is a strong need for a universally accessible collection of current, relevant (scientific) books
in restrictions-free electronic formats. Can we expect any single library to provide anything even remotely
similar to that in the foreseeable future? Does such a service have a place in the future of libraries? It is as
hard to imagine the future library with such a service as without it.
While the legal and financial obstacles to the creation of a scientific library with as universal a reach as Aleph may be difficult to overcome, other aspects of it may be more easily replicable. The way Aleph
operates demonstrates the amount of material and immaterial resources users are willing to contribute to
build a library that responds to their needs and expectations. If libraries plan to only ‘host’ user-governed
activities, it means that the library is still imagined to be a separate entity from its users. Aleph teaches us
that this separation can be overcome and users can constitute a library. But for that they need
opportunities to participate in the production of the library: they need the right to digitize books and copy
digital books to and from the library, they need the opportunity to participate in the cataloging and
collection building process, they need the opportunity to curate and program the collection. In other
words, users need the chance to be librarians in the library if they wish to do so, and so libraries need to be
able to provide access not just to the collection but to their core functions as well. The walls that separate
librarians from library patrons, private and public collections, insiders and outsiders can all prevent the
peer production of the library, and through that, prevent the future that is the closest to what library users
think of as ideal.

References
Abramitzky, R., & Sin, I. (2014). Book Translations as Idea Flows: The Effects of the Collapse of Communism on the Diffusion of Knowledge (No. w20023). Retrieved from http://papers.ssrn.com/abstract=2421123
Battles, M. (2004). Library: An unquiet history. WW Norton & Company.
Benkler, Y. (2006). The wealth of networks: how social production transforms markets and freedom. New Haven: Yale University Press.
Bently, L., Davis, J., & Ginsburg, J. C. (Eds.). (2010). Copyright and Piracy: An Interdisciplinary Critique. Cambridge: Cambridge University Press.
Bodó, B. (2011a). A szerzői jog kalózai. Budapest: Typotex.
Bodó, B. (2011b). Coda: A Short History of Book Piracy. In J. Karaganis (Ed.), Media Piracy in
Emerging Economies. New York: Social Science Research Council.
Bodó, B. (forthcoming). Piracy vs privacy: the analysis of Piratebrowser. IJOC.
Commission on the Future of the Library. (2013). Report of the Commission on the Future of the UC
Berkeley Library. Berkeley: UC Berkeley.
Committee on the Public Libraries in the Knowledge Society. (2010). The Public Libraries in the
Knowledge Society. Copenhagen: Kulturstyrelsen.
Darnton, R. (1982). The literary underground of the Old Regime. Cambridge, Mass: Harvard University
Press.
Darnton, R. (2003). The Science of Piracy: A Crucial Ingredient in Eighteenth-Century Publishing.
Studies on Voltaire and the Eighteenth Century, 12, 3–29.
Diaz, A. S. (2013). Fair Use & Mass Digitization: The Future of Copy-Dependent Technologies after
Authors Guild v. Hathitrust. Berkeley Technology Law Journal, 23.
Directive 2001/29/EC on the harmonisation of certain aspects of copyright and related rights in the
information society. (2001). Official Journal L, 167, 10–19.
Elst, M. (2005). Copyright, freedom of speech, and cultural policy in the Russian Federation.
Leiden/Boston: Martinus Nijhoff.
Ermolaev, H. (1997). Censorship in Soviet Literature: 1917-1991. Rowman & Littlefield.
Friedberg, M., Watanabe, M., & Nakamoto, N. (1984). The Soviet Book Market: Supply and Demand.
Acta Slavica Iaponica, 2, 177–192.
Giblin, R. (2011). Code Wars: 10 Years of P2P Software Litigation. Cheltenham, UK ; Northampton,
MA: Edward Elgar Publishing.

Johns, A. (2010). Piracy: The Intellectual Property Wars from Gutenberg to Gates. University Of
Chicago Press.
Judge, C. B. (1934). Elizabethan book-pirates. Cambridge: Harvard University Press.
Khan, B. Z. (2004). Does Copyright Piracy Pay? The Effects Of U.S. International Copyright Laws On
The Market For Books, 1790-1920. Cambridge, MA: National Bureau Of Economic Research.
Khan, B. Z., & Sokoloff, K. L. (2001). The early development of intellectual property institutions in the
United States. Journal of Economic Perspectives, 15(3), 233–246.
Landes, W. M., & Posner, R. A. (2003). The economic structure of intellectual property law. Cambridge,
Mass.: Harvard University Press.
Lessig, L. (2004). Free culture : how big media uses technology and the law to lock down culture and
control creativity. New York: Penguin Press.
Liang, L. (2012). Shadow Libraries. e-flux. Retrieved from http://www.e-flux.com/journal/shadow-libraries/
Patry, W. F. (2009). Moral panics and the copyright wars. New York: Oxford University Press.
Patterson, L. R. (1968). Copyright in historical perspective. Nashville: Vanderbilt University Press.
Pendleton-Jullian, A., Lougee, W. P., Wilkin, J., & Hilton, J. (2014). Strategic Thinking and Design—Research Library in 2033—Vision and System of Action—Part One. Columbus, OH: Association of Research Libraries. Retrieved from http://www.arl.org/about/arl-strategic-thinking-and-design/arl-membership-refines-strategic-thinking-and-design-at-spring-2014-meeting
Pollard, A. W. (1916). The Regulation Of The Book Trade In The Sixteenth Century. Library, s3-VII(25),
18–43.
Pollard, A. W. (1920). Shakespeare’s fight with the pirates and the problems of the transmission of his
text. Cambridge [Eng.]: The University Press.
Poole, N. (2010). The Cost of Digitising Europe's Cultural Heritage - A Report for the Comité des Sages of the European Commission. Retrieved from http://nickpoole.org.uk/wp-content/uploads/2011/12/digiti_report.pdf
Report on the responses to the Public Consultation on the Review of the EU Copyright Rules. (2013).
European Commission, Directorate General for Internal Market and Services.
Rosati, E. (2014a). Copyright exceptions and user rights in Case C-117/13 Ulmer: a couple of observations. IPKat. Retrieved October 08, 2014, from http://ipkitten.blogspot.co.uk/2014/09/copyright-exceptions-and-user-rights-in.html

Rosati, E. (2014b). Dutch court refers questions to CJEU on e-lending and digital exhaustion, and another
Dutch reference on digital resale may be just about to follow. IPKat. Retrieved October 08, 2014, from
http://ipkitten.blogspot.co.uk/2014/09/dutch-court-refers-questions-to-cjeu-on.html
Rosati, E. (2014c). Google Books’ Library Project is fair use. Journal of Intellectual Property Law &
Practice, 9(2), 104–106.
Rose, M. (1993). Authors and owners: the invention of copyright. Cambridge, Mass: Harvard University Press.
Samuelson, P. (2002). Copyright and freedom of expression in historical perspective. J. Intell. Prop. L.,
10, 319.
Samuelson, P. (2014). Mass Digitization as Fair Use. Communications of the ACM, 57(3), 20–22.
Schultz, M. F. (2007). Copynorms: Copyright Law and Social Norms. Intellectual Property And
Information Wealth v01, 1, 201.
Sezneva, O. (2012). The pirates of Nevskii Prospekt: Intellectual property, piracy and institutional
diffusion in Russia. Poetics, 40(2), 150–166.
Solly, E. (1885). Henry Hills, the Pirate Printer. Antiquary, xi, 151–154.
Stelmakh, V. D. (2001). Reading in the Context of Censorship in the Soviet Union. Libraries & Culture,
36(1), 143–151.
Suber, P. (2013). Open Access (Vol. 1). Cambridge, MA: The MIT Press. doi:10.1109/ACCESS.2012.2226094
Swartz, A. (2008). Guerilla Open Access Manifesto. Retrieved from https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt
Triaille, J.-P., Dusollier, S., Depreeuw, S., Hubin, J.-B., Coppens, F., & Francquen, A. de. (2013). Study
on the application of Directive 2001/29/EC on copyright and related rights in the information society (the
“Infosoc Directive”). European Union.
Wittmann, R. (2004). Highwaymen or Heroes of Enlightenment? Viennese and South German Pirates and
the German Market. Paper presented at the History of Books and Intellectual History conference.
Princeton University.
Yu, P. K. (2000). From Pirates to Partners: Protecting Intellectual Property in China in the Twenty-First Century. American University Law Review, 50. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=245548
Мошков, М. (1999). Что вы все о копирайте. Лучше бы книжку почитали (Библиотеке копирайт не враг) [What is all this about copyright? You would do better to read a book (Copyright is no enemy of the library)]. Компьютерра, (300).



Bodo
In the Name of Humanity
2016


# In the Name of Humanity

By [Balazs Bodo](https://limn.it/researchers/bodo/)

![In the Name of Humanity](https://limn.it/wp-
content/uploads/2016/02/Gamelin1_t02-745x1024.jpg)

Jacques Gamelin

![](http://limn.it/wp-content/uploads/2016/02
/Fahrenheit_451_1966_Francois_Truffaut-800x435.png)

Fahrenheit 451 (1966).

As I write this in August 2015, we are in the middle of one of the worst
refugee crises in modern Western history. The European response to the carnage
beyond its borders is as diverse as the continent itself: as an ironic
contrast to the newly built barbed-wire fences protecting the borders of
Fortress Europe from Middle Eastern refugees, the British Museum (and probably
other museums) are launching projects to “protect antiquities taken from
conflict zones” (BBC News 2015). We don’t quite know how the conflict
artifacts end up in the custody of the participating museums. It may be that
asylum seekers carry such antiquities on their bodies, and place them on the
steps of the British Museum as soon as they emerge alive on the British side
of the Eurotunnel. But it is more likely that Western heritage institutions,
if not playing Indiana Jones in North Africa, Iraq, and Syria, are probably
smuggling objects out of war zones and buying looted artifacts from the
international gray/black antiquities market to save at least some of them from
disappearing in the fortified vaults of wealthy private buyers (Shabi 2015).
Apparently, there seems to be some consensus that artifacts, thought to be
part of the common cultural heritage of humanity, cannot be left in the hands
of those collectives who own/control them, especially if they try to destroy
them or sell them off to the highest bidder.

The exact limits of expropriating valuables in the name of humanity are
heavily contested. Take, for example, another group of self-appointed
protectors of culture, also collecting and safeguarding, in the name of
humanity, valuable items circulating in the cultural gray/black markets. For
the last decade Russian scientists, amateur librarians, and volunteers have
been collecting millions of copyrighted scientific monographs and hundreds of
millions of scientific articles in piratical shadow libraries and making them
freely available to anyone and everyone, without any charge or limitation
whatsoever (Bodó 2014b; Cabanac 2015; Liang 2012). These pirate archivists
think that despite being copyrighted and locked behind paywalls, scholarly
texts belong to humanity as a whole, and seek to ensure that every single one
of us has unlimited and unrestricted access to them.

The support for a freely accessible scholarly knowledge commons takes many
different forms. A growing number of academics publish in open access
journals, and offer their own scholarship via self-archiving. But as the data
suggest (Bodó 2014a), there are also hundreds of thousands of people who use
pirate libraries on a regular basis. There are many who participate in
courtesy-based academic self-help networks that provide ad hoc access to
paywalled scholarly papers (Cabanac 2015).[1] But a few people believe that
scholarly knowledge could and should be liberated from proprietary databases,
even by force, if that is what it takes. There are probably no more than a few
thousand individuals who occasionally donate a few bucks to cover the
operating costs of piratical services or share their private digital
collections with the world. And the number of pirate librarians, who devote
most of their time and energy to operate highly risky illicit services, is
probably no more than a few dozen. Many of them are Russian, and many of the
biggest pirate libraries were born and/or operate from the Russian segment of
the Internet.

The development of a stable pirate library, with an infrastructure that
enables the systematic growth and development of a permanent collection,
requires an environment where the stakes of access are sufficiently high, and
the risks of action are sufficiently low. Russia certainly qualifies in both
of these domains. However, these are not the only reasons why so many pirate
librarians are Russian. The Russian scholars behind the pirate libraries are
familiar with the crippling consequences of not having access to fundamental
texts in science, either for political or for purely economic reasons. The
Soviet intelligentsia had decades of experience in bypassing censors, creating
samizdat content distribution networks to deal with the lack of access to
legal distribution channels, and running gray and black markets to survive in
a shortage economy (Bodó 2014b). Their skills and attitudes found their way to
the next generation, who now runs some of the most influential pirate
libraries. In a culture where the know-how of resisting information
monopolies is part of the collective memory, the Internet becomes the latest
in a long series of tools that clandestine information networks use to build
alternative publics through the illegal sharing of outlawed texts.

In that sense, the pirate library is a utopian project and something more.
Pirate librarians regard their libraries as a legitimate form of resistance
against the commercialization of public resources, the (second) enclosure
(Boyle 2003) of the public domain. The handful who decide to publicly defend
their actions speak in the same voice and tell very similar stories. Aaron
Swartz was an American hacker willing to break both laws and locks in his
quest for free access. In his 2008 “Guerilla Open Access Manifesto” (Swartz
2008), he forcefully argued for the unilateral liberation of scholarly
knowledge from behind paywalls to provide universal access to a common human
heritage. A few years later he tried to put his ideas into action by
downloading millions of journal articles from the JSTOR database without
authorization. Alexandra Elbakyan is a 27-year-old neurotechnology researcher
from Kazakhstan and the founder of Sci-hub, a piratical collection of tens of
millions of journal articles that provides unauthorized access to paywalled
articles to anyone without an institutional subscription. In a letter to the
judge presiding over a court case against her and her pirate library, she
explained her motives, pointing out the lack of access to journal articles.[2]
Elbakyan also believes that the inherent injustices encoded in the current system
of scholarly publishing, which denies access to everyone who is not
willing/able to pay, and simultaneously denies payment to most of the authors
(Mars and Medak 2015), are enough reason to disregard the fundamental IP
framework that enables those injustices in the first place. Other shadow
librarians expand the basic access/injustice arguments into a wider critique
of the neoliberal political-economic system that aims to commodify and
appropriate everything that is perceived to have value (Fuller 2011; Interview
with Dusan Barok 2013; Sollfrank 2013).

Whatever prompts them to act, pirate librarians firmly believe that the fruits
of human thought and scientific research belong to the whole of humanity.
Pirates have the opportunity, the motivation, the tools, the know-how, and the
courage to create radical techno-social alternatives. So they resist the
status quo by collecting and “guarding” scholarly knowledge in libraries that
are freely accessible to all.

![](http://limn.it/wp-content/uploads/2016/02/NewtonLibraryBooks-800x484.png)

Water-damaged books drying, 1985.

Both the curators of the British Museum and the pirate librarians claim to
save the common heritage of humanity, but any similarities end there. Pirate
libraries have no buildings or addresses, they have no formal boards or
employees, they have no budgets to speak of, and the resources at their
disposal are infinitesimal. Unlike the British Museum or libraries from the
previous eras, pirate libraries were born out of lack and despair. Their
fugitive status prevents them from taking the traditional paths of
institutionalization. They are nomadic and distributed by design; they are _ad
hoc_ and tactical, pseudonymous and conspiratory, relying on resources reduced
to the absolute minimum so they can survive under extremely hostile
circumstances.

Traditional collections of knowledge and artifacts, in their repurposed or
purpose-built palaces, are both the products and the embodiments of the wealth
and power that created them. Pirate libraries don’t have all the symbols of
transubstantiated might, the buildings, or all the marble, but as
institutions, they are as powerful as their more established counterparts.
Unlike the latter, whose claim to power was the fact of ownership and the
control over access and interpretation, pirates’ power is rooted in the
opposite: in their ability to make ownership irrelevant, access universal, and
interpretation democratic.

This is the paradox of the total piratical archive: pirate libraries collect
enormous wealth, but they do not own or control any of it. As an insurance
policy against copyright enforcement, they have already given everything away:
they release their source code, their databases, and their catalogs; they put
up the metadata and the digitized files on file-sharing networks. They realize
that exclusive ownership/control over any aspects of the library could be a
point of failure, so in the best traditions of archiving, they make sure
everything is duplicated and redundant, and that many of the copies are under
completely independent control. If we disregard for a moment the blatantly
illegal nature of these collections, this systematic detachment from the
concept of ownership and control is the most radical development in the way we
think about building and maintaining collections (Bodó 2015).
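The principle is easiest to see in miniature. Below is a minimal sketch, in Python, of the kind of redundancy check such a practice implies: several independently hosted copies of a catalog dump are compared by cryptographic hash, so that any mirror can stand in for any other. The file names here are hypothetical placeholders, not the actual dumps of any existing library.

```python
# Minimal sketch: verify that independently mirrored copies of a
# catalog dump are bit-identical by comparing SHA-256 digests.
# The file names are hypothetical placeholders.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file from disk and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

mirrors = ["catalog_mirror_a.sql.gz", "catalog_mirror_b.sql.gz"]
hashes = {path: sha256_of(path) for path in mirrors}

# If every mirror reports the same digest, each copy can stand in
# for any other: no single point of failure, no single owner.
print("identical" if len(set(hashes.values())) == 1 else "diverged")
```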

Because pirate libraries don’t own anything, they have nothing to lose. Pirate
librarians, on the other hand, are putting everything they have on the line.
Speaking truth to power has a potentially devastating price. Swartz was caught
when he broke into an MIT storeroom to download the articles in the JSTOR
database.[3] Facing a 35-year prison sentence and $1 million in fines, he
committed suicide.[4] By explaining her motives in a recent court filing,[5]
Elbakyan admitted responsibility and probably sealed her own legal and
financial fate. But her library is probably safe. In the wake of this lawsuit,
pirate libraries are busy securing themselves: pirates are shutting down
servers whose domain names were confiscated and archiving databases, again and
again, spreading the illicit collections through the underground networks
while setting up new servers. It may be easy to destroy individual
collections, but nothing in history has been able to destroy the idea of the
universal library, open for all.

For the better part of that history, the idea was simply impossible. Today it
is simply illegal. But in an era when books are everywhere, the total archive
is already here. Distributed among millions of hard drives, it already is a
_de facto_ common heritage. We are as gods, and might as well get good at
it.[6]



## About the author

**Bodo Balazs,**  PhD, is an economist and piracy researcher at the Institute
for Information Law (IViR) at the University of Amsterdam. [More
»](https://limn.it/researchers/bodo/)

## Footnotes

[1] On such fora, one can ask for and receive otherwise out-of-reach
publications through various reddit groups such as
[r/Scholar](https://www.reddit.com/r/Scholar) and using certain Twitter
hashtags like #icanhazpdf or #pdftribute.

[2] Elsevier Inc. et al v. Sci-Hub et al, New York Southern District Court,
Case No. 1:15-cv-04282-RWS

[3] While we do not know what his aim was with the article dump, the
prosecution thought his Manifesto contained the motives for his act.

[4] See _United States of America v. Aaron Swartz_ , United States District
Court for the District of Massachusetts, Case No. 1:11-cr-10260

[5] Case 1:15-cv-04282-RWS Document 50 Filed 09/15/15, available at
[link](https://www.unitedstatescourts.org/federal/nysd/442951/).

[6] I of course stole this line from Stewart Brand (1968), the editor of the
Whole Earth Catalog, who, in turn, claims to have stolen it from the
British anthropologist Edmund Leach. See
[here](http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods) for
the details.

## Bibliography

BBC News. 2015. “British Museum ‘Guarding' Object Looted from Syria.” _BBC
News,_ June 5. Available at [link](http://www.bbc.com/news/entertainment-
arts-33020199).

Bodó, B. 2015. “Libraries in the Post-Scarcity Era.” In _Copyrighting
Creativity_ , edited by H. Porsdam (pp. 75–92). Aldershot, UK: Ashgate.

———. 2014a. “In the Shadow of the Gigapedia: The Analysis of Supply and Demand
for the Biggest Pirate Library on Earth.” In _Shadow Libraries_ , edited by J.
Karaganis (forthcoming). New York: American Assembly. Available at
[link](http://ssrn.com/abstract=2616633).

———. 2014b. “A Short History of the Russian Digital Shadow Libraries.” In
Shadow Libraries, edited by J. Karaganis (forthcoming). New York: American
Assembly. Available at [link](http://ssrn.com/abstract=2616631).

Boyle, J. 2003. “The Second Enclosure Movement and the Construction of the
Public Domain.” _Law and Contemporary Problems_ 66:33–42. Available at
[link](http://dx.doi.org/10.2139/ssrn.470983).

Brand, S. 1968. _Whole Earth Catalog,_ Menlo Park, California: Portola
Institute.

Cabanac, G. 2015. “Bibliogifts in LibGen? A Study of a Text-Sharing Platform
Driven by Biblioleaks and Crowdsourcing.” _Journal of the Association for
Information Science and Technology,_ Online First, March 27, 2015.

Fuller, M. 2011. “In the Paradise of Too Many Books: An Interview with Sean
Dockray.” _Metamute._ Available at
[link](http://www.metamute.org/editorial/articles/paradise-too-many-books-
interview-sean-dockray).

Interview with Dusan Barok. 2013. _Neural_ 10–11.

Liang, L. 2012. “Shadow Libraries.” _e-flux._  Available at
[link](http://www.e-flux.com/journal/shadow-libraries/).

Mars, M., and Medak, T. 2015. “The System of a Takedown: Control and De-
commodification in the Circuits of Academic Publishing.” Unpublished
manuscript.

Shabi, R. 2015. “Looted in Syria–and Sold in London: The British Antiques
Shops Dealing in Artefacts Smuggled by ISIS.” _The Guardian,_ July 3.
Available at [link](http://www.theguardian.com/world/2015/jul/03/antiquities-
looted-by-isis-end-up-in-london-shops).

Sollfrank, C. 2013. “Giving What You Don’t Have: Interviews with Sean Dockray
and Dmytri Kleiner.” _Culture Machine_ 14:1–3.

Swartz, A. 2008. “Guerilla Open Access Manifesto.” Available at
[link](https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt).


Constant
Tracks in Electronic fields
2009


Figure captions from the photo pages:

figure 1 E-traces: In the reductive world of Web 2.0 there are no insignificant actors because once added up, everybody counts
figure 3 Dmytri Kleiner: Web 2.0 is a business model, it capitalises on community created values
figure 4 Christophe Lazaro: Sociologists and anthropologists are trying to stick the notion of ‘social network' to the specificities of digital networks, that is to say to their horizontal character
figure 5 The Robot Syndicat: Destined to survive collectively through multi-agent systems and colonies of social robots
figure 12 Destination port: Every single passing of a visitor triggers the projection of a simultaneous registration
figure 19 Doppelgänger: The electronic double (duplicate, twin) in a society of control and surveillance
figure 20 CookieSensus: Cookies found on washingtonpost.com ...
figure 21 ... and cookies sent by tacodo.net
figure 22 Image Tracer: Images and data accumulate into layers as the query is repeated over time
figure 23 Shmoogle: In one click, Google hierarchy crumbles down
figure 24 Jussi Parikka: We move onto a baroque world, a mode of folding and enveloping new ways of perception and movement
figure 26 Extended Speakers: A netting of thin metal wires suspends from the ceiling of the haunted house in the La Bellone courtyard
figure 80 Elgaland-Vargaland: Since November 2007, the Embassy permanently resides in La Bellone
figure 81 Ambassadors Yves Poliart and Wendy Van Wynsberghe
figure 87 It could be the result of psychic echoes from the past, psychokinesis, or the thoughts of aliens or nature spirits
figure 89 Manu Luksch: Our digital selves are many dimensional, alert, unforgetting
figure 103 Audio-geographic dérive: Listening to the electro-magnetic spectrum of Brussels
figure 113 Michael Murtaugh: Rather than talking about leaning forward or backward, a more useful split might be between reading and writing
figure 115 Adrian Mackenzie: This opacity reflects the sheer number of operations that have to be compressed into code ...
figure 116 ... in order for digital signal processing to work
figure 119 Sabine Prokhoris and Simon Hecquet: What happens precisely when one decides to consider these margins, these ‘supplementen', as fullgrown creations – slave, nor attachment?
figure 120 Praticable: Making the body as a locus of knowledge production tangible
figure 126 Mutual Motions Video Library: A physical exchange between existing imagery, real-time interpretation, experiences and context
figure 127 Modern Times: His gestures are burlesque responses to the adversity in his life, or just plain ‘exuberant'
figure 131 Michael Terry: We really want to have lots of people looking at it, and considering it, and thinking about the implications
figure 133 Görkem Çetin: There's a lack of a usability bug reporting tool which can be used to submit, store, modify and maintain user submitted videos, audio files and pictures
figure 134 Simon Yuill: It is here where contingency and notation meet, but it is here also that error enters
figure 144 Séverine Dusollier: I think amongst many of the movements that are made, most are not ‘a work', they are subconscious movements, movements that are translations of gestures that are simply banal or necessary
figure 146 Sadie Plant: It is this kind of deep collectivity, this profound sense of micro-collaboration, which has often been tapped into

Verbindingen/Jonctions 10
Tracks in electr(on)ic fields
EN, NL, FR

Contents

Introduction (EN, NL, FR)
E-Traces (EN, NL, FR)
Nicolas Malevé, Michel Cleempoel – E-traces en contexte (NL, FR)
Dmytri Kleiner, Brian Wyrick – InfoEnclosure 2.0 (NL)
Christophe Lazaro
Marc Wathieu
Michel Cleempoel – Destination port
MéTAmorphoZ – Doppelgänger
Andrea Fiore – Cookiesensus
Tsila Hassine – Shmoogle and Tracer (EN)
Jussi Parikka – Insects, Affects and Imagining New Sensoriums (EN)
Pierre Berthet – Concert with various extended objects (EN, NL, FR)
Leiff Elgren, CM von Hausswolff – Elgaland-Vargaland (EN, NL, FR)
CM von Hausswolff, Guy-Marc Hinant – Ghost Machinery (EN, NL)
Read Feel Feed Real (EN, NL, FR)
Manu Luksch, Mukul Patel – Faceless: Chasing the Data Shadow (EN)
Julien Ottavi – Electromagnetic spectrum Research code 0608 (FR)
Michael Murtaugh – Active Archives or: What's wrong with the YouTube documentary? (EN)
Mutual Motions (EN, NL, FR)
Femke Snelting (NL)
Adrian Mackenzie – Centres of envelopment and intensive movement in digital signal processing (EN)
Elpueblodechina – El Curanto (EN)
Alice Chauchat, Frédéric Gies
Dance (notation) (EN)
Sabine Prokhoris, Simon Hecquet
Mutual Motions Video Library (EN, NL, FR)
Inès Rabadan – Does the repetition of a gesture irrevocably lead to madness?
Michael Terry (interview) – Data analysis as a discourse (EN)
Sadie Plant – A Situated Report (EN)
Biographies (EN, NL, FR)
License register
Vocabulary
The Making-of (EN)
Colophon

EN

Introduction

Tracks in electr(on)ic fields documents the 10th edition
of Verbindingen/Jonctions, the bi-annual multidisciplinary festival of the same name organised by Constant, association for arts and media. It is a meeting point for a
diverse public that, from an artistic, activist and/or theoretical perspective, is interested in experimental reflections
on technological culture.
Not for the first time, but more explicitly than ever during this edition, we put the question of the interaction
between body and technology on the table. How to think
about the actual effects of surveillance, the ubiquitous presence of cameras and public safety procedures that can only
regard individuals as an amalgamate of analysable data?
What is the status of ‘identity' when it appears both elusive and unchangeable? How are we conditioned by the
technology we use? What is the relationship between commitment and reward, between flexibility of work and a healthy life?
Which traces does technology leave in our thinking, behavior, our routine movements? And what residue do we
leave behind ourselves on electr(on)ic fields through our
presence in forums, social platforms, databases, log files?
The dual nature of the term ‘notation' formed an important source of inspiration. Systems that choreographers,
composers and computer programmers use to record ideas
and observations can then be interpreted as instruction,
as a command which puts an actor, software, performing artist or machine into motion. From punch card to
musical scale, from programming language to Laban notation, we were interested in the standards and protocols
needed to make such documents work. It was the reason
to organise the festival inside the documentation, library
and workshop for theater and dance, ‘maison du spectacle'
La Bellone. Located in the heart of Brussels, La Bellone
offered hospitality to a diverse group of thinkers, dancers,
artists, programmers, interface designers and others and
its meticulously renovated 17th century façade formed the
perfect backdrop for this intense program.
Throughout the festival we worked with a number of
themes, not meant to isolate areas of thinking, but rather
as ‘spider threads' interlinking various projects:
E-traces (p. 35) subjected the current reality of Web 2.0
to a number of critical considerations. How do we regain
control of the abundant data correlation that mega-companies such as Google and Yahoo produce, in exchange for
our usage of their services? How do we understand ‘service' when we are confronted with their corporate Janus
face: one a friendly interface, the other Machiavellian
user licenses?
Around us, magnetic fields resonate unseen waves (p.
77) took the ghostly presence of technology as a starting
point and Read Feel Feed Real (p. 101) listened to unheard
sounds and looked behind the curtains in Do-It-Yourself,
walks and urban interventions. Through the analysis of radio waves and their use in artistic installations, by making
electro-magnetic fields heard, we made unexplained phenomena tangible.
As machines learn about bodies, bodies learn about machines, and the movements that emerge as a result are
not readily reduced to cause and effect. Mutual movements (p. 139) started in the kitchen, the perfect place to
reconsider human-machine configurations, without having
to separate these from everyday life and the patterns that
are ingrained in it. Would a different idea of ‘user' also
change our approach to ‘use'?
At the end of the adventure Sadie Plant remarked in
her ‘situated report' on Tracks in electr(on)ic fields (p.
275): “It is ultimately very difficult to distinguish between
the user and the developer, or the expert and the amateur. The experiment, the research, the development is
always happening in the kitchen, in the bedroom, on the
bus, using your mobile or using your computer. (...) this
sense of repetitive activity, which is done in many trades
and many lines, and that really is the deep unconscious
history of human activity. And arguably that's where the
most interesting developments happen, albeit in a very unsung, unseen, often almost hidden way. It is this kind of
deep collectivity, this profound sense of micro-collaboration, which has often been tapped into.”
Constant, October 2009

EN

E-Traces

How does the information we enter into search engines
circulate? What happens to our data entered into social networking sites, health records, news sites, forums and the chat
services we use? Who is interested? How does the ‘market' of electronic profiles function? These questions
constitute the framework of the E-traces project.
For this, we started to work on Yoogle!, an online game.
This game, still in an early phase of development, will allow users to play with the parameters of the Web 2.0 economy and to exchange roles between the different actors
of this economy. We presented a first demo of this game,
accompanied by a public discussion with lawyers, artists
and developers. The discussion and lecture were meant
to analyse more deeply the mechanism of the economy
behind its friendly interface, the speculation on profiling,
the exploitation of free labor, but also to develop further
the scenario of the game.


DMYTRI KLEINER, BRIAN WYRICK
License: Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use encouraged. Attribution optional.
Text first published in English in Mute: http://www.metamute.org/InfoEnclosure-2.0. For translations in
Polish and Portuguese, see http://www.telekommunisten.net

figure 3 Dmytri Kleiner

MICHEL CLEEMPOEL
License: Free Art License
figure 12 Every single passing of a visitor triggered the projection of a simultaneous registration

EN

Destination port
During the Jonctions festival, Destination port registered the flux
of visitors in the entrance hall of La Bellone. Every single passing
of a visitor triggered the projection of a simultaneous registration
in the hall, superimposed on previously captured images of visitors, thus
creating temporary and unlikely encounters between persons.

Doppelgänger
Born in September 2001 and represented here by Valérie Cordy and
Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary
association that creates installations, shows and transdisciplinary
performances mixing artistic experiments and digital practices.
With the project Doppelgänger, the MéTAmorphoZ collective focuses on the theme of the electronic double (duplicate, twin) in a
society of control and surveillance.
“Our electronic identity, symbol of this new society of control,
duplicates our organic and social identity. But isn't this legal obligation
to be assigned a unique, stable and unforgeable identity, in the
end, a danger to our fundamental freedom to claim identities which
are irreducibly multiple for each of us?”

ANDREA FIORE
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Cookiecensus
Although still largely perceived as a private activity, web surfing
leaves persistent trails. While users browse and interact through the
web, sites watch them read, write, chat and buy. Even on the basis
of a few basic web publishing experiences, one can conclude that most
web servers record ‘by default' their entire clickstream in persistent
‘log' files.
‘Web cookies' are a sort of digital label sent by websites to web
browsers in order to assign them a unique identity and automatically
recognize their users over several visits. Today, this technology, which
was introduced with the first version of the Netscape browser in 1994,
constitutes the de facto standard upon which a wide range of interactive functionalities are built that were not conceived of in the early web
protocol design. Think, for example, of user accounts and authentication, personalized content and layouts, e-commerce and shopping
carts.
While it has undeniably contributed to the development and the
social spread of the new medium, web cookie technology is still to
be considered problematic. Especially the so-called ‘third-party
cookies' issue – a technological loophole enabling marketers and advertising firms to invisibly track users over large networks of syndicated websites – has been the object of serious controversy, involving
a varied set of actors and stakeholders.
Cookiecensus is a software prototype: a wannabe info tool for
studying electronic surveillance in one of its natively digital environments. Its core functionality consists of mapping and analyzing third-party cookie distribution patterns within a given web of sites, in order to
identify its trackers and its networks of syndicated sites. A further
feature of the tool is the possibility of inspecting the content of a web
page in relation to its third-party cookie sources.

figure 20 Cookies found on Washingtonpost.com

figure 21 Cookies sent by Tacodo.net


It is an attempt to deconstruct the perceived unity and consistency
of web pages by making their underlying content assemblage and their
related attention flows visible.
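To make the mechanism concrete, here is a toy sketch in Python of the basic move such a tool performs; it is not Cookiecensus's actual code. It requests a page, reads the Set-Cookie response headers, and flags cookies whose Domain attribute points outside the visited site. A fuller census would also fetch the page's embedded resources (scripts, images, ads) and attribute their cookies to their third-party hosts; that part is omitted here.

```python
# Toy sketch of the core Cookiecensus move (not its actual code):
# flag cookies whose Domain attribute points outside the visited site.
from urllib.parse import urlparse
from urllib.request import urlopen

def flag_foreign_cookies(url):
    site = urlparse(url).hostname or ""
    foreign = []
    with urlopen(url) as response:
        for header, value in response.getheaders():
            if header.lower() != "set-cookie":
                continue
            # Crude attribute parse: look for an explicit Domain=...
            domain = site
            for attr in value.split(";"):
                name, _, val = attr.strip().partition("=")
                if name.lower() == "domain":
                    domain = val.lstrip(".")
            if not site.endswith(domain):
                foreign.append((domain, value.split(";", 1)[0]))
    return foreign

# Example run against the page named in figure 20:
print(flag_foreign_cookies("https://www.washingtonpost.com/"))
```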


TSILA HASSINE
License: Free Art License
EN

Shmoogle and Tracer
What is Shmoogle? Shmoogle is a Google randomizer. In one
click, Google hierarchy crumbles down. Results that were usually exiled to pages beyond user attention get their ‘15 seconds of PageRank
fame'. While also being a useful tool for internet research, Shmoogle
is a comment, a constant reminder that the Google order is not necessarily ‘the good order', and that sometimes chaos is more revealing
than order. While Google serves the users with information ready for
immediate consumption, Shmoogle forces its users to scroll down and
make their own choices. If Google is a search engine, then Shmoogle
is a research engine.
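The gesture is simple enough to state in a few lines of Python. The sketch below assumes an already retrieved, ranked result list (actual retrieval from Google is out of scope here) and simply returns it in random order, which is the whole of Shmoogle's intervention: rank no longer decides visibility.

```python
# The gesture of Shmoogle, reduced to its core: take an ordered
# result list and return it in random order. The `ranked` list is a
# stand-in for whatever a search engine actually returns.
import random

def shmoogle(results: list[str]) -> list[str]:
    shuffled = results.copy()   # leave the original ranking intact
    random.shuffle(shuffled)    # one click: hierarchy crumbles
    return shuffled

ranked = [f"result #{i}" for i in range(1, 101)]
for hit in shmoogle(ranked)[:10]:
    print(hit)                  # page-9 results get their 15 seconds
```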

figure 22 Images and data accumulate into layers as the query is repeated over time

figure 23 In one click, Google hierarchy crumbles down

In Image Tracer, order is important. Image Tracer is a collaboration between artist group De Geuzen and myself. Tracer was born
out of our mutual interest in the traces images leave behind them on
their networked paths. In Tracer images and data accumulate into
layers as the query is repeated over time. Boundaries between image
and data are blurred further as the image is deliberately reduced to
thumbnail size, and emphasis is placed on the image's context, the
neighbouring images, and the metadata related to that image. Image Tracer builds up an archive of juxtaposed snapshots of the web.
As these layers accumulate, patterns and processes reveal themselves,
and trace a historiography in the making.
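As a rough illustration of the layering principle (the storage format below is an assumption for the sketch, not a description of the actual tool), one could append each dated snapshot of a repeated query to an archive file and diff the layers later:

```python
# Sketch of the Image Tracer principle: repeat the same query over
# time and append each snapshot as a new layer, keeping the metadata
# next to the (thumbnail-sized) images.
import json
import time

ARCHIVE = "tracer_layers.json"   # hypothetical archive file

def add_layer(query: str, results: list[dict]) -> None:
    """Append one dated snapshot of `results` for `query`."""
    try:
        with open(ARCHIVE) as f:
            layers = json.load(f)
    except FileNotFoundError:
        layers = []
    layers.append({"query": query,
                   "taken_at": time.strftime("%Y-%m-%d %H:%M"),
                   "results": results})
    with open(ARCHIVE, "w") as f:
        json.dump(layers, f, indent=2)

# Each run adds a layer; diffing layers over weeks shows how the
# query's visual neighbourhood drifts -- a historiography in the making.
add_layer("demonstration", [{"thumb": "t1.jpg", "source": "example.org"}])
```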


EN, NL, FR

Around us, magnetic fields resonate unseen waves
Om ons heen resoneren ongeziene golven
Autour de nous, les champs magnétiques font résonner des ondes invisibles

In computer terminology, many words refer to chimerical images such as bots, demons and ghosts. Dr. Konstantin Raudive, a Latvian psychologist, and Swedish film
producer Friedrich Jürgenson went a step further and explored the territory of Electronic Voice Phenomena. Electronic voice phenomena (EVP) are speech or speech-like
sounds heard on electronic recordings that were not audibly present at the time the recording was made. Some
believe these could be of paranormal origin.
For this part of the V/J10 programme, we chose a
metaphorical approach, working with bodiless entities and
hidden processes, finding inspiration in The Embassy of
Elgaland-Vargaland, semi-fictional kingdoms, consisting
of all Border Territories (Geographical, Mental & Digital). These kingdoms were founded by Leiff Elgren and
CM Von Hausswolff. Elgren stated that: “All dead people
are inhabitants of the country Elgaland-Vargaland, unless
they stated that they did not want to be an inhabitant”.


JUSSI PARIKKA
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Insects, Affects and Imagining New Sensoriums

figure 24 Jussi Parikka at V/J10

A Media Archaeological Rewiring from Geniuses to Animals
An insect media artist or a media archaeologist imagining a potential weird medium might end up with something that sounds quite
mundane to us humans. For the insect probe head, the question of
what it feels like to perceive with two eyes and ears and move with two
legs would be a novel one, instead of the multiple legs and compound
eyes that it has to use to manoeuvre through space. The uncanny
formations often used in science fiction to describe something radically inhuman (like the killing machine insects of Alien movies) differ
from the human being in their anatomy, behaviour and morals. The
human brain might be a much more efficient problem solver, the
human hands are quite handy tool-making metatools, and the human
body could be seen as the original form of any model of technics, as
Ernst Kapp already suggested by the end of the 19th century. But
still, such realisations do not take away the fascination that emerges
from the question of what it would be like to move, perceive and think
differently; of what a becoming-animal entails.
I am of course taking my cue here from the philosopher Manuel DeLanda,
who in his 1991 book War in the Age of Intelligent Machines asked what
the history of warfare would look like from the viewpoint of a future robot
historian. An exercise perhaps in creative imagination, DeLanda's question
also served other ends relating to the physics of
self-organization. My point is not to discuss DeLanda, or the history
of war machines, but I want to pick an idea from this kind of an
approach, an idea that could be integrated into media archaeological considerations, concerning actual or imaginary media. As already
said, imagining alternative worlds is not the endpoint of this exercise
in ‘insect media', but a way to dip into an alternative understanding
of media and technology, where such general categories as ‘humans'
and ‘machines' are merely the endpoints of intensive flows, capacities, tendencies and functions. Such a stance takes much of its force
from Gilles Deleuze's philosophical ontology of abstract materialism,
which focuses primarily on a Spinozian ontology of intensities, capacities and functions. In this sense, the human being is not a distinct
being in the world with secondary qualities, but a “capacity to signify, exchange, and communicate”, as Claire Colebrook has pointed
out in her article ‘The Sense of Space' (Postmodern Culture). This
opens up a new agenda not focused on ‘beings' and their tools, but
on capacities and tendencies that construct and create beings in a
move which emphasizes Deleuze's interest in pre-Kantian worlds of
baroque. In addition, this move includes a multiplication of subjectivities and objects of the world, a certain autonomy of the material
world beyond the privileged observer. As everybody who has done
gardening knows, there is a world teeming with life outside the human
sphere, with every bush and tree being a whole society in itself.
To put it shortly, still following Colebrook's recent writing on the
concept of affect, what Deleuze found in the baroque worlds of windowless monads was a capacity of perception that does not stem from
a universalising idea of perception in general. Man or any general
condition of perception is not the primary privileged position of perception but perceptions and creations of space and temporality are
multiplied in the numerous monadic worlds, a distributed perception
of a kind that, according to Deleuze, later found resonance in the philosophy of A.N. Whitehead. For Whitehead, the perceiving subject is
more akin to a ‘superject', a second order construction from the sum
of its perceptions. It is the world perceived that makes up superjects
and based on the variations of perceptions also alternative worlds.
Baroque worlds, argues Deleuze in his book Le Pli from 1988, are
characterised by the primacy of variation and perspectivism which is
a much more radical notion than a relativist idea of different subjects
having different perspectives on the world. Instead, “the subject will
be what comes to the point of view”, and where “the point of view is
not what varies with the subject, at least in the first instance; it is, to
the contrary, the condition in which an eventual subject apprehends
a variation (metamorphosis). . . ”.
Now why this focus on philosophy, this short excursion that merely
sketches some themes around variation and imagination? What I am
after is an idea of how to smuggle certain ideas of variation, modulation and perception into considerations of media culture, media
archaeology and potentially also imaginary media, where imaginary
media become less a matter of a Lacanian mirror phase looking for
utopian communication offering unity, but a deterritorialising way
of understanding the distributed ontology of the world and media
technologies. Variation and imagination become something else than
the imaginations of a point of view – quite the contrary, imagination and variation give rise to points of view, which opens up a
whole new agenda of a past paradoxically not determined and, even
further, of a future open to variation. This would mean taking into
account perceptions unheard of, unfelt, unthought-of, but still real in
their intensive potentiality, a becoming-other of the sensorium so to
speak. Hence, imagination becomes not a human characteristic but
an epistemological tool that interfaces analytics of media theory and
history with the world of animals and novel affects.
Imaginary media and variations at the heart of media cultural
modes of seeing and hearing have been discussed in various recent
books. The most obvious one is The Book of Imaginary Media, edited
by Eric Kluitenberg. According to the introduction, all media consist
of a real and an imagined part, a functional coupling of material characteristics and discursive dreams which fabricate the crucial features
of modern communication tied intimately with utopian ideals. Imaginary media – or actual media imagined beyond their real capacities
– have been dreamed up to compensate for insufficient communication, a
realisation that Kluitenberg elaborates with the argument that “central to the archaeology of imaginary media in the end are not the
machines, but the human aspirations that more often than not are
left unresolved by the machines. . . ”. Powers of imagination are then
based in the human beings doing the imagining, in the human powers
able to transcend the actual and factual ways of perception and to
grasp the unseen, unheard and unthought-of media creations. Variation remains connected to the principle of the central point where
variation is perceived.
Talking of the primacy of variation, we are easily reminded of
Siegfried Zielinski's application of the idea of ‘variantology' as an
‘anarchaeology of media', a task dedicated to the primacy of variation resisting the homogeneous drive of commercialised media spheres.
Excavating dreams of past geniuses, from Empedocles to Athanasius
Kircher's cosmic machines and communication networks to Ernst Florens Friedrich Chladni's visualisation of sound, Zielinski has been underlining the creative potential in an exercise of imagining media. In
this context, he gives a threefold definition of the term ‘imaginary media' in his
chapter in the Book of Imaginary Media:
• Untimely media/apparatus/machines: “Media devised and designed
either much too late or much too early. . . ”
• Conceptual media/apparatus/machines: “Artefacts that were only
ever sketched as models. . . but never actually built.”
• Impossible media/apparatus/machines: “Imaginary media in the
true sense, by which I mean hermetic and hermeneutic machines. . .
they cannot actually be built, and whose implied meanings nonetheless have an impact on the factual world of media.”
A bit reminiscent of the baroque idea, variation is primary, claims
Zielinski. Whereas capitalist-orientated consumer media culture
is working towards a psychopathia medialis of homogenized media
technological environments, variantology is committed to promoting
heterogeneity, finding dynamic moments of the media archaeological past,
and excavating radical experiments that push the limits of what can
be seen, heard and thought. Variantology is then implicitly suggested
as a mode of ontogenesis, of bringing forth, of modulation and change
– an active mode of creation instead of distanced contemplation.
Indeed, the aim of promoting diversity is a much welcomed one,
but I would like to propose a slight adjustment to this task, something that I engage under the banner of ‘insect media'. Whereas
Zielinski and much of the existing media archaeological research still
starts off from the human world of male inventor-geniuses, I propose
a slightly more distributed look at the media archaeology of affects,
capacities, modes of perception and movement, which are primarily
not attached to a specific substance (animal, technology) but, since
the 19th century at least, refer to a certain passage, a vector from animals to technology and vice versa. Here, a mode of baroque thought,
a thought tuned in terms of variations becomes unravelled with the
help of animality that is not to be seen as a metaphor, but as a metamorphosis, as ‘teachings' in weird perceptions, novel ways of moving,
new ways of sensing, opening up to the world of sensations and contracting them. Instead of looking for variations through inventions of
people, we can turn to the ‘storehouses of invention' of, for example,
insects, which from the 19th century on were introduced as an alien
form of media in themselves. Next I will elaborate how we can use
these tiny animals as philosophical and media archaeological tools to
address media and technology as intensities that signal weird sensory
experiences.
Novel Sensoriums

During the latter half of the 19th century, insects were seen as
uncanny but powerful forms of media in themselves, capable of weird
sensory and kinaesthetic experiences. Examples range from popular newspaper discourse to scientific measurements and such early
best-sellers as An Introduction to Entomology; or, Elements of the
Natural History of Insects: Comprising an Account of Noxious and
Useful Insects, of Their Metamorphoses, Hybernation, Instinct (1815–1826) by William Kirby and William Spence.
Since the 19th century, insects and animal affects are found not only
in biology but also in art, technology and popular culture. In
this sense, the 19th century interest in insects produces a valuable
perspective on the intertwining of biology (entomology), technology
and art, where the basics of perception are radically detached from
human-centred models towards the animal kingdom. In addition, this
science-technology-art trio presents a challenge to rethink the forces
which form what we habitually refer to as ‘media' as modes of perception. By expanding our notions of ‘media' from the technological
apparatuses to the more comprehensive assemblages that connect biological, technological, social and aesthetic issues, we are also able to
bring forth novel contexts for contemporary analysis and design of media systems. In a way, then, the concept of the ‘insect' functions here
as a displacing and a deterritorialising force that seeks a questioning
of where and in what kind of conditions we approach media technologies. This is perhaps an approach that moves beyond a focus on
technology per se, but still does not remain blind to the material forces
of the world. It presents an alternative to the ‘substance-approaches'
that start from a stability or a ground like ‘technology' or ‘humans'.
It is my claim that Deleuzian biophilosophy, which has taken elements
from Spinozian ontology, von Uexküll's ethology, Whitehead's ideas
as well as Simondon's notions on individuation, is able to approach
the world as media in itself: a contracting of forces and analysing
them in terms of their affects, movements, speeds and slownesses.
These affects are primary defining capacities of an entity, instead of
a substance or a class it belongs to, as Deleuze explains in his short
book Spinoza: Practical Philosophy. From this perspective we can
adopt a novel media archaeological rewiring that looks at media history not as one of inventors, geniuses and solid technologies, but as a
field of affects, interactions and modes of sensation and perception.
Examples from 19th century popular discourse are illustrative.
In 1897, the New York Times addressed spiders as ‘builders, engineers
and weavers', and also as ‘the original inventors of a system of telegraphy'. Spiders' webs offer themselves as ingenious communication
systems which do not merely signal according to a binary setting
(something has hit the web/has not hit the web) but transmit information regarding the “general character and weight of any object
touching it (. . . )”. Or take for example the book Beautés et merveilles
de la nature et des arts by Eliçagaray from the 18th century, which
lists both technological and animal wonders – bees and
ants, electricity and architectural constructions – as marvels of artifice
and nature.
Similar accounts abound from the mid-19th century on. Insects sense,
move, build, communicate and even create art in various ways that
raised wonder and awe, for example in U.S. popular culture. An apt
example of the 19th century insect mania is the New York Times
story (May 29, 1880) about the ‘cricket mania' of a certain young
lady who collected and trained crickets as musical instruments:
200 crickets in a wirework-house, filled with ferns and shells,
which she called a ‘fernery'. The constant rubbing of the wings
of these insects, producing the sounds so familiar to thousands
everywhere, seemed to be the finest music to her ears. She
admitted at once that she had a mania for capturing crickets.
Besides entertainment, and in a much earlier framework, the classic
of modern entomology, the aforementioned An Introduction to Entomology by Kirby and Spence, already implicitly presented, throughout
its four-volume best-seller, the idea of a primitive technics of nature –
insect technics immanent to their surroundings.
Kirby and Spence's take probably attracted the attention it did
because of its catchy language, but also because of what could be called its
ethological touch. Insects were approached as living and interacting
entities that are intimately coupled with their environment. Insects
intertwine with human lives (“Direct and indirect injuries caused by
insects, injuries to our living vegetable property but also direct and
indirect benefits derived from insects”), but also engage in ingenious
building projects, stratagems, sexual behaviour and other expressive
modes of motion, perception and sensation. Instead of pertaining to a
taxonomic account of the interrelations between insect species, their
forms, growth or for example structural anatomy, An Introduction to
Entomology (vol. 1) is traversed by a curiosity cabinet kind of touch
on the ethnographics of insects. Here, insects are for example war
machines, like the horse-fly (Tabanus L.): “Wonderful and various
are the weapons that enable them to enforce their demand. What
would you think of any large animal that should come to attack you
with a tremendous apparatus of knives and lancets issuing from its
mouth?”.
From Kirby and Spence to later entomologists and other writers,
insects' powers of building continuously attracted the early entomological gaze. Buildings of nature were described as more fabulous than
the pyramids of Egypt or the aqueducts of Rome. Suddenly, in this
weird parallel world, such minuscule and admittedly small-brained
entities as termites were pictured as akin to the ancient monarchies
and empires of Western civilization. The Victorian appreciation of
ancient civilization could also incorporate animal kingdoms and their
buildings of monarchic measurements. Perhaps the parallel was not
to be taken literally, but in any case it expressed a curious interest
towards microcosmical worlds. A recurring trope was that of ‘insect
geometrics', which seemed, with an accuracy paralleled only in mathematics, to follow and fold nature's resources into micro versions of
emerging urban culture. To quote Kirby and Spence's An Introduction to Entomology, vol. 2:
No thinking man ever witnesses the complexness and yet regularity and efficiency of a great establishment, such as the Bank
of England or the Post Office, without marvelling that even human reason can put together, with so little friction and such
slight deviations from correctness, machines whose wheels are
composed not of wood and iron, but of fickle mortals of a thousand different inclinations, powers, and capacities. But if such
establishments be surprising even with reason for their prime
mover, how much more so is a hive of bees whose proceedings
are guided by their instincts alone!
Whereas the imperialist powers of Europe headed for overseas conquests, the mentality of exposition and of mapping new terrains also turned
towards fields other than the geographical. The Seeing Eye – a
key figure of hierarchical modern power – could also be a non-human
eye, as with the fly, which according to Steven Connor can be seen as
the recurring emblem of a “radically alien mode of entomological vision”
with its huge eyes consisting of 4,000 sensors. Hence, it is fitting how
in 1898 the idea of “photographing through a fly's eye” was suggested
as a mode of experimental vision – able also to catch Queen Victoria
with “the most infinitesimal lens known to science”, that of a dragonfly.


Jean-Jacques Lecercle explains how the Victorian enthusiasm for
entomology and insect worlds is related to a general discourse of natural history that, as a genre, labelled the century. Through the themes
of ‘exploration' and ‘taxonomy', Lecercle claims that Alice in Wonderland can be read as a key novel of the era in its evaluation and
classification of various life worlds beyond the human. As with Alice in
the 1865 novel, new landscapes and exotic species are offered as an
armchair exploration of worlds not merely extensive but also opened
up by an intensive gaze into microcosms. Uncanny phenomenal worlds
are what tie together the entomological quest, Darwinian-inspired biological accounts of curious species and Alice's adventures into imaginative worlds of twisting logic. In taxonomic terms, the entomologist
is surrounded by a new cult of private and public archiving. New
modes of visualizing and representing insect life produce a new phase
of taxonomy becoming a public craze instead of merely a scientific
tool. Again, the wonder worlds of Alice or Edward Lear, the Victorian nonsense poet, are the ideal point of reference for the 19th century
natural historian and entomologist, as Lecercle writes:
And it is part of a craze for discovering and classifying new
species. Its advantage over natural history is that it can invent those species (like the Snap-dragon-fly) in the imaginative
sense, whereas natural history can invent them only in the
archaeological sense, that is discover what already exists. Nonsense is the entomologist's dream come true, or the Linnaean
classification gone mad, because gone creative (. . . )
For Alice, the feeling of not being herself and “being so many different sizes in a day is very confusing”, which of course is something
incomprehensible to the Caterpillar she encounters. It is not queer for
the Caterpillar whose mode of being is defined by the metamorphosis
and the various perception/action-modulations it brings about. It
is only the suddenness of the becoming-insect of Alice that dizzies
her. A couple of years later, in The Population of an Old Pear-Tree;
or, Stories of Insect Life (1870), an everyday meadow is disclosed as
a vivacious microcosm in itself. The harmonious scene, “like a great
amphitheatre”, is filled with life that easily escapes the (human) eye.
Like Alice, the protagonist wandering in the meadow is “lulled and
benumbed by dreamy sensations” which however transport him suddenly into new perceptions and bodily affects. What is revealed to
our boy hero in this educational novel fashioned in the style of travel
literature (connecting it thus to the colonialist contexts of its age)
is a world teeming with sounds, movements, sensations and insect
beings (huge spiders, cruel mole-crickets, energetic bees) that are beyond the human form (despite the constant tension of such narratives
as educational and moralising tales that anthropomorphize affective
qualities into human characteristics). True to entomological classification, a big part is reserved for the structural-anatomical differences
of insect life, but the affect-life of how insects relate to their
surroundings is also under scrutiny.
As precursors of ethology, such natural historical quests (whether
archaeological, entomological or imaginative) were expressing an appreciation of phenomenal worlds differing from that of the human
with its two hands, two eyes and two feet. In a way, this entailed a
kind of an extended Kantianism interested not only in the conditions
of possibility of experiences, but the emergence of alternative potentials on the immanent level of life that functions through a technics of
nature. Curiously, the fascination with new phenomenal worlds was
connected to the emergence of new technologies of movement, sensation and communication (all challenging the Kantian apperception of
Man as the historically constant basis of knowledge and perception).
Nature was gradually becoming the “new storehouse of invention”
(New York Times, August 4, 1901) that was to entice inventors into
perfecting their developments. What I argue is that this theme can
also be read as an expression of a shift in understanding technology
– a shift that marked the rise of the modern discourse concerning media
technologies since the end of the 19th century, and one that has usually
been attributed to an anthropological and ethnological turn in understanding technology. I also address this theme in another text of
mine, ‘Insect Technics'. For several writers, such as Ernst Kapp, who
became one of the predecessors of later theories of media as ‘extensions of man', it was the human body that served as a storage house
of potential media. However, at the same time, another undercurrent
proposed to think of technologies, inventions and solutions to problems posed by life as stemming from a quite different class of
bodies, namely insects.
So beyond Kant, we move on to a baroque world, not as a period of
art, but as a mode of folding and enveloping new ways of perception
and movement. The early years and decades of technical media were
characterized by the new imaginary of communication, from work
by inventors such as Nikola Tesla to the various modes of e.g. spiritualism recently analyzed in the artworks of Zoe Beloff. However, one
can radicalize the viewpoint even further, take an animal turn, and
look not for alien but for animal and insect ways of sensing the world.
Naturally, this is exactly what is being proposed in a variety of media
art pieces and exhibitions. Insects have made their appearance for
example in Toshio Iwai's Music Insects (1990), Sarah Peebles' electroacoustic Insect Grooves as an example of imaginary soundscapes,
David Dunn's acoustic ecology pieces with insect sounds, the Sci-Art:
Bio-Robotic Choreography project (2001, with Stelarc as one of the
participants), and Laura Beloff's Spinne (2002), a networked spider installation that works according to web spider/ant/crawler
technology.
Here we are dealing not just with representing the insect, but engaging with the animal affects, indistinguishable from those of the
technological, as in Stelarc's work where the experimentation with
new bodily realities is a form of becoming-insect of the technological
human body. Imagining by doing is a way to engage directly with
affects of becoming-animal of media where the work of sound and
body artists doubles the media archaeological analysis of historical
strata. In other words, one should not reside on the level of intriguing representations of imagined ways of communication, or imagined
apparatuses that never existed, but realize the overabundance of real
sensations, perceptions to contract, to fold, the neomaterialist view
towards imagined media.


Literature
Ernest van Bruyssel, The population of an old pear-tree; or, Stories
of insect life. (New York: Macmillan and co., 1870).
Lewis Carroll, Alice's Adventures in Wonderland and Through the
Looking Glass. Edited with an Introduction and Notes by Roger
Lancelyn Green. (Oxford: Oxford University Press, 1998).
Claire Colebrook, ‘The Sense of Space. On the Specificity of Affect
in Deleuze and Guattari.' In: Postmodern Culture, vol. 15, issue 1,
2004.
Steven Connor, Fly. (London: Reaktion Books, 2006).
Manuel DeLanda, War in the Age of Intelligent Machines. (New
York: Zone Books, 1991).
Gilles Deleuze, Spinoza: Practical Philosophy. Transl. Robert
Hurley. (San Francisco: City Lights, 1988).
Gilles Deleuze, The Fold. Transl. Tom Conley. (Minneapolis:
University of Minnesota Press, 1993).
Ernst Kapp, Grundlinien einer Philosophie der Technik: Zur Entstehungsgeschichte der Kultur aus neuen Gesichtspunkten. (Braunschweig:
Druck und Verlag von George Westermann, 1877).
William Kirby & William Spence, An Introduction to Entomology,
or Elements of the Natural History of Insects. Volumes 1 and 2.
Unabridged facsimile of the 1843 edition. (London: Elibron, 2005).
Eric Kluitenberg (ed.), Book of Imaginary Media. Excavating the
Dream of the Ultimate Communication Medium. (Rotterdam: NAi
publishers, 2006).
Jean-Jacques Lecercle, Philosophy of Nonsense: The Intuitions of
Victorian Nonsense Literature. (London: Routledge, 1994).
Jussi Parikka, ‘Insect Technics: Intensities of Animal Bodies.' In:
(Un)Easy Alliance - Thinking the Environment with Deleuze/Guattari, edited by Bernd Herzogenrath. (Newcastle: Cambridge Scholars
Press, Forthcoming 2008).
Siegfried Zielinski, ‘Modelling Media for Ignatius Loyola. A Case
Study on Athanasius Kircher's World of Apparatus between the Imaginary and the Real.' In: Book of Imaginary Media, edited by Kluitenberg. (Rotterdam: NAi, 2006).


PIERRE BERTHET
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Extended speakers
& Concert with various extended objects
We invited Belgian artist Pierre Berthet to create an installation
for V/J10 that explores the resonance of EVP voices. He made a
netting of thin metal wires which he suspended from the ceiling of
the haunted house in the La Bellone courtyard.
Through these metal wires, loudspeakers without membranes were
connected to a network of resonating cans. Sinus tones and radio
recordings were transmitted through the speakers, making the metal
wires vibrate, which in turn caused the cans to resonate.
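For the signal side of such a setup, a sinus tone is about the simplest possible source material. The following Python sketch writes one to a WAV file; the frequency and duration are illustrative, and the wires, cans and courtyard acoustics that did the actual work are of course beyond simulation.

```python
# Minimal sketch: generate a sinus tone as raw 16-bit samples and
# write it to a WAV file. Parameters are illustrative only.
import math
import struct
import wave

RATE = 44100        # samples per second
FREQ = 220.0        # sinus tone frequency in Hz
SECONDS = 5

with wave.open("sinus_tone.wav", "w") as out:
    out.setnchannels(1)          # mono
    out.setsampwidth(2)          # 16-bit samples
    out.setframerate(RATE)
    for n in range(RATE * SECONDS):
        sample = int(32767 * math.sin(2 * math.pi * FREQ * n / RATE))
        out.writeframes(struct.pack("<h", sample))
```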

figure 26 A netting of thin metal wires suspended from the ceiling of the haunted house in the La Bellone courtyard



Concert with various extended objects


LEIFF ELGREN, CM VON HAUSSWOLFF
License: Fully Restricted Copyright
EN

Elgaland-Vargaland
The Embassy of the Kingdoms of Elgaland-Vargaland (KREV)
The Kingdoms were proclaimed in 1992 and consist of all ‘Border
Territories': geographical, mental and digital. Elgaland-Vargaland is
the largest – and most populous – realm on Earth, incorporating all
boundaries between other nations as well as ‘Digital Territory' and
other states of existence. Every time you travel somewhere, and every
time you enter another form of being, such as the dream state, you
visit Elgaland-Vargaland, the kingdom founded by Leiff Elgren and
CM von Hausswolff.
During the Venice Biennale, Elgren stated that all dead people
are inhabitants of the country Elgaland-Vargaland unless they had
declared that they did not want to be an inhabitant.
Since V/J10, the Elgaland-Vargaland Embassy permanently resides in La Bellone.

figure 80 Since V/J10, the Elgaland-Vargaland Embassy permanently resides in La Bellone

figure 81 Ambassadors Yves Poliart and Wendy Van Wynsberghe


NL

Elgaland-Vargaland
figure 84 Every time you travel somewhere, and every time you enter another form of being, you visit Elgaland-Vargaland.


CM VON HAUSSWOLFF, GUY-MARC HINANT
License: Creative Commons Attribution-NonCommercial-ShareAlike

figure 88 Drawings by Dominique Goblet, EVP sounds by Carl Michael von Hausswolff, images by Guy-Marc Hinant

figure 87 EVP could be the result of psychic echoes from the past, psychokinesis, or the thoughts of aliens or nature spirits.

For more information on EVP, see: http://en.wikipedia.org/wiki/Electronic_voice_phenomenon#_note-fontana1

EN

Ghost Machinery
During V/J10 we showed an audiovisual installation entitled Ghost
Machinery, with drawings by Dominique Goblet, EVP sounds by Carl
Michael von Hausswolff, and images by Guy-Marc Hinant, based on
Dr. Stempnick's Electronic Voice Phenomena recordings.
EVP has been studied since the 1950s, primarily by paranormal researchers, who have concluded that the most likely explanation for
the phenomena is that they are produced by the spirits of the deceased. In 1959, Attila Von Szalay first claimed to have recorded the
‘voices of the dead', which led to the experiments of Friedrich Jürgenson. The 1970s brought increased interest and research, including
the work of Konstantin Raudive. In 1980, William O'Neill, backed by
industrialist George Meek, built a ‘Spiricom' device, which was said to
facilitate very clear communication between this world and the spirit
world.
Investigation of EVP continues today through the work of many
experimenters, including Sarah Estep and Alexander McRae. In addition to spirits, paranormal researchers have claimed that EVP could
be due to psychic echoes from the past, psychokinesis unconsciously
produced by living people, or the thoughts of aliens or nature spirits.
Paranormal investigators have used EVP in various ways, including
as a tool in an attempt to contact the souls of dead loved ones and in
ghost hunting. Organizations dedicated to EVP include the American
Association of Electronic Voice Phenomena, the International Ghost
Hunters Society, as well as the skeptical Rorschach Audio project.


Read Feel Feed Real


EN

Electromagnetic fields of ordinary objects acted as
source material for an audio performance; surveillance
cameras and legislation are ingredients for a science fiction film; live annotation of video streaming with the help
of IRC chats ...
A mobile video laboratory was set up during the festival, to test out how to bring together scripting, annotation, data readings and recordings in digital archives.
Operating somewhere between surveillance and observation, the Open Source video team mixed hands-on Icecast
streaming workshops with experiments looking at the way
movements are regulated through motion control and vice
versa.

MANU LUKSCH, MUKUL PATEL
License: Creative Commons Attribution - NonCommercial - ShareAlike license
figure 94 CCTV sculpture in a park in London

EN

Faceless: Chasing the Data Shadow
Stranger than fiction
Remote-controlled UAVs (Unmanned Aerial Vehicles) scan the city
for anti-social behaviour. Talking cameras scold people for littering
the streets (in children's voices). Biometric data is extracted from
CCTV images to identify pedestrians by their face or gait. A housing project's surveillance cameras stream images onto the local cable
channel, enabling the community to monitor itself.

figure 95: Poster in London

These are not projections of the science fiction film that this text discusses, but techniques that are used today in Merseyside 1, Middlesbrough 2, Newham and Shoreditch 3 in the UK. In terms of both density and sophistication, the UK leads the world in the deployment of surveillance technologies. With an estimated 4.2 million CCTV cameras in place, its inhabitants are the most watched in the world. 4 Many London buses have five or more cameras inside, plus several outside, including one recording cars that drive in bus lanes.

1 "Police spy in the sky fuels 'Big Brother fears'", Philip Johnston, Telegraph, 23/05/2007, http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2007/05/22/ndrone22.xml. The Guardian has reported that the MoD rents out an RAF-staffed spy plane for public surveillance, carrying reconnaissance equipment able to monitor telephone conversations on the ground. It can also be used for automatic number plate recognition: "Cheshire police recently revealed they were using the Islander [aircraft] to identify people speeding, driving when using mobile phones, overtaking on double white lines, or driving erratically."
2 "'Talking' CCTV scolds offenders", BBC News, 4 April 2007, http://news.bbc.co.uk/2/hi/uk_news/england/6524495.stm
3 "If the face fits, you're nicked", Nick Huber, Independent, Monday, 1 April 2002, http://www.independent.co.uk/news/business/analysis-and-features/if-the-face-fits-youre-nicked-656092.html. "In 2001 the Newham system was linked to a central control room operated by the London Metropolitan Police Force. In April 2001 the existing CCTV system in Birmingham city centre was upgraded to smart CCTV. People are routinely scanned by both systems and have their faces checked against the police databases." Centre for Computing and Social Responsibility, http://www.ccsr.cse.dmu.ac.uk/resources/general/ethicol/Ecv12no1.html
But CCTV images of our bodies are only one of many traces of data that we leave in our wake, voluntarily and involuntarily. Vehicles are tracked using Automated Number Plate Recognition systems, our movements revealed via location-aware devices (such as cell phones), the trails of our online activities recorded by Internet Service Providers, our conversations overheard by the international communications surveillance system Echelon, shopping habits monitored through store loyalty cards, individual purchases located using RFID (Radio-Frequency Identification) tags, and our meal preferences collected as part of PNR (flight passenger) data. 5 Our digital selves are many-dimensional, alert, unforgetting.

4 A Report on the Surveillance Society, for the Information Commissioner by the Surveillance Studies Network, September 2006, p. 19. Available from http://www.ico.gov.uk
5 'e-Borders' is a £1.2bn passenger-screening programme to be introduced in 2009 and to be complete by 2014. The single border agency, combining immigration, customs and visa checks, includes a £650m contract with the consortium Trusted Borders for a passenger-screening IT system: anyone entering or leaving Britain is to give 53 pieces of information in advance of travel. This information, taken when a travel ticket is bought, will be shared among police, customs, immigration and the security services for at least 24 hours before a journey is due to take place. Trusted Borders consists of US military contractor Raytheon Systems, which will work with Accenture, Detica, Serco, QinetiQ, Steria, Capgemini, and Daon. Ministers are also said to be considering the creation of a list of 'disruptive' passengers. It is expected to cost travel companies £20 million a year to compile the information. These costs will be passed on to customers via ticket prices, and the Government is considering introducing its own charge on travellers to recoup costs. A pilot of the e-Borders technology, known as Project Semaphore, has already screened 29 million passengers.
Similarly, the arms manufacturer Lockheed Martin, the biggest defence contractor in the U.S., which undertakes intelligence work as well as contributing to the Trident programme in the UK, is bidding to run the UK 2011 Census. New questions in the 2011 Census will include information about income and place of birth, as well as existing questions about languages spoken in the household and many other personal details. The Canadian Federal Government granted Lockheed Martin a $43.3 million deal to conduct its 2006 Census. Public outcry against it resulted in only civil servants handling the actual data, and a new government task force being set up to monitor privacy during the Census.
http://censusalert.org.uk/
http://www.vivelecanada.ca/staticpages/index.php/20060423184107361


Increasingly, these data traces are arrayed and administered in networked structures of global reach. It is not necessary to posit a totalitarian conspiracy behind this accumulation – data mining is an exigency of both market efficiency and bureaucratic rationality. Much has been written on the surveillance society and the society of control, and it is not the object here to construct a general critique of data collection, retention and analysis. However, it should be recognised that, in the name of efficiency and rationality – and, of course, security – an ever-increasing amount of data is being shared (also sold, lost and leaked 6) between the keepers of such seemingly unconnected records as medical histories, shopping habits, and border crossings.
6 Sales: "Personal details of all 44 million adults living in Britain could be sold to private companies as part of government attempts to arrest spiralling costs for the new national identity card scheme, set to get the go-ahead this week. [...] ministers have opened talks with private firms to pass on personal details of UK citizens for an initial cost of £750 each." "Ministers plan to sell your ID card details to raise cash", Francis Elliott, Andy McSmith and Sophie Goodchild, Independent, Sunday 26 June 2005, http://www.independent.co.uk/news/uk/politics/ministers-plan-to-sell-your-id-card-details-to-raise-cash-496602.html
Losses: In January 2008, hundreds of documents with passport photocopies, bank statements and benefit claims details from the Department of Work and Pensions were found on a road near Exeter airport, following their loss from a TNT courier vehicle. There were also documents relating to home loans and mortgage interest, and details of national insurance numbers, addresses and dates of birth.
In November 2007, HM Revenue and Customs (HMRC) posted, unrecorded and unregistered via TNT, computer discs containing personal information on 25 million people from families claiming child benefit, including the bank details of parents and the dates of birth and national insurance numbers of children. The discs were then lost.
Also in November, HMRC admitted a CD containing the personal details of thousands of Standard Life pension holders had gone missing, leaving them at heightened risk of identity theft. The CD, which contained data relating to 15,000 Standard Life pensions customers, including their names, National Insurance numbers and pension plan reference numbers, was lost in transit from the Revenue office in Newcastle to the company's headquarters in Edinburgh by 'an external courier'.
Thefts: In November 2007, the MoD acknowledged the theft of a laptop computer containing the personal details of 600,000 Royal Navy, Royal Marines, and RAF recruits and of people who had expressed interest in joining, which contained, among other information, passport and national insurance numbers and bank details.
In October 2007, a laptop holding sensitive information was stolen from the boot of an HMRC car. A staff member had been using the PC for a routine audit of tax information from several investment firms. HMRC refused to comment on how many individuals may be at risk, or how many financial institutions have had their data stolen as well. The BBC suggests the computer held data on around 400 customers with high-value individual savings accounts (ISAs) at each of five different companies – including Standard Life and Liontrust. (In May, Standard Life sent around 300 policy documents to the wrong people.)


Legal frameworks exist that are intended to safeguard a conception of privacy by limiting data transfers to appropriate parties. Such laws, and in particular the UK Data Protection Act (DPA, 1998) 7, are the subject of investigation of the film Faceless.
From Act to Manifesto
“I wish to apply, under the Data Protection Act,
for any and all CCTV images of my person held
within your system. I was present at [place] from
approximately [time] onwards on [date].” 8
For several years, ambientTV.NET conducted a series of exercises
to visualise the data traces that we leave behind, to render them
into experience and to dramatise them, to watch those who watch
us. These experiments, scrutinising the boundary between public
and private in post-9/11 daily life, were run under the title ‘the Spy
School'. In 2002, the Spy School carried out an exercise to test the
reach of the UK Data Protection Act as it applies to CCTV image
data.
The Data Protection Act 1998 seeks to strike a balance between
the rights of individuals and the sometimes competing interests
of those with legitimate reasons for using personal information.
The DPA gives individuals certain rights regarding information
held about them. It places obligations on those who process information (data controllers) while giving rights to those who are
the subject of that data (data subjects). Personal information
covers both facts and opinions about the individual. 9

7 The full text of the DPA (1998) is at http://www.opsi.gov.uk/ACTS/acts1998/19980029.htm
9 Data Protection Act Fact Sheet available from the UK Information Commissioner's Office, http://www.ico.gov.uk


The original DPA (1984) was devised to 'permit and regulate' access to computerised personal data such as health and financial records. A later EU directive broadened the scope of data protection, and the remit of the DPA (1998) was extended to cover, amongst other data, CCTV recordings. In addition to the DPA, CCTV operators 'must' comply with other laws related to human rights, privacy, and procedures for criminal investigations, as specified in the CCTV Code of Practice (http://www.ico.gov.uk).
As the first subject access request letters were successful in delivering CCTV recordings for the Spy School, it then became pertinent
to investigate how robust the legal framework was. The Manifesto for
CCTV filmmakers was drawn up, permitting the use only of recordings obtained under the DPA. Art would be used to probe the law.

figure 92: Still from Faceless, 2007

figure 94: Multiple, conflicting timecode stamps

A legal readymade
Vague spectres of menace caught on time-coded surveillance
cameras justify an entire network of peeping vulture lenses. A
web of indifferent watching devices, sweeping every street, every
building, to eliminate the possibility of a past tense, the freedom
to forget. There can be no highlights, no special moments: a
discreet tyranny of now has been established. Real time in its
most pedantic form. 10
Faceless is a CCTV science fiction fairy tale set in London, the city
with the greatest density of surveillance cameras on earth. The film
is made under the constraints of the Manifesto – images are obtained
from existing CCTV systems by the director/protagonist exercising
her/his rights as a surveilled person under the DPA. Obviously the
protagonist has to be present in every frame. To comply with privacy
legislation, CCTV operators are obliged to render other people in
the recordings unidentifiable – typically by erasing their faces, hence
the faceless world depicted in the film. The scenario of Faceless thus
derives from the legal properties of CCTV images.
10 Iain Sinclair, Lights Out for the Territory, Granta, London, 1998, p. 91.


“RealTime orients the life of every citizen. Eating, resting, going
to work, getting married – every act is tied to RealTime. And every
act leaves a trace of data – a footprint in the snow of noise...” 11
The film plays in an eerily familiar city, where the reformed RealTime calendar has dispensed with the past and the future, freeing
citizens from guilt and regret, anxiety and fear. Without memory or
anticipation, faces have become vestigial – the population is literally
faceless. Unimaginable happiness abounds – until a woman recovers
her face...
There was no traditional shooting script: the plot evolved during
the four-year long process of obtaining images. Scenes were planned
in particular locations, but the CCTV recordings were not always
obtainable, so the story had to be continually rewritten.
Faceless treats the CCTV image as an example of a legal readymade (‘objet trouvé'). The medium, in the sense of raw materials
that are transformed into artwork, is not adequately described as
simply video or even captured light. More accurately, the medium
comprises images that exist contingent on particular social and legal
circumstances – essentially, images with a legal superstructure. Faceless interrogates the laws that govern the video surveillance of society
and the codes of communication that articulate their operation, and
in both its mode of coming into being and its plot, develops a specific
critique.
Reclaiming the data body
Through putting the DPA into practice and observing the consequences over a long exposure, close-up, subtle developments of the
law were made visible and its strengths and lacunae revealed.
“I can confirm there are no such recordings of
yourself from that date, our recording system was
not working at that time.” (11/2003)

11 Faceless, 2007


Many data requests had negative outcomes because either the surveillance camera, or the recorder, or the entire CCTV system in question was not operational. Such a situation constitutes an illegal use of CCTV: the law demands that operators "comply with the DPA by making sure [...] equipment works properly." 12
In some instances, the non-functionality of the system was only
revealed to its operators when a subject access request was made. In
the case below, the CCTV system had been installed two years prior
to the request.
“Upon receipt of your letter [...] enclosing the
required 10£ fee, I have been sourcing a company
who would edit these tapes to preserve the privacy of other individuals who had not consented
to disclosure. [...] I was informed [...] that all
tapes on site were blank. [.. W]hen the engineer
was called he confirmed that the machine had not
been working since its installation.
Unfortunately there is nothing further that can be
done regarding the tapes, and I can only apologise
for all the inconvenience you have been caused.”
(11/2003)
Technical failures on this scale were common. Gross human errors
were also readily admitted to:

12 CCTV Systems and the Data Protection Act 1998, available from http://www.ico.gov.uk


“As I had advised you in my previous letter, a request was made to remove the tape and for it not
to be destroyed. Unhappily this request was not
carried out and the tape was wiped according with
the standard tape retention policy employed by
[deleted]. Please accept my apologies for this and
assurance that steps have been taken to ensure a
similar mistake does not happen again.” (10/2003)

figure 98: The Rotakin Test, devised by the UK Home Office Police Scientific Development Branch, measures surveillance camera performance.

Some responses, such as the following, were just mysterious (data
request made after spending an hour below several cameras installed
in a train carriage).
“We have carried out a careful review of all relevant tapes and we confirm that we have no images of
you in our control.” (06/2005)
Could such a denial simply be an excuse not to comply with the costly
demands of the DPA?
Many older cameras deliver image quality so poor that faces are unrecognisable. In such cases the operator fails in the obligation to run CCTV for the declared purposes.
"You will note that yourself and a colleague's faces look quite indistinct in the tape, but the picture you sent to us shows you wearing a similar fur coat, and our main identification had been made through this and your description of the location." (07/2002)


To release data on the basis of such weak identification compounds
the failure.
Much confusion is caused by the obligation to protect the privacy
of third parties in the images. Several data controllers claimed that
this relieved them of their duty to release images:
“[... W]e are not able to supply you with the images you requested because to do so would involve
disclosure of information and images relating to
other persons who can be identified from the tape
and we are not in a position to obtain their consent to disclosure of the images. Further, it is
simply not possible for us to eradicate the other
images. I would refer you to section 7 of the Data
Protection Act 1998 and in particular Section 7
(4).” (11/2003)
Even though the section referred to states that it is:
“not to be construed as excusing a data controller
from communicating so much of the information
sought by the request as can be communicated without disclosing the identity of the other individual concerned, whether by the omission of names or
other identifying particulars or otherwise.”
Where video is concerned, anonymisation of third parties is an expensive, labour-intensive procedure – one common technique is to occlude
each head with a black oval. Data controllers may only charge the
statutory maximum of 10 £ per request, though not all seemed to be
aware of this:


“It was our understanding that a charge for production of the tape should be borne by the person
making the enquiry, of course we will now be checking into that for clarification. Meanwhile please
accept the enclosed video tape with compliments of
[deleted], with no charge to yourself.” (07/2002)

figure 90: Off with their heads!

Visually provocative and symbolically charged as the occluded heads
are, they do not necessarily guarantee anonymity. The erasure of a
face may be insufficient if the third party is known to the person requesting images. Only one data controller undeniably (and elegantly)
met the demands of third party privacy, by masking everything but
the data subject, who was framed in a keyhole. (This was an uncommented second offering; the first tape sent was unprocessed.) One
CCTV operator discovered a useful loophole in the DPA:
“I should point out that we reserve the right, in
accordance with Section 8(2) of the Data Protection
Act, not to provide you with copies of the information requested if to do so would take disproportionate effort.” (12/2004)
What counts as ‘disproportionate effort'? The gold standard was set
by an institution whose approach was almost baroque – they delivered
hard copies of each of the several hundred relevant frames from the
time-lapse camera, with third parties' heads cut out, apparently with
nail scissors.
Two documents had (accidentally?) slipped in between the printouts – one a letter from a junior employee tendering her resignation
(was it connected with the beheading job?), and the other an ironic
memo:


“And the good news -- I enclose the 10 £ fee to be
passed to the branch sundry income account.” (Head
of Security, internal communication 09/2003)
From 2004, the process of obtaining images became much more difficult.
“It is clear from your letter that you are aware
of the provisions of the Data Protection Act and
that being the case I am sure you are aware of
the principles in the recent Court of Appeal decision in the case of Durant vs. Financial Services Authority. It is my view that the footage you
have requested is not personal data and therefore
[deleted] will not be releasing to you the footage
which you have requested.” (12/2004)
Under Common Law, judgements set precedents. The decision in the case Durant vs. Financial Services Authority (2003) redefined 'personal data'; since then, simply featuring in raw video data does not give a data subject the right to obtain copies of the recording. Only if something of a biographical nature is revealed does the subject retain the right.


“Having considered the matter carefully, we do not
believe that the information we hold has the necessary relevance or proximity to you. Accordingly
we do not believe that we are obligated to provide
you with a copy pursuant to the Data Protection Act
1988. In particular, we would remark that the video
is not biographical of you in any significant way.”
(11/2004)
Further, with the introduction of cameras that pan and zoom, being
filmed as part of a crowd by a static camera is no longer grounds for
a data request.
“[T]he Information Commissioners office has indicated that this would not constitute your personal
data as the system has been set up to monitor the
area and not one individual.” (09/2005)
As awareness of the importance of data rights grows, so the actual
provision of those rights diminishes:


figure 89: Still from Faceless, 2007

"I draw your attention to CCTV systems and the Data
Protection Act 1998 (DPA) Guidance Note on when the
Act applies. Under the guidance notes our CCTV system is no longer covered by the DPA [because] we:
• only have a couple of cameras
• cannot move them remotely
• just record on video whatever the cameras pick
up
• only give the recorded images to the police to
investigate an incident on our premises"
(05/2004)
Data retention periods (which data controllers define themselves)
also constitute a hazard to the CCTV filmmaker:
“Thank you for your letter dated 9 November addressed to our Newcastle store, who have passed
it to me for reply. Unfortunately, your letter was
delayed in the post to me and only received this
week. [...] There was nothing on the tapes that you
requested that caused the store to retain the tape
beyond the normal retention period and therefore
CCTV footage from 28 October and 2 November is no
longer available.” (12/2004)
Amidst this sorry litany of malfunctioning equipment, erased tapes,
lost letters and sheer evasiveness, one CCTV operator did produce
reasonable justification for not being able to deliver images:


“We are not in a position to advise whether or not
we collected any images of you at [deleted]. The
tapes for the requested period at [deleted] had
been passed to the police before your request was
received in order to assist their investigations
into various activities at [deleted] during the
carnival.” (10/2003)

figure 91: Still from Faceless, 2007

In the shadow of the shadow
There is debate about the efficacy, value for money, quality of implementation, political legitimacy, and cultural impact of CCTV systems in the UK. While CCTV has been presented as being vital in solving some high profile cases (e.g. the 1999 London nail bomber, or the 1993 murder of James Bulger), at other times it has been strangely, publicly, impotent (e.g. the 2005 police killing of Jean Charles de Menezes). The prime promulgators of CCTV may have lost some faith: during the 1990s the UK Home Office spent 78% of its crime prevention budget on installing CCTV, but in 2005, an evaluation report by the same office concluded that "the CCTV schemes that have been assessed had little overall effect on crime levels." 13
An earlier, 1992, evaluation reported CCTV's broadly positive public reception due to its assumed effectiveness in crime control, acknowledging that "public acceptance is based on limited and partly inaccurate knowledge of the functions and capabilities of CCTV systems in public places." 14
By the 2005 assessment, support for CCTV still "remained high in the majority of cases", but public support was seen to decrease after implementation by as much as 20%. This "was found not to be the reflection of increased concern about privacy and civil liberties, as this remained at a low rate following the installation of the cameras,"
13 Gill, M. and Spriggs, A., Assessing the Impact of CCTV, London: Home Office Research, Development and Statistics Directorate, 2005, pp. 60-61, www.homeoffice.gov.uk/rds/pdfs05/hors292.pdf
14 http://www.homeoffice.gov.uk/rds/prgpdfs/fcpu35.pdf


but “that support for CCTV was reduced because the public became
more realistic about its capabilities” to lower crime.
Concerns, however, have begun to be voiced about function creep
and the rising costs of such systems, prompted, for example, by the
disclosure that the cameras policing London's Congestion Charge remain switched on outside charging hours and that the Met are to
have live access to them, having been exempted from parts of the
Data Protection Act to do so. 15 As such realities of CCTV's daily
operation become more widely known, existing acceptance may be
somewhat tempered.
Physical bodies leave data traces: shadows of presence, conversation, movement. Networked databases incorporate these traces into
data bodies, whose behaviour and risk are priorities for analysis and
commodification, by business and by government. The securing of
a data body is supposedly necessary to secure the human body, either preventatively or as a forensic tool. But if the former cannot
be assured, as is the case, what grounds are there for trust in the
hollow promise of the latter? The all-seeing eye of the panopticon is
not complete, yet. Regardless, could its one-way gaze ever assure an
enabling conception of security?

15 Surveillance State Function Creep – London Congestion Charge "real-time bulk data" to be automatically handed over to the Metropolitan Police etc. http://p10.hostingprod.com/@spyblog.org.uk/blog/2007/07/surveillance_state_function_creep_london_congestion_charge_realtime_bulk_data.html


MICHAEL MURTAUGH

figure 113: Start broadcasting yourself!
License: Free Art License
EN

Active Archives
or: What's wrong with the YouTube documentary?
As someone who has shot video and programmed web-based interfaces to video over the past decade, it has been exciting to see how distributing video via the Internet has become increasingly popularized, thanks in large part to video sharing sites like YouTube. At the same time, I continue to design and write software in search of new forms of collaborative and 'evolving' documentaries; and for myself, and others around me, I feel a disinterest, even an aversion, to posting videos on YouTube. This essay has two threads: (1) I revisit an earlier essay describing the 'Evolving Documentary' model to get at the roots of my enthusiasm for working with video online, and (2) I examine why I find YouTube problematic, and more a reflection of television than of the possibilities that the web offers.
In 1996, I co-authored an essay with Glorianna Davenport, then
my teacher and director of the Interactive Cinema group at the MIT
Media Lab, called Automatist storyteller systems and the shifting
sands of story. 1 In it, we described a model for supporting ‘Evolving
Documentaries', or an “approach to documentary storytelling that
celebrates electronic narrative as a process in which the author(s), a
networked presentation system, and the audience actively collaborate
in the co-construction of meaning.” In this paper, Glorianna included
a section entitled ‘What's wrong with the Television Documentary?'
The main points of this argument were as follows:

figure 114: Join the largest worldwide video-sharing community!

1 http://www.research.ibm.com/journal/sj/363/davenport.html


1. [... T]elevision consumes the viewer. Sitting passively in front of a TV screen, you may appreciate an hour-long documentary; you may even find the story of interest; however, your ability to learn from the program is less than what it might be if you were actively engaged with it, able to control its shape and probe its contents.
Here, it is crucial to understand what is meant by the word 'active'. In a naive comparison between the activities of watching television and surfing the web, one might say that the latter is inherently more active in the sense that the process is 'driven' by the choices of the user; in the early days of the web it became popular to refer to this split as 'lean back vs. lean forward' media. Of course, if one means to talk about cognitive activity, this is clearly misleading, as aimlessly surfing the net can be achieved at near comatose levels of brain function (as any late night surfer can attest) and watching a particularly sharp television program can be incredibly engaging, even life changing. Glorianna would often describe her frustration with traditional documentary by observing the vast difference between her own sense of engagement with a story gained through the process of shooting and editing, versus the experience of an audience member from simply viewing the end result. Thus 'active' here relates to the act of authoring and the construction of meaning. Rather than talking about leaning forward or backward, a more useful split might be between reading and writing. Rather than being a question of bad versus good access, the issue becomes one of two interconnected cognitive processes, both hopefully 'active' and involving thought. An ideal platform for online documentary would be one that facilitates a fluid movement between moments of reflection (reading) and of construction (writing).


2. Television severely limits the ways in which an author can 'grow' a story. A story must be composed into a fixed, unchanging form before the audience can see and react to it: there is no obvious way to connect viewers to the process of story construction. Similarly, the medium offers no intrinsic, immediately available way to interconnect the larger community of viewers who wish to engage in debate about a particular story.
Part of the promise of crossing video with computation is the potential to combine the computer's ability to construct models and run simulations with the random access possibilities of digitized media. Instead of editing a story down into a fixed form or 'final cut', one can program a 'storytelling system' that can act as an 'editor in software'. Thus the system can maintain a dynamic representation of the context of a particular telling, on which to base (or support a viewer in making) editing decisions 'on the fly'. The 'Evolving Documentary' model was intended to support complex stories that would develop over time, and which could best be told from a variety of points of view.
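As an illustration only – a minimal sketch, not the system described in the essay, with all clip names and keywords invented – such an 'editor in software' can be a few lines of code: clips carry keywords, the telling keeps a history, and the next clip is chosen to match the current interests of the viewing without repetition.

def next_clip(clips, history, interests):
    """Pick an unseen clip whose keywords best match the current interests."""
    candidates = [c for c in clips if c["id"] not in history]
    if not candidates:
        return None
    return max(candidates, key=lambda c: len(c["keywords"] & interests))

clips = [
    {"id": "intro",  "keywords": {"harbour", "history"}},
    {"id": "crane",  "keywords": {"harbour", "work"}},
    {"id": "voices", "keywords": {"work", "memory"}},
]
history = {"intro"}               # the dynamic context of this particular telling
interests = {"harbour", "work"}   # what the viewer is currently following
print(next_clip(clips, history, interests)["id"])   # -> 'crane'

A real storytelling system would of course weigh far more than keyword overlap, but even this toy version makes editing decisions 'on the fly' from a representation of the telling so far.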
3. Like published books and movies, television is designed for unidirectional, one-to-many transmission to a mass audience, without variation or personalization of presentation. The remote-control unit and the VCR (videocassette recorder) – currently the only devices that allow the viewer any degree of independent control over the play-out of television – are considered anathema by commercial broadcasters. Grazing, time-shifting, and 'commercial zapping' run contrary to the desire of the industry for a demographically correct audience that passively absorbs the programming – and the intrusive commercial messages – that the broadcasters offer.

Adding a decentralized means of distribution and feedback such as the Internet provides the final piece of the puzzle in creating a compelling new medium for the evolving documentary. No longer would footage have to be excluded for reasons of reaching a 'broad' or average audience. An ideal storytelling system would be one that could connect an individual viewer to whatever material was most personally relevant. The Internet is a unique 'mass medium' in its potential support for enabling access to non-mainstream, individually relevant and personal subject matter.
What's wrong with the YouTube documentary?
YouTube has massively popularized the sharing and consumption of video online. That said, most of the core concerns raised in the arguments about television are still relevant to YouTube when it is considered as a platform for online collaborative documentary.
Clips are primarily 'view-only'
Already in its name, 'YouTube' consciously invokes the television set, thus inviting visitors to 'lean back' and watch. The YouTube interface functions primarily as a showcase of static monolithic elements. Clips are presented as fixed and finished, to be commented upon, rated, and possibly bookmarked, but no more. The clip is 'atomic' in the sense that it's not possible to make selections within a clip, to export images or sound, or even to link to a particular starting point. Without special plugins, the site doesn't even allow downloading of the clip. While users are encouraged 'to embed' YouTube content in other websites (by cutting and pasting special HTML codes that refer back to the YouTube site), the resulting video plays using the YouTube player, complete with 'related' links back into the service. It is in fact a violation of the YouTube terms of use to attempt to display videos from the service in any other way.


The format of the clip is fixed and uniform for all kinds of content
Technically, YouTube places some rather arbitrary limits on the format of clips: all clips must contain an image and a sound track and may not be longer than 10 minutes. Furthermore, all clips are treated equally: there is no notion of a 'lecture' versus a 'slideshow' versus a 'music video', together with a sense that these different kinds of material might need to be handled differently. Each clip is compressed in a uniform way, meaning at the moment into a flash format video file of fixed data rate and screen size.
Clips have no history
Despite these limitations, users of YouTube have found workarounds to, for instance, download clips and then rework them into derived clips. Although the derived works are often placed back again on YouTube, the system itself has no means of representing this kind of relationship. (There is a mechanism for posting video responses to other clips, but this kind of general purpose solution seems not to be understood or used to track this kind of 'derived' relationship.) The system is unable to model or otherwise make available the 'history' of a particular piece of media. Contrast this with a system like Wikipedia, where the full history of an article – with a record of what was changed, by whom, when, and even 'meta-level' discussions about the changes (including possible disagreement) – is explicitly facilitated.
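By way of contrast, representing such a 'derived' relationship explicitly takes only a few lines of code. This is a hypothetical sketch, not an existing YouTube feature, and the clip names are invented:

derivations = {}   # clip id -> list of (parent id, note)

def register(clip_id, parent=None, note=""):
    """Record a clip, optionally noting which clip it was derived from."""
    derivations.setdefault(clip_id, [])
    if parent:
        derivations[clip_id].append((parent, note))

register("wedding_raw")
register("wedding_remix", parent="wedding_raw", note="re-edited with new soundtrack")
print(derivations["wedding_remix"])   # -> [('wedding_raw', 're-edited with new soundtrack')]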
Weak or 'flat' narrative structure
YouTube's primary model for narrative is a broad (and somewhat obscure) sense of 'relatedness' (based on user-defined tags) modulated by popularity. As with many 'social networking' and media sharing sites, YouTube relies on 'positive feedback' popularity mechanisms, such as view counts, 'star' ratings and favorites, to create ranked lists of clips. Entry points like 'Videos being watched right now', 'Most Viewed' and 'Top Favorites' only close the loop of featuring what's already popular to begin with. In addition, YouTube's commercial model of enabling special paid levels of membership leads to ambiguous selection criteria, complicated by language as in the 'Promoted Videos' and 'Featured Videos' of YouTube's front page (promoting what? featured by whom?).
The 'editing logic' threading the user through the various clips is flat, in that a clip is shown the same way regardless of what has been viewed before it. Thus YouTube makes no visible use of a particular viewing history (though the fact that this information is stored has been brought to the attention of the public via the ongoing Viacom lawsuit, http://news.bbc.co.uk/2/hi/technology/7506948.stm). In this way it's difficult to get a sense of being in a particular 'story arc' or thread when moving from clip to clip in YouTube, as in a sense each click and each clip restarts the narrative experience.
No licenses for sharing / reuse
The lack of a download feature in YouTube could be said to protect the interests of those who wish to assert a claim of copyright. However, YouTube ignores and thus obscures the question of license altogether. One can find, for instance, the early films of Hitchcock, now part of the public domain, in 10-minute chunks on YouTube; despite this status (not indicated on the site), these clips are, like all YouTube clips, unavailable for any kind of manipulation. This approach, and the limitations it places on the use of YouTube material, highlights the fact that YouTube is primarily focused on getting users to consume YouTube material, framed in YouTube's media player, on YouTube's terms.
Traditional models for (software) authorship
While YouTube is built using open source software (Python and ffmpeg, for instance), the source code of the system itself is closed, leaving little room for negotiation about how the software of the site itself operates. This is a pity on a variety of levels. Free and open source software is inextricably bound to the web, not only in the sense that it provides much of the underlying software (like the Apache web server), but also in the reverse, as the possibilities for collaborative development that the web provides have catalyzed the process of open source development. Software designed to support collaborative work on code, like Subversion and other concurrent versioning systems, and platforms for tracking and discussing software (like TRAC), provide much richer models of use and relationship to work than those which YouTube offers for video production.
Broadcasting over coherence
From its slogan ('Broadcast yourself') to the language the service uses around joining and uploading videos (see images), YouTube falls very much into a traditional model of commercial broadcast television. In this model, sharing means getting others to watch your clips, with the more eyeballs the better.
The desire for broadness and the building of a 'worldwide' community united only by a desire to 'broadcast one's self' means creating coherence is not a top priority. YouTube comments, for instance, seem to suffer from this lack of coherence and context. Given no particular focus, comments seem doomed to be similarly ungrounded and broad. Indeed, comments in YouTube often seem to take on more the character of public toilets than of public broadcasting, replete with the kind of sexism, racism, and homophobia that more or less anonymous 'blank wall' access seems to encourage.
A problematic space for 'sharing'
The combination of all these aspects makes YouTube for many a problematic space for 'sharing' – particularly when the material is of a personal or particular nature. While on the one hand appearing to pose an alternative platform to television, YouTube unfortunately transposes many of that form's limitations and conventions onto the web.
Looking to the future, what still remains challenging is figuring out how to fuse all those aspects that make the Internet so compelling as a medium and enable them in the realm of online video: the net's decentralized nature, the possibilities for participatory/collaborative production, the ability to draw on diverse sources of knowledge (from 'amateur' and home-based, to 'expert'). How can the successful examples of collaborative text-based projects like Wikipedia inspire new forms of collaborative video online, in a way that escapes the 'heaviness' and inertia of traditional forms of film/video? This fusion can and needs to take place on a variety of levels: from the concept of what a documentary is and can be, to the production tools and content management systems media makers use, to a legal status of media that reflects an understanding that culture is something which is shared, down to the technical details of the formats and codecs carrying the media in a way that facilitates sharing, instead of complicating it.


EN
NL
FR

Mutual Motions


Whether we operate a computer with the help of a command line interface, or by using buttons, switches and clicks... the exact location of interaction often serves as a conduit for mutual knowledge – machines learn about bodies and bodies learn about machines. Dialogues happen at different levels and in various forms: code, hardware, interface, language, gestures, circuits.
Those conversations are sometimes gentle in tone – ubiquitous requests almost go unnoticed – and at other times they take us by surprise because of their authoritative and demanding nature: "Put That There". How can we think about such feedback loops in productive ways? How are interactions translated into software, and how does software result in interaction? Could the practice of using and producing free software help us find a middle ground between technophobia and technofetishism? Can we imagine ourselves and our realities differently, when we try to re-design interfaces in a collaborative environment? Would a different idea about the 'user' change our approach to 'use' as well?


7 "Classic puff pastry begins with a basic dough called a détrempe (pronounced day-trahmp) that is rolled out and wrapped around a slab of butter. The dough is then repeatedly rolled, folded, and turned." Molly Stevens, A Shortcut to Flaky Puff Pastry, http://www.taunton.com/finecooking/articles/how-to/rough-puff-pastry.aspx, 2008


figure XI

figure XIII

ADRIAN MACKENZIE
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Centres of envelopment and intensive movement in digital signal processing

figure 115: Adrian Mackenzie at V/J10

Abstract
The paper broadly concerns algorithmic processes commonly found in wireless networks and in video and audio compression. The problem it addresses is how to account for the convoluted nature of digital signal processing (DSP). Why is signal processing so complex and relatively inaccessible? The paper argues that we can only understand what is at stake in these labyrinthine calculations by switching focus away from abstract understandings of calculation to the dynamic re-configuration of space and movement occurring in signal processing. The paper works through one detailed example of this reconfigured movement in order to illustrate how digital signal processing enables different experiences of proximity, intimacy, co-location and distance. It explores how wireless signal processing algorithms envelop heterogeneous spaces in the form of hidden states and logistical networks. Importantly, it suggests that the ongoing dynamism of signal processing could be understood in terms of intensive movement produced by a centre of envelopment. Centres of envelopment generate extensive changes, but they also change the nature of change itself.
From sets to signals: digital signal processing
In new media art, in new media theory and in various forms of
media activism, there has been so much work that seizes on the possibilities of using digital technologies to design interactions, sound,
image, text, and movement that challenge dominant forms of experience, habit and selfhood. In various ways, the processes of branding,
commodification, consumption, control and surveillance associated

with contemporary media have been critically interrogated and challenged.
However, there are some domains of contemporary technological and media culture that are really hard to work with. They may be incredibly important, they may be an intimate part of everyday life, yet remain relatively intractable. They resist contestation, and engagement with them may even seem pointless. This is because they may contain intractable materials, or be organised in such complicated ways that they are hard to change.
This paper concerns one such domain, digital signal processing (DSP). I am not saying that new media has not engaged with DSP. Of course it has, especially in video art and sound art, but there is little work that helps us make sense of how the sensations, textures, and movements associated with DSP come to be taken for granted, come to appear as normal and everyday, or how they could be contested.
A promotional video from Intel for the UltraMobile PC 1 promotes change in relation to mobile media. Intel, because it makes semiconductors, is highly invested in digital signal processing in various forms. In any case, video itself is a prime example of contemporary DSP at work. Two aspects of this promotional video for the UMPC, the UltraMobile PC, relate to digital signal processing. There is much signal processing here: it connects individuals' eyes, mouths and ears to screens that display information services of various kinds. There is also much signal processing in the wireless network infrastructures that connect all these gadgets to each other and to various information services (maps, calendars, news feeds). In just this example, sound, video, speech recognition, fibre, wireless and satellite, and imaging technologies in medicine all rely on DSP. We could say a good portion of our experience is DSP-based.
This paper is an attempt to develop a theory of digital signal processing, a theory that could be used to talk about ways of contesting,
critiquing, or making alternatives. The theory under development
here relies a lot on two notions, ‘intensive movement' and ‘centre
of envelopment' that Deleuze proposed in Difference and Repetition.

figure 117: A promotional video from Intel for the UltraMobile PC

1 http://youtube.com/watch?v=GFS2TiK3AI


However, I want to keep the philosophy in the background as much as
possible. I basically want to argue that we need to ask: why does so
much have to be enveloped or interiorised in wireless or audiovisual
DSP?
How does DSP differ from other algorithmic processes?
What can we say about DSP? Firstly, influenced by recent software studies-based approaches (Fuller, Chun, Galloway, Manovich), I think it is worth comparing the kinds of algorithmic processes that take place in DSP with those found in new media more generally. Although it is an incredibly broad generalisation, I think it is safe to say that DSP does not belong to the set-based algorithms and data structures that form the basis of much interest in new media interactivity and design.
DSP differs from set-based code. If we think of social software such as flickr, Google, or Amazon, if we think of basic information infrastructures such as relational databases or networks, if we think of communication protocols or search engines, all of these systems rely on listing, enumerating, and sorting data. The practices of listing, indexing, addressing, enumerating and sorting all concern sets. Understood in a fairly abstract way, this is what much software and code does: it makes and changes sets. Even areas that might seem quite remote from set-making, such as the 3D projective geometry used in computer game graphics, are often reduced algorithmically to complicated set-theoretical operations on shapes (polygons). Even many graphic forms are created and manipulated using set operations.
The elementary constructs of most programming languages reflect this interest in set-making. For instance, networks or, in computer science terms, graphs, are visually represented using lines and boxes. But in terms of code, they are presented as edge lists or 'adjacency lists', like this: 2
graph = {'A': ['B', 'C'],
         'B': ['C', 'D'],
         'C': ['D'],
         'D': ['C'],
         'E': ['F'],
         'F': ['C']}

2 http://www.python.org/doc/essays/graphs/

A graph or network can be seen as a list of lists. This kind of representation of relations in code is very neat. It means that something like the structure of the internet, as a hybrid of physical and logical relations, can be recorded, stored, sorted and re-ordered in code. Importantly, it is highly open to modification and change. Social software, or Web 2.0, as exemplified in websites like Facebook or YouTube, can also be understood as massive deployments of set theory in the form of code. Their sociality is very much dependent on set-making and set-changing operations, both in the composition of the user interfaces and in the underlying databases that constantly seek to attach new relations to data, to link identities and attributes. In terms of activism and artwork, relations that can be expressed in the form of sets and operations on sets are highly manipulable. They can be learned relatively easily, and they are not too difficult to work with. For instance, scripts that crawl or scrape websites have been widely used in new media art and activism.
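The python.org essay cited in footnote 2 demonstrates this manipulability by walking such a graph; the following is a lightly modernised version of its path-finding sketch, operating on the adjacency list above:

def find_path(graph, start, end, path=[]):
    """Depth-first search for one path from start to end, avoiding cycles."""
    path = path + [start]
    if start == end:
        return path
    for node in graph.get(start, []):
        if node not in path:
            newpath = find_path(graph, node, end, path)
            if newpath:
                return newpath
    return None

print(find_path(graph, 'A', 'D'))   # -> ['A', 'B', 'C', 'D']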
By contrast, DSP code is not based on set-making. It relies on a different ordering of the world, one that lies closer to the streams of signals that come from systems such as sensors, transducers and cameras, and that propagate via radio or cable. Indeed, although it is very widely used, DSP is not usually taught as part of computer science or software engineering; the textbooks in these areas often do not mention it. The distinction between DSP and other forms of computation is clearly defined in a textbook of DSP:
Digital Signal Processing is distinguished from other areas in computer science by the unique type of data it uses: signals. In most cases, these signals originate as sensory data from the real world: seismic vibrations, visual images, sound waves, etc. DSP is the mathematics, the algorithms, and the techniques used to manipulate these signals after they have been converted into a digital form. (Smith, 2004)
While it draws on some of the logical and set-based operations
found in code in general, DSP code deals with signals that usually involve some kind of sensory data – vibrations, waves, electromagnetic
radiation, etc. These signals often involve forms of rapid movement,
rhythms, patterns or fluctuations. Sometimes these movements are
embodied in physical senses, such as the movements of air involved in
hearing, or the flux of light involved in seeing. Because they are often
irregular movements, they cannot be easily captured in the forms of
movement idealised in classical mechanics – translation, rotation, etc.
Think for instance of a typical photograph of a city street. Although
there are some regular geometrical forms, the way in which light is
reflected, the way shadows form, is very difficult to describe geometrically. It is much easier, as we will see, to think of an image as a
signal that distributes light and colour in space. Once an image or
sound can be seen as a signal, it can undergo digital signal processing.
What distinguishes DSP from other algorithmic processes is its
reliance on transforms rather than functions. This is a key difference.
The ‘transform' deals with many values at once. This is important
because it means it can deal with things that are temporal or spatial,
such as sounds, images, or signals in short. This brings algorithms
much closer to sensation, and to what bodies feel. While there is
codification going on, since the signal has to be treated digitally as
discrete numerical values, it is less reducible to the sequence of steps or
operations that characterise set-theoretical coding. Here for instance
is an important section of the code used in MPEG video encoding in
the free software ffmpeg package:

figure 116: The simplest mpeg encoder

/**
 * @file mpegvideo.c
 * The simplest mpeg encoder (well, it was the simplest!).
 */
...
/* for jpeg fast DCT */
#define CONST_BITS 14
static const uint16_t aanscales[64] = {
    /* precomputed values scaled up by 14 bits */
    16384, 22725, 21407, 19266, 16384, 12873,  8867,  4520,
    22725, 31521, 29692, 26722, 22725, 17855, 12299,  6270,
    21407, 29692, 27969, 25172, 21407, 16819, 11585,  5906,
    19266, 26722, 25172, 22654, 19266, 15137, 10426,  5315,
    16384, 22725, 21407, 19266, 16384, 12873,  8867,  4520,
    12873, 17855, 16819, 15137, 12873, 10114,  6967,  3552,
     8867, 12299, 11585, 10426,  8867,  6967,  4799,  2446,
     4520,  6270,  5906,  5315,  4520,  3552,  2446,  1247
};
...
for (i = 0; i < 64; i++) {
    const int j = dsp->idct_permutation[i];
    /* divide a fixed-point constant by the scaled quantisation factor */
    qmat[qscale][i] = (int)((uint64_t_C(1) << (QMAT_SHIFT + 14)) /
                            (aanscales[i] * qscale * quant_matrix[j]));
}
I don't think we need to understand this code in detail. There is only one thing I want to point out: the list of 'precomputed' numerical values is used for 'jpeg fast DCT'. This is a typical piece of DSP-type code. It refers to the way in which video frames are encoded using Fast Fourier Transforms. The key point here is that these values have been carefully worked out in advance to scale different colour and luminosity components of the image differently. The transform, DCT (Discrete Cosine Transform), is applied to chunks of sensation – video frames – to make them into something that can be manipulated, stored, changed in size or shape, and circulated. Notice

that the code here is quite opaque in comparison to the graph data
structures discussed previously. This opacity reflects the sheer number of operations that have to be compressed into code in order for
digital signal processing to work.
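To give a feel for what a transform does, here is a deliberately naive sketch – nothing like ffmpeg's optimised integer code above – of the 8-point DCT that underlies JPEG and MPEG coding: eight samples go in at once, and what comes out are the weights of eight cosine patterns, which a codec can then quantise more or less coarsely.

import math

def dct_8(samples):
    """Naive 8-point DCT-II: re-express 8 samples as weights of 8 cosine patterns."""
    N = len(samples)
    coeffs = []
    for k in range(N):
        s = sum(samples[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        coeffs.append(scale * s)
    return coeffs

# A flat run of pixels collapses into a single coefficient:
print([round(c, 3) for c in dct_8([128] * 8)])
# first value ~362, the remaining seven are (numerically) zero

This is the sense in which a transform 'deals with many values at once': the whole block is re-described in one operation, rather than each value passing through a function one by one.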
Working with DSP: architecture and geography
So we can perhaps see from the two code examples above that there is something different about DSP in comparison to set-based processing. DSP seems highly numerical and quantified, while the set-based code is symbolic and logical. What is at stake in this difference? I would argue that it is something coming into the code from outside, something that is difficult to read in the code itself because it is so opaque and convoluted. Why is DSP code hard to understand and also hard to write?
You will remember that I said at the outset that there are some
facets of technological cultures that resist appropriation or intervention. I think the mathematics of DSP is one of those facets. If I just
started explaining some of the mathematical models that have been
built into the contemporary world, I think it would be shoring up
or reinforcing a certain resistance to change associated with DSP, at
least in its main mathematical formalisations. I do think the mathematical models are worth engaging with, partly because they look
so different from the set-based operations found in much code today.
The mathematical models can tell us why DSP is difficult to intervene
in at a low level.
However, I don't think it is the mathematics as such that makes digital signal processing hard to grapple with. The mathematics is an architectural response to a geographical problem, a problem of where code can go and be in the world. I would argue that it is the relation between the architecture and geography of digital signal processing that we should grapple with. It has something to do with the immersion in everyday life, the proximity to sensation, the shifting multi-sensory patterning of sociality, the movements of bodies across variable distances, and the effervescent sense of impending change that animates the convoluted architecture of DSP.


We could think of the situations in which DSP is commonly found. For instance, in the background of the scenes in the daily lives of the businessmen shown in Intel's UMPC video lie wireless infrastructures and networks. Audiovisual media and wireless networks both use signal processing, but for different reasons. Although they seem quite disparate from each other in terms of how we embody them, they actually sometimes use the same DSP algorithms. (In other work, I have discussed video codecs. 3)
The case of video codecs
In the foreground of the UMPC vision, stand images, video images in particular, and
to a lesser extent, sounds. They form a congested mass, created by media and information networks. People in electronic media cultures constantly encounter images in
circulation. Millions of images flash across TV, cinema and computer screens. DVD's
shower down on us. The internet is loaded down with video at the moment (Google
Video, YouTube.com, Yahoo video, etc.). A powerful media-technological imagining of
video moving everywhere, every which way, has taken root.
The growth of video material culture is associated with a key dynamic: the proliferation
of software and hardware codecs. Codecs generate linear transforms of images and
sound. Transformed images move through communication networks much more quickly
than uncompressed audiovisual materials. Without codecs, an hour of raw digital video
would need 165 CD-ROMs or take roughly 24 hours to move across a standard computer
network (10Mbit/sec ethernet). Instead of 165 CDs, we take a single DVD on which a
film has been encoded by a codec. We play it on a DVD player that also has a codec,
usually implemented in hardware. Instead of 32Mbyte/sec, between 1-10 MByte/sec
streams from the DVD into the player and then onto the television screen.
The economic and technical value of codecs can hardly be overstated. DVD, the transmission formats for satellite and cable digital television (DVB and ATSC), HDTV
as well as many internet streaming formats such as RealMedia and Windows Media,
third generation mobile phones and voice-over-ip (VoIP), all depend on video and audio codecs. They form a primary technical component of contemporary audiovisual
culture.
Physically, codecs take many forms, in software and hardware. Today, codecs nestle in
set-top boxes, mobile phones, video cameras and webcams, personal computers, media
players and other gizmos. Codecs perform encoding and decoding on a digital data
stream or signal, mainly in the interest of finding what is different in a signal and what
is mere repetition. They scale, reorder, decompose and reconstitute perceptible images
and sounds. They only move the differences that matter through information networks
and electronic media. This performance of difference and repetition of video comes at
a cost. Enormous complication must be compressed in the codec itself.
Much is at stake in this logistics from the perspective of cultural studies of technology
and media. On the one hand, codecs analyse, compress and transmit images that
fascinate, bore, fixate, horrify and entertain billions of spectators. Many of these
videos are repetitive or clichéd. There are many re-runs of old television series or
Hollywood classics. YouTube.com, a video upload site, offers 13,500 wedding videos.
Yet the spatio-temporal dynamics of these images matters deeply. They open new
patterns of circulation. To understand that circulation matters deeply, we could think
of something we don't want to see, for instance, the execution of many hostages (Daniel
Pearl, Nick Berg, and others) in Jihadist videos since 2002. Islamist and ‘shock-site' web
servers streamed these videos across the internet using the low-bitrate Windows Media
Video codec, a proprietary variant of the industry-standard MPEG-4. The shock of
such events – the sight of a beheading, the sight of a journalist pleading for her life –
depends on its circulation through online and broadcast media. A video beheading lies
at the outer limit of the ordinary visual pleasures and excitations attached to video
cultures. Would that beheading, a corporeal event that takes video material culture to
its limits, occur without codecs and networked media?

While images are visible, wireless signals are relatively hard to
sense. So they are a ‘hard case' to analyse. We know they surround
us, but we hardly have any sensation of them. A tightly packed
labyrinth of digital signal processing lies between antenna and what
reaches the business travellers' eyes and ears. Much of what they
look at and listen to has passed through wireless chipsets. The chipsets,
produced by Broadcom, Intel, Texas Instruments, Motorola, Airgo or
Pico, are tiny (1 cm) fragments that support highly convoluted and
concatenated paths on nanometre scales. In wireless networks such
as Wi-fi, Bluetooth, and 3G mobile phones with their billions of
miniaturised chipsets, we encounter a vast proliferation of relations.
What is at stake in these convoluted, compressed packages of relationality, these densely patterned architectures dedicated to wireless
communication?
Take for instance the picoChip, a latest-generation wireless digital
signal processing chip, designed by a ‘fabless' semiconductor company,
picoChip Designs Ltd, in Bath, UK. The product brief describes the
chip as:
[t]he architecture of choice for next-generation wireless. Expressly designed to address the new air-interfaces, picoChip's
multi-core DSP is the most powerful baseband processor on
the market. Ideally suited to WiMAX, HSPA, UMTS-LTE,
802.16m, 802.20 and others, the picoArray delivers ten-times
better MIPS/$ than legacy approaches. Crucially, the picoArray is easy to program, with a robust development environment
and fast learning curve. (PicoChip, 2007)
Written for electronics engineers, the key points here are that the
chip is designed for wireless communication or ‘air-interface', that
its purpose is to receive and transmit information wirelessly, and
that it accommodates a variety of wireless communication standards
(WiMAX, HSPA, 802.16m, etc). In this context, much of the terminology of performance and low cost is familiar. The chip combines computing performance and value for money (“ten times better
MIPS/$ – Million Instructions Per Second/$”) as a ‘baseband processor'. That means that it could find its way into many different versions of hardware being produced for applications that range between
large-scale wireless information infrastructures and small consumer
electronics applications. Only the last point is surprising in its
emphasis: “[c]rucially, the picoArray is easy to program, with a robust development environment and fast learning curve.” Why should
ease of programming be important?
And why should so many processors be needed for wireless
signal processing?
The architecture of the picoChip stands on shifting ground. We
are witnessing, as Nigel Thrift writes, “a major change in the geography of calculation. Whereas ‘computing' used to consist of centres
of calculation located at definite sites, now, through the medium of
wireless, it is changing its shape” (Thrift, 2004, 182). The picoChip's
architecture is a response to the changing geographies of calculation.
Calculation is not carried out at definite sites, but at almost any
site. We can see the picoChip as an architectural response to the
changing geography of computing. The architecture of the picoChip
is typical in the ways that it seeks to make a constant re-shaping
of computation possible, normal, affordable, accessible and programmable. This is particularly evident in the parallel character of its
architecture. Digital signal processing requires massive parallelisation: more chips everywhere, and chips that do more in parallel. The
advanced architecture of the picoChip is typical of the shape of things
more generally:
[t]he picoArray™ is a tiled processor architecture in which hundreds of processors are connected together using a deterministic
interconnect. The level of parallelism is relatively fine grained
with each processor having a small amount of local memory.
... Multiple picoArray™ devices may be connected together to
form systems containing thousands of processors using on-chip
peripherals which effectively extend the on-chip bus structure.
(Panesar, et al., 2006, 324)
The array of processors shown here, then, is a partial representation, an
armature for a much more extensive diffusion of processors in wireless
digital signal processing: in wireless base stations, 3G phones, mobile
computing, local area networks, municipal, community and domestic
Wi-fi networks, in femtocells, picocells, in backhaul, last-mile or first-
mile infrastructures.

figure 118 Typical contemporary wireless infrastructure DSP chip architecture (PicoChip PC202)

Architectures and intensive movement
It is as if the picoChip is a miniaturised version of the urban geography that contains the many gadgets, devices, and wireless and wired
infrastructures. However, this proliferation of processors is more than
a diffusion of the same. The interconnection between these arrays of
processors is not just extensive, as if space were blanketed by an ever
finer and wider grid of points occupied by processors at work shaping
signals. As we will see, the interconnection between processors in DSP
seeks to potentialise an intensive movement. It tries to accommodate
a change in the nature of movement. Since all movement is change,
intensive movement is a change in change. When intensive movement
occurs, there is always a change in kind, a qualitative change.
Intensive movements always respond to a relational problem. The
crux of the relational problem of wirelessness is this: how can many
things (signals, messages, flows of information) occupy the same space
at the same time, yet all be individualised and separate? The flow of
information and messages promises something highly individualised
(we saw this in the UMPC video from Intel). In terms of this individualising change, the movement of images, messages and data, and the
movement of people, have become linked in very specific ways today.
The greater the degree of individualization, the more dense becomes
the mobility of people and the signals they transmit and receive. And
as people mobilise, they drag personalised flows of communication on
the move with them. Hence flows of information multiply massively,
and networks must proliferate around those flows. The networks need
to become more dense, and imbricate lived spaces more closely in response to individual mobility.
This poses many problems for the architecture of communication infrastructure. The infrastructural problems of putting networks everywhere are increasingly, albeit only partially, solved by packing radio-frequency waves with more and more intricately modulated signal
patterns. This is the core response of DSP to the changing geography
of calculation, and to the changing media embodiments associated
with it. To be clear on this: were it not for digital signal processing,
the problems of interference, of unrelated communications mixing together, would be potentially insoluble. The very possibility of mobile
devices and mobility depends on ways of increasing the sheer density
of wireless transmissions. Radio spectrum becomes an increasingly
valuable, tightly controlled resource. For any one individual communication, not much space or time can be available. And even when
there is space, it may be noisy and packed with other people and
things trying to communicate. Different kinds of wireless signals are
constantly added to the mix. Signals may have to work their way
through crowds of other signals to reach a desired receiver. Communication does not take place in open, uncluttered space. It takes
place in messy configurations of buildings, things and people, which
obstruct waves and bounce signals around. The same signal may
be received many times through different echoes (‘multipath echo').
Because of the presence of crowds of other signals, and the limited spectrum available for any one transmission, wirelessness needs
to be very careful in its selection of paths if experience is to stream
rather than just buzz. The problem for wireless communication is to
micro-differentiate many paths and to allow them to interweave and
entwine with each other without coming into relation.
So the changing architectures of code and computation associated
with DSP in wireless networks do more, I would argue, than fit in
with the changing geography of computing. They belong to a more intensive, enveloped, and enveloping set of movements. To begin addressing this dynamic, we might say that wireless DSP is the armature
of a centre of envelopment. This is a concept that Gilles Deleuze
proposes late in Difference and Repetition. ‘Centres of envelopment'
are a way of understanding how extensive movements arise from intensive movement. Such centres crop up in ‘complex systems' when
differences come into relation:
to the extent that every phenomenon finds its reason in a difference of intensity which frames it, as though this constituted
the boundaries between which it flashes, we claim that complex
systems increasingly tend to interiorise their constitutive differences: the centres of envelopment carry out this interiorisation
of the individuating factors. (Deleuze, 2001, 256)
Much of what I have been describing as the intensive movement
that folds spaces and times inside DSP can be understood in terms
of an interiorisation of constitutive differences. An intensive movement always entails a change in the nature of change. In this case,
a difference in intensity arises when many signals need to co-habit
that same place and moment. The problem is: how can many signals
move simultaneously without colliding, without interfering with each
other? How can many signals pass by each other without needing
more space? These problems induce the compression and folding of
spaces inside wireless processing, the folding that we might understand as a ‘centre of envelopment' in action.
The Fast Fourier Transform: transformations between time and space
I have been arguing that the complications of the mathematics
and the convoluted nature of the code or hardware used in DSP
stem from an intensive movement or constitutive difference that is
interiorised. We can trace this interiorisation in the DSP used in
wireless networks. I do not have time to show how this happens
in detail, but hopefully one example of DSP that occurs in both
video codecs and wireless networks will illustrate how this happens
in practice.
Late in the encoding process, and much earlier in the decoding
process in contemporary wireless networks, a fairly generic computational algorithm comes into action: the Fast Fourier Transform
(FFT). In some ways, it is not surprising to find the FFT in wireless networks or in digital video. Dating from the mid-1960s, FFTs
have long been used to analyse electrical signals in many scientific
and engineering settings. The FFT provides the component frequencies of
a time-varying signal or waveform. Hence, in ‘spectral analysis', the
FFT can show the spectrum of frequencies present in a signal.
The notion of the Fourier transform is mathematical and has been
known since the early 19th century: it is an operation that takes
an arbitrary waveform and turns it into a set of periodic waves (sinusoids) of different frequencies and amplitudes. Some of these sinusoids
make more important contributions to overall shape of the waveform
than others. Added together again, these sine or cosine waves should
exactly re-constitute the original signal. Crucially, a Fourier transform can turn something that varies over time (a signal) into a set of
simple components (sine or cosine waves) that do not vary over time.
Put more technically, it switches between ‘time' and ‘frequency' domains. Something that changes in time, a signal, becomes a set of
distinct components that can be handled separately. 4
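In its discrete form, which is what the FFT computes efficiently, the transform takes $N$ samples $x_n$ of a signal and produces $N$ frequency components $X_k$ (this is the standard textbook formulation, not specific to any one implementation):

$$X_k = \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, \dots, N-1,$$

and the inverse operation, $x_n = \frac{1}{N}\sum_{k=0}^{N-1} X_k \, e^{2\pi i k n / N}$, exactly re-constitutes the original signal from those components.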
In a way, this analysis of a complex signal into simple static component signals means that DSP does use the set-based approaches I
described earlier. Once a complex signal, such as an image, has been
analysed into a set of static components, we can imagine code that
would select the most important or relevant components. This is precisely what happens in video and sound codecs such as MPEG and
MP3.

4 Humanities and social science work on the Fast Fourier Transform is hard to find, even
though the FFT is the common mathematical basis of contemporary digital image,
video and sound compression, and hence of many digital multimedia (in JPEG, MPEG
files, in DVDs). In the early 1990s, Friedrich Kittler wrote an article that discussed
it (Kittler, 1993). His key point was largely to show that there is no realtime
in digital signal processing. The FFT works by defining a sliding window of time for
a signal. It treats a complicated signal as a set of blocks that it lifts out of the time
domain and transforms into the frequency domain. The FFT effectively plots an event
in time as a graph in space. The experience of realtime is epiphenomenal. In terms of
the FFT, a signal is always partly in the future or the past. Although Kittler was not
referring to the use of the FFT in wireless networks, the same point applies – there is no
realtime communication. However, while this point about the impossibility of realtime
calculation was important to make during the 1990s, it seems well-established now.
The FFT treats sounds and images as complicated superimpositions of waveforms. The envelope of a signal becomes something that
contains many simple signals. It is interesting that wireless networks
tend to use this process in reverse. They deliberately take a well-separated and discrete set of signals – a digital datastream – and turn it
into a single complex signal. In contrast to the normal uses of the FFT in
separating important from insignificant parts of a signal, in wireless
networks, and in many other communications settings, the FFT is used to
put signals together in such a way as to contain them in a single envelope. The FFT is found in many wireless computation algorithms
because it allows many different digital signals to be put together on
a single wave and then extracted from it again.
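A minimal sketch of these two uses, in Python/NumPy rather than the optimised routines found in actual codecs or chipsets: a composite signal is analysed into frequency components, the important ones are selected (the codec-style use), and the inverse transform re-constitutes a signal from components (the operation wireless networks run in reverse):

```python
import numpy as np

# A time-varying signal: a strong 50 Hz sinusoid plus a weak 120 Hz one,
# sampled 1000 times over one second.
rate = 1000
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 120 * t)

# Analysis: lift the signal out of the time domain into the frequency domain.
spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(rate, d=1 / rate)
print(freqs[np.abs(spectrum) > 50])      # [ 50. 120.] – the two components

# Codec-style selection: discard minor components, keep the dominant one,
# then re-constitute an approximate signal with the inverse transform.
spectrum[np.abs(spectrum) < 200] = 0     # drops the weak 120 Hz wave
approximation = np.fft.irfft(spectrum)   # close to a pure 50 Hz wave
```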
Why would this superimposition of many signals onto a single complex waveform be desirable? Would it not increase the possibilities of
confusion or interference between signals? In some ways the FFT is
used to slow everything down rather than speed it up. Rather than
simply spatialising a duration, the FFT as used in wireless networks
defines a different way of inhabiting the crowded, noisy space of electromagnetic radiation. Wireless transmitters are better at inhabiting
crowded signal spectrum when they don't try to separate themselves
off from each other, but actually take the presence of other transmitters into account. How does the FFT allow many transmitters to
inhabit the same spectrum, and even use the same frequencies?
The name of this technique is OFDM (Orthogonal Frequency Division Multiplexing). OFDM spreads a single data stream coming
from a single device across a large number of sub-carrier signals (52
in IEEE 802.11a/g). It splits the data stream into dozens of separate signals of slightly different frequency that together evenly use
the whole available radio spectrum. This is done in such a way that
many different transmitters can be transmitting at the same time,
on the same frequency, without interfering with each other. The advantage of spreading a single high speed data stream across many
signals (wideband) is that each individual signal can carry data at a
much slower rate. Because the data is split into 52 different signals,
each signal can be much slower (roughly 1/50th of the rate). That means each bit of data
can be spaced apart more in time. This has great advantages in urban
environments where there are many obstacles to signals, and signals
can reflect and echo often. In this context, the slower the data is
transmitted, the better.
At the transmitter, an inverse FFT (IFFT) is used to combine
the 50 or so signals into one signal. That is, it takes the different
sub-carriers produced by OFDM, each of which has a single slightly
different, but carefully chosen frequency, and combines them into one
complex signal that has a wide spectrum. That is, it fills the available
spectrum quite evenly because it contains many different frequency
components. The waveform that results from the IFFT looks like
'white noise': it has no remarkable or outstanding tendency whatsoever, except to a receiver synchronised to exactly the right carrier
frequency. At the receiver, this complex signal is transformed, using the FFT, back into a set of 50 separate data streams, which are then
reconstituted into a single high speed stream.
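A sketch of this round trip in Python/NumPy, assuming the 52 sub-carriers of IEEE 802.11a/g mapped onto a 64-point transform and simple ±1 symbols, with all 52 carrying data; real systems add pilot tones, guard intervals and channel coding, all omitted here:

```python
import numpy as np

n_carriers = 52
rng = np.random.default_rng(0)
# The data stream: one +1/-1 symbol per sub-carrier.
symbols = rng.choice([-1.0, 1.0], size=n_carriers)

# Transmitter: place the symbols on 52 slightly different frequencies and
# use an inverse FFT to combine them into one complex, noise-like waveform.
spectrum = np.zeros(64, dtype=complex)
spectrum[1:27] = symbols[:26]        # positive-frequency sub-carriers
spectrum[-26:] = symbols[26:]        # negative-frequency sub-carriers
waveform = np.fft.ifft(spectrum)     # what is actually transmitted

# Receiver: a forward FFT separates the single wave back into sub-carriers.
received = np.fft.fft(waveform)
decoded = np.sign(np.concatenate([received[1:27].real, received[-26:].real]))

assert np.array_equal(decoded, symbols)   # the data stream is reconstituted
```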
Even if we cannot come to grips with the techniques of transformation used in DSP in any great detail, I hope that one point stands
out. The transformation involves changes in kind. Data does not
simply move through space. It changes in kind in order to move
through space, a space whose geography is understood as too full of
potential relations.
Conclusion
A couple of points in conclusion:
a. The spectrum of different wireless-audiovisual devices competing
to do more or less the same thing is in a sense a reproduction of the
same. Extensive movement associated with wireless networks and
digital video occurs in various forms: firstly in the constant enveloping of spaces by wireless signals, and secondly in the dense
population of wireless spectrum by competing, overlapping signals, vying for market share in highly visible, well-advertised campaigns to dominate spectrum while at the same time allowing for
the presence of many others.
b. Actually, in various ways, wirelessness puts the very primacy of
extension as space-making in question. Signals seem to be able to
occupy the same space at the same time, something that should
not happen in space as usually understood. We can understand
this by re-conceptualising movement as intensive. Intensive movement occurs in multiple ways. Here I have emphasised the constant folding inwards or interiorisation of heterogeneous movements via algorithms used in digital signal processing. Intensive
movement ensues when a centre of envelopment begins to
interiorise differences. While these interiorised spaces are computationally intensive (as exemplified by the picoChip's massive
processing power), the spaces they generate are not perceived as
calculated, precise or rigid. Wirelessness is a relatively invisible,
messy, amorphous, shifting set of depths and distances that lacks
the visible form and organisation of other entities produced by
centres of calculation (for instance, the shape of a CAD-designed
building or car). However, similar processes occur around sound
and images through DSP. In fact, different layers of DSP are increasingly coupled in wireless media devices.
c. Where does this leave the centre of envelopment? The cost of
this freeing up of movement, of mobility, seems to me to be an
interiorisation of constitutive differences, not just in DSP code
but in the perceptual fields and embodiment of the mobile user.
The irony of DSP is that it uses code to quantify sensations
or physical movements that lie at the fringes of representation
or awareness. We can't see DSP as such, but it supports our
seeing and moving. It brings code quite close to the body. It
can work with audio and images in ways that bring them much
closer to us. The proliferation of mobile devices such as mp3 and
digital cameras is one consequence of that. Yet the price DSP
pays for this proximity to sensation, to sounds, movement, and
others, is the envelopment I have been describing. DSP acts as
a centre of envelopment, as something that tends to interiorise
intensive movements, the changing nature of change, the intensive
movements that give rise to it.
d. This brings us back to the UMPC video: it shows two individuals.
Their relation can never, it seems, get very far. The provision
of images, sound and wireless connectivity has come so far, that
they hardly need encounter each other at all. There is something
intensely monadological here: DSP is heavily engaged in furnishing the interior walls of the monad, and in orienting the monad
in relation to other monads, while making sure that nothing much
need pass between them. So much has already been pre-processed
between, that nothing much need happen between. They already
have a complete perception of their relation to the other.
e. On a final constructive note, it seems that there is room for contestation here. The question is how to introduce the set-based
code processes that have proven productive in other areas into
the domain of DSP. What would that look like? How would it be
sensed? What could it do to our sensations of video or wireless
media?

References
Deleuze, Gilles. Difference and Repetition. Translated by Paul
Patton, Athlone Contemporary European Thinkers. (London; New
York: Continuum, 2001).
Panesar, Gajinder, Daniel Towner, Andrew Duller, Alan Gray, and
Will Robbins. 'Deterministic Parallel Processing', International Journal of Parallel Programming 34, no. 4 (2006): 323-41.
PicoChip. 'Advanced Wireless Technologies', (2007). http://www.picochip.com/solutions/advanced_wireless_technologies
PicoChip. 'Pc202 Integrated Baseband Processor Product Brief',
(2007). http://www.picochip.com/downloads/03989ce88cdbebf5165e2f095a1cb1c8/PC202_product_brief.pdf
Smith, Steven W. The Scientist and Engineer's Guide to Digital
Signal Processing. (California Technical Publishing, 2004).
Thrift, Nigel. 'Remembering the Technological Unconscious by
Foregrounding Knowledges of Position', Environment & Planning D:
Society & Space 22, no. 1 (2004): 175-91.

ELPUEBLODECHINA A.K.A.
ALEJANDRA MARIA PEREZ NUNEZ
License: ??
EN

El Curanto
Curanto is a traditional method of cooking in the ground by the
people of Chiloe, in the south of Chile. This technique is practiced
throughout the world under different names. What follows is a summary of the ELEMENTS and steps enunciated and executed during el
curanto, which was performed in the centre of Brussels during V/J10.

Recipe

For making a curanto you need
to take the following steps and
arrange the following ELEMENTS:

This image is repeated in many
different cultures. Might be an
ancient way of cooking. What
does this underground cooking
imply? Most of all, it takes a lot
of TIME.

Free Libre Open Source
Curanto in the center
of Bruxelles

? OVEN, a hole in the ground
filled with fire resistant STONES.

? find a way to get a good deal
at the market to get fresh
MUSSELS for x people. It
helps to have a CHARISMATIC
WOMAN do it for you.

figure A a slow cooking OVEN

? A BRIGHT WOMAN FRIEND to
find out about BELGIAN PORPHYRY and tell you about the
mining carrière in Quenast
(Hainaut).

? A CAMERA WOMAN to hand
you a MARBLE STONE to put
inside the OVEN.

figure B a TERRAIN VAGUE in
the centre of Brussels and a
NEIGHBOUR willing to let you in.

? WENDY or some other MULTITASKING WOMAN who is
extremely PATIENT and HUMOURISTIC and who helps
you to focus and takes pictures.

? FEMKE and PETER or some
ECCENTRIC COUPLE that
TRUSTS the carrier of the
performance and will tell their
STORY about TRAVELING MUSSELS.

figure C A HOLE in the
ground 1.5 m deep, 1 m
diameter. (It makes me
think of a hole in my head).

A hole in the ground reminds me
of the unknown. FOOD cooked
inside the ground relates to ideas,
creativity and GIFT. It helps to
have GUILLAUME or a strong and
positive MAN to help you dig the
hole. A second PERSON would be
of great help, especially if, while
digging, he would talk about taxonomies of immaterial labour.

Mussels eaten in the centre of
Brussels are grown in Ireland and
immersed in Dutch seawater and
are then officially called Dutch.
After 2 days in Dutch water, they
are ready to be exported to Brussels and become Belgian mussels
that are in fact Dutch-Irish.

figure D Original curanto
STONES are round fire
resistant stones. I couldn't
find them in Brussels.

figure E A good BUCKET
to scoop the rain out
of your newly dug HOLE

The only round and granite stones
were very expensive design ones.
In Chile you just dig a hole anywhere and find them. The only
fire resistant rock in Brussels was
the STREET itself.
? Square shaped rocks collected
randomly throughout the city
by means of appropriation.
Streets are made of a type of
granite rock, might be Belgian
porphyry. Note that there is a
message on one of the stones we
picked up in the centre. It reads
'watch your head'.

figure F A tent to protect
your FIRE from random RAIN

figure G LAIA or some
psychonaut, hierophant friend.

Should be someone who is able to
transmit confidence to the execution of el curanto and who will
keep you company while you are
appropriating stones in Brussels.
? A good BOUILLON made of
cheap white wine and concentrated bio vegetables and
spices is one of the secrets.

figure I GIRL that will
randomly come to the place
with her MOTHER and
speak in Spanish to the
carrier of the performance.

She will play the flute, give
the OVEN some orders to cook
well and sing improvised SONGS.
She and some other children will
play around by digging holes and
making their own CURANTO.

figure J A big FIRE to heat up
the wet cold ground of Brussels

figure H You need to find MOAM
or some Palestinian fellow
to help you keep the fire burning

figure K RED HOT COAL

figure L Using some
cabbage leaves to cover
the RED HOT COAL to
place the FOOD on top of

figure M A SACK CLOTH
to cover the food and to
retain STEAM for cooking.

figure N DIDIER or some
PANIC COOK MAN who is
happy to SHARE his expert
knowledge and willing to
join in the performance.

? HOLE

? MUSSELS

figure O ONIONS, GESTURES
and SPECULATIONS.

While reading VALIS, the carrier
of the performance will become
reverend TIMOTHY ARCHER and
read about TIME (something that
has mainly been forgotten is
Palestine).

figure P el curanto is
to be made together with
PEOPLE and for EVERYONE.

? WOOD found in a dismantled
house. It helps to find a ride
to transport it.

? SPICES, rosemary and bay leaf.

MICHAEL or some DEDICATED
friend that will assist with the
execution of the performance
and keep the pictures of it afterwards for months.

figure Q You can eat from
the shell by using your hands
or a little WOODEN SPOON.

If you want to eat later, take the
mussels out of their shell, add
OLIVE OIL, make a spread and
keep it cold in a jar. Find QUEER
couples to savour it with BREAD
while talking about SEX.
? FIRE

? RED HOT COAL

? FOOD

? NOISE from the cooking MUSSELS. It helps to use 'hot'
PIEZZO MICROPHONES.

Here TIME turns into space.
“Time can be overcome”, Mircea
Eliade wrote. That's what it's all
about.
The great mystery of Eleusis, of
the Orphics, of the early Christians, of Sarapis, of the Greco-Roman mystery religions, of
Hermes Trismegistos, of the Renaissance Hermetic alchemists,
of the Rose Cross Brotherhood,
of Apollonius of Tyana, of Simon
Magus, of Asklepios, of Paracelsus, of Bruno, consists of the abolition of time. The techniques are
there. Dante discusses them in
the Comedy. It has to do with
the loss of amnesia; when forgetfulness is lost, true memory
spreads out backward and forward, into the past and into the
future, and also, oddly, into alternate universes; it is orthogonal as well as linear. 1

1 Philip K. Dick, Valis (1972)

ALICE CHAUCHAT, FRÉDÉRIC GIES
License: Attribution-Noncommercial-No Derivative Work
EN

Praticable
Praticable is a collaborative research project between several artists
(currently: Alice Chauchat, Frédéric de Carlo, Frédéric Gies, Isabelle
Schad and Odile Seitz).
Praticable proposes itself as a horizontal work structure, which
brings research, creation, transmission and production structure into
relation with each other. This structure is the basis for the creation
of a variety of performances by either one or several of the project's
participants. In one way or another, these performances start from
the exploration of body practices, leading to a questioning of its representation. More concretely, Praticable takes the form of collective
periods of research and shared physical practices, both of which are
the basis for various creations. These periods of research can either
be independent of the different creation projects or integrated within
them.
During Jonctions/Verbindingen 10, Alice Chauchat and Frédéric
Gies gave a workshop for participants dealing with different ‘body
practices'. On the basis of Body-Mind Centering (BMC) techniques,
the body as a locus of knowledge production was made tangible. The
notation of the Dance performance with which Frédéric Gies concluded the day is reproduced in this book and published under an
open license.

figure 120 Workshop for participants with different body practices at V/J10

figure 121 The body as a locus of knowledge production was made tangible

figure 122

figure 123


Dance (Notation)
20 sec.
31. INTERCELLULAR FLUID
Initiate movement in your intercellular fluid. Start slowly and
then put more and more energy
and speed in your movement, using intercellular fluid as a pump
to make you jump.

20 sec.
32. VENOUS BLOOD
Initiate movement in your venous
blood, rising and falling and following its waves.

20 sec.
33. VENOUS BLOOD
Initiate movement in your venous blood, slowing down progressively.

Less than 5 sec.
34. TRANSITION
Make visible in your movement a
transition from venous blood to
cerebrospinal fluid. Finish in the
same posture you chose to start
PART 3.

1 min.
35. EACH FLUID
Go through each fluid quality you
have moved with since the beginning of PART 3. The 1st one has
to be cerebrospinal fluid. After
this one, the order is free.

61. ALL GLANDS
Stand up slowly, building your
vertical axis from coccygeal body
to pineal gland. Use this time to
bound with earth through your
feet, as if you were growing roots.

INSTRUMENTAL (during the voice echo)
Down, down, down in your heart
find, find, find the secret
62. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
SMALL PERIMETER
Turn, turn, turn your head around
63. MAMILLARY BODIES
Turn and turn your head around,
initiating this movement in
mamillary bodies. Let your head
drive the rest of your body into
turns.

186

186

186

187

187

Baby we can do it
We can do it alright
64. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
Do you believe in love at first sight
It's an illusion, I don't care
Do you believe I can make you feel better
Too much confusion, come on over here
65. HEART BODY
Keep on dancing as if you were
dancing in a club and initiate
movements in your heart body,
connecting with your forearms
and hands.

License: Attribution-Noncommercial-No Derivative Work

Mutual Motions Video Library
To be browsed, a vision to be displaced

figure 126

figure 125

Wearing the video library, performer Isabelle Bats presents a selection of films related to the themes of V/J10. As a living memory, the
discs and media players in the video library are embedded in a dress
designed by artists collective De Geuzen. Isabelle embodies an accessible interface between you (the viewer) and the videos. This human
interface allows for a mutual relationship: viewing the films influences
the experience of other parts of the program, and the situation and
context in which you watch the films play a role in experiencing and
interpreting the videos. A physical exchange between existing imagery, real-time interpretation, experiences and context, emerges as
a result.
The V/J10 video library collects excerpts of performance and dance
video art, and (documentary) film, which reflect upon our complex
body–technique relations. Searching for the indicating, probing, disturbing or subverting gesture(s) in the endless feedback loop between
technology, tools, data and bodies, we collected historical as well as
contemporary material for this temporary archive.

Modern Times or the Assembly Line
Reflects the body in work environments, which are structured by
technology, ranging from the pre-industrial manual work with analogue
tools, to the assembly line, to postmodern surveillance configurations.
24 Portraits
Excerpt from a series of documentary portraits by Alain Cavalier, FR,
1988-1991.
24 Portraits is a series of short documentaries paying tribute to women's
manual work. The intriguing and sensitive portraits of 24 women working
in different trades reveal the intimacy
of bodies and their working tools.

Humain, trop humain
Quotes from a documentary by Louis
Malle, FR, 1972.
A documentary filmed at the Citroen
car factory in Rennes and at the 1972
Paris auto show, documenting the monotonous daily routines of working the
assembly lines, the close interaction
between bodies and machines.

Performing the Border
Video essay by Ursula Biemann, CH,
1999, 45 min.
“Performing the Border is a video
essay set in the Mexican-U.S. border town Ciudad Juarez, where the
U.S. industries assemble their electronic and digital equipment, located
right across El Paso, Texas.
The
video discusses the sexualization of
the border region through labour division, prostitution, the expression of
female desires in the entertainment industry, and sexual violence in the public sphere. The border is presented
as a metaphor for marginalization and
the artificial maintenance of subjective boundaries at a moment when
the distinctions between body and machine, between reproduction and production, between female and male,
have become more fluid than ever.”
(Ursula Biemann)
http://www.geobodies.org

Maquilapolis (city of factories)
A film by Vicky Funari and Sergio
De La Torre, Mexico/U.S.A., 2006, 68
min.

Carmen works the graveyard shift in
one of Tijuana's maquiladoras, the
multinationally-owned factories that
came to Mexico for its cheap labour.
After making television components
all night, Carmen comes home to a
shack she built out of recycled garage
doors, in a neighbourhood with no
sewage lines or electricity. She suffers
from kidney damage and lead poisoning from her years of exposure to toxic
chemicals. She earns six dollars a day.
But Carmen is not a victim. She is a
dynamic young woman, busy making
a life for herself and her children.
As Carmen and a million other
maquiladora workers produce televisions, electrical cables, toys, clothes,
batteries and IV tubes, they weave
the very fabric of life for consumer nations. They also confront labour violations, environmental devastation and
urban chaos – life on the frontier of
the global economy. In Maquilapolis Carmen and her colleague Lourdes reach beyond the daily struggle for
survival to organize for change: Carmen takes a major television manufacturer to task for violating her labour
rights, Lourdes pressures the government to clean up a toxic waste dump
left behind by a departing factory.
As they work for change, the world
changes too: a global economic crisis
and the availability of cheaper labour
in China begin to pull the factories
away from Tijuana, leaving Carmen,
Lourdes and their colleagues with an
uncertain future.
A co-production of the Independent
Television Service (ITVS), project of
Creative Capital.
http://www.maquilapolis.com

Practices of everyday life
Everyday life as the place of a performative encounter between bodies
and tools, from the U.S.A. of the 70s to contemporary South Africa.

Saute ma ville
Chantal Akerman, B, 1968, 13 min.
A girl returns home happily. She locks
herself up in her kitchen and messes up
the domestic world. In her first film,
Chantal Akerman explores a scattered
form of being, where the relationship
with the controlled human world literally explodes. Abolition of oneself,
explosion of oneself.

Semiotics of the Kitchen
Video by Martha Rosler, U.S.A., 1975,
05:30 min.
Semiotics of the Kitchen adopts the
form of a parodic cooking demonstration in which, Rosler states, “An
anti-Julia Child replaces the domesticated ‘meaning' of tools with a lexicon
of rage and frustration.” In this performance-based work, a static camera is
focused on a woman in a kitchen. On
a counter before her are a variety of
utensils, each of which she picks up,
names and proceeds to demonstrate,
but with gestures that depart from the
normal uses of the tool. In an ironic
grammatology of sound and gesture,
the woman and her implements enter
and transgress the familiar system of
everyday kitchen meanings – the securely understood signs of domestic
industry and food production erupt
into anger and violence. In this alphabet of kitchen implements, Rosler
states that, “When the woman speaks,
she names her own oppression.”
“I was concerned with something like
the notion of ‘language speaking the
subject', and with the transformation
of the woman herself into a sign in
a system of signs that represent a
system of food production, a system
of harnessed subjectivity.” (Martha
Rosler)

Choreography
Video installation preview by Anke
Schäfer, NL/South Africa, 13:07 min
(loop), 2007.
Choreography reflects on the notion
‘Armed Response' as an inner state
of mind. The split screen projection
shows the movements of two women
commuting to their work. On the one
side, the German-South African Edda
Holl, who lives in the rich Northern
suburbs of Johannesburg. Her search
for a safe journey is characterized
by electronic security systems, remote
controls, panic buttons, her constant
cautiousness, the reassuring glances
in the tinted car windows. On the
other side, you see the African-South
African Gloria Fumba, who lives in
Soweto and whose security techniques
are very basic: clutching her handbag to her body, the way she queues for
the bus, avoiding going home alone
when it's dark. A classical continuity

editing, as seen in fiction films, suggests
at first a narrative storyline, but is
soon interrupted by moments of pause.
These pauses represent the desires of
both women to break with the safety
mechanism that motivates their daily
movements.

Television
Ximena Cuevas, Mexico, 1999, 2 min.
“The vacuum cleaner becomes the device of the feminist ‘liberation', or the
monster that devours us.” (Insite 2000
program, San Diego Museum of Art)

http://www.livemovie.org

Perform the script, write the score
Considers dance and performance as knowledge systems where movement and data interact. With excerpts of performance documents,
interviews and (dance) films. But also the script, the code, as system
of perversion, as an explorative space for the circulation of bodies.
William Forsythe's works
Choreography can be understood as
writing moving bodies into space, a
complex act of inscription, which is
situated on the borderline between
creating and remembering, future and
past. Movement is prescribed and is
passing at the same time. It can be
inscribed into the visceral body memory through constant repetition, but
it is also always undone:
As Laurie Anderson says:
“You're walking. And you don't always realize it, but you're always
falling. With each step you fall forward slightly. And then catch yourself from falling.
Over and over,
you're falling.
And then catching
yourself from falling.” (Quoted after
Gabriele Brandstetter, ReMembering
the Body)
William Forsythe, for instance, considers classical ballet as a historical
form of a knowledge system loaded

with ideologies about society, the self,
the body, rather than a fixed set
of rules, which simply can be implemented. An arabesque is a platonic ideal for him, a prescription,
but it can't be danced: “There is
no arabesque, there is only everyone's arabesque.” His choreography
is concerned with remembering and
forgetting: referencing classical ballet, creating a geometrical alphabet,
which expands the classical form, and
searching for the moment of forgetfulness, where new movement can arise.
Over the years, he and his company
developed an understanding of dance
as a complex system of processing information with some analogies to computer programming.

Chance favours the prepared mind
Educational dance film, produced by
Vlaams Theaterinstituut, Ministerie
van Onderwijs dienst Media and Informatie, dir. Anne Quirynen, 1990,
25 min.

Chance favours the prepared mind
features discussions and demonstrations by William Forsythe and four
Frankfurt Ballet Dancers about their
understanding of movement and their
working methods: “Dance is like writing or drawing, some sort of inscription.” (William Forsythe)

The way of the weed
Experimental dance film featuring
William Forsythe, Thomas McManus
and dancers of the Frankfurt Ballet,
An-Marie Lambrechts, Peter Missotten and Anne Quirynen, soundtrack:
Peter Vermeersch, 1997, 83 min.
In this experimental dance film, investigator Thomas is dropped in a desert
in 7079, not only to investigate the
growth movements of the plant life
there, but also the life's work of the
obscure scientist William F. (William
Forsythe), who has achieved numerous insights and discoveries on the
growth and movement of plants. This
knowledge is stored in the enormous
data bank of an underground laboratory. It is Thomas's task to hack into
his computer and check the professor's secret discoveries. His research
leads him into the catacombs of a
complex building, where he finds people stored in cupboards in a comatose
state. They are loaded with professor F.'s knowledge of vegetation. He
puts the ‘people-plants' into a large
transparent pool of water and notices
that in the water the ‘samples' come
to life again... A complex reflection
on (body) memory, (digital) archives
and movement as repetition and interference.

Rehearsal Last Supper
Video installation preview by Anke
Schäfer, NL/South Africa, 16:40 min.
(loop), 2007.
The work Rehearsal Last Supper combines a kind of ‘Three Stooges' physical, slapstick-style comedy, but with
far more serious subject matters such
as abuse, gender violence, and the
general breakdown of family relationships. It's a South African and mixed
couple re-enactment of a similar scene
that Bruce Nauman realized in the 70s
with a white, middle-aged man and
woman.
The experience, the ‘Gestalt' of the
experienced violence, the frustration
and the unwillingly or even forced internalization are felt to the core of the
voice and the body. Humour can help
to express the suppressed and to use
your pain as power.
Actors: Nat Ramabulana, Tarryn Lee,
Megan Reeks, Raymond Ngomane
(from Wits University Drama department), Kekeletso Matlabe, Lebogang
Inno, Thabang Kwebu, Paul Noko
(from Market Theatre Laboratory).
http://www.livemovie.org

Nest Of Tens
Miranda July, U.S.A., 1999, 27 min.
Nest Of Tens is comprised of four alternating stories, which reveal mundane yet personal methods of control.
These systems are derived from intuitive sources. Children and a retarded
adult operate control panels made out
of paper, lists, monsters, and their
own bodies.
“A young boy, home alone, performing

a bizarre ritual with a baby; an uneasy, aborted sexual flirtation between
a teenage babysitter and an older man;
an airport lounge encounter between a
businesswoman (played by July) and a
young girl. Linked by a lecturer enumerating phobias in a quasi-academic
seminar, these three perverse, unnerving scenarios involving children and
adults provide authentic glimpses into
the queasy strangeness that lies behind the everyday.” (New York Video
Festival, 2000)

In the field of players
Jeanne Van Heeswijk & Marten Winters, 2004, NL
Duration: 25.01.2004 – 31.01.2004
Location: TENT.Rotterdam
Participants: 106 through casting, 260
visitors of TENT.
Together with artist Marten Winters,
Van Heeswijk developed a ‘game:set'.
In cooperation with graphic designer
Roger Teeuwen, they marked out a
set of lines and fields on the ground.
Just like in a sporting venue, these
lines had no meaning until used by the
players. The relationship between the
players was revealed by the rules of the
game.
Designer Arienne Boelens created special game cards that were handed out
during the festival by the performance
artists Bliss. Both Bliss and the cards
turned up all over the festival, showing
up at every hot spot or special event.
Through these game cards people were

invited to fulfil the various roles of
the game – like ‘Round Miss' (the
girl who walks around the ring holding up a numbered card at the start
of each round at boxing matches),
‘40-plus male in (high) cultural position', ‘Teen girl with star ambitions',
‘Vital 65-plus'. But even ‘Whisperer',
and ‘Audience' were specific roles.

Writing Desire
Video essay by Ursula Biemann, CH,
2000, 25 min.

Writing Desire is a video essay on
the new dream screen of the Internet, and its impact on the global circulation of women's bodies from the
‘Third World' to the ‘First World'. Although underage Philippine ‘pen
pals' and post-Soviet mail-order brides
have been part of the transnational
exchange of sex in the post-colonial
and post-Cold War marketplace of desire before the digital age, the Internet has accelerated these transactions.
The video provides the viewers with
a thoughtful meditation on the obvious political, economic and gender inequalities of these exchanges by simulating the gaze of the Internet shopper
looking for the imagined docile, traditional, pre-feminist, but Web-savvy
mate.
http://www.geobodies.org


INÈS RABADAN
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Does the repetition of a gesture irrevocably
lead to madness?

figure 127 Screening Modern Times at V/J10

A personal introduction to Modern Times
(Charles Chaplin, 1936)

figure 128

One of the most memorable moments of Modern Times is the one
where the tramp goes mad after having spent the whole day screwing
bolts on the assembly line. He is free: neither husband, nor worker,
nor follower of some kind of movement, nor even politically engaged.
His gestures are burlesque responses to the adversity in his life, or
just plain ‘exuberant'. Through the interaction with the machine,
however, he completely goes off the rails and ends up in prison.
Inès Rabadan made two short films in which a female protagonist
is confined by the fast-paced work of the assembly line. Tragically
and mercilessly, the machine changes the woman and reduces her to
a mechanical gesture – a gesture in which she sometimes takes pride,
precisely in order not to lose her sanity. Or else, she really goes mad,
ruined by the machine, eventually managing to free herself.

figure 129

figure 130


MICHAEL TERRY
License: Free Art License
EN

Data analysis as a discourse

figure 131 Michael Terry in between LGM sessions

An interview with Michael Terry
Michael Terry is a computer scientist working at the Human Computer Interaction Lab of the University of Waterloo, Canada. His
main research focus is on improving usability in open source software, and ingimp is the first result of that work.
In a Skype conversation that was live broadcast in La Bellone during Verbindingen/Jonctions 10, we spoke about ingimp, a clone of the
popular image manipulation programme Gimp, but with an important difference. Ingimp allows users to record data about their usage
into a central database, and subsequently makes this data available
to anyone.
At the Libre Graphics Meeting 2008 in Wroclaw, just before Michael
Terry presents ingimp to an audience of Gimp developers and users,
Ivan Monroy Lopez and Femke Snelting meet up with Michael Terry
again to talk more about the project and about the way he thinks
data analysis could be done as a form of discourse.

figure 132 Interview at Wroclaw

Femke Snelting (FS) Maybe we could start this face-to-face conversation with a description of the ingimp project you are developing
and – what I am particularly interested in –, why you chose to work
on usability for Gimp?
Michael Terry (MT) So the project is ‘ingimp', which is an instrumented version of Gimp; it collects information about how the
software is used in practice. The idea is you download it, you install
it, and then with the exception of an additional start up screen, you
use it just like regular Gimp. So, our goal is to be as unobtrusive as
possible to make it really easy to get going with it, and then to just
forget about it. We want to get it into the hands of as many people
as possible, so that we can understand how the software is actually
used in practice. There are plenty of forums where people can express
their opinions about how Gimp should be designed, or what's wrong
with it, there are plenty of bug reports that have been filed, there
are plenty of usability issues that have been identified, but what we
really lack is some information about how people actually apply this
tool on a day to day basis. What we want to do is elevate discussion
above just anecdote and gut feelings, and to say, well, there is this
group of people who appear to be using it in this way, these are the
characteristics of their environment, these are the sets of tools they
work with, these are the types of images they work with and so on,
so that we have some real data to ground discussions about how the
software is actually used by people.
You asked me now why Gimp? I actually used Gimp extensively
for my PhD work. I had these little cousins come down and hang
out with me in my apartment after school, and I would set them up
with Gimp, and quite often they would start off with one picture,
they would create a sphere, a blue sphere, and then they played with
filters until they got something really different. I would turn to them
looking at what they had been doing for the past twenty minutes,
and would be completely amazed at the results they were getting
just by fooling around with it. And so I thought, this application
has lots and lots of power; I'd like to use that power to prototype
new types of interface mechanisms. So I created JGimp, which is
a Java based extension for the 1.0 Gimp series that I can use as a
back-end for prototyping novel user interfaces. I think that it is a
great application, there is a lot of power to it, and I had already an
investment in its code base, so it made sense to use that as a platform
for testing out ideas of open instrumentation.
FS: What is special about ingimp, is the fact that the data you
collect, is equally free to use, run, study and distribute, as the software
you are studying. Could you describe how that works?

MT: Every bit of data we collect, we make available: you can go to
the website, you can download every log file that we have collected.
The intent really is for us to build tools and infrastructure so that the
community itself can sustain this analysis, can sustain this form of
usability. We don't want to create a situation where we are creating
new dependencies on people, or where we are imposing new tasks on
existing project members. We want to create tools that follow the
same ethos as open source development, where anyone can look at
the source code, where anyone can make contributions, from filing
a bug to doing something as simple as writing a patch, where they
don't even have to have access to the source code repository, to make
valuable contributions. So importantly, we want to have a really low
barrier to participation. At the same time, we want to increase the
signal-to-noise ratio. Yesterday I talked with Peter Sikking, an information architect working for Gimp, and he and I both had this
experience where we work with user interfaces, and since everybody
uses an interface, everybody feels they are an expert, so there can be
a lot of noise. So, not only did we want to create an open environment for collecting this data, and analysing it, but we also wanted to
increase the chance that we are making valuable contributions, and
that the community itself can make valuable contributions. Like I
said, there is enough opinion out there. What we really need to do
is to better understand how the software is being used. So, we have
made a point from the start to try to be as open as possible with
everything, so that anyone can really contribute to the project.
FS: Ingimp has been running for a year now. What are you finding?
MT: I have started analysing the data, and I think one of the things
that we realised early on is that it is a very rich data set; we have lots
and lots of data. So, after a year we've had over 800 installations, and
we've collected about 5000 log files, representing over half a million
commands, representing thousands of hours of the application being
used. And one of the things you have to realise is that when you have
a data set of that size, there are so many different ways to look at it
that my particular perspective might not be enough. Even if you sit
someone down, and you have him or her use the software for twenty
minutes, and you videotape it, then you can spend hours analysing
just those twenty minutes of videotape. And so, I think that one of
the things we realised is that we have to open up the process so that
anyone could easily participate. We have the log files available, but
we really didn't have an infrastructure for analysing them. So, we
created this new piece of software called ‘Stats Jam', an extension
to MediaWiki, which allows anyone to go to the website and embed
SQL queries against the ingimp data set and then visualise those
results within the Wiki text. So, I'll be announcing that today and
demonstrating that, but I have been using that tool now for a week
to complement the existing data analysis we have done.
One of the first things that we realised is that we have over 800
installations, but then you have to ask, how many of those are really serious users? A lot of people were probably just curious; they
downloaded it and installed it, found that it didn't really do much
for them and so maybe they don't use it anymore. So, the first thing
we had to do is figure out which data points should we really pay
attention to. We decided that a person should have used ingimp on
two different occasions, preferably at least a day apart, where they'd
saved an image on both of the instances. We used that as an indication of what a serious user is. So with that filter in place, the ‘800
installations' drops down to about 200 people. So we had about 200
people using ingimp; and looking at the data, this represents about
800 hours of use, about 4000 log files, and again still about half a
million commands. So, it's still a very significant group of people.
200 people are still a lot, and that's a lot of data, representing about
11000 images they have been working on – there's just a lot.
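To make that filter concrete, it could be expressed as a query against
a database of parsed log data. The sketch below is illustrative only:
the sessions table and its columns are hypothetical stand-ins, not
ingimp's actual schema.

    import sqlite3

    # Hypothetical schema: sessions(user_id, started_at, saved_image),
    # one row per ingimp session parsed from the published log files.
    con = sqlite3.connect("ingimp.db")

    serious_users = con.execute("""
        SELECT user_id
        FROM sessions
        WHERE saved_image = 1
        GROUP BY user_id
        -- two or more image-saving sessions, at least a day apart
        HAVING COUNT(*) >= 2
           AND julianday(MAX(started_at)) - julianday(MIN(started_at)) >= 1
    """).fetchall()

    # Per the interview, a filter like this reduces ~800 installations
    # to roughly 200 serious users.
    print(len(serious_users))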
From that group, what we found is that use of ingimp is really
short and versatile. So, most sessions are about fifteen minutes or
less, on average. There are outliers, there are some people who use it
for longer periods of time, but really it boils down to them using it for
about fifteen minutes, and they are applying fewer than a hundred
operations when they are working on the image. I should probably
be looking at my data analysis as I say this, but they are very quick,
short, versatile sessions, and when they use it, they use fewer than 10
different tools, or they apply fewer than 10 different commands.
What else did we find? We found that the two most popular monitor resolutions are 1280 by 1024, and 1024 by 768. So, those represent
collectively 60 % of the resolutions, and really 1280 by 1024 represents
pretty much the maximum for most people, although you have some
higher resolutions. So one of the things that's always contentious
about Gimp, is its window management scheme and the fact that it
has multiple windows, right? And some people say, well you know,
this works fine if you have two monitors, because you can throw out
the tools on one monitor and then your images are on another monitor. Well, about 10 to 15 % of ingimp users have two monitors, so
that design decision is not working out for most of the people, if that
is the best way to work. These are things I think that people have
been aware of, it's just now we have some actual concrete numbers
where you can turn to and say: now this is how people are using it.
There is a wide range of tasks that people are performing with the
tool, but they are really short, quick tasks.
FS: Every time you start up ingimp, a screen comes up asking
you to describe what you are planning to do and I am interested in
the kind of language users invent to describe this, even when they
sometimes don't know exactly what it is they are going to do. So
inventing language for possible actions with the software has in a
way become a creative process that is now shared between interface
designer, developer and user. If you look at the ‘activity tags' you
are collecting, do you find a new vocabulary developing?
MT: I think there are 300 to 600 different activity tags that people
register within that group of ‘significant users'. I didn't have time to
look at all of them, but it is interesting to see how people are using
that as a medium for communicating to us. Some people will say,
“Just testing out, ignore this!” Or, people are trying to do things like
insert HTML code, to do like a cross-site scripting attack, because,
you have all the data on the website, so they will try to play with
that. Some people are very sparse and they say ‘image manipulation'
or ‘graphic design' or something like that, but then some people are
much more verbose, and they give more of a plan, “This is what I
expect to be doing.” So, I think it has been interesting to see how
people have adopted that and what's nice about it, is that it adds a
really nice human element to all this empirical data.
Ivan Monroy Lopez (IM): I wanted to ask you about the data;
without getting too technical, could you explain how these data are
structured, what do the log files look like?
MT: So the log files are all in XML, and generally we compress
them, because they can get rather large. And the reason that they
are rather large is that we are very verbose in our logging. We want
to be completely transparent with respect to everything, so that if
you have some doubts or if you have some questions about what kind
of data has been collected, you should be able to look at the log file,
and figure out a lot about what that data is. That's how we designed
the XML log files, and it was really driven by privacy concerns and
by the desire to be transparent and open. On the server side we take
that log file and we parse it out, and then we throw it into a database,
so that we can query the data set.
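The server-side step described here might look like the following
sketch. Every element name and attribute in it is an assumption made
for illustration; the authoritative reference is the actual log files
published on the ingimp website.

    import gzip
    import sqlite3
    import xml.etree.ElementTree as ET

    con = sqlite3.connect("ingimp.db")
    con.execute("CREATE TABLE IF NOT EXISTS events"
                " (log_id TEXT, command TEXT, time TEXT)")

    # Logs are verbose XML and compressed, per the interview; assume a
    # layout like <log id="..."><event command="..." time="..."/></log>.
    with gzip.open("session-0001.xml.gz") as f:
        root = ET.parse(f).getroot()

    log_id = root.get("id")
    for event in root.iter("event"):
        con.execute("INSERT INTO events VALUES (?, ?, ?)",
                    (log_id, event.get("command"), event.get("time")))
    con.commit()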
FS: Now we are talking about privacy. . . I was impressed by the
work you have done on this; the project is unusually clear about why
certain things are logged, and other things not; mainly to prevent
the possibility of ‘playing back' actions so that one could identify
individual users from the data set. So, while I understand there are
privacy issues at stake I was wondering... what if you could look at the
collected data as a kind of scripting for use, as writing a choreography
that might be replayed later?
MT: Yes, we have been fairly conservative with the type of information that we collect, because this really is the first instance where
anyone has captured such rich data about how people are using software on a day-to-day basis, and then made all of that data publicly
available. When a company does this, they will keep the data internally, so you don't have this risk of someone outside figuring something out about a user that wasn't intended to be discovered. We
have to deal with that risk, because we are trying to go about this
in a very open and transparent way, which means that people may
be able to subject our data to analysis or data mining techniques
that we haven't thought of, and extract information that we didn't
intend to be recording in our files, but which is still there. So there are
fairly sophisticated techniques where you can do things like look at
audio recordings of typing and the timings between keystrokes, and
then work backwards with the sounds made to figure out the keys
that people are likely pressing. So, keyboard audio and keystroke
timings alone can often give enough information to be able to
reconstruct what people are actually typing. So we are always
sort of wary about how much information is in there.
While it might be nice to be able to do something like record people's actions and then share that script, I don't think that that is
really a good use of ingimp. That said, I think it is interesting to
ask: could we characterize people's use enough, so that we can start
clustering groups of people together and then providing a forum for
these people to meet and learn from one another? That's something
we haven't worked out. I think we have enough work cut out for us
right now just to characterize how the community is using it.
FS: It was not meant as a feature request, but as a way to imagine
how usability research could flip around and also become productive
work.
MT: Yes, totally. I think one of the things that we found when
bringing people into the basic usability of the ingimp software and
ingimp website, is that people like looking at what commands other
people are using, what the most frequently used commands are; and
part of the reason that they like that, is because of what it teaches
them about the application. So they might see a command they were
unaware of. So we have toyed with the idea of then providing not
only the command name, but also a link from that command name
to the documentation. I didn't have time to implement it, but
certainly there are possibilities like that, you can imagine.
FS: Maybe another group can figure something out like that? That's
the beauty of opening up your software plus data set of course.
Well, just a bit more on what is logged and what not... Maybe you
could explain where and why you put the limit, and what kind of use
you might miss out on as a result?
MT: I think it is important to keep in mind that whatever instrument you use to study people, you are going to have some kind of
bias, you are going to get some information at the cost of other information. So if you do a videotaped observation of a user and you
just set up a camera, then you are not going to find details about
the monitor maybe, or maybe you are not really seeing what their
hands are doing. No matter what instrument you use, you are always
getting a particular slice.
I think you have to work backwards and ask what kind of things
do you want to learn. And so the data that we collect right now, was
really driven by what people have done in the past in the area of instrumentation, but also by us bringing people into the lab, observing
them as they are using the application, and noticing particular behaviours and saying, hey, that seems to be interesting, so what kind of
data could we collect to help us identify those kind of phenomena, or
that kind of performance, or that kind of activity? So again, the data
that we were collecting was driven by watching people, and figuring
out what information will help us to identify these types of activities.
As I've said, this is really the first project that is doing this, and
we really need to make sure we don't poison the well. So if it happens that we collect some bit of information, that then someone can
later say, “Oh my gosh, here is the person's file system, here are the
names they are using for the files” or whatever, then it's going to
make the normal user population wary of downloading this type of
instrumented application. The thing that concerns me most about
open source developers jumping into this domain, is that they might
not be thinking about how you could potentially impact privacy.
IM: I don't know, I don't want to get paranoid. But if you are
doing it, then there is a possibility someone else will do it in a less
considerate way.
MT: I think it is only a matter of time before people start doing
this, because there are a lot of grumblings about, “We should be
doing instrumentation, someone just needs to sit down and do it.”
Now there is an extension out for Firefox that will collect this kind
of data as well, so you know...
IM: Maybe users could talk with each other, and if they are aware
that this type of monitoring could happen, then that would add a
different social dimension...
MT: It could. I think it is a matter of awareness, really. We have a
lengthy consent agreement that details the type of information we are
collecting and the ways your privacy could be impacted, but people
don't read it.
FS: So concretely... what information are you recording, and what
information are you not recording?
MT: We record every command name that is applied to a document,
to an image. Where your privacy is at risk with that, is that if you
write a custom script, then that custom script's name is going to be
inserted into a log file. And so if you are working, for example, for Lucas
or DreamWorks or something like that, or ILM, in some Hollywood
movie studio and you are using ingimp and you are writing scripts,
then you could have a script like ‘fixing Shrek's beard', and then that
is getting put into the log file and then people are going to know that
the studio uses ingimp.
We collect command names, we collect things like what windows
are on the screen, their positions, their sizes, and we take hashes of
layer names and file names. We take a string and then we create a
hash code for it, and we also collect information about how long the
string is, how many alphabetical characters and numbers it contains;
things like that, to get a sense of whether people are using the same files, the
same layer names time and time again, and so on. But this is an
instance where our first pass at this, actually left open the possibility
of people taking those hashes and then reconstructing the original
strings from that. Because we have the hash code, we have the length
of the string – all you have to do is generate all possible strings of
that length, take the hash codes and figure out which hashes match.
And so we had to go back and create a new scheme for recording this
type of information where we create a hash and we create a random
number, we pair those up on the client machine but we only log the
random number. So, from log to log then, we can track if people
use the same image names, but we have no idea of what the original
string was.
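A minimal sketch of that revised scheme, with all names invented for
illustration (ingimp's actual implementation is part of the C code base
and may differ): the hash-to-token table stays on the client, and only
the random token plus coarse string statistics are ever written to the
log.

    import hashlib
    import secrets

    class NameAnonymizer:
        """Client-side table pairing string hashes with random tokens."""

        def __init__(self):
            # digest -> token; in practice this table would be persisted
            # on the client machine so tokens stay stable across sessions,
            # and the pairing is never sent to the server.
            self._tokens = {}

        def log_entry(self, name):
            digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
            if digest not in self._tokens:
                self._tokens[digest] = secrets.token_hex(8)
            return {
                "token": self._tokens[digest],        # stable but meaningless
                "length": len(name),                   # coarse statistics only
                "alpha": sum(c.isalpha() for c in name),
                "digits": sum(c.isdigit() for c in name),
            }

    anon = NameAnonymizer()
    entry1 = anon.log_entry("shrek-beard-fix.xcf")
    entry2 = anon.log_entry("shrek-beard-fix.xcf")
    # Name reuse is trackable from log to log, but the original string
    # cannot be reconstructed from the logged token alone.
    assert entry1["token"] == entry2["token"]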
There are these little ‘gotchas' like that, that I don't think most
people are aware of, and this is why I get really concerned about
instrumentation efforts right now, because there isn't yet a body of
experience about what kind of data we should collect, and what we
shouldn't collect.
FS: As we are talking about this, I am already more aware of what
data I would allow to be collected. Do you think by opening up this
data set and the transparent process of collecting and not collecting,
this will help educate users about these kinds of risks?
MT: It might, but honestly I think the thing that will educate
people the most is if there was a really large privacy error
that got a lot of news, because then people would become more
aware of it. Right now – and this is not to say that we want
that to happen with ingimp – when we bring people in and we ask
them, “Are you concerned about privacy?”, they say “No”, and when
we ask “Why?”, it turns out they inherently trust us. But the
fact is that open source also lends a certain amount of trust to it,
because they expect that since it is open source, the community will
in some sense police it and identify potential flaws with it.
FS: Is that happening? Are you in dialogue with the open source
community about this?
MT: No, I think probably five to ten people have looked at the
ingimp code – realistically speaking I don't think a lot of people looked
at it. Some of the Gimp developers took a gander at it to see “How
could we put this upstream?” But I don't want it upstream, because
I want it to always be an opt-in, so that it can't be turned on by
mistake.
FS: You mean you have to download ingimp and use it as a separate
program? It functions in the same way as Gimp, but it makes the
fact that it is a different tool very clear.
MT: Right. You are more aware, because you are making that
choice to download that, compared to the regular version. There is
this awareness about that.
We have this lengthy text-based consent agreement that talks about
the data we collect, but less than two percent of the population reads
license agreements. And, most of our users are actually non-native
English speakers, so there are all these things that are working against
us. So, for the past year we have really been focussing on privacy, not
only in terms of how we collect the data, but how we make people
aware of what the software does.
We have been developing wordless diagrams to illustrate how the
software functions, so that we don't have to worry about localisation
errors as much. And so we have these illustrations that show someone
downloading ingimp, starting it up, a graph appears, there is a little
icon of a mouse and a keyboard on the graph, and they type and you
see the keyboard bar go up, and then at the end when they close the
application, you see the data being sent to a web server. And then
we show snapshots of them doing different things in the software, and
then show a corresponding graph change. So, we developed these by
bringing in both native and non-native speakers, having them look at
the diagrams and then tell us what they meant. We had to go through
about fifteen people and continual redesign until most people could
understand and tell us what they meant, without giving them any
help or prompts. So, this is an ongoing research effort, to come up
with techniques that not only work for ingimp, but also for other
instrumentation efforts, so that people can become more aware of the
implications.
FS: Can you say something about how this type of research relates
to classic usability research and in particular to the usability work
that is happening in Gimp?
MT: Instrumentation is not new, commercial software companies
and researchers have been doing instrumentation for at least ten years,
probably ten to twenty years. So, the idea is not new, but what is
new – in terms of the research aspects of this – is how do we do this
in a way where we can make all the data open? The fact that you
make the data open, really impacts your decision about the type of
data you collect and how you are representing it. And you need to
really inform people about what the software does.
But I think your question is... how does it impact the Gimp's
usability process? Not at all, right now. But that is because we have
intentionally been staying off to the side, until we got to the point
where we had an infrastructure where the entire community could
really participate in the data analysis. We really want this to be
a self-sustaining infrastructure; we don't want to create a
system where you have to rely on just one other person for this to
work.
IM: What approach did you take in order to make this project
self-sustainable?
MT: Collecting data is not hard. The challenge is to understand
the data, and I don't want to create a situation where the community
is relying on only one person to do that kind of analysis, because this
is dangerous for a number of reasons. First of all, you are creating
a dependency on an external party, and that party might have other
obligations and commitments, and might have to leave at some point.
If that is the case, then you need to be able to pass the baton to
someone else, even if that could take a considerable amount of time
and so on.
You also don't want to have this external dependency because, given
the richness of the data, you really need to have multiple people
looking at it, and trying to understand and analyse it. So how are
we addressing this? It is through the Stats Jam extension to
MediaWiki that I will introduce today. Our hope is that this type
of tool will lower the barrier for the entire community to participate
in the data analysis process, whether they are simply commenting on
the analysis we made or taking the existing analysis, tweaking it to
their own needs, or doing something brand new.
In talking with members of the Gimp project here at the Libre
Graphics Meeting, they started asking questions like, “So how many
people are doing this, how many people are doing this and how many
this?” They'll ask me while we are sitting in a café, and I will be able
to pop the database open and say, “A certain number of people have
done this”, or, “No one has actually used this tool at all.”
The danger is that this data is very rich and nuanced, and you
can't really reduce these kinds of questions to an answer of “N people
do this”, you have to understand the larger context. You have to
understand why they are doing it, why they are not doing it. So, the
data helps to answer some questions, but it generates new questions.
The data gives you some understanding of how people are using it,
but then it generates new questions of, “Why is this the case?” Is this
because these are just the people using ingimp, or is this some more
widespread phenomenon?
They asked me yesterday how many people are using this colour
picker tool – I can't remember the exact name – so I looked and there
was no record of it being used at all in my data set. So I asked them
when did this come out, and they said, “Well it has been there at
least since 2.4.” And then you look at my data set, and you notice
that most of my users are in the 2.2 series, so that could be part of
the reason. Another reason could be that they just don't know that
it is there, they don't know how to use it and so on. So, I can answer
the question, but then you have to sort of dig a bit deeper.
FS: You mean you can't say that because it is not used, it doesn't
deserve any attention?
MT: Yes, you just can't jump to conclusions like that, which is
again why we want to have this community website, which shows the
reasoning behind the analysis: here are the steps we had to go through
to get this result, so you can understand what that means, what the
context means – because if you don't have that context, then it's sort
of meaningless. It's like asking, “What are the most frequently used
commands?” This is something that people like to ask about. Well
really, how do you interpret that? Is it the numbers of times it has
been used across all log files? Is it the number of people that have
used it? Is it the number of log files where it has been used at least
once? There are lots and lots of ways in which you can interpret
this question. So, you really need to approach this data analysis as
a discourse, where you are saying: here are my assumptions, here is
how I am getting to this conclusion, and this is what it means for
this particular group of people. So again, I think it is dangerous if
one person does that and you come to rely on that one person. We
really want to have lots of people looking at it, and considering it,
and thinking about the implications.
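The ambiguity is easy to see when written out. Against a hypothetical
events table of parsed log data (invented names again, not ingimp's
real schema), each reading of ‘most frequently used' is a different
query:

    import sqlite3

    # Hypothetical schema: events(log_id, user_id, command),
    # one row per logged command occurrence.
    con = sqlite3.connect("ingimp.db")

    queries = {
        # 1. total occurrences across all log files
        "total uses":
            "SELECT command, COUNT(*) FROM events GROUP BY command",
        # 2. number of distinct people who used it at least once
        "distinct users":
            "SELECT command, COUNT(DISTINCT user_id) FROM events GROUP BY command",
        # 3. number of log files where it appears at least once
        "log files":
            "SELECT command, COUNT(DISTINCT log_id) FROM events GROUP BY command",
    }

    for reading, sql in queries.items():
        top = con.execute(sql + " ORDER BY 2 DESC LIMIT 5").fetchall()
        print(reading, top)  # the three top-five lists need not agree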
FS: Do you expect that this will impact the kind of interfaces that
can be done for Gimp?
MT: I don't necessarily think it is going to impact interface design,
I see it really as a sort of reality check: this is how communities are
using the software and now you can take that information and ask,
do we want to better support these people or do we... For example,
in my data set, most people are working on relatively small images
for short periods of time, the images typically have one or two layers,
so they are not really complex images. So regarding your question,
one of the things you can ask is, should we be creating a simple tool
to meet these people's needs? All the people are just doing cropping
and resizing, fairly common operations, so should we create a tool
that strips away the rest of the stuff? Or, should we figure out why
people are not using any other functionality, and then try to improve
the usability of that?
There are so many ways to use data – I don't really know how
it is going to be used, but I know it doesn't drive design. Design
happens from a really good understanding of the users, the types of
tasks they perform, the range of possible interface designs that are
out there, lots of prototyping, evaluating those prototypes and so on.
Our data set really is a small potential part of that process. You can
say, well, according to this data set, it doesn't look like many people
are using this feature, let's not focus too much on that, let's focus on
these other features or conversely, let's figure out why they are not
using them... Or you might even look at things like how big their
monitor resolutions are, and say, well, given the size of the monitor
resolution, maybe this particular design idea is not feasible. But I
think it is going to complement the existing practices, in the best
case.
FS: And do you see a difference in how interface design is done in
free software projects, and in proprietary software?
MT: Well, I have been mostly involved in the research community,
so I don't have a lot of exposure to design projects. I mean, in my
community we are always trying to look at generating new knowledge,
and not necessarily at how to get a product out the door. So, the
goals or objectives are certainly different.

I think one of the dangers in your question is that you sort of
lump a lot of different projects and project styles into one category
of ‘open source'. ‘Open source' ranges from volunteer driven projects
to corporate projects, where they are actually trying to make money
out of it. There is a huge diversity of projects that are out there;
there is a wide diversity of styles, there is as much diversity in the
open source world as there is in the proprietary world.
One thing you can probably say, is that for some projects that are
completely volunteer driven like Gimp, they are resource strapped.
There is more work than they can possibly tackle with the number of
resources they have. That makes it very challenging to do interface
design; I mean, when you look at interface code, it costs you 50 or 75
% of a code base. That is not insignificant, it is very difficult to hack,
and you need to have lots of time and manpower to be able to do
significant things. And that's probably one of the biggest differences
you see for the volunteer driven projects: it is really a labour of
love for these people and so very often the new things interest them,
whereas with a commercial software company developers are going to
have to do things sometimes they don't like, because that is what is
going to sell the product.



SADIE PLANT
License: Creative Commons Attribution-NonCommercial-ShareAlike
Interwoven with her own thoughts and experiences, Sadie Plant gave a situated report on the Mutual
Motions track, and responded to the issues discussed during the weekend.

figure 146: Sadie Plant reports at V/J10

A Situated Report
I have to begin with many thanks to Femke and Laurence, because
it really has been a great pleasure for me to have been here this weekend. It's nearly five years since I came to an event like this, believe
it or not, and I really cannot say enough how much I have enjoyed it,
and how stimulating I have found it. So yes, a big thank you to both
for getting me here. And as you say, it's ten years since I wrote Zeros
+ Ones, and you are marking ten years of this festival too, so it's an
interesting moment to think about a lot of the issues that have come
up over the weekend. This is a more or less spontaneous report, very
much an ‘open performance', to use Simon Yuill's words, and not to
be taken as any kind of definitive account of what has happened this
weekend. But still I hope it can bring a few of the many and varied strands of this event together, not to form a true conclusion, but
perhaps to provide some kind of digestif after a wonderful meal.
I thought I should begin as Femke very wisely began, with the
theme of cooking. Femke gave us a recipe at the beginning of the
weekend, really a kind of recipe for the whole event, with cooking as
an example of the fact that there are many models, many activities,
many things that we do in our everyday lives, which might inform
and expand our ideas about technologies and how we work with them.
So, I too will begin with this idea of cooking, which is as Femke
said a very magical, transformative experience. Femke's clip from
the Catherine Deneuve film was a really lovely instance of the kind
of deep elemental, magical chemistry which goes on in cooking. It is
this that makes it such an instructive and interesting candidate for a
model to illuminate the work of programming, which itself obviously
has this same kind of potential to bring something into effect in a very
direct and immediate sense. And cooking is also the work behind the
scene, the often forgotten work, again a little bit like programming,
that results in something which – again like a lot of technology – can
operate on many different scales. Cooking is in one sense the most
basic kind of activity, a simple matter of survival, but it can also
work on a gourmet level too, where it becomes the most refined – and
well paid – kind of work. It can be the most detailed, fiddly, sort of
decorative work; it can be the most backbreaking, heavy industrial
work – bread making for example as well. So it really covers the whole
panoply of these extremes.
If we think about a recipe, and ask ourselves about the machine that
the recipe requires, it's obviously running on an incredibly complex
assemblage: you have the kitchen, you have all the ingredients, you
have machines for cooling things, machines for heating things, you
have the person doing the cooking, the tools in question. We really
are talking here about a complex process, and not just an end result.
The process is also, again, a very ‘open' activity. Simon Yuill defined
an ‘open performance' as a partial composition completed in the
performance.
Cooking is always about experimentation and the kitchen really is
a kind of lab. The instructions may be exact, the conditions may be
more or less precise but the results are never the same twice. There
are just too many variables, too many contingencies involved. Of
course like any experimental work, it can go completely wrong, it
often does go wrong: sometimes it really is all about process, and
not about eating at all! But as Simon again said today, quoting Sun
Ra: there are no real mistakes, there are no truly wrong things. This
was certainly the case with the fantastic cooking process that we
had throughout the whole day yesterday, which ended with us eating
these fantastic mussels, which I am sure elpueblodechina thought in
fact were not as they should have been. But only she knew what
she was aiming at: for the people who ate them they were delicious,
their flavour enhanced by the whole experience of their production.
elpueblodechina's meal made us ask: what does it mean for something
to go wrong? She was using a cooking technique which has come out
of generations and generations of errors, mistakes, probings, fallings
back, not just simply a continuous kind of story of progress, success,
and forward movement. So the mistakes are clearly always a very big
part of how things work in life, in any context in life, but especially
of course in the context of programming and working with software
and working with technologies, which we often still tend to assume
are incredibly reliable, logical systems, but in fact are full of glitches
and errors. As thinkers and activists resistant to and critical of mainstream methods and cultures, this is something that we need to keep
encouraging.
I have for a long time been interested in textiles, and I can't resist mentioning the fact that the word ‘recipe' was the old word for
knitting patterns: people didn't talk about knitting patterns, but
‘recipes' for knitting. This brings us to another interesting junction
with another set of very basic, repetitive kinds of domestic and often
overlooked activities, which are nevertheless absolutely basic to human existence. Just as we all eat food, so we all wear clothes. As with
cooking, the production of textiles again has this same kind of sense
of being very basic to our survival, very elemental in that sense, but
it can also function at a high level of detailed, refined activity as well.
With a piece of knitting it is difficult to see the ways in which a single
thread becomes looped into a continuous textile. But if you look at a
woven pattern, the program that has led to the pattern is right there
in front of you, as you see the textile itself. This makes weaving a
very nice, basic and early example of how this kind of immediacy can
be brought into operation. What you look at in a piece of woven cloth
is not just a representation of something that can happen somewhere
else, but the actual instructions for producing and reproducing that
piece of woven cloth as well. So that's the kind of deep intuitive connection that it has with computer programming, as well as the more
linear historical connections of which I have often spoken.
There are some other nice connections between textiles, cooking
and programming as well. Several times yesterday there was a lot
of talk about both experts and amateurs, and developers and users.
These are divisions which constantly, and often perhaps with good
reason, reassert themselves, and often carry gendered connotations
too. In the realm of cooking, you have the chef on the one hand,
who is often male and enjoys the high status of the inventive, creative expert, and the cook on the other, who is more likely to be
female and works under quite a different rubric. In reality, it might
be said that the distinction is far from precise: the very practice of
using computers, of cooking, of knitting, is almost inevitably one of
constantly contributing to their development, because they are all relatively open systems and they all evolve through people's constant,
repetitive use of them. So it is ultimately very difficult to distinguish
between the user and the developer, or the expert and the amateur.
The experiment, the research, the development is always happening
in the kitchen, in the bedroom, on the bus, using your mobile or
using your computer. Fernand Braudel speaks about these kinds of ‘micro-histories', this sense of repetitive activity, which is done in many
trades and many lines, and that really is the deep unconscious history
of human activity. And arguably that's where the most interesting
developments happen, albeit in a very unsung, unseen, often almost
hidden way. It is this kind of deep collectivity, this profound sense of
micro-collaboration, which has often been tapped into this weekend.
Still, of course, the social and conceptual divisions persist, and
still, just as we have our celebrity chefs, so we have our celebrity
programmers and dominant corporate software developers. And just
as we have our forgotten and overlooked cooks, so we have people who
are dismissed, or even dismiss themselves, as ‘just computer users'.
The technological realities are such that people are often forced into
this role, with programs that really are so fixed and closed that
almost nothing remains for the user to contribute. The structural
and social divisions remain, and are reproduced on gendered lines as
well.
In the 1940s, computer programming was considered to be extremely menial, and not at all a glamorous or powerful activity.
Then of course, the business of dealing with the software was strictly
women's work, and it was with the hardware of the system that the
most powerful activity lay. That was where the real solid development was done, and that was where the men were working, with what
were then the real nuts and bolts of the machines. Now of course, it
has all turned around. It is women who are building the chips and
putting the hardware – such as it is these days – together, while the
male expertise has shifted to the writing of software. In only half a
century, the evolution of the technology has shifted the whole notion
of where the power lies. No doubt – and not least through weekends
like this – the story will keep moving on.
But as the world of computing does move more and more into
software and leave the hardware behind, it is accompanied by the
perceived danger that the technology and, by extension, the cultures
around it, tend to become more and more disembodied and intangible.
This has long been seen as a danger because it tends to reinforce what
have historically, in the Western world at least, been some of the more
oppressive tendencies to affect women and all the other bodies that
haven't quite fitted the philosophical ideal. Both the Platonic and
Christian traditions have tended to dismiss or repress the body,
and with it all the kind of messy, gritty, tangible stuff of culture,
as transient, difficult, and flawed. And what has been elevated is of
course the much more formal, idealist, disembodied kind of activities
and processes. This is a site of continual struggle, and I guess part of
the purpose of a weekend like this is to keep working away, re-injecting
some sense of materiality, of physicality, of the body, of geography,
into what are always in danger of becoming much more formal and
disembodied worlds. What Femke and Laurence have striven to remind us this weekend is that however elevated and removed our work
appears to be from the matter of bodies and physical techniques,
we remain bodies, complex material processes, working in a complex
material world.
Once again, there still tends to be something of a gendered divide.
The dance workshop organised this morning by Alice Chauchat and
Frédéric Gies was an inspiring but also difficult experience for many
of us, unused as we are to using our bodies in such literally physical
and public ways. It was not until we came out of the workshop into
a space which was suddenly mixed in terms of gender, that I realised
that the participants in the workshop had been almost exclusively
female. It was only the women who had gone to this kind of more
physical, embodied, and indeed personally challenging part of the
weekend. But we all need to continually re-engage with this sense
of the body, all this messiness and grittiness, which it is in many
vested interests to constantly cleanse from the world. We have to
make ourselves deal with all the embarrassment, the awkwardness,
and the problematic side of this more tangible and physical world.
For that reason it has been fantastic that we have had such strong
input from people involved in dance and physical movement, people
working with bodies and the real sense of space. Sabine Prokhoris
and Simon Hecquet made us think about what it means to transcribe
the movements of the body; Séverine Dusollier and Valérie Laure
Benabou got us to question the legal status of such movements too.
And what we have gained from all of this is this sense that we are all
always working with our bodies, we are always using our bodies, with
more or less awareness and talent, of course, whether we are dancing
or baking or knitting or slumped over our keyboards. In some ways we
shouldn't even need to say it, but the fact that we do need to remind
ourselves of our embodiment shows just how easy it is for us to forget
our physicality. This morning's dance workshop really showed some
of the virtues of being able to turn off one's self-consciousness, to
dismiss the constantly controlling part of one's self and to function
on a different, slightly more automatic level. Or perhaps one might
say just to prioritise a level of bodily activity, of bodily awareness,
of a sense of spatiality that is so easy to forget in our very cerebral
society.
What Frédéric and Alice showed us was not simply about using the
body, but rather how to overcome the old dualism of thinking of the
body as a kind of servant of the mind. Perhaps this is how we should
think about our relationships to our technologies as well, not just to
see them as our servants, and ourselves as the authors or subjects of
the activity, but rather to perceive the interactivity, the sense of an
interplay, not between two dualistic things, the body and the mind, or
the agent and the tool, the producer and the user, but to try and see
much more of a continuum of different levels and different kinds and
different speeds of material activity, some very big and clunky, others at extremely complex micro-levels. During the dance workshop,
Frédéric talked about all the synaptic connections that are happening as one moves one's body, in order to instil in us this awareness
of ourselves as physical, material, thinking machines, assemblages of
many different kinds of activity. And again, I think this idea of bringing together dance, food, software, and brainpower, to see ourselves
operating at all these different levels, has been extremely rewarding.
Femke asked a question of Sabine and Simon yesterday, which perhaps never quite got answered, but expressed something about how
as people living in this especially wireless world, we are now carrying more and more technical devices, just as I am now holding this
microphone, and how these additional machines might be changing
our awareness of ourselves. Again it came up this morning in the
workshop when we were asked to imagine that we might have different parts of our bodies, another head, or our feet may have mirrors
in them, or in one brilliant example that we might have magnets,
so that we were forced to have parts of our bodies drawn together
in unlikely combinations, just to imagine a different kind of sense of
self that you get from that experience, or a different way of moving
through space. But in many ways, because of our technologies now,
we don't need to imagine such shifts: we are most of us now carrying
some kind of telecommunicating device, for example, and while we
are not physically attached to our machines – not yet anyway – we
are at least emotionally attached to them. Often they are very much
with us and part of us: the mobile phone in your pocket is to hand,
it is almost a part of us. And I too am very interested in how that
has changed not only our more intellectual conceptions of ourselves,
but also our physical selves. The fact that I am holding this thing
[the microphone] obviously does change my body, its capacities, and
its awareness of itself. We are all aware of this to some extent: everyone knows that if you put on very formal clothes, for example, you
behave in different ways, your body and your whole experience of its
movement and spatiality changes. Living in a very conservative part
of Pakistan a few years ago, where I had to really be completely covered up and just show my eyes, gave me an acute sense of this kind
of change: I had to sit, stand, walk and turn to look at things in an
entirely new set of ways. In a less dramatic but equally affective way,
wirelessness obviously introduces a new sense of our bodies, of what
we can do with our bodies, of what we carry with us on our bodies,
and consequently of who we are and how we interact with our environment. And in this sense wirelessness has also brought the body
back into play, rescuing us from what only ten years ago seemed to
be the very real dangers of a more formal and disembodied sense of a
virtual world, which was then imagined as some kind of ‘other place'
, a notion of cyberspace, up there somehow, in an almost heavenly
conception. Wirelessness has made it possible for computer devices to
operate in an actual, geographical environment: they can now come
with us. We can almost start to talk more realistically about a much
more interesting notion of the cyborg, rather than some big clunky
thing trailing wires. It really can start to function as a more interesting idea, and I am very interested in the political and philosophical
implications of this development as well, in that it does reintroduce the body to, as I say, what was in danger of becoming a very
abstract and formal kind of cyberspace. It brings us back into
touch with ourselves and our geographies.
The interaction between actual space and virtual space has been
another theme of this weekend; this ability to translate, to move between different kinds of spaces, to move from the analogue to the
digital, to negotiate the interface between bodies and machines. Yesterday we heard from Adrian Mackenzie about digital signal processing, the possibility of moving between that real sort of analogue world
of human experience and the coding necessary to computing. Sabine
and Simon talked about the possibilities of translating movement into
dance, and this also has come up several times today, and also with
Simon's work in relation to music and notation. Simon and Sabine
made the point that with the transcription and reading of a dance,
one is offered – rather as with a recipe – the same ingredients, the
same list of instructions, but once again as with cooking, you will
never get the same dance, or you will never get the same food as a
consequence. They were interested in the idea of notation, not to
preserve or to conserve, but rather to be able to send food or dance
off into the future, to make it possible in the future. And Simon
referred to these fantastic diagrams from The Scratch Orchestra, as
an entirely different way of conceiving and perceiving music, not as
a score, a notation in this prescriptive, conserving sense of the word,
but as the opportunity to take something forward into the future.
And to do so not by writing down the sounds, or trying to capture
the sounds, but rather as a way of describing the actions necessary
to produce those sounds, is almost to conceive the production of music as a kind of dance, and again to emphasise its embodiment and
physicality.
This sense of performance brings into play the idea of ‘play' itself,
whether ‘playing' a musical instrument, ‘playing' a musical score, or
‘playing' the body in an effort to dance. I think in some dance traditions one speaks about ‘playing the body'; in Tai Chi it is certainly
said that one plays the body, as though it was an instrument. And
when I think about what I have been doing for the last five years,
it's involved having children, it's involved learning languages, it's involved doing lots of cooking, and lots of playing, funnily enough. And
what has been lovely for me about this weekend is that all of these
things have been discussed, but they haven't been just discussed, they
have actually been done as well. So we have not only thought about
cooking, but cooking has happened, not only with the mussels, but
also with the fantastic food that has been provided all weekend. We
haven't just thought about dancing, but dancing has actually been
done. We haven't just thought about translating, but with great
thanks to the translators – who I think have often had a very difficult job – translating has also happened as well. And in all of these
cases we have seen that what might so easily have been a simply theoretical discussion has itself been translated into real bodily activity:
they have all been, literally, brought into play. And this term ‘play',
which spans a kind of mathematical play of numbers, in relation to
software and programming, and also the world of music and dance,
has enormous potential for us all: Simon talked about ‘playing free'
as an alternative term to ‘improvisation', and this notion of ‘playing
free' might well prove very useful in relation to all these questions of
making music, using the body, and even playing the system in terms
of subverting or hacking into the mainstream cultural and technical
programs with which we are presented.

This weekend was inspired by several desires and impulses to which
I feel very sympathetic, and which remain very urgent in all our debates about technology. As we have seen, one of the most important
of those desires is to reinsert the body into what is always in danger of becoming a disembodied realm of computing and technology.
And to reinsert that body not as a kind of Chaplinesque cog in the
wheel that we saw when Inès Rabadán introduced Modern Times last
night, but as something more problematic, something more complex
and more interesting. And also not to do so nostalgically, with some
idea of some kind of lost natural activity that we need to regain, or to
reassert, or to reintroduce. There is no true body, there is no natural
body, that we can recapture from some mythical past and bring back
into play. At the same time we need to find a way of moving forward,
and inserting our senses of bodies and physicality into the future, to
insist that there is something lively and responsive and messy and
awkward always at work in what could have the tendency otherwise
to be a world of closed systems and dead loops.
One of the ways of doing this is to constantly problematise both
individualised conceptions of the body and orthodox notions of communities and groups. Michael Terry's presentation about ingimp, developed in order to imagine the community of people who are using
his image manipulation software, raised some very problematic issues
about the notion of community, which were also brought up again by
Simon today, with his ideas about collaboration and collectivity, and
what exactly it means to come together and try to escape an individualised notion of one's own work. Femke's point to Michael exemplified
the ways in which the notion of community has some real dangers:
Michael or his team had done the representations of the community
themselves – so if people told them they were graphic artists, they
had found their own kind of symbols for what a graphic artist would
look like – and when Femke suggested that people – especially if
they were graphic artists – might be capable of producing their own
representations and giving their own way of imagining themselves,
Michael's response was to the effect that people might then come up
with what he and his team would consider to be ‘undesirable images'
of themselves. And this of course is the age-old problem with the idea
of a community: an open, democratic grouping is great when you're
in it and you all agree what's desirable, but what happens to all the
people that don't quite fit the picture? How open can one afford to
be? We need some broader, different senses of how to come together
which, as Alice and Frédéric discussed, are ways of collaborating
without becoming a new fixed totality. If we go back to the practices
of cooking, weaving, knitting, and dancing, these long histories of
very everyday activities that people have performed for generation
after generation, in every culture in the world – it is at this level that
we can see a kind of collective activity, which is way beyond anything
one might call a ‘community' in the self-conscious sense of the term.
And it's also way beyond any simple notion of a distributed collection of individuals: it is perhaps somewhere at the junction of these
modes, an in-between way of working which has come together in its
own unconscious ways over long periods of time.
This weekend has provided a rich menu of questions and themes to
feed in and out of the writing and use of software, as well as all our
other ways of dealing with our machines, ourselves, and each other.
To keep the body and all its flows and complexities in play, in a lively
and productive sense; to keep all the interruptive possibilities alive;
to stop things closing down; to keep or to foster the sense of collectivity in a highly individualised and totalising world; to find new
ways – constantly find new ways – of collaborating and distributing
information: these are all crucial and ongoing struggles in which we
must all remain continually engaged. And I notice even now that I
used this term ‘to keep', as though there was something to conserve
and preserve, as though the point of making the recipes and writing
the programs is to preserve something. But the ‘keeping' in question
here is much more a matter of ‘keeping on', of constantly inventing
and producing without, as Simon said earlier, leaving ourselves too
vulnerable to all the new kinds of exploitation, the new kinds of territorialisation, which are always waiting around the corner to capture
even the most fluid and radical moves we make. This whole weekend
has been an energising reminder, a stimulating and inspiriting call to
keep problematising things, to keep inventing and to keep reinventing, to keep on keeping on. And I thank you very much for giving me
the chance to be here and share it all. Thank you.
A quick postscript. After this ‘spontaneous report' was made,
the audience moved upstairs to watch a performance by the dancer
Frédéric Gies, who had co-hosted the morning's workshop. I found
the energy, the vulnerability, and the emotion with which he danced
quite overwhelming. The Madonna track – Hung Up (Time Goes By
So Slowly) – to which he danced ran through my head for the whole
train journey back to Birmingham, and when I got home and checked
out the Madonna video on YouTube I was even more moved to see
what a beautiful commentary and continuation of her choreography
Frédéric had achieved. This really was an example not only of playing
the body, the music, and the culture, but also of effecting the kind of
‘free play' and ‘open performance', which had resonated through the
whole weekend and inspired us all to keep our work and ourselves in
motion. So here's an extra thank you to Frédéric Gies. Madonna will
never sound the same to me.


Biographies
Valérie Laure Benabou
http://www.juriscom.net/minicv/vlb
EN

Valérie Laure Benabou is associate
professor at the University of Versailles-Saint Quentin and teaches at
the Ecole des Mines. She is a member of the Centre d'Etude et de
Recherche en Droit de l'Immatériel
(CERDI), and of the Editorial Board
of Propriétés Intellectuelles. She also
teaches civil law at the University
of Barcelona and taught international
commercial law at the Law University
in Phnom Penh, Cambodia. She was a
member of the Commission de réflexion du Conseil d'Etat sur Internet et
les réseaux numériques, co-ordinated
by Ms Falque-Pierrotin, which produced the Rapport du Conseil d'Etat
(La Documentation française, 1998).
She is the author of a number of works
and articles, including ‘La directive
droit d'auteur, droits voisins et société
de l'information: valse à trois temps
avec l'acquis communautaire', in Europe, No. 8-9, September 2001, p.
3, and in Communication Commerce
Electronique, October 2001, p. 8., and
‘Vie privée sur Internet: le traçage', in
Les libertés individuelles à l'épreuve
des NTIC, PUL, 2001, p. 89.

Pierre Berthet
http://pierre.berthet.be/
EN

Studied percussion with André Van Belle and Georges-Elie Octors,
improvisation with Garrett List, composition with Frederic Rzewski,
and music theory with Henri Pousseur. Designs and builds sound
objects and installations (composed of steel, plastic, water, magnetic
fields etc.). Presents them in exhibitions and solo or duo performances
with Brigida Romano (CD Continuum asorbus on the Sub Rosa label)
or Frédéric Le Junter (CD Berthet Le Junter on the Vandœuvres
label). Collaborated with 13th tribe (CD Ping pong anthropology).
Played percussion in Arnold Dreyblatt's Orchestra of excited strings
(CD Animal magnetism, label Tzadik; CD The sound of one string,
label Table of the elements).

Alice Chauchat
http://www.theselection.net/dance/
EN

Member of the Praticable collective. Alice Chauchat was born in 1977
in Saint-Etienne (France) and lives in Paris. She studied at the
Conservatoire National Supérieur de Lyon and P.A.R.T.S in Brussels.
She is a founding member of the collective B.D.C. With other members
such as Tom Plischke, Martin Nachbar and Hendrik Laevens she
created Events for Television, Affects and (Re)sort, between 1999 and
2001. In 2001 she presented her first solo Quotation marks me. In
2003 she collaborated with Vera Knolle (A Number of Classics in the
Age of Performance). In 2004 she made J'aime, together with Anne
Juren, and CRYSTALLL, a collaboration with Alix Eynaudi. She also
takes part in other people's projects, such as Projet, initiated by
Xavier Le Roy, or

Michel Cleempoel
http://www.michelcleempoel.be/

Graduated from the National Superior Art School La Cambre in Brussels.
Author of numerous digital art works
and exhibitions. Worked in collaboration with Nicolas Malevé:
http://www.deshabillez-vous.be

De Geuzen
http://www.geuzen.org/

Femke Snelting, Renée Turner and Riek Sijbring form the art and design collective De Geuzen (a foundation for multi-visual research). De Geuzen develop various strategies on and off line, to explore their interests in female identity, critical resistance, representation and narrative archives.

Séverine Dusollier
http://www.fundp.ac.be/universite/personnes/page_view/01003580/

Doctor in Law, Professor at the University of Namur (Belgium), Head of
the Department of Intellectual Property Rights at the Research Center for
Computer and Law of the University
of Namur, and Project Leader Creative Commons Belgium, Namur.

Leif Elggren (born 1950, Linköping,
Sweden) is a Swedish artist who lives
and works in Stockholm.
Active since the late 1970s, Leif
Elggren has become one of the most
constantly surprising conceptual artists
to work in the combined worlds of
audio and visual. A writer, visual
artist, stage performer and composer,
he has many albums to his credits, solo and with the Sons of God,
on labels such as Ash International, Touch, Radium and his own Firework Edition. His music, often conceived as the soundtrack to a visual
installation or experimental stage performance, usually presents carefully
selected sound sources over a long
stretch of time and can range from
mesmerising quiet electronics to harsh
noise. His wide-ranging and prolific
body of art often involves dreams and
subtle absurdities, social hierarchies
turned upside-down, hidden actions
and events taking on the quality of
icons.
Together with artist Carl Michael
von Hausswolff, he is a founder of
the Kingdoms of Elgaland-Vargaland
(KREV), where he enjoys the title of
King.

elpueblodechina a.k.a. Alejandra Perez Nuñez is a sound artist and performer working with open source
tools, electronic wiring and essay writing. In collaborative projects with
Barcelona based group Redactiva, she
works on psychogeography and social science fiction projects, developing narratives related to the mapping of collective imagination. She received an MA in Media Design at the
Piet Zwart Institute in 2005, and has
worked with the organization V2_ in
Rotterdam. She is currently based in
Valparaíso, Chile, where she is developing a practice related to appropriation, civil society and self-mediation
through electronic media.




Born in Bari (Italy) in 1980, and graduated in May 2005 in Communication
Sciences at the University of Rome
La Sapienza, with a dissertation thesis on software as cultural and social
artefact. His educational background
is mostly theoretical: Humanities and
Media Studies. More recently, he has
been focussing on programming and
the development of web based applications, mostly using open source technologies. In 2007 he received an M.A.
in Media Design at the Piet Zwart Institute in Rotterdam.
His areas of interest are: social
software, actor network theory, digital archives, knowledge management,
machine readability, semantic web,
data mining, information visualization, profiling, privacy, ubiquitous
computing, locative media.

Frédéric Gies

After studying ballet and contemporary dance, Frédéric Gies worked with various choreographers such as Daniel Larrieu, Bernard Glandier, Jean-François Duroure, Olivia Grandville and Christophe Haleb. In 1995, he created a duet in collaboration with Odile Seitz (Because I love). In 1998 he started working with Frédéric De Carlo. Together they have created various performances such as Le principal défaut (CND, Paris), Le principal défaut-solo (Tipi de Beaubourg, Paris), En corps (CND, Paris), Post porn traffic (Macba, Barcelona), In bed with Rebecca (Vooruit, Gent), (don't) Show it! (Scène nationale, Dieppe) and Second hand vintage collector (sometimes we like to mix it up!) (Ausland, Berlin). In 2004 he danced in The better you look, the more you see, amazons (1st version in Tanzfabrik, 2nd in Ausland, Berlin) and The bitch is back under pressure (reloaded) (Basso, Berlin). As a member of the Praticable collective, he created Dance and The breast piece, in collaboration with Alice Chauchat. He also collaborated on Still Lives (Good Work: Anderson/Gies/Pelmus/Pocheron/Schad).

Dominique Goblet
http://www.dominique-goblet.be/

Visual artist. She shows her work in
galleries and publishes her stories in
magazines and books. In all cases,
what she tries to pursue is an art of
the multi-faceted narrative. Her exhibitions of paintings – from frame to
frame and in the whole space of the
gallery – could be ‘read' as fragmented
stories. Her comic books question the
deep or thin relations between human
beings. As an author, she has taken
part in almost all the Frigobox series
published by Fréon (Brussels) and in
several Lapin magazines, published by
L'Association (Paris). A silent comic
book was published in the gigantic
Comix 2000 (L'Association). In the
beginning of 2002, a second book is
published by the same editor: Souvenir d'une journée parfaite - Memories of a perfect day - a complex story
that combines autobiographical facts
and fictions.

Tsila Hassine
http://www.missdata.org/

Tsila Hassine is a media artist / designer.
Her interests lie with the
hidden potentialities withheld in the
electronic data mines. In her practice she endeavours to extrude undercurrents of information and traces of
processes that are not easily discerned
through regular consumption of mass
networked media. This she accomplishes through repetitive misuse of
available platforms.
She completed a BSc in Mathematics and Computer Science and spent
2003 at the New Media department
of the HGK Zürich.
In 2004 she
joined the Piet Zwart Institute in Rotterdam, where she pursued an MA
in Media Design, until graduating in
June 2006 with Google randomizer
Shmoogle.
She is currently a researcher at the Design department of
the Jan van Eyck Academie.

Simon Hecquet

Dancer and choreographer. Educated
in classical and contemporary dance,
Hecquet has worked with many different dance companies, specialised
in contemporary as well as baroque
dance.
During this time, he also
studied different notation systems to
describe movement, after which he
wrote scores for several dance pieces
from the contemporary choreographic
repertory. He also contributed, among
others, with the Quatuor Knust,
to projects that restaged important
dance pieces of the 20th century. Together with Sabine Prokhoris he made
a movie, Ceci n'est pas une danse
chorale (2004), and a book, Fabriques
de la Danse (PUF, 2007). He teaches
transcription systems for movement,
among others, at the department of
Dance at the Université de Paris VIII.


Guy Marc Hinant

Guy Marc Hinant is a filmmaker whose
films include The Garden is full of Metal
(1996), Éléments d'un Merzbau oublié (1999), The Pleasure of Regrets
– a Portrait of Léo Kupper (2003),
Luc Ferrari face to his Tautology
(2006) and I never promised you a
rose garden – a portrait of David
Toop through his records collection
(2008), all developed together with
Dominique Lohlé. He is the curator
of An Anthology of Noise and Electronic Music CD Series, and manages
the Sub Rosa label. He writes fragmented fictions and notes on aesthetics (some of his texts have been published by Editions de l'Heure, Luna
Park, Leonardo Music Journal etc.).

Dmytri Kleiner
http://www.telekommunisten.net/

Dmytri Kleiner is a USSR-born, Canadian software developer and cultural
producer. In his work, he investigates the intersections of art, technology and political economy. He is a
founder of Telekommunisten, an anarchist technology collective, and lives
in Berlin with his wife Franziska and
his daughter Henriette.


Bettina Knaup

Cultural producer and curator with a
background in theatre and film studies, political science and gender studies. She is interested in the interface
of live arts, politics and knowledge
production, and has curated and/or
produced transnational projects such
as the public arts and science program ‘open space' of the International Women's University (Hannover,
1998-2000), and the transdisciplinary
performing arts laboratory, IN TRANSIT (Berlin, House of World Cultures
2002-2003). Between 2001 and 2004,
she co-curated and co-directed
the international festival of contemporary arts, CITY OF WOMEN (Ljubljana). After directing the new European platform for cultural exchange
LabforCulture during its launch phase
(Amsterdam, 2004-06), Knaup works
again as an independent curator with
a base in Berlin.


Christophe Lazaro

Christophe Lazaro is a scientific collaborator at the Law department
of the Facultés Notre-Dame de la
Paix, Namur, and researcher at the
Research Centre for Computer and
Law. His interest in legal matters is
complemented by socio-anthropological research on virtual communities
(free software community), the human/artefact relationship (prostheses,
implants, RFID chips), transhumanism and posthumanism.

Manu Luksch, founder of ambientTV.NET,
is a filmmaker who works outside the
frame. The ‘moving image', and in
particular the evolution of film in the
digital or networked age, has been
a core theme of her works. Characteristic is the blurring of boundaries between linear and hypertextual
narrative, directed work and multiple
authorship, and post-produced and
self-generative pieces. Expanding the
idea of the viewing environment is also
of importance; recent works have been shown on electronic billboards in public spaces.



He has recently been working on signal processing, looking at how artists, activists, development projects, and community groups are making alternate or competing communication infrastructures.

Nicolas Malevé

Since 1998 multimedia artist Nicolas Malevé has been an active member of the organization Constant. As such, he has taken part in organizing various activities connected with alternatives to copyrights, such as ‘Copy.cult ...



MéTAmorphoZ

Born in September 2001, represented here by Valérie Cordy and Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary association that creates installations, spectacles and transdisciplinary performances mixing artistic experiments and digital practices.

Michael Murtaugh
http://automatist.org/

Freelance developer of (tools for) online documentaries and other forms of digital archives. He works and lives in the Netherlands and online at automatist.org. He teaches at the MA Media Design program at the Piet Zwart Institute in Rotterdam.

Julien Ottavi
http://www.noiser.org/

Ottavi is the founder, artistic programmer, audio computer researcher
(networks and audio research) and
sound artist of the experimental music
organization Apo33. Founded in 1997,
Apo33 is a collective of artists, musicians, sound artists, philosophers and
computer scientists, who aim to promote new types of music and sound
practices that do not receive large media coverage. The purpose of Apo33
is to create the conditions for the development of all of the kinds of music
and sound practices that contribute
to the advancement of sound creation,
including electronic music, concrete
music, contemporary written music,
sound poetry, sound art and other
practices which as yet have no name.
Apo33 refers to all of these practices
as ‘Audio Art'.

Jussi Parikka teaches and writes on
the cultural theory and history of new
media. He has a PhD in Cultural
History from the University of Turku,
Finland, and is Senior Lecturer in
Media Studies at Anglia Ruskin University, Cambridge, UK. Parikka has
published a book on ‘cultural theory
in the age of digital machines' (Koneoppi, in Finnish) and his Digital
Contagions: A Media Archaeology of
Computer Viruses has been published
by Peter Lang, New York, Digital Formations-series (2007). Parikka is currently working on a book on ‘Insect
Media', which focuses on the media
theoretical and historical interconnections of biology and technology.


Sadie Plant

Sadie Plant is the author of The Most
Radical Gesture, Zeros and Ones,
and Writing on Drugs.
She has
taught in the Department of Cultural
Studies, University of Birmingham,
and the Department of Philosophy,
University of Warwick. For the last
ten years she has been working independently and living in Birmingham,
where she is involved with the Ikon
Gallery, Stan's Cafe Theatre Company, and the Birmingham Institute
of Art and Design.





Praticable proposes itself as a horizontal work structure, which brings into
relation research, creation, transmission and production structures. This
structure is the basis for the creation
of many performances that will be
signed by one or more participants in
the project. These performances are
grounded, in one way or another, in
the exploration of body practices to
approach representation. Concretely, the form of Praticable is periods of common research of/on physical practices, which will be the soil for the various creations. The creation periods will be part of the research periods.
Thus, each specific project implies the
involvement of all participants in the
practice, the research and the elaboration of the practice from which the
piece will ensue.

Sabine Prokhoris


Psychoanalyst and author of, among others, Witch's Kitchen: Freud, Faust, and the Transference (Cornell University Press, 1995), and co-author with Simon Hecquet of Fabriques de la
Danse (PUF, 2007). She is also active
in contemporary dance, as a critic and
a choreographer. In 2004 she made the
film Ceci n'est pas une danse chorale
together with Simon Hecquet.



Inès Rabadan

After obtaining a master's degree in
Philosophy and Letters, Inès Rabadan
studied film at the IAD. Her short
films (Vacance, Surveiller les Tortues,
Maintenant, Si j'avais dix doigts,
Le jour du soleil) were shown at
about sixty festivals. Surveiller les
tortues and Maintenant won awards
at the festivals of Clermont, Vendôme,
Chicago, Aix, Grenoble, Brest and
Namur. Occasionally she supervises
scenario workshops.
Her first feature film, Belhorizon, was selected
for the festivals of Montréal, Namur, Créteil, Buenos Aires, Santiago de Chile, Santo Domingo and
Mannheim-Heidelberg.
At the end
of 2006, it was released in Belgium,
France and Switzerland.

Antoinette Rouvroy

Antoinette Rouvroy is a researcher at
the Law department of the Facultés
Notre-Dame de la Paix in Namur,
and at the Research Centre for Computer and Law. Her domains of expertise range from rights and ethics
of biotechnologies, philosophy of Law
and ‘critical legal studies' to interdisciplinary questions related to privacy
and non-discrimination, science and
technology studies, law and language.

Femke Snelting

Femke Snelting is a member of the
art and design collective De Geuzen
and of the experimental design agency
OSP.


Michael Terry
http://www.ingimp.org/

Computer Scientist, University of Waterloo, Canada.

Carl Michael von Hausswolff

Von Hausswolff was born in 1956 in
Linköping, Sweden.
He lives and
works in Stockholm. Since the end
of the 70s, von Hausswolff has been
working as a composer using the tape
recorder as his main instrument and
as a conceptual visual artist working with performance art, light- and
sound installations and photography.
His audio compositions from 1979 to
1992, constructed almost exclusively
from basic material taken from earlier audiovisual installations and performance works, essentially consist of
complex macromal drones with a surface of aesthetic elegance and beauty.
In later works, von Hausswolff retained the aesthetic elegance and the
drone, and added a purely isolationistic sonic condition to composing.


Marc Wathieu
http://www.erg.be/sdr/blog/

Marc Wathieu teaches at Erg (digital arts) and HEAJ (visual communication). He is a digital artist (he
works with the Brussels based collective LAB[au]) and sound designer.
He is also an official representative of
the Robots Trade Union with the human institutions. During V/J10 he
presented the Robots Trade Union's
Chart and ambitions.


Peter Westenberg

Peter Westenberg is an artist and film and video maker, and member of Constant. His projects evolve from an interest in social cartography, urban anomalies and the relationships between locative identity and cultural ...

Brian Wyrick

Brian Wyrick is an artist, filmmaker and web developer working in Berlin and Chicago. He is also co-founder of Group 312 films, a Chicago-based film group.


Simon Yuill
http://www.spring-alpha.org/

Artist and programmer based in Glasgow, Scotland. He is a developer in
the spring_alpha and Social Versioning System (SVS) projects. He has
helped to set up and run a number
of hacklabs and free media labs in
Scotland including the Chateau Institute of Technology (ChIT) and Electron Club, as well as the Glasgow
branch of OpenLab. He has written
on aspects of Free Software and cultural praxis, and has contributed to
publications such as Software Studies
(MIT Press, 2008), the FLOSS Manuals and Digital Artists Handbook project (GOTO10 and Folly).


License Register
??

65, 174

a
Attribution-Noncommercial-No Derivative Work

181, 188

c
Copyright Presses Universitaires de France, 2007 188
Creative Commons Attribution-NonCommercial-ShareAlike 58, 71,
73, 81, 93, 98, 155, 215, 254, 275
Creative Commons Attribution - NonCommercial - ShareAlike license
104
d
Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use
encouraged. Attribution optional.
47
f
Free Art License 38, 70, 75, 131, 143, 217
Fully Restricted Copyright 95
g
GNUFDL 119

t
The text is under a GPL. The images are a little trickier as none of
them belong to me. The images from ap and David Griffiths can
be GPL as well, the Scratch Orchestra images (the graphic music
scores) were always published ‘without copyright' so I guess are
public domain. The photograph of the Scratch Orchestra performance can be GPL or public domain and should be credited to
Stefan Szczelkun. The other images, Sun Ra, Black Arts Group
and Lester Bowie would need to mention ‘contact the photographers'. Sorry the images are complicated but they largely come
from a time before copyleft was widespread.
233

The Making Of

This publication was produced with a set of digital tools that are rarely used outside the world of scientific publishing: TeX, LaTeX and ConTeXt. As early as the summer of 2008, when most contributions and translations to Tracks in electronic fields were reaching their final stage, we started discussing at OSP 1 how we could design and produce a book in a way that responded to the theme of the festival itself. OSP is a design collective working with Free Software, and our relation to the software we design with is particular on purpose. At the core of our design practice is the ongoing investigation of the intimate connection between form, content and technology. What follows is a report of an experiment that stretched out over a little more than a year.

For the production of previous books, OSP used Scribus, an Open Source Desktop Publishing tool which resembles its proprietary variants PageMaker, InDesign or QuarkXpress. In this type of software, each single page is virtually present as a ‘canvas' that has the same proportions as a physical page, and each of these ‘pages' can be individually altered by adding or manipulating the virtual objects on it. Templates or ‘master pages' allow the automatic placement of repeated elements such as page numbers and text blocks, but like in a paper-based design workflow each single page can be treated as an autonomous unit that can be moved, duplicated and when necessary removed. Scribus would certainly have been fit for this job, though the rapidly developing project is currently at a stage where the production of books of more than 40 pages can become tedious. Users are advised to split up such documents into multiple sections, which means that in order to keep continuity between pages, design decisions are best made beforehand. As a result, the design workflow is rendered less flexible than you would expect from state-of-the-art creative software.

1 Open Source Publishing http://ospublish.constantvzw.org

In previous projects, Scribus' rigid workflow challenged us to relocate our creative energy to another territory: that of computation. We experimented with its powerful Python scripting API to create 500 unique books. In another project, we transformed a text block over a sequence of pages with the help of a fairy-tale script. But for Tracks in electronic fields we dreamed of something else.
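The 500-books script itself is not reproduced here, but to give a sense of what ‘scripting Scribus' means in practice, here is a minimal sketch of the kind of generative variation the scripter API allows. The variation logic is an invented placeholder, not OSP's actual code, and the function signatures follow the Scribus 1.3.5-era Python API (such scripts only run from inside Scribus, via Script > Execute Script):

```python
# A minimal sketch (not OSP's actual script): produce several slightly
# different documents from one loop. The 'scribus' module only exists
# inside Scribus itself.
import random
import scribus

def make_variant(n):
    # one A4 page with 20 mm margins
    scribus.newDocument(scribus.PAPER_A4, (20, 20, 20, 20),
                        scribus.PORTRAIT, 1, scribus.UNIT_MILLIMETERS,
                        scribus.PAGE_1, 0, 1)
    frame = scribus.createText(20, 20, 170, 250)
    scribus.setText("Copy no. %d of 500" % n, frame)
    # vary one parameter per copy, so that every saved file differs slightly
    scribus.setFontSize(random.choice([10, 12, 14]), frame)
    scribus.saveDocAs("variant-%03d.sla" % n)
    scribus.closeDoc()

for n in range(1, 6):  # 500 in the real project; 5 keeps the demo short
    make_variant(n)
```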
Pierre Huyghebaert takes on the responsibility for the design of the book. He had been using various generations of layout software since the early 1990s, and gathered an extensive body of knowledge about their potential and limitations. More than once he brought up the desire to try out a legendary typesetting system called TeX, a sublime typographic engine that allegedly implemented the work of grandmaster Jan Tschichold 2 with mathematical precision.
TeX is a computer language designed by Donald Knuth in the 1970s, specifically for typesetting mathematical and other scientific material. Powerful algorithms automate widow and orphan control and can handle intelligent image placement. It is renowned for being extremely stable, for running on many different kinds of computers and for being virtually bug free. In the academic tradition of free knowledge exchange, Knuth decided to make TeX available ‘for no monetary fee' and modifications of or experimentations with the source code are encouraged. In typical self-referential style, the near perfection of its software design is expressed in a version number which is converging to π 3.
For OSP, TeX represents the potential of doing design differently. Through shifting our software habits, we try to change our way of working too. But Scribus, like the kinds of proprietary software it is modeled on, has a ‘productionalist' view of design built into it 4, which is undeniably seeping through in the way we use it. An exotic Free Software tool like TeX, rooted firmly in an academic context rather than in commercial design, might help us to re-imagine the familiar skill of putting type on a page. By making this kind of ‘domain shift' 5 we hope to discover another experience of making, and find a more constructive relation between software, content and form. So when Pierre suggests that this V/J10 publication is possibly the right occasion to try, we respond with enthusiasm.

By the end of 2008, Pierre starts carving out a path in the dense forest of manuals, advice and tips-and-tricks with the help of Ivan Monroy Lopez. Ivan is trained as a mathematician and more or less familiar with the exotic culture of TeX. They decide to use the popular macro package LaTeX 6 to interface with TeX and find out about the tongue-in-cheek concept of ‘badness' (depending on the tension put on hyphenated paragraphs, compiling a .tex document produces ‘badness' for each block on a scale from 0 to 10,000), and encounter a long history of wonderful but often incoherent layers of development that envelop the mysterious lasagna beauty of TeX's typographic algorithms.

2 In Die neue Typographie (1928), Jan Tschichold formulated the classic canon of modernist book design.
3 The value of π (3.141592653589793...) is the ratio of any circle's circumference to its diameter and its decimal representation never repeats. The current version number of TeX is 3.141592.
4 “A DTP program is the equivalent of a final assembly in an industrial process” Christoph Schäfer, Gregory Pittman et al. The Official Scribus Manual. FLES Books, 2009
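Badness is easy enough to see for yourself. The following demonstration (standard TeX behaviour, not taken from the book's sources) asks TeX to stretch a single interword space across ten centimetres; compiling it makes TeX report an ‘Underfull \hbox (badness 10000)' warning, the worst value on the scale:

```latex
% badness-demo.tex -- compile with `latex badness-demo.tex` and watch the
% log: stretching one interword space across 10 cm forces TeX to report
% "Underfull \hbox (badness 10000)".
\documentclass{article}
\begin{document}
\hbox to 10cm{badly stretched}
\end{document}
```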
Laying out a publication in LaTeX is an entirely different experience from working with a canvas-based software. First of all, design decisions are executed through the application of markup which vaguely recalls working with CSS or HTML. The actual design is only complete after ‘compiling' the document, and this is where the TeX magic happens. The software passes several times over a marked-up .tex file, incrementally deciding where to hyphenate a word, place a paragraph or image. In principle, the concept of a page only applies after compilation is complete. Design work therefore radically shifts from the act of absolute placement to co-managing a flow. All elements remain relatively placed until the last tour has passed, and while error messages, warnings and hyphenation decisions scroll by on the command line, the sensation of elasticity is almost tangible. And indeed, when the acceptable ‘stretch' of the program for the placement of a paragraph is exceeded, words literally break out of the grid (see the example on page 34).

5 See: Richard Sennett. The Craftsman. Allen Lane (Penguin Press), 2008
6 LaTeX is a high-level markup language that was first developed by Leslie Lamport in 1985. Lamport is a computer scientist also known for his work on distributed systems and multi-threading algorithms.
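To illustrate that flow-based way of working: this generic sketch (not the actual source of Tracks in electr(on)ic fields) shows how little of a LaTeX file concerns placement. Everything is structure; the pages only appear once the compiler has passed over the whole flow:

```latex
% A generic sketch, not the book's actual markup: design decisions live in
% structure, and pages only exist after compiling (e.g. `latex book.tex`),
% when TeX has negotiated every line break, page break and hyphenation.
\documentclass{book}
\begin{document}
\chapter{Biographies}
\section*{Pierre Berthet}
Designs and builds sound objects and installations \dots
% note the absence of any absolute x/y placement: all elements remain
% relatively placed until compilation is complete.
\end{document}
```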
When I join Pierre to continue the work in January 2009, the book is still far from finished. By now, we can produce those typical academic-style documents with ease, but we still have not managed to use our own fonts 7. Flipping back and forth in the many manuals and handbooks that exist, we enjoy discovering a new culture. Though we occasionally cringe at the paternalist humour that seems to have infected every corner of the TeX community, and which is clearly inspired by witticisms of the founding father, Donald Knuth himself, we experience how the lightweight, flexible document structure of TeX allows for a less hierarchical and non-linear workflow, making it easier to collaborate on a project. It is an exhilarating experience to produce a layout in dialogue with a tool, and the design process takes on an almost rhythmical quality, iterative and incremental. It also starts to dawn on us that this souplesse comes at a price.
“Users only need to learn a few easy-to-understand commands that specify the logical structure of a document” promises The Not So Short Introduction to LaTeX. “They almost never need to tinker with the actual layout of the document”. It explains why using LaTeX stops being easy to understand once you attempt to expand its strict model of ‘book', ‘article' or ‘thesis': the ‘users' that LaTeX addresses are not designers and editors like us. At this point, we doubt whether to give up or push through, and decide to set ourselves a limit of a week in which we should be able to tick off a minimal amount of items from a list of essential design elements. Custom page size and headers, working with URLs ... they each require a separate ‘package' that may or may not be compatible with another one. At the end of the week, just when we start to regain confidence in the usability of LaTeX for our purpose, our document breaks beyond repair when we try to use a custom paper size with custom headers at the same time.

7 “Installing fonts in LaTeX has the name of being a very hard task to accomplish. But it is nothing more than following instructions. However, the problem is that, first, the proper instructions have to be found and, second, the instructions then have to be read and understood”. http://www.ntg.nl/maps/29/13.pdf

In February, more than six months into the process, we briefly consider switching to OpenOffice instead (which we had never tried for such a large publication) or going back to Scribus (which means, for Pierre, learning a new tool). Then we remember ConTeXt, a relatively young ‘macro package' that uses the TeX engine as well. “While LaTeX insulates the writer from typographical details, ConTeXt takes a complementary approach by providing structured interfaces for handling typography, including extensive support for colors, backgrounds, hyperlinks, presentations, figure-text integration, and conditional compilation” 8. This is what we have been looking for.

ConTeXt was developed in the 1990s by a Dutch company specialised in ‘Advanced Document Engineering'. They needed to produce complex educational materials and workplace manuals and came up with their own interface to TeX. “The development was purely driven by demand and configurability, and this meant that we could optimize most workflows that involved text editing”. 9

However frustrating it is to re-learn yet another type of markup (even if both are based on the same TeX language, most of the LaTeX commands do not work in ConTeXt and vice versa), many of the things that we could only achieve by means of a ‘hack' in LaTeX are built in and readily available in ConTeXt. With the help of the very active ConTeXt mailing list we find a way to finally use our own fonts, and while plenty of questions, bugs and dark areas remain, it feels we are close to producing the kind of multilingual, multi-format, multi-layered publication we imagine Tracks in Electr(on)ic fields to be.

8 Interview with Hans Hagen http://www.tug.org/interviews/interview-files/hans-hagen.html
9 Interview with Hans Hagen http://www.tug.org/interviews/interview-files/hans-hagen.html
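As a small illustration of what such a ‘structured interface' looks like, here is a generic ConTeXt counterpart to the LaTeX sketch earlier (standard setup commands, not our actual environment files, which are available from the published sources):

```tex
% A generic ConTeXt sketch: paper size, fonts and colors are configured
% through built-in setup commands rather than add-on packages.
% Compile with `context book.tex` (MkIV).
\setuppapersize[A5][A5]        % custom paper size, no extra package needed
\setupbodyfont[palatino,9pt]   % font setup is part of the core interface
\setupcolors[state=start]
\starttext
\chapter{Biographies}
Designs and builds sound objects and installations \dots
\stoptext
```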
However, Pierre and I are working on different versions of Ubuntu, respectively on a Mac and on a PC, and we soon discover that our installations of ConTeXt produce different results. We can't find a solution in the nerve-wrackingly incomplete, fragmented though extensive documentation of ConTeXt, and by June 2009 we still have not managed to print the book. As time passes, we find it increasingly difficult to allocate concentrated time for learning, and it is a humbling experience that acquiring some sort of fluency seems to pull us in all directions. The stretched-out nature of the process also feeds our insecurity: Maybe we should have tried this package also? Have we read that manual correctly? Have we read the right manual? Did we really understand those instructions? If we were computer scientists ourselves, would we know what to do? Paradoxically, the more we invest into this process, mentally and physically, the harder it is to let go. Are we refusing to see the limits of this tool, or even scarier, our own limitations? Can we accept that the experience we'd hoped for is a lot more banal than the sublime results we secretly expected? A fellow Constant member suggests in desperation: “You can't just make a book, can you?”

In July, Pierre decides to pay for a consultation with the developers of ConTeXt themselves, to solve once and for all some of the issues we continue to struggle with. We drive up expectantly to the headquarters of Pragma in Hasselt (NL) and discuss our problems, seated in the recently redecorated rooms of a former bank building. Hans Hagen himself reinstalls MkIV (the latest in ConTeXt) on Pierre's machine, while his colleague Ton Otten tours me through samples of the colorful publications produced by Pragma. In the afternoon, Hans gathers up some code examples that could help us place thumbnail images and before we know it we are on our way South again. Our visit confirms the impression we had from the awkwardly written manuals and peculiar syntax: that ConTeXt is in essence a one-man mission. It is hard to imagine that a tool written to solve the particular problems of a certain document engineer will ever grow into the kind of tool that we desire as well.

In August, as I type up this report, the book is more or less ready to go to print. Although it looks ‘handsome' according to some, due to unexpected bugs and time constraints we have had to let go of some of the features we hoped to implement. Looking at it now, just before going to print, it has certainly not turned out to be the kind of eye-opening typographic experience we dreamt of and, sadly, we will never know whether that is due to our own limited understanding of TeX, LaTeX and ConTeXt, to the inherent limits of those tools themselves, or to the crude decision to finally force through a layout in two weeks. Probably a mix of all of the above, it is first of all a relief that the publication finally exists. Looking back at the process, I am reminded of the wise words of Joseph Weizenbaum, who observed that “Only rarely, if indeed ever, are a tool and an altogether original job it is to do, invented together” 10.

While this book nearly crumbled under the weight of the projections it had to carry, I often thought that outside academic publishing, the power of TeX is much like a Fata Morgana. Mesmerizing and always out of reach, TeX continues to represent the promise of an alternative technological landscape that keeps our dream of changing software habits alive.


Femke Snelting (OSP), August 2009

10 Joseph Weizenbaum. Computer power and human reason: from judgment to calculation. MIT, 1976

Colophon
Tracks in electr(on)ic fields is a publication of Constant, Association for Art
and Media, Brussels.
Translations: Steven Tallon, Anne Smolar, Yves Poliart, Emma Sidgwick
Copy editing: Emma Sidgwick, Femke Snelting, Wendy Van Wynsberghe
English editing and translations: Sophie Burm
Design: Pierre Huyghebaert, Femke Snelting (OSP)
Photos, unless otherwise noted: Constant (Peter Westenberg). figure 5-9: Marc Wathieu, figure 31-96: Constant (Christina Clar, video stills), figure 102-104: Leif Elggren, CM von Hausswolff, figure 107-116: Manu Luksch, figure A-Q: elpueblodechina, figure 151 + 152: Pierre Huyghebaert, figure 155: Cornelius Cardew, figure 160-162: Scratch Orchestra, figure 153 + 154: Michael E. Emrick (Courtesy of Ben Looker), figure 156-157 + 159: photographer unknown, figure 158: David Griffiths, pages 19, 25, 35, 77 and 139: public domain or unknown.
This book was produced in ConTeXt, based on the TeX typesetting engine, and other Free Software (OpenOffice, Gimp, Inkscape). For a written account of the production process see The Making Of on page 323.
Printing: Drukkerij Geers Offset, Gent

Copyright © 2009, Constant.
Copyleft: this book is free. You can distribute and modify it according to the
terms of the Free Art Licence. You can find an example of this licence on the
site ‘Copyleft Attitude' http://www.artlibre.org
This book can be downloaded from: http://www.constantvzw.org/verlag. Sources
are available from http://osp.constantvzw.org/sources/vj10


figure 148 De Vlaamse Minister van Cultuur,
Jeugd, Sport en Brussel

figure 149 De Vlaamse Gemeenschapscommissie


Constant
Conversations
2015


This book documents an ongoing dialogue between developers and designers involved in the wider ecosystem of Libre
Graphics. Its lengthy title, I think that conversations are the
best, biggest thing that Free Software has to offer its user, is taken
from an interview with Debian developer Asheesh Laroia, Just
ask and that will be that, included in this publication. His remark points at the difference that Free Software can make when
users are invited to consider, interrogate and discuss not only
the technical details of software, but its concepts and histories
as well.
Conversations documents discussions about tools and practices
for typography, layout and image processing that stretch out
over a period of more than eight years. The questions and answers were recorded in the margins of events such as the yearly
Libre Graphics Meeting, the Libre Graphics Research Unit,
a two-year collaboration between Medialab Prado in Madrid,
Worm in Rotterdam, Piksel in Bergen and Constant in Brussels,
or as part of documenting the work process of the Brussels
design team OSP. Participants in these intersecting events and
organisations constitute the various instances of ‘we’ and ‘I’ that
you will discover throughout this book.
The transcriptions are loosely organised around three themes:
tools, communities and design. At the same time, I invite you
to read Conversations as a chronology of growing up in Libre
Graphics, a portrait of a community gradually grasping the interdependencies between Free Software and design practice.
Femke Snelting
Brussels, December 2014

Introduction

A user should not be able to shoot himself in the foot

I think the ideas behind it are beautiful in my mind

We will get to know the machine and we will understand
ConTeXt and the ballistics of design
Meaningful transformations

Tools for a Read Write World
Etat des Lieux

Distributed Version Control

Even when you are done, you are not done
Having the tools is just the beginning
Data analysis as a discourse

Why you should own the beer company you design for
Just Ask and That Will Be That
Tying the story to data
Unicodes

If the design thinking is correct, the tools should be irrelevant
You need to copy to understand
What’s the thinking here

The construction of a book (Aether9)
Performing Libre Graphics

The Making of Conversations

Colophon
Keywords
Free Art License

Larisa Blazic: Introduction

Computational concepts, their technological language and the hybridisation of creative practice have been successfully explored in Media Arts for a
few decades now. Digital was a narrative, a tool and a concept, an aesthetic
and political playground of sorts. These experiments created a notion of
the digital artisan and creative technologist on the one hand and enabled
a new view of intellectual property on the other. They widened a pathway
to participation, collaboration and co-creation in creative software development, looking critically at software as cultural production as well as
technological advance.
This book documents conversations between artists, typographers, designers, developers and software engineers involved in Libre Graphics, an independent, self-organised, international community revolving around Free,
Libre, Open Source software (F/LOSS). Libre Graphics resembles the community of Media Arts of the late twentieth century, in so far as it is using
software as a departure point for creative exploration of design practice. In
some cases it adopts software development processes and applies them to
graphic design, using version control and platforms such as GitHub, but it
also banks on a paradigm shift that Free Software offers – an active engagement with software to bend it, fork it, reshape it – and in that it establishes
conversations with a developers community that haven’t taken place before.
This pathway was, however, at moments full of tension, created by diverging views on what the development process entails and what it might
mean. The conversations brought together in this book resulted from the
need to discuss those complex issues and to address the differences and similarities between design, design production, Free Culture and software development. As in theatre, where it is said that conflict drives the plot forward,
so it does here. It makes us think harder about the ethics of our practices
while we develop tools and technologies for the benefit of all.
The Libre Graphics Meeting (LGM) was brought to my attention in
2012 as an interesting example of dialogue between creative types and developers. The event was running since 2006 and was originally conceived as an
annual gathering for discussions about Free and Open Source software used
in graphics. At the time I had been teaching at the University of Westminster
for nearly ten years. The subject was computers, arts and design, and it took
a variety of forms; sometimes focused on graphic design, sometimes on
contemporary media practice, interaction design, software design and mysterious hypermedia. F/LOSS was part of my artistic practice for many years,
but its inclusion in UK Higher Education was a real challenge. My frustration with difficult computer departments grew exponentially year by year, and LGM looked like a place to visit and get much needed support. Super fast-forward to Madrid in April 2013: I landed. Little did I know that this journey would change everything. Firstly, the wonderfully diverse group of people present: artists, designers, software developers, typographers, interface designers, more software developers! It was very exciting listening to talks, overhearing conversations in breaks, observing group discussions and slowly engaging with the Libre Graphics community. Being there to witness how far the F/LOSS community has come was so heartwarming and uplifting that my enthusiasm was soaring.
The main reason for my attendance at the Madrid LGM was to join
the launch of a network of Free Culture aware educators in art, music and
design education. 1 Aymeric Mansoux and his colleagues from the Willem
De Kooning Academie and the Piet Zwart Institute in Rotterdam convened
the first ever meeting of the network with the aim to map out a landscape
of current educational efforts as well as to share experiences. I was aware of
Aymeric’s efforts through his activities with GOTO10 and the FLOSS+Art
book 2 that they published a couple of years before we finally met. Free
Culture was deeply embedded in his artistic and educational practice, and it
was really good to have someone like him set the course of discussion.
Lo’ and behold the conversation started – we sat in a big circle in the
middle of Medialab Prado. The introduction round began, and I thought:
there are so many people using F/LOSS in their teaching! Short courses,
long courses, BA courses, MA courses, summer schools, all sorts! There
were so many solutions presented for overcoming institutional barricades,
Adobe marriages and Apple hostages. Individual efforts and group efforts,
long term and short, a whole world of conventional curriculums as well as
a variety of educational experimentations were presented. Just sitting there,
listening about shared troubles and achievements was enough to give me a
new surge of energy to explore new strategies for engaging BA level students
with F/LOSS tools and communities.
Taking part in LGM 2013 was a useful experience that has informed
my art and educational practice since. It was clear from the gathering that
1 http://eightycolumn.net/
2 Aymeric Mansoux and Marloes de Valk. FLOSS+Art. OpenMute, 2008. http://things.bleu255.com/floss-art


F/LOSS is not a ghetto for idealists and techno fetishists – it was ready for
an average user, it was ready for a specialist user, it was ready for all and
what is most important the communication lines were open. Given that
Linux distributions extend the life of a computer by at least ten years, in
combination with the likes of Libre Graphics, Open Video and a plethora
of other F/LOSS software, the benefits are manifold, important for all and
not to be ignored by any form of creative practice worldwide.

Libre Graphics seems to offer a very exciting transformation of graphic design practice through the implementation of F/LOSS development and production processes. A hybridisation across these often separated fields of practice that takes into consideration openness and the freedom to create, copy, manipulate and distribute, while contributing to the development of visual communication itself. All this may give a new lease of life to an over-commercialised graphic design practice, banalised by mainstream culture.
This book brings together reflections on collaboration and co-creation
in graphic design, typography and desktop publishing, but also on gender
issues and inclusion in the Libre Graphics community. It offers a paradigm
shift, supported by historical research into graphic and type design practice,
that creates strong arguments to re-engage with the tools of production.
The conversations conducted give an overview of a variety of practices and
experiences which show the need for more conversations and which can help
educate designers and developers alike. It gives detailed descriptions of the
design processes, productions and potential trade-offs when engaged in software design and development while producing designed artefacts. It points
to the importance of transparent software development, breaking stereotypes and establishing a new image of the designer-developer combo, a fresh
perspective of mutual respect between disciplines and a desire to engage in
exchange of knowledge that is beneficial beyond what any proprietary software could ever be.
Larisa Blazic is a media artist living and working in London. Her interests range from

creative collaborations to intersections between video art and architecture. As senior lecturer
at the Faculty of Media, Arts and Design of the University of Westminster, she is currently
developing a master’s program on F/LOSS art & design.


While in the background participants of the Libre Graphics
Meeting 2007 start saying goodbye to each other, Andreas
Vox makes time to sit down with us to talk about Scribus,
the Open Source application for professional page layout.
The software is significant not only to its users who do design with it, but also because Scribus helps us think about
links between software, Free Culture and design. Andreas
is a mathematician with an interest in system dynamics,
who lives and works in Lübeck, Germany. Together with
Franz Schmid, Petr Vanek (subik), Riku Leino (Tsoots),
Oleksandr Moskalenko (malex), Craig Bradney (MrB), Jean
Ghali and Peter Linnel (mrdocs) he forms the core Scribus
developer team. He has been working on Scribus since
2003 and is currently responsible for redesigning the internal workings of its text layout system.
This weekend Peter Linnel presented amongst many other new Scribus features 1 ,
‘The Color Wheel’, which at the click of a button visualises documents the way
they would be perceived by a colour-blind person. Can you explain how such a
feature entered into Scribus? Did you for example speak to accessibility experts?

I don’t think we did. The code was implemented by subik 2 , a developer
from the Czech Republic. As far as I know, he saw a feature somewhere else
or he found an article about how to do this kind of stuff, and I don’t know
where he did it, but I would have to ask him. It was a logic extension of the
colour wheel functionality, because if you pick different colours, they look
different to all people. What looks like red and green to one person, might
look like grey and yellow to other persons. Later on we just extended the
code to apply to the whole canvas.
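[The interview leaves the exact source of this code unidentified, but a common way to simulate dichromatic vision is to map RGB into LMS cone space, collapse the response of the missing cone type, and map back. A rough sketch in Python, using coefficients widely cited after Viénot, Brettel and Mollon (1999); this is not Scribus's actual implementation:]

```python
# A rough sketch of protanopia simulation, NOT the actual Scribus code:
# map linear RGB to LMS cone space, rebuild the missing L response from
# M and S, then map back. A production implementation would also handle
# gamma correction.
import numpy as np

RGB_TO_LMS = np.array([[17.8824,    43.5161,   4.11935],
                       [ 3.45565,   27.1554,   3.86714],
                       [ 0.0299566,  0.184309, 1.46709]])
LMS_TO_RGB = np.linalg.inv(RGB_TO_LMS)
PROTANOPIA = np.array([[0.0, 2.02344, -2.52581],   # L' rebuilt from M and S
                       [0.0, 1.0,      0.0],
                       [0.0, 0.0,      1.0]])

def simulate_protanopia(rgb):
    """rgb: three floats in [0, 1]; returns the simulated colour."""
    lms = RGB_TO_LMS @ np.asarray(rgb, dtype=float)
    return np.clip(LMS_TO_RGB @ (PROTANOPIA @ lms), 0.0, 1.0)

print(simulate_protanopia([1.0, 0.0, 0.0]))  # pure red loses most of its red
```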
1 http://wiki.scribus.net/index.php/Version_1.3.4%2B-New_Features
2 Petr Vanek


It is quite special to offer such a precise preview of different perspectives in your
software. Do you think it is particular to Scribus to pay attention to these kinds
of things?

Yeah, sure. Well, the interesting thing is ... in Scribus we are not depending
on money and time like other proprietary packages. We can ask ourselves:
Is this useful? Would I have fun implementing it? Am I interested in seeing
how it works? So if there is something we would like to see, we implement
it and look at it. And because we have a good contact with our user base,
we can also pick up good ideas from them.
There clearly is a strong connection between Scribus and the world of prepress
and print. So, for us as users, it is an almost hallucinating experience that while
on one side the software is very well developed when it comes to .pdf export for
example, I would say even more developed than in other applications, but then
still it is not possible to undo a text edit. Could you maybe explain how such a
discrepancy can happen, to make us understand better?

One reason is, that there are more developers working on the project,
and even if there was only one developer, he or she would have her own
interests. Remember what George Williams said about FontForge ... 3 he is
not that interested in nice Graphical User Interfaces, he just makes his own
functionality ... that is what interests him. So unless someone else comes
up who compensates for this, he will stick to what he likes. I think that
is the case with all Open Source applications. Only if you have someone
interested and able to do just this certain thing, it will happen. And if it
is something boring or something else ... it will probably not happen. One
way to balance this, is to keep in touch with real users, and to listen to
the problems they have. At least for the Scribus team, if we see people
complaining a lot about a certain feature missing ... we will at some point
say: come on, let’s do something about it. We would implement a solution and
when we get thanks from them and make them happy, that is always nice.

Can you tell us a bit more about the reasons for putting all this work into
developing Scribus, because a layout application is quite a complex monster with
all the elements that need to work together ... Why is it important you find, to
develop Scribus?
3 I think the ideas behind it are beautiful in my mind

I used to joke about the special mental state you need to become a Scribus
developer ... and one part of it is probably megalomania! It is a kind of mountain climbing. We just want to do it, to prove it can be done. That must
have been also true for Franz Schmid, our founder, because at that time,
when he started, it was very unlikely that he would succeed. And of course
once you have some feedback, you start to think: hey, I can do it ... it works.
People can use it, people can print with it, do things ... so why not make it even
better? Now we are following InDesign and QuarkXpress, and we are playing
the top league of page layout applications ... we’re kind of in a competition
with them. It is like climbing a mountain and than seeing the next, higher
mountain from the top.

In what way is it important to you that Scribus is Free Software?

Well ... it would not work with closed software. Open software allows you to
get other people that also are interested in working on the project involved,
so you can work together. With closed software you usually have to pay
people; I would only work because someone else wants me to do it and
we would not be as motivated. It is totally different. If it was closed, it
would not be fun. In Germany they studied what motivates Open Source
developers, and they usually list: ‘fun’; they want to do something more
challenging than at work, and some social stuff is mentioned as well. Of
course it is not money.
One of the reasons the Scribus project seems so important to us, is that it might
draw in other kinds of users, and open up the world of professional publishing to
people who can otherwise not afford proprietary packages. Do you think Scribus
will change the way publishing works? Does that motivate you, when you work
on it?

I think the success of Open Source projects will also change the way people
use software. But I do not think it is possible to foresee or plan, in what
way this will change. We see right now that Scribus is adopted by all kinds
of idealists, who think that is interesting, lets try how far we can go, and
do it like that. There are other users that really just do not have the money
to pay for a professional page layout application such as very small newspapers associations, sports groups, church groups. They use Scribus because
otherwise they would have used a pirated copy of some other software, or

another application which is not up to that task, such as a normal word processor. Or otherwise they would have used a deficient application like MS
Publisher to do it. I think what Scribus will change, is that more people
will be exposed to page layout, and that is a good thing, I think.

In another interview with the Scribus team 4 , Craig Bradney speaks about the
fact that the software is often compared with its proprietary competition. He
brings up the ‘Scribus way of doing things’. What do you think is ‘The Scribus
Way’?

I don’t think Craig meant it that way. Our goal is to produce good output,
and make that easy for users. If we are in doubt, we think for example:
InDesign does this in quite an OK way, so we try to do it in a similar way;
we do not have any problems with that. On the other hand ... I told you a
bit about climbing mountains ... We cannot go from the one top to the next
one just in one step. We have to move slowly, and have to find our ways and
move through valleys and that sometimes also limits us. I can say: I want it
this way but then it is not possible now, it might be on the roadmap, but we
might have to do other things first.

When we use Scribus, we actually thought we were experiencing ‘The Scribus
Way’ through how it differences from other layout packages. First of all, in
Scribus there is a lot more attention to everything that happens after the layout
is done, i.e. export, error checking etc. and second, working with the text editor
is clearly the preferred way of doing layout. For us it links the software to a more
classic way of doing design: a strictly phased process where a designer starts with
writing typographic instructions which are carried out by a typesetter, after which
the designer pastes everything into the mock-up. In short: it seems easier to do a
magazine in Scribus, than a poster. Do you recognize that image?
That is an interesting thought, I have never seen it that way before. My
background is that I did do a newspaper, magazine for a student group, and
we were using PageMaker, and of course that influenced me. In a small
group that just wants to bring out a magazine, you distribute the task of
writing some articles, and usually you have only one or two persons who are
capable of using a page layout application. They pull in the stories and make
some corrections, and then do the layout. Of course that is a work flow I am
4 http://www.kde.me.uk/index.php?page=fosdem-interview-scribus


familiar with, and I don’t think we really have poster designers or graphic
artists in the team. On the other hand ... we do ask our users what they
think should be possible with Scribus and if a functionality is not there, we
ask them to put in a bug report so we do not forget it and some time later
we will pick it up and implement it. Especially the possibility to edit from
the canvas, this will improve in the upcoming versions.
Some things we just copied from other applications. I think Franz 5 had no
previous experience with PageMaker, so when I came to Scribus, and saw
how it handled text chains, I was totally dismayed and made some changes
right away because I really wanted it to work the way it works in PageMaker,
that is really nice. So, previous experience and copying from other applications was one part of the development. Another thing is just technical
problems. Scribus is at the moment internally not that well designed, so we
first have to rewrite a lot of code to be able to reach some elements. The
coding structure for drawing and layout was really cumbersome inside and
it was difficult to improve. We worked with 2,500 lines of code, and there
were no comments in between. So we broke it down into several elements,
put some comments in and also asked Franz: why did you do this or that, so
we could put some structure back into the code to understand how it works.
There is still a lot of work to be done, and we hope we can reach a state
where we can implement new stuff more easily.
It is interesting how the 2,500 lines of code are really tangible when you use
Scribus old-style, even without actually seeing them. When Peter Linnel was
explaining how to make the application comply with the conservative standards of
the printing business, he used this term ‘self-defensive code’ ...
At Scribus we have a value that a file should never break in a print shop.
Any bug report we receive in this area is treated with first priority.

We can speak from experience that this is really true! But this robustness shifts
out of sight when you use the inbuilt script function; then it is as if you come
into the software through the back door. From self-defence to the heart of the
application?

It is not really self-defence ... programmers and software developers sometimes use the expression: ‘a user should not shoot himself in the foot’.
5 Franz Schmid

Scribus will not protect you from ugly layout, if that would be possible at
all! Although I do sometimes take deliberate decisions to try and do it ...
for example that for as long as I am around, I will not make an option to
do ‘automatic letter spacing’, because I think it is just ugly. If you do it
manually, that is your responsibility; I just do not feel like making anything
like that work automatically. What we have no problem with is preventing
you from making invalid output. If Scribus thinks a certain font is not OK,
and it might break on one or two types of printers ... this is reason enough
for us to make sure this font is not used. The font is not even used partially,
it is gone. That is the kind of self-defence Peter Linnel was talking about.
It is also how we build .pdf files and PostScript. Some ways of building
PostScript take less storage, some would be easier for humans to read,
but we always take the approach that is the least problematic in a
print shop. This meant for example that you could not search in a .pdf. 6
I think you can do that now, but there are still limitations; it is on the
roadmap to improve over time, to even add an option to output a web-oriented
.pdf and a print-oriented .pdf ... but an important value in Scribus
is to get the output right. To prevent people from really shooting themselves in
the foot.
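Since the inbuilt script function comes up here: Scribus ships with a Python ‘Scripter’ that drives the canvas programmatically, which is the ‘backdoor’ referred to above. The following is a minimal sketch assuming the documented Scripter API; exact signatures and constants vary between Scribus versions.

```python
# Run from inside Scribus: Script > Execute Script ...
# A minimal sketch of entering the application through the scripting
# 'backdoor'; check the Scripter documentation for your Scribus version.
import scribus

if not scribus.haveDoc():
    # A4 page, 20 mm margins, portrait, one page
    scribus.newDocument(scribus.PAPER_A4, (20, 20, 20, 20),
                        scribus.PORTRAIT, 1, scribus.UNIT_MILLIMETERS,
                        scribus.PAGE_1, 0, 1)

# Create a text frame at x=20, y=20, 170 wide, 50 high (in mm)
frame = scribus.createText(20, 20, 170, 50)
scribus.setText("Hello from the backdoor", frame)
scribus.setFontSize(14, frame)
```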

Our last question is about the relation between the content that is laid out
in Scribus, and the fact that it is an Open Source project. Just as an example,
Microsoft Word will come out with an option to make it easy to save a document
with a Creative Commons License 7 . Would this, or not, be an interesting option
to add to Scribus? Would you be interested in making that connection, between
software and content?
It could well be we would copy that, if it has not already been patented by
Microsoft! To me it sounds a bit like a marketing trick ... because it is such
an easy function to do. But if someone from Creative Commons would ask
for this function, I think someone would implement it for Scribus in a short
time, and I think we would actually like it. Maybe we would generalize it a
little, so that for example you could add other licenses too. We already
have support for some metadata, and in the future we might put some more
functionality in to support license management, for example also for fonts.
6 because the fonts get outlined and/or re-encoded
7 http://creativecommons.org/press-releases/entry/5947

About the relation between content and Open Source software in general
... there are some groups who are using Scribus I politically do not really
identify with. Or more or less not at all. If I meet those people on the IRC
chat, I try to be very neutral, but I of course have my own thoughts in the
back of my head.

Do you think using a tool like Scribus produces a certain kind of use?

No. Preferences for work tools and political preferences are really orthogonal,
and we have both. For example, right-wing people can enjoy using Scribus,
and socialist groups as well. It is probably the
best for Scribus to keep that stuff out of it. I am not even sure about the
political conviction of the other developers. Usually we get along very well,
but we don’t talk about those kinds of things very much. In that sense I
don’t think that using Scribus will influence what is happening with it.
As a tool, because it makes creating good page layouts much easier, it will
probably change the landscape because a lot of people get exposed to page
layout and they learn and teach other people; and I think that is growing,
and I hope it will be growing faster than if it is all left to big players like
InDesign and Quark ... I think this will improve and it will maybe also
change the demands that users will make for our application. If you do page
layout, you get into a new frame of mind ... you look in a different way at
publications. It is less content-oriented and more layout-oriented. You will
pick something up and it will spread. People by now have understood that
it is not such a good idea to use twelve different fonts in one text ... and I
think that knowledge about better page layout will also spread.


When we came to the Libre Graphics Meeting
for the first time in 2007, we recorded this rare
conversation with George Williams, developer of
FontForge, the editing tool for fonts. We spoke
about Shakespeare, Unicode, the pleasure of making beautiful things, and pottery.
We‘re doing these interviews, as we’re working as designers on Open Source
OK.

With Open Source tools, as typographers, but often when we speak to
developers they say well, tell me what you want, or they see our interest in
what they are doing as a kind of feature request or bug report.

(laughs) Yes.

Of course it’s clear that that’s the way it often works, but for us it’s also
interesting to think about these tools as really tools, as ways of shaping
work, to try and understand how they are made or who is making them.
It can help us make other things. So this is actually what we want to talk
about. To try and understand a bit about how you’ve been working on
FontForge. Because that’s the project you’re working on.

OK.

And how that connects to other ideas of tools or tools’ shape that you
make. These kind of things. So maybe first it’s good to talk about what
it is that you make.

OK. Well ... FontForge is a font editor.
I started playing with fonts when I bought my first Macintosh, back in the
early eighties (actually it was the mid-eighties) and my father studied textual bibliography and looked at the ways the printing technology of the
Renaissance affected the publication of Shakespeare’s works. And what that
meant about the errors in the compositions we see in the copies we have
left from the Renaissance. So my father was very interested in Renaissance
printing (and has written books on this subject) and somehow that meant
that I was interested in fonts. I’m not quite sure how that connection happened, but it did. So I was interested in fonts. And there was this program
that came out in the eighties called Fontographer which allowed you to create PostScript 1 and later TrueType 2 fonts. And I loved it. And I made lots
of calligraphic fonts with it.

You were ... like 20?

I was 20-30. Let's see, I was born in 1959, so in the eighties I was in my
twenties mostly. And then Fontographer was bought up by Macromedia 3
who had no interest in it. They wanted FreeHand 4 which was done by
the same company. So they dropped Fon ... well they continued to sell
Fontographer but they didn’t update it. And then OpenType 5 came out and
Unicode 6 came out and Fontographer didn’t do this right and it didn’t do
that right ... And I started making my own fonts, and I used Fontographer
to provide the basis, and I started writing scripts that would add accents to
latin letters and so on. And figured out the Type1 7 format so that I could
decompose it — decompose the Fontographer output so that I could add
1 PostScript fonts are outline font specifications developed by Adobe Systems for professional digital typesetting, which use the PostScript file format to encode font information. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
2 TrueType is an outline font standard developed by Apple and Microsoft in the late 1980s as a competitor to Adobe's Type 1 fonts used in PostScript. Wikipedia. TrueType — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
3 Macromedia was an American graphics, multimedia and web development software company (1992–2005). Its rival, Adobe Systems, acquired Macromedia on December 3, 2005. Wikipedia. Macromedia — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
4 Adobe FreeHand (formerly Macromedia FreeHand) is a computer application for creating two-dimensional vector graphics. Adobe discontinued development and updates to the program. Wikipedia. Adobe FreeHand — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
5 OpenType is a format for scalable computer fonts. It was built on its predecessor TrueType, retaining TrueType's basic structure and adding many intricate data structures for prescribing typographic behavior. Wikipedia. OpenType — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
6 Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. Wikipedia. Unicode — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
7 Type 1 is a font format for single-byte digital fonts for use with Adobe Type Manager software and with PostScript printers. It can support font hinting. It was originally a proprietary specification, but Adobe released the specification to third-party font manufacturers provided that all Type 1 fonts adhere to it. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

my own things to it. And then Fontographer didn’t do Type0 8 PostScript
fonts, so I figured that out.
And about this time, the little company I was working for, a tiny little
startup — we wrote a web HTML editor — where you could sit at your
desk and edit pages on the web — it was before FrontPage 9 , but similar to
FrontPage. And we were bought by AOL and then we were destroyed by
AOL, but we had stock options from AOL and they went through the roof.
So ... in the late nineties I quit. And I didn’t have to work.
And I went off to Madagascar for a while to see if I wanted to be a primatologist. And ... I didn't. There were too many leeches in the rainforest.

(laughs)

So I came back, and I wrote a font editor instead.
And I put it up on the web in late 99, and within a month someone
gave me a bug report and was using it.
(laughs) So it took a month

Well, you know, there was no advertisement, it was just there, and someone
found it and that was neat!
(laughs)

And that was called PfaEdit (because when it began it only did PostScript)
and I ... it just grew. And then — I don’t know — three, four, five years ago
someone pointed out that PfaEdit wasn’t really appropriate any more, so I
asked various users what would be a good name and a French guy said How
’bout FontForge? So. It became FontForge then. — That’s a much better
name than PfaEdit.

(laughs)

Used it ever since.

But your background ... you talked about your father studying ...
8 Type 0 is a ‘composite’ font format. A composite font is composed of a high-level font that references multiple descendent fonts. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
9 Microsoft FrontPage is a WYSIWYG HTML editor and web site administration tool from Microsoft, discontinued in December 2006. Wikipedia. Microsoft FrontPage — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

I grew up in a household where Shakespeare was quoted at me every day,
and he was an English teacher, still is an English teacher, well, obviously
retired but he still occasionally teaches, and has been working for about 30
years on one of those versions of Shakespeare where you have two lines of
Shakespeare text at the top and the rest of the page is footnotes. And I went
completely differently and became a mathematician and computer scientist
and worked in those areas for almost twenty years and then went off and
tried to do my own things.

So how did you become a mathematician?
(pause) I just liked it.
(laughs) just liked it

I was good at it. I got pushed ahead in high school. It just never occurred
to me that I’d do anything else — until I met a computer. And then I still
did maths because I didn’t think computers were — appropriate — or — I
was a snob. How about that.

(laughs)

But I spent all my time working on computers as I went through university.
And then got my first job at JPL 10 and shortly thereafter the shuttle 11
blew up and we had some — some of our experiments — my little group
— flew on the shuttle and some of them flew on an airplane which went
over the US and took special radar pictures of the US. We also took special radar
pictures of the world from the shuttle (SIR-A, SIR-B, SIR-C). And then
our airplane burned up. And JPL was not a very happy place to work after
that. So then I went to a little company with some college friends of mine,
that they'd started, creating compilers and debuggers — do you know what
those are?
Mm-hmm.

And I worked a long time on that, and then the internet came out and I founded
another little company with some friends — and worked on HTML.
10 Jet Propulsion Laboratory
11 The Space Shuttle Challenger disaster occurred on January 28, 1986, when the NASA Space Shuttle orbiter Challenger broke apart 73 seconds into its flight, leading to the deaths of its seven crew members. Wikipedia. Space Shuttle Challenger disaster — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

So, before we move on, I was curious ... I wanted you to talk
about a Shakespearian influence on your interest in fonts. But on the
other hand you talk about working in a company where you did HTML
editors at the time you actually started, I think. So do you think that
is somehow present ... the web is somehow present in your — in how
FontForge works? Or how fonts work or how you think about fonts?

I don’t think the web had much to do with my — well, that’s not true.
OK, when I was working on the HTML editor, at the time, mid-90s, there
weren’t any Unicode fonts, and so part of the reason I was writing all these
scripts to add accents and get Type0 support in PostScript (which is what
you need for a Unicode font) was because I needed a Unicode font for our
HTML product.
To that extent — yes-s-s-s.
It had an effect. Aside from that, not really.
The web has certainly allowed me to distribute it. Without the web I doubt
anyone would know — I wouldn’t have any idea how to ‘market’ it. If that’s
the right word for something that doesn’t get paid for. And certainly the
web has provided a convenient infrastructure to do the documentation in.
But — as for font design itself — that (the web) has certainly not affected
me.
Maybe with this Creative Commons talk that Jon Phillips was giving, there
may be, at some point, a button that you can press to upload your fonts to
the Open Font Library 12 — but I haven’t gotten there yet, so I don’t want
to promise that.
(laughs) But no, indeed there was – hearing you speak about ccHost 13 –
that’s the ...

Mm-hmm.

... Software we are talking about?

That’s what the Open Font Library uses, yes.
12
13

Open Font Library is a project devoted to the hosting and encouraged creation of fonts
released under Free Licenses.
Wikipedia. Open Font Library — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

ccHost is a web-based media hosting engine upon which Creative Commons’ ccMixter remix
web community is built. Wikipedia. CcHost — Wikipedia, The Free Encyclopedia, 2012. [Online; accessed 18.12.2014]

27

Yeah. And a connection to FontForge could change the way, not only
how you distribute fonts, but also how you design fonts.

It — it might. I don’t know ... I don’t have a view of the future.
I guess to some extent, obviously font design has been affected by requiring
it (the font) to be displayed on a small screen with a low resolution display.
And there are all kinds of hacks in modern font formats for dealing with
low resolution stuff. PostScript calls them hints and TrueType calls them
instructions. They are different approaches to the same thing. But that,
that certainly has affected font design in the last — well since PostScript
came out.
The web itself? I don’t think that has yet been a significant influence on
font design, but then — I’m no longer a designer. I discovered I was much
better at designing font editors than at designing fonts.
So I’ve given up on that aspect of things.
Mm-K, because I'm curious about your making a division between being a
designer and being a font-editor-maker, because for me, with that same
definition of maker, these two things might be very related.

Well they are. And I only got in to doing it because the tools that were
available to me were not adequate. But I have found since — that I’m
not adequate at doing the design, there are many people who are better at
designing — designing fonts, than I am. And I like to design fonts, but I
have made some very ugly ones at times.
And so I think I will — I’ll do that occasionally, but that’s not where I’m
going to make a mark.
Mostly now —
I just don’t have the —
The font editor itself takes up so much of my time that I don't have the energy,
the enthusiasm, or anything like that to devote to another major creative
project. And designing a font is a major creative project.
Well, can we talk about the major creative project of designing a font
editor? I mean, because I’m curious how — how that is a creative project
for you — how you look at that.

I look at it as a puzzle. And someone comes up to me with a problem, and I
try and figure out how to solve it. And sometimes I don’t want to figure out
how to solve it. But I feel I should anyway. And sometimes I don’t want to
figure out how to solve it and I don’t.
That’s one of the glories of being one’s own boss, you don’t have to do
everything that you are asked.
But — to me — it’s just a problem. And it’s a fascinating problem. But
why is it fascinating? — That’s just me. No one else, probably, finds
it fascinating. Or — the guys who design FontLab probably also find it
fascinating, there are two or three other font design programs in the world.
And they would also find it fascinating.

Can you give an example of something you would find fascinating?

Well. Dave Crossland who was sitting behind me at the end was talking
to me today — he sat down — we started talking after lunch but on the
way up the stairs — at first he was complaining that FontForge isn’t written
with a standard widget set. So it looks different from everything else. And
yes, it does. And I don’t care. Because this isn’t something which interests
me.
On the other hand he was saying that what he also wanted was a paragraph
level display of the font. So that as he made changes in the font he could
see a ripple effect in the paragraph.
Now I have a thing which does a word-level display, but it doesn't do multi-lines. Or it does multi-lines if you are doing Japanese (vertical writing mode)
but it doesn’t do multi-columns then. So it’s either one vertical row or one
horizontal row of glyphs.
And I do also have a paragraph level display, but it is static. You bring
it up and it takes the current snapshot of the font and it generates a real
TrueType font and passes it off to the X Window 14 rasterizer — passes it off
to the standard Linux toolchain (FreeType) as that static font and asks that
toolchain to display text.
So what he’s saying is OK, do that, but update the font that you pass off every
now and then. And Yeah, that’d be interesting to do. That’s an interesting project
to work on. Much more interesting than changing my widget set which is
just a lot of work and tedious. Because there is nothing to think about.
It's just OK, I've got to use this widget instead of my widget. My widget does
exactly what I want — because I designed it that way — how do I make this
thing, which I didn't design, which I don't know anything about, do exactly
what I want?
And — that's dull. For me.

14 The X Window System is a windowing system for bitmap displays, common on UNIX-like computer operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. Wikipedia. X Window System — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]

Yeah, well.

Dave, on the other hand, is very hopeful that he’ll find some poor fool
who’ll take that on as a wonderful opportunity. And if he does, that would
be great, because not having a standard widget set is one of the biggest
complaints people have. Because FontForge doesn’t look like anything else.
And people say Well the grey background is very scary. 15
I thought it was normal to have a grey background, but uh ... that’s why we
now have a white background. A white background may be equally scary,
but no one has complained about it yet.

Try red.

I tried light blue and cream. One of them I was told gave people migraines
— I don’t remember specifically what the comment was about the light
blue, but

(someone from inkscape): Make it configurable.

Oh, it is configurable, but no one configures it.

(someone from inkscape): Yeah, I know.

So ...

So, you talked about spending a lot of time on this project, how does that
work, you get up in the morning and start working on FontForge? Or ...
Well, I do many things. Some mornings, yes, I get up in the morning and I
start working on FontForge and I cook breakfast in the background and eat
breakfast and work on FontForge. Some mornings I get up at four in the
morning and go out running for a couple of hours and come back home and
sort of collapse and eat a little bit and go off to yoga class and do a pilates
class and do another yoga class and then go to my pottery class, and go to
the farmers’ market and come home and I haven’t worked on FontForge at
all. So it varies according to the day. But yes I ...
15 It used to have a grey background; now it has a white background.

There was a period where I was spending 40, 50 hours a week working
on FontForge, I don’t spend that much time on it now, it’s more like 20
hours, though the last month I got all excited about the release that I put
out last Tuesday — today is Sunday. And so I was working really hard —
probably got up to — oh — 30 hours some of that time. I was really excited
about the change. All kinds of things were different — I put in Python
scripting, which people had been asking for — well, I’m glad I’ve done it,
but it was actually kind of boring, that bit — the stuff that came before was
— fascinating.

Like?

I — are you familiar with the OpenType spec? No. OK. The way you ...
the way you specify ligatures and kerning in OpenType can be looked at at
several different levels. And the way OpenType wants you to look at it, I
felt, was unnecessarily complicated. So I didn’t look at it at that level. And
then after about 5 years of looking at it that way I discovered that the reason
I thought it was unnecessarily complicated was because I was only used to
Latin or Cyrillic or Greek text, and for Latin, Cyrillic or Greek, it probably
is unnecessarily complicated. But for Indic scripts it is not unnecessarily
complicated, and you need all those things. So I ripped out all of the code
for specifying strange glyph conversions. You know in Arabic a character
looks different at the beginning of a word and so on? So that’s also handled
in this area. And I ripped all that stuff out and redid it in the way that
OpenType wanted it to be done and not the somewhat simplified but not
sufficiently powerful method that I’d been using up until then.
And that I found, quite fascinating.
And once I’d done that, it opened up all kinds of little things that I could
change that made the font editor itself bettitor. Better. Bettitor?

(laughs) That’s almost Dutch.

And so after I’d done that the display I talked about which could show a
word — I realized that I should redo that to take advantage of what I had
done. And so I redid that, and it’s now, it’s now much more usable. It now
shows — at least I hope it shows — more of what people want to see when
they are working with these transformations that apply to the font, there’s
now a list of the various transformations, that can be enabled at any time
and then it goes through and does them — whereas before it just sort of —
well it did kerning, and if you asked it to it would substitute this glyph so
you could see what it would look like — but it was all sort of — half-baked.
It wasn’t very elegant.
And — it’s much better now, and I’m quite proud of that.
It may crash — but it’s much better.
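Two things in this answer can be made a little more concrete: the Python scripting he mentions adding, and ‘the way OpenType wanted it to be done’, with lookups registered under features and tagged per script. A hedged sketch using FontForge's Python module follows; the file names are hypothetical and the exact tuple syntax may differ between FontForge releases.

```python
import fontforge

font = fontforge.open("MyFont.sfd")  # hypothetical source file

# The OpenType way: a GSUB ligature lookup registered under the 'liga'
# feature for the Latin script, then a subtable, then per-glyph rules.
font.addLookup("ligatures", "gsub_ligature", (),
               (("liga", (("latn", ("dflt",)),)),))
font.addLookupSubtable("ligatures", "ligatures-1")

# Create an unencoded 'f_i' glyph and declare it the ligature of f + i.
fi = font.createChar(-1, "f_i")
fi.addPosSub("ligatures-1", ("f", "i"))

font.generate("MyFont.otf")  # write out an OpenType font
```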

So you bring up half-baked, and when we met we talked about bread
baking.

Oh, yes.

And the pleasure of handling a material when you know it well. Maybe
make reliable bread — meaning that it comes out always the same way,
but by your connection to the material you somehow — well — it’s a
pleasure to do that. So, since you’ve said that, and we then went on
talking about pottery — how clay might be of the same — give the same
kind of pleasure. I’ve been trying to think — how does FontForge have
that? Does it have that and where would you find it or how is the ...
I like to make things. I like to make things that — in some strange
definition are beautiful. I’m not sure how that applies to making bread,
but my pots — I think I make beautiful pots. And I really like the glazing I
put onto them.
It’s harder to say that a font editor is beautiful. But I think the ideas behind
it are beautiful in my mind — and in some sense I find the user interface
beautiful. I’m not sure that anyone else in the world does, because it’s what
I want, but I think it’s beautiful.
And there’s a satisfaction in making something — in making something
that’s beautiful. And there’s a satisfaction too (as far as the bread goes) in
making something I need. I eat my own bread — that’s all the bread I eat
(except for those few days when I get lazy and don’t get to make bread that
day and have to put it off until the next day and have to eat something that
day — but that doesn’t happen very often).
So it’s just — I like making beautiful things.

OK, thank you.
Mm-hmm.

That was very nice, thank you very much.

Thank you. I have pictures of my pots if you’d like to see them?
Yes, I would very much like to see them.

This conversation with Juliane de Moerlooze was recorded in March 2009.

When you hear people talk about women having more sense
for the global, intuitive and empathic ... and men are more
logical ... even if it is true ... it seems quite a good thing to
have when you are doing math or software?

Juliane is a Brussels based computer scientist, feminist
and Linux user from the beginning. She studied math,
programming and system administration and participates in Samedies. 1 In February 2009 she was voted
president of the Brussels Linux user group (BXLug).

I will start at the end ... you have recently become president of the BXLug. Can
you explain to us what it is, the BXLug?
It is the Brussels Linux user group, a group of Linux users who meet
regularly to really work together on Linux and Free Software. It is the most
active group of Linux users in the French speaking part of Belgium.

How did you come into contact with this group?

That dates back a while. I was trained in Linux a long time ago ...
Five years? Ten years? Twenty years?

Almost twenty years ago. I came across the beginnings of Linux in 1995 or
1996, I am not sure. I had some Slackware 2 installed, I messed around with
friends and we installed everything ... then I heard people talk about Linux
distributions 3 and decided to discover something else, notably Debian. 4
1 Femmes et Logiciels Libres, a group of women maintaining their own server. http://samedi.collectifs.net
2 one of the earliest Linux distributions
3 a distribution is a specific collection of applications and a software kernel
4 one of the largest Linux distributions

It is good to know that with Linux you really have a diversity, there are
distributions specially for audio, there are distributions for the larger public
with graphical interfaces, there are distributions that are a bit more ‘geek’,
in short you find everything: there are thousands of distributions but there
are a few principal ones and I heard people talk about an interesting development, which was Debian. I wanted to install it to see, and I discovered
the BXLug meetings, and so I ended up there one Sunday.

What was your experience, the first time you went?

(laughs) Well, it was clear that there were not many women, certainly not. I
remember some sessions ...
What do you mean, not many women? One? Or five?

Usually I was there on my own. Or maybe two. There was a time that we
were three, which was great. There was a director of a school who pushed
Free Software a lot, she organised real ’Journées du Libre’ 5 at her school,
to which she would invite journalists and so on. She was the director but
when she had free time she would use it to promote Free Software, but
I haven’t seen her in a while and I don’t know what happened since. I
also met Faty, well ... I wasn't there all the time either because I also had
other things to do. There was a friendly atmosphere, with a little bar where
people would discuss with each other, but many were clustered together in
the middle of the room, like autists hidden behind their computers, without
much communication. There were other members of the group who, like me,
realised that we were humans only concentrating on our machines,
and that not much was done to make new people feel welcome. Once I realised that,
I started to move to the back of the room and say hello to people arriving.
Well, I was not the only one who started to do that but I imagine it might
have felt like a closed group when you entered for the first time. I also
remember in the beginning, as a girl, that ... when people asked questions
... nobody realised that I was actually teaching informatics. It seemed there
was a prejudice even before I had a chance to answer a question. That’s a
funny thing to remember.
Could you talk about the pleasure of handling computers? You might not be the
kind of person that loses herself in front of her computer, but you have a strong
relationship with technology which comes out when you open up the command line
... there's something in you that comes to life.

5 Journées du Libre is a yearly festival organised by the BXLug

Oh, yes! To begin with, I am a mathematician (‘matheuse’), I was a math
teacher, and I have been programming during my studies and yes, there
was something fantastic about it ... informatics for me is all about logic, but
logic in action, dynamic logic. A machine can be imperfect, and while I’m
not specialised in hardware, there is a part on which you can work, a kind
of determinism that I find interesting; it poses challenges because you can
never know everything, I mean it is not easy to be a real system administrator that
knows every detail, that understands every problem. So you are partially in
the unknown, and discovering, in a mathematical world but a world that
moves. For me a machine has a rhythm, she has a cadence, a body, and her
state changes. There might be things that do not work but it can be that
you have left in some mistakes while developing etcetera, but we will get
to know the machine and we will understand. And after, you might create
things that are maybe interesting in real life, for people that want to write
texts or edit films or want to communicate via the Internet ... these are all
layers one adds, but you start ... I don’t know how to say it ... the machine is
at your service but you have to start with discovering her. I detest the kind
of software that asks you just to click here and there and then it doesn't
work, and then you have to restart, and then you are in a situation where
you don't have the possibility to find out where the problem is.
When it doesn’t show how it works?

For me it is important to work with Free Software, because when I have
time, I will go far, I will even look at the source code to find out what’s
wrong with the interface. Luckily, I don’t have to do this too often anymore
because software has become very complicated, twenty years later. But we
are not like persons with machines that just click ... I know many people,
even in informatics, who will say ‘this machine doesn’t work, this thing
makes a mistake’

The fact that Free Software proposes an open structure, did that have anything
to do with your decision to be a candidate for BXLug?
Well, last year I was already very active and I realised that I was at a point
in my life that I could use informatics better, and I wanted to work in this
field, so I spent much time as a volunteer. But the moment that I decided,
now this is enough, I need to put myself forward as a candidate, was after a
series of sexist incidents. There was for example a job offer on the BXLug
mailing list that really needed to be responded to ... I mean ... what was
that about? To be concrete: Someone wrote to the mailing list that his
company was looking for a developer in such-and-such, and they would like
a Debian-developer type applying, or if there weren't any available, it would
be great if it were a blond girl with large tits. Really, a horrible thing, so
I responded immediately and then it became even worse, because the person
that had posted the original message sent out another one asking whether
the women on the list were into castration, and it took a large amount of
diplomacy to find a way to respond. We discussed it with the Samediennes 6
and I thought about it ... I felt supported by many people who had well
understood that this was heavy and that the climate was getting nasty, but
in the end I managed to send out an ironic message that made the other
person apologise and stop this kind of sexist joke, which was good.
And after that, there was another incident, when the now ex-president of
the group did a radio interview. I think he explained Free Software relatively
well to a public that doesn't know about it, but as an example of how easy it is
to use Free Software, he said even my wife, who is zero with computers, knows
how it works, using the familiar cliché without any reservation. We discussed
this again with the Samediennes, and also internally at the BXLug, and then
I thought: well, what is needed is a woman as president, so I need to put
myself forward. So it is thanks to the Samedies that this idea emerged, out of the
necessity to change the image of Free Software.

In software and particularly in Free Software, there are relatively few women
participating actively. What kinds of possibilities do you see for women to enter?
It begins already at school ... all the clichés girls hear ... it starts there. We
possibly have a set of brains that is socially constructed, but when you hear
people talk about women having more sense for the global, intuitive and
empathic ... and men are more logical ... even if it is true ... it seems quite a
good thing to have when you are doing math or software? I mean, there is
no handicap we start out with, it is a social handicap ... convincing girls to
become a secretary rather than a system administrator.
6 Participants in the Samedies: Femmes et logiciels libres (http://www.samedies.be)

I am assuming there is a link between your feminism and your engagement with
Free Software ...

It is linked at the point where ... it is a political liaison which is about re-appropriating tools, and an attempt to imagine a political universe where we
are ourselves implicated in the things we do and make, and where we collectively can discuss this future. You can see it as something very large, socially,
and very idealist too. You should also not idealise the Free Software community itself. There’s an anthropologist who has made a proper description 7 ...
but there are certainly relational and organisational problems, and political
problems, power struggles too. But the general idea ... we have come to the
political point of saying: we have technologies, and we want to appropriate
them and we will discuss them together. I feel I am a feminist ... but I know
there are other kinds of feminism, liberal feminism for example, that do not
want to question the political and economic status quo. My feminism is a bit
different, it is linked to eco-feminism, and also to the re-appropriation of
techniques that help us organise as a group. Free Software can be ... well,
there is a direction in Free Software that is linked to ‘Free Enterprise’ and
the American Dream. Everything should be possible: start-ups or pin-ups,
it doesn’t matter. But for me, there is another branch much more ‘libertaire’
and left-wing, where there is space for collective work and where we can ask
questions about the impact of technology. It is my interest of course, and I
know well that even as president of the BXLug I sometimes find myself on
the extreme side, so I will not speak about my ‘libertaire’ ideas all the time
in public, but if anyone asks me ... I know well what is at stake but it is not
necessarily representative of the ideas within the BXLug.

Are there discussions between members about the varying interests in Free Software?
I can imagine there are people more excited about the efficiency and performativity
of these tools, and others attracted by its political side.
Well, these arguments mix, and for some years now there has unfortunately
been less fundamental discussion. At the moment I have the impression that
we are more into ‘things to do’ when we meet in person. On the mailing
list there are frictions and small provocations now and then, but the really
interesting debates have been over for a few years ... I am a bit disappointed in
that, actually. But it is not really a problem, because I know other groups
that pose more interesting questions and with whom I find it more interesting
to have a debate. Last year we were working away like small busy
bees, spreading the general idea of Free Software with maybe a hint at the
societal questions behind it, but in fact not marking it out as a counterweight
to a commercialised society. We haven't really deepened the problematics,
because for me ... it is clear that Free Software has won the battle; it has
been completely recuperated by the business world, and now we are in a
period where tendencies will become clear. I have the impression that with
the way society is represented right now ... where they are talking about the
economic crisis ... we are becoming a society of ‘gestionnaires’ (managers),
and ideological questions seem not very visible.

7 Christophe Lazarro. La liberté logicielle. Une ethnographie des pratiques d'échange et de coopération au sein de la communauté Debian. Academia editions, 2008
So do you think it is more or less a war between two tendencies, or can both
currents coexist, and help each other in some way?

The current in Free Software that could think about resistance and ask
political questions and so on, does not have priority at the moment. But
what we can have is debates and discussions from person to person, and we
can challenge members of the BXLug itself, who really sometimes start to
use a kind of marketing language. But it is relational ... it is from person
to person. At the moment, what happens on the level of businesses and
society, I don’t know. I am looking for a job and I see clearly that I will
need to accept the kinds of hierarchies that exist but I would like to create
something else. The small impact a group like BXLug can make ... well,
there are several small projects, such as the one to develop a distribution
specifically designed for small organisations, to which nobody could object
of course. Different directions coexist, because there is currently not any
project with enough at stake that it would shock the others.
To go once again from a large scale to a small scale ... how would you describe
your own itinerary from mathematics to working on and with software?

I did two bachelors at the Université Libre de Bruxelles, and then I studied
to become a math teacher. I had a wonderful teacher, and we were into
the pleasure of exercising our brains and discovering theory, but a large part
of our courses concentrated on pedagogy and how to become a good
teacher, how to open up the mind of a student in the context of a course.
That’s when I discovered another pleasure, of helping a journey into a kind
of math that was a lot more concrete, or that I learned to render concrete.
One of the difficult subjects you need to teach in high school is scales and
plans. I came up with a rendering of a submarine and all students, boys as
well as girls, were quickly motivated, wanting to imagine themselves at the
real scale of the vessel. I like math, because it is not linked to a pre-existing
narrative structure, it is a theoretical construct we accept or not, like the
rules of a game. For me, math is an ideal way to form a critical mind.
When you are a child, math is fundamentally fiction, full stop. I remember
that when I learned modern math at school ... I had an older teacher, and
she wasn’t completely at ease with the subject. I have the impression that
because of this ... maybe it was a question of the relation between power and
knowledge ... she did not arrive with her knowledge all prepared, I mean it
was a classical form of pedagogy, but it was a new subject to her and there
was something that woke up in me, I felt at ease, I followed, we did not go
too fast ...
It was open knowledge, not already formed and closed?

Well, we discovered the subject together with the teacher. It might sound
bizarre, and she certainly did not do this on purpose, but I immediately felt
confident, which did not have too much to do with the subject of the class,
but with the fact that I felt that my brains were functioning.
I still prefer to discover the solution to a mathematical problem together
with others. But when it comes to software, I can be on my own. In
the end it is me, who wants to ask myself: why don’t I understand? Why
don’t I make any progress? In Free Software, there is the advantage of
having lots of documentation and manuals available online, although you
can almost drown in it. For me, it is always about playing with your brain,
there is at least always an objective where I want to arrive, whether it is
understanding theory or software ... and in software, it is also clear that you
want something to work. There is a constraint of efficiency that comes in
between, that of course somehow also exists in math, but in math when you
have solved a problem, you have solved it on a piece of paper. I enjoy the
game of exploring a reality, even if it is a virtual one.


In September 2013 writer, developer, freestyle rapper and
poet John Haltiwanger joined the ConTeXt user meeting in
Brejlov (Czech Republic) 1 to present his ideas on Subtext,
‘A Proposed Processual Grammar for a Multi-Output PreFormat’. The interview started as a way to record John’s
impressions fresh from the meeting, but moved into discussing the future of layout in terms of ballistics.

How did you end up going to the ConTeXt meeting? Actually, where was it?

It was in Brejlov, which apparently might not even be a town or city. It
might specifically be a hotel. But it has its own ... it’s considered a location,
I guess. But arriving was already kind of a trick, because I was under the
impression there was a train station or something. So I was asking around:
Where is Brejlov? What train do I take to Brejlov? But nobody had any clue,
that this was even something that existed. So that was tricky. But it was really a beautiful venue. How I ended up at the conference specifically? That’s
a good question. I’m not an incredibly active member on the ConTeXt
mailing list, but I pop up every now and again and just kind of express a
few things that I have going on. So initially I mentioned my thesis, back in
January or maybe March, back when it was really unformulated. Maybe it
was even in 2009. But I got really good responses from Hans. 2 Originally,
when I first got to the Netherlands in August 2009, the next weekend
was the third annual ConTeXt meeting. I had barely used the software at
that point, but I had this sort of impulse to go. Well anyway, I did not have
the money for it at that time. So the fact that there was another one coming
round, was like: Ok, that sounds good. But there was something ... we got
into a conversation on the mailing list. Somebody, a non-native English
speaker was asking about pronouns and gendered pronouns and the proper
way of ‘pronouning’ things. In English we don’t have a suitable gender neutral pronoun. So he asked the questions and some guy responded: The
1 http://meeting.contextgarden.net/2013/
2 Hans Hagen is the principal author and developer of ConTeXt, past president of NTG, and active in many other areas of the TeX community. Hans Hagen – Interview – TeX Users Group. http://tug.org/interviews/hagen.html, 2006. [Online; accessed 18.12.2014]

proper way to do it, is to use he. It's an invented problem. This whole question is
an invented question and there is no such thing as a need for considering any other
options besides this. 3 So I wrote back and said: That's not up to you to decide,
because if somebody has a problem, then there is a problem. So I kind of naively
suggested that we could make a Unicode character that can stand in, like a
typographical element, that does not necessarily have a pronunciation yet.
So something that, when you are reading it, you could either say he or she
or they and it would be sort of [emergent|dialogic|personalized].
Like delayed political correctness or delayed embraciveness. But, little did I
know, Unicode was not the answer.

Did they tell you that? That Unicode is not the answer?

Well, Arthur actually wrote back 4, and he knows a lot about Unicode and
he said: With Unicode you have to prove that it's in use already. In my mind,
Unicode was a playground where I could just map whatever values I wanted
to whatever glyph I wanted. Somewhere, in some corner of unused
namespace or something. But that's not the way it works. But TeX works
like this. So I could always just define a macro that would do this. Hans
actually wrote a macro 5 that would basically flip a coin at the beginning of
your paper. So whenever you wanted to use the gender-neutral pronoun, you would
just use the macro and then it wouldn't be up to you. It's another way of
obfuscating, or pushing the responsibility away from you as an author. It’s
like ok, well, on this one it was she, the next it was he, or whatever.
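Hans's macro was written in TeX; purely as an illustration of the idea (this is not his macro), a Python sketch of the same ‘coin flip at the beginning of the paper’ might look like this: the choice is made once per document and then reused consistently, taking it out of the author's hands.

```python
import random

# Hypothetical sketch: decide the pronoun set once per document,
# then every gender-neutral reference reuses that same choice.
PRONOUN_SETS = [
    {"subject": "he", "object": "him", "possessive": "his"},
    {"subject": "she", "object": "her", "possessive": "her"},
]

def flip_coin(seed=None):
    """The coin flip 'at the beginning of the paper'."""
    return random.Random(seed).choice(PRONOUN_SETS)

pronouns = flip_coin()
print(f"The reader decides nothing: {pronouns['subject']} simply reads "
      f"whatever the flip assigned to {pronouns['object']}.")
```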

So in a way gender doesn’t matter anymore?

Right. And then I was just like, that’s something we should talk about at the
meeting. I guess I sent out something about my thesis and Hans or Taco,
they know me, they said that it would be great for you to do a presentation of
this at the meeting. So that’s very much how I ended up there.
You had never met anyone from ConTeXt before?
3 http://www.ntg.nl/pipermail/ntg-context/2010/051058.html
4 http://www.ntg.nl/pipermail/ntg-context/2010/051098.html
5 http://www.ntg.nl/pipermail/ntg-context/2010/051116.html

No. You and Pierre were the only people I knew, besides me, who had been
using it at the time. It was interesting in that way, it was really ... I mean
I felt a little bit ... nervous isn't exactly the word, but I sort of didn't know
what exactly my position was meant to be. Because these guys ... it's a users'
meeting, right? But the way that tends to work out for Open Source projects
is developers talking to developers. So ... my presentation was saturated ...
I think, I didn’t realise how quickly time goes in presentations, at the time.
So I spent like 20 minutes just going through my attack on media theory in
the thesis. And there was a guy, falling asleep on the right side of the room,
just head back. So, that was entertaining. To be the black sheep. That’s
always a fun position. It was entertaining for me, to meet these people
and to be at the same time sort of an outsider. Not a really well known
user contrasted with other people, who are more like cornerstones of the
community. They were meeting everybody in person for the first time. And
somehow I could connect. So now, a month and a half later we’re starting
this ConTeXt group, an international ConTeXt users’ group and I’m on the
board, I’m editing the journal. So it’s like, it ...
... that went fast!

It went fast indeed!

What is this ‘ConTeXt User Group’?

To a certain extent the NTG, which is the Netherlands TeX Group, had sort
of been consumed from the inside by the heaviness of ConTeXt, specifically
in the Netherlands. The discussion started to shift to be more about ConTeXt.
Now the journal, the MAPS journal: there are maybe 8 or 10 articles, two of
which are not written by either Hans or Taco, who are the main developers
of ConTeXt. And there is zero on anything besides ConTeXt. So the NTG
is almost presented as ok, if you like ConTeXt or if you wanna be in a ConTeXt
user group, you join the NTG. Apparently the journal used to be quite thick
and there are lots of LaTeX users who are involved. So partially the attempt
is sort of to ease that situation a little bit.
It allowed the two communities to separate?

Yeah, and not in any fast or abrupt fashion. We're trying to be
very conscious about it. I mean, it's not ConTeXt's fault that LaTeX users
are not submitting any articles for the journal. That user group will always
have the capacity; those people could step up. The idea is to set up a
for ... because the software is getting bigger and right now we’re really reliant on this mailing list and if you have your stupid question either Hans,
Taco or Wolfgang will shoot something back. And they become reliant on
Wolfgang to be able to answer questions, because there are more users coming. Arthur was really concerned, among other people, with the scalability
of our approach right now. And how to set up this infrastructure to support
the software as it grows bigger. I should forward you this e-mail that I
wrote, that is a response to their name choices. They were contemplating
becoming a group called ‘cows’. Which is clearly an inside joke because they
loved to do figure demonstrations with cows. And seeing ConTeXt as I do,
as a platform, a serious platform, for the future, something that ... it’s almost like it hasn’t gotten to its ... I mean it’s in such rapid development ...
it’s so undocumented ... it’s so ... like ... it’s like rushing water or something.
But at some point ... it’s gonna fill up the location. Maybe we’re still building this platform, but when it’s solid and all the pieces are ... everything
is being converted to metric, no more inches and miles and stuff. At that
point, when we have this platform, it will turn into a loadable Lua library.
It won’t even be an executable at that point.
It is interesting how quickly you have become part of this community. From being
a complete outsider not knowing where to go, to now speaking about a communal
future.
To begin with, I guess I have to confront my own seemingly boundless
propensity for picking obscure projects ... as sort of my ... like the things
that I champion. And ... it often boils down to flexibility.
You think that obscurity has anything to do with the future compatibility of
ConTeXt?

Well, no. I think the obscurity is something I don't see actually
lasting too long in the case of ConTeXt. As it gets more stable it's
basically destined to become more of a standard platform. But this is all
tied into stuff that I'm planning to do with the software. If my generative
typesetting platform ... you know ... works and is actually feasible, which is
maybe an 80% job.

Wait a second. You are busy developing another platform in parallel?

Yes, although I'm kind of hovering over it or sort of superseding it as
an interface. You have LaTeX, which has been at version 2e since the
mid-nineties, LaTeX 3 is sort of this dim point on the horizon. Whereas
ConTeXt is changing every week. It’s converting the entire structure of this
macro package from being written in TeX to being written in Lua. And
so there is this transition from what could be best described as an archaic
approach to programming, to this shiny new piece of software. I see it as
being competitive strictly because it has so much configurability. But that’s
sort of ... and that's the double-edged sword of it, that the configuration
is useless without the documentation. Donald Knuth is famous for saying
that he realised he would have to write the software and the manual for the
software himself. And I remember in our first conversation about the sort
of paternalistic culture these typographic projects seem to have. Or at least
in the sense of TeX, they seem to sort of coagulate around a central wizard
kind of guy.

You think ConTeXt has potential for the future, while TeX and LaTeX belong
... to the past?

I guess that’s sort of the way it sounds, doesn’t it?

I guess I share some of your excitement, but I also have doubts about how far the
project actually is from the past. Maybe you can describe how you think it
will develop. What will that future be? How do you see it?

Right. That’s a good way to start untangling all the stuff I was just talking
about, when I was sort of putting the cart before the horse. I see it developing in some ways ... the way that it’s used today and the way that current,
heavy users use it. I think that they will continue to use it in a similar
way. But you already have people who are utilising LuaTeX ... and maybe
this is an important thing to distinguish between ConTeXt and LuaTeX.
Right now they're sort of very tied together. Their development is intertwined;
they drive each other. But to some extent some of the more interesting
stuff that has been done with these tools is ... like ... XML processing.
Where you throw XML into Lua code and run LuaTeX kerning operations
and line breaking and all this kind of stuff. Things for which, to a certain extent,
you needed to engage TeX on its own terms in the past. That's why macro
packages develop as some sort of sustainable way to handle your workflow.
This introduction of LuaTeX I think is sort of ... You can imagine it being
loaded as a library just as a way to typeset the documentation for code. It
could be like this holy grail of literate programming. Not saying this is the
answer, but that at least it will come out as a nice looking .pdf.

LuaTeX allows the connection to TeX to widen?

Yeah. It takes sort of the essence of TeX. And this is, I guess, the crucial
thing about LuaTeX: up until now TeX has been both a typesetting engine and
a programming language. And not a very good one. So now that TeX can
be the engine, the Tschicholdian algorithms, the modernist principles, that,
for whatever reason, do look really good, can be utilised and connected to
without having to deal with this 32-year-old macro programming language.
On top of that, and part of how I am directly engaging with that kind of moving forward ... not that I am switching over to LuaTeX entirely at this point ... is this generative typesetting platform that was sort of the foundation of this journal proposal we did. Where you could imagine actual humanities scholars using something that is akin to markdown or a wiki formatting kind of system. And I have a nice little buzzword for that: ‘visually semantic markup’. XML, HTML, TeX, ... none of those are visually semantic. Because it’s all based around these primitives ‘ok, between the angle
brackets’. Everything is between angle brackets. You have to look what’s
inside the angle brackets to know what is happening to what’s between the
angle brackets. Whereas a visually semantic markup ... OK headers! OK
so it’s between two hashmarks or it’s between two whatever ... The whole
design of those preformatting languages, maybe not wiki markup, but at
least markdown was that it could be printed as a plaintext document and
you could still get a sense of the structure. I think that’s a really crucial
development. So ... in a web browser, on one half of the browser you have your text input, on the other half you have a real-time rendering of it into
HTML. In the meantime, the way that the interface works, the way that
the visually semantic markup works, is that it is a mutable interface. It
could be tailored to your sense of what it should look like. It can be tailored
specifically to different workflows. And because there is such a diversity
within typographic workflows, typesetting workflows ... that is akin to the
separation of form and content in HTML and CSS, but it’s not meant to be
... as problematic as that. I’m not sure if that is a real goal, or if that goal
is feasible or not. But it’s not meant to be drawing an artificial line, it’s just
meant to make things easier.
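
To illustrate the distinction with the example he gives (my sketch, not his): the hashmark header still reads as a header when printed as plaintext, while the angle-bracket version has to be parsed before the structure becomes visible:

    % a 'visually semantic' source line (markdown-style), legible as plaintext:
    %     ## Typesetting Poetry ##
    % the same structure in angle brackets, where you have to look inside the
    % brackets to know what is happening to what is between them:
    %     <section><title>Typesetting Poetry</title></section>
    % and a ConTeXt translation that either one might compile to:
    \section{Typesetting Poetry}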

So by pulling apart historically grown elements, it becomes ... possibly modern?
Hypermodern?

Something for now and later.

Yes. Part of this idea, the trick ... This software is called ‘Subtext’ and at this point it’s a conceptual project, but that will change pretty soon. Its trick is this idea of separation: instead of form and content, it’s translation and effect. The parser itself has to be mutable, has to be able to pull in the interface, print-like decorations basically, from a YAML configuration file or some sort of equivalent. One of those configuration mechanisms that was designed to be human readable and not machine readable. Like, well, both, striking that balance. Maybe we can get to that kind of ... talking about agency a little bit. Its trick is to really pull that out so that if you want to ... for instance now in markdown if you have quotes it will be translated in ConTeXt into \quotation. In ConTeXt that’s a very simple switch to turn it into German quotes. Or I guess that’s more like international quotes, everything not English. For the purposes of markdown there is no, like, really easy way to change that part of the interface. So that when
I’m writing, when I use the angle brackets as a quote it would turn into
a \quotation in the output. Whereas with ‘Subtext’ you would just go into the interface part of the configuration and say: These are converted into a quote, basically. And then the effects are listed in other configuration files
so that the effects of quotes in HTML can be ...

... different.

Yes. Maybe have specific CSS properties for spacing, that kind of stuff. And
then in ConTeXt the same sort of ... both the environmental setup as well
as the raw ‘what is put into the document when it’s translated’. This kind of
separation ... you know at that point if both those effects are already the way
that you want them, then all you have to do is change the interface. And
then a new typesetting system comes out later on, maybe iTeX, you know, Knuth’s joke, anyway. 6 That kind of separation seems to imply a future-proofing that I find very elegant. That you can just add later on the effects that you need for a different system. Or a different version of a system, not that you have to learn ‘mark 6’, or something like that ...

6 http://en.wikipedia.org/wiki/Donald_Knuth#Humor
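
As a concrete sketch of that ‘very simple switch’ (my example, not the interviewee’s code): in ConTeXt the marks produced by \quotation follow the main language, so a single declaration changes them throughout a document:

    % the same \quotation, rendered according to language conventions
    \mainlanguage[en]     % “English quotes”
    % \mainlanguage[de]   % swap in for „German quotes“
    \starttext
    \quotation{So it goes.}
    \stoptext
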
Back to the future ... I wonder about ConTeXt being bound to a particular
practise located with two specific people. Those two are actually the ones that
produce the most complete use cases and thereby define the kind of practise that
ConTeXt allows. Do you think this is a temporary stage or do you think that by
inviting someone like you on the board, as an outsider, that it is a sign of things
going to change?

Right. Well, yeah, this is another one of those put-up or shut-up kind of things because for instance at the NTG meeting on Wednesday my presentation was very much a user presentation in a room of developers. Because I basically was saying: Look, this is gonna be a presentation – most presentations are about what you know – and this presentation is really about what I don’t know ... but what I do know is that there is a lot of room for teaching ConTeXt in a more practical fashion, you could say. So my idea is to basically write this documentation on how to typeset poetry, which gets
into a lot of interesting questions, just a lot of interesting things. Like you’re gonna need to write your own macros right at the start ... to make sure you don’t have to go in and change every width value at some point. You know, this kind of thing like ... really baby steps. How to make a cover page. These kinds of things are not documented.
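
A minimal sketch of that macro baby step (the macro name and value are hypothetical): define the width once, so a single edit propagates instead of changing every width value by hand:

    % one place to change the measure, instead of many hard-coded widths
    \def\PoemWidth{100mm}
    \setuplayout[width=\PoemWidth]
    \starttext
    A verse set to a width that is defined in exactly one place.
    \stoptext
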
Documentation is, let’s say, an interesting challenge for ConTeXt. How do you
think the ConTeXt community could enable different kinds of use, beyond the
ones that are envisioned right now? I guess you have a plan?

Yeah ... that’s a good question. Part of it is just to do stuff, like to get you
more involved in the ConTeXt group for instance, because I was talking to
Arthur and he hadn’t even read the article from V/J10 7 . I think that kind
of stuff is really important. It’s like the whole Blender Foundation kind
of impulse. We have some developers who are paid to do this and that’s
kind of rare already in an Open Source/Free Software project. But then to
kind of have users pushing the boundaries and hitting limits. It’s rare that
Hans will encounter some kind of use case that he didn’t think of and react
in a negative way. Or react in a way like I’m not gonna even entertain that
possibility. Part of it is moving beyond this ... even the sort of centralisation
as you call it ... how to do that directly ... I see it more as baby steps for
me personally at this point. Just getting a tutorial on how to typeset a cd
booklet. Just basically what I’m writing. That at the same time, you know,
gets you familiar with ConTeXt and TeX in general. Before my presentation
I was wondering, I was like: how do you set a variable in TeX? Well, it’s a macro programming language so you just make a macro that returns a value. Like that kind of stuff is not initially obvious if you’re used to a different paradigm, you know ... So these baby steps of kind of opening the field up a little bit and then using it in my own practise of guerilla typesetting and kind of putting it out there. And you know ... people are gonna start being like: oh yeah, beautiful documents are possible or at least better looking documents are possible. And then once we have them at that, like, then how do we
take it to the next level. How do I turn a lyric sheet from something that
is sort of static to ... you know ... two pages that are like put directly on the
screen next to each other. Like a screen based system where it’s animated
to the point ... and this is what we actually started to do with the karaoke last night ...
so you have an English version and a Spanish version – for instance in the
case of the music that I’ve been doing. And we can animate. We can have
timed transitions so you can have a ‘current lyric indicator’ move down the
page. That kind of use case is not something that Pragma 8 is ever going
to run into. But as soon as it is done and documented then what’s the next
thing, what kind of animations are gonna be ... or what kind of ... once that
possibility is made real or concrete ... you know, so I kind of see it as a very
iterative process at this point. I don’t have any kind of grand scheme other
than ‘Subtext’ kind of replacing Microsoft Word as the dominant academic
publishing platform, I think. (laughs)

7 Constant, Clementine Delahaut, Laurence Rassel, and Emma Sidgwick. Verbindingen/Jonctions: Tracks in electr(on)ic fields. Constant Verlag, 2009. http://ospublish.constantvzw.org/sources/vj10
8 Hans Hagen’s company for Advanced Document Engineering

Just take over the world.

That’s one way to do it, I think.
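
An aside on the ‘variable’ answer he sketches above, spelled out in plain TeX (the macro name and values are mine, purely for illustration): a macro simply expands to its value, and reassignment is just redefinition:

    \def\coverheight{120mm}   % 'set' a variable
    \vskip\coverheight        % 'read' it wherever a dimension is expected
    \def\coverheight{118mm}   % 'reassign' it by redefining the macro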

You talked about manuals for things that you would maybe not do in another
kind of software ...

Right.

Manuals that not only explain ‘this is how you do it’ but also ‘this is the kind of
user you could be’.

Right.

I’m not sure if instructions for how to produce a cd cover would draw me in, but
if it helped me understand how to set a variable, it would.

Right.

You want the complete manual of course?

Yeah!

You were saying that ConTeXt should replace Microsoft Word as the standard
typesetting tool for academic publishing. You are thinking about the future for
ConTeXt more in the context of academic publishing than in traditional design
practise?

Yes. In terms of ‘Subtext’, I mean the origins of that project, very much ... It’s an interesting mix because it’s really a hybridity of many different processes. Some of it comes directly from this obscure art project ‘the abstraction’. So I have stuff like the track changes using Git version control and everything being based on plaintext as a necessity. That’s a holdover from that project, as well as the idea of gradated presence. Like software enabling a more real-time peer review, anonymous peer review system. And even a collaborative platform where you don’t know who you’re writing with, until the article comes out. Something like that. So these interesting tweaks that you can kind of make, those all are holdovers from this very, very much maybe not traditional design practise but certainly like ... twisted artistic project that was based around hacking a hole from signified to signifier and back again. So ... In terms of its current envisionment and the use case for which we were developing it at the beginning, or I’m developing it, whatever ... I’ll say it the royal way, is an academic thing. But I think that ... doesn’t have to stop there and ...

At some point at OSP we decided to try ConTeXt because we were stuck with
Scribus for page layout as the only option in Free Software. We wanted to escape
that kind of stiffness of the page, or of the canvas in a way. But ConTeXt
was not the dream solution either. For us it had a lot to do, of course, with
issues of documentation ... of not understanding, not coming from that kind of
automatism of treating it as another programming language. So I think we could
have had much more fun if we had understood the culture of the project better.
I think the most frustrating experience was to find out how much the model of
typesetting is linked to the Tschichold universe, that at the moment you try to
break out, the system completely loses all flexibility. And it is almost as if you
can hear it freeze. So if we blame half of our troubles with ConTeXt on our
inability to actually understand what we could do with ConTeXt, I think there is
a lot also in its assumption what a legible text would look like, how it’s structured,
how it’s done. Do you think a modern version of ConTeXt will keep that kind
of inflexibility? How can it become more flexible in its understanding of what a
page or a book could be?

That’s an interesting question, because I’m not into the development side
of LuaTeX at all, but I would be surprised if the way that it was being
implemented was not significantly more modular than for instance when
it was written in Pascal, you know, how that was. Yeah, that’s a really
interesting question of how swappable is the backend. How much can we
go in and kind of ... you know. And it is an inspirational question to me,
because now I’m trying to envision a different page. And I’m really curious
about that. But I think that ConTeXt itself will likely be pretty stable in its
scope ... in that way of being ... sort of ... deterministic in its expectations.
But where that leaves us as users ... first I’d be really surprised if the engine
itself, if LuaTeX was not being some way written to ... I feel really ignorant
about this, I wish I just knew. But, yeah, there must be ... There is no way
to translate this into a modern programming language without somehow
thinking about this in terms of the design. I guess to a certain extent the answer to your question is dependent on the conscientiousness of Taco and the other LuaTeX developers for this kind of modularity. But I don’t ... you know ... I’m actually feeling very imaginatively lacking in terms of trying to understand what your award-winning book did not accomplish for you ...

Yeah, what’s wrong with that?

I think it would be good to talk with Pierre, not Pierre Marchand but Pierre ...

... Huggybear.

Yeah. We have been talking about ‘rivers’ as a metaphor for layout ... like where you could have things that are ... let’s say fluid and other things that could be
placed and force things around it. Layout is often a combination of those two
things. And this is what is frustrating in canvas-based layout: that it is all fixed
and you have to make it look like it’s fluid. And here it’s all fluid and sometimes
you want it to be fixed. And at the moment you fix something everything breaks.
Then it’s up to you. You’re on your own.

Right.

The experience of working with ConTeXt is that it is very much elastic, but there
is very little imagination about what this elasticity could bring.

Right.

It’s all about creating universally beautiful pages, in a way it is using flexibility
to arrive at something that is already fixed.

Right.

Well, there is a lot more possible than we ever tried, but ... again ... this goes
back to the sort of centralist question: If those possibilities are mainly details in
the head of the main developers, then how will I ever start to fantasize about the
book I would want to make with it?

Right.

I don’t even need access to all the details. Because once I have a sort of sense of
what I want to do, I can figure it out. Right now you’re sort of in the dark about
the endless possibilities ...

Its existence is very opaque in some ways. The way that it’s implemented,
like everything about it is sort of ... looking at the macros that they wrote,
the macros that you invoke ... like ... that takes ... flow control in TeX is like
... I mean you might as well write it in Bash or ... I mean I think Bash would even be more sensible for figuring out what’s going on. So, the switch to Lua
there is kind of I think a useful step just in being more transparent. To allow
you to get into becoming more intimate with the source or the operation
of the system ... you know ... without having to go ... I mean I guess ... the
TeX Book would still be useful in some ways but that’s ... I mean ... to go
back and learn TeX when you’re just trying to use ConTeXt is sort of ...
it’s not ... I’m not saying it’s, you know ... it’s a proper assumption to say oh
yeah, don’t worry about the rules and the way TeX is organised but you’re not
writing your documents in ConTeXt the way you would write them if you’re
using plain TeX. I mean that’s just ... it’s just not ... It’s a different workflow
... it has a completely different set of processes that you need to arrange. So
it has a very distinct organisational logic ... that I think that ... yeah ... like
being able to go into the source and be like oh OK, like I can see clearly this
is ... you know. And then you can write in your own way, you can write back
in Lua.

This kind of documentation would be the killer feature of ConTeXt ...

Yeah.

It’s a kind of strange paradox in the TeX community. On the one hand you’re sort of supposed to be able to do all of it. But at the same time on every page you’re told
not to do it, because it’s not for you to worry about this.

Right. That’s why the macro packages exist.

With ConTeXt there is this strange sense of very much wanting to understand the
way the logic works, or ... what the material is you’re dealing with. And at the same time being completely lost in the labyrinth between the old stuff from TeX and LaTeX, the newer stuff from LuaTeX, Mark 4, 3, 5, 6 ...

So that was sort of my idea with the cd typesetting project, is not to say,
that that is something that is immediately interesting to anybody who is
not trying to do that specifically, right? But at the same time if I’m ... if it’s
broken down into ‘How to do a bitmap cover page’ (=Lesson 1).
Lesson 2: ‘How to start defining your own macros’. And so you know, it’s
this thing that could be at one point a very ... because the documentation as
it stands right now is ... I think it’s almost ... fixing that documentation, I’m
not sure is even possible. I think that it has to be completely approached
differently. I mean, like a real ConTeXt manual, that documents ... you
know ... command by command exactly what those things do. I mean our
reference manual now just shows you where the arguments go, but doesn’t even list the available arguments. It’s just like: These are the positions of the arguments. And it’s interesting.
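
To give a flavour of what such a ‘Lesson 1’ might contain, a hypothetical ConTeXt sketch (cover.png is a placeholder name; this is my guess at a first lesson, not the actual tutorial):

    % Lesson 1: a bitmap cover page
    \starttext
    \startstandardmakeup
      \externalfigure[cover.png][width=\paperwidth]
    \stopstandardmakeup
    \stoptext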

So expecting writers of the program to write the manual fails?

Right.

What is the difference between your plans for ‘Subtext’ and a page layout program
like Scribus?

You mentioned ‘Subtext’ coming from a more academic publishing rather
than a design background. I think that this betrays where I have come into
typesetting and my understanding of typography. Because in reality DTP
has never kind of drawn me in in that way. The principle differences are
really based on this distribution of agency, in my mind. That when you’re
demanding the software to be ‘what you see is what you get’ or when you
place that metaphor between you and your process. Or you and your engagement, you’re gaining the usefulness of that metaphor, which is ... it’s
almost ... I hope I don’t sound offensive ... but it’s almost like child’s play.
It’s almost like point, click, place. To me it just seems so redundant or ...
time-consuming maybe ... to really deal with it that way. There are advantages to that metaphor. For instance I don’t plan on designing covers in
ConTeXt. Or even a poster or something like that. Because it doesn’t really
give affordances for that kind of creativity. I mean you can do generative
stuff with the MetaFun package. You can sort of play around with that. But
I haven’t seen a ConTeXt generated cover that I liked, to be honest.

OK.

OK. Principal differences. I’m trying to ... I’m struggling a little bit. I think
that’s partially because I’m not super comfortable with the layout mechanism
and stuff yet. And you have things like \blank in order to move down the
page. Because it has this sort of literal sense of a page and movement on
a page. Obviously Scribus has a literal idea of a page as well, but because
it’s WYSIWYG it has that benefit where you don’t have to think OK, well,
maybe it should be 1.6 ems down or maybe it should be 1.2 ems down. You
move it until it looks right. And then you can measure it and you’re like
ok, I’m gonna use this measurement further on in my document. So it’s
that whole top-down vs. bottom-up approach. It really breaks down into
the core organisational logics of those softwares.
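
For readers who have not met it, the \blank he mentions accepts both named and explicit amounts, which is exactly where the guessing starts: a minimal sketch (the values are arbitrary):

    \starttext
    First paragraph.
    \blank[big]      % a predefined amount
    Second paragraph.
    \blank[1.2em]    % or guess a measure, compile, look, adjust, recompile
    Third paragraph.
    \stoptext
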
I think it’s too easy to make the difference based on the fact that there is a
metaphorical layer or not. I think there is a metaphorical layer in ConTeXt too
...

Right. Yeah for sure.

And they come at a different moment and they speak a different language. But I
think that we can agree that they’re both there. So I don’t think it’s about the one
being without and the other being with. Of course there is another sense of placing
something in a canvas-based software than in a ... how would you call this?

So I guess it is either ‘declarative’ or ‘sequence’ based. You could say generative in a way ... or compiled or ... I don’t even know. That’s a cool question.

What is the difference really and why would you choose the one or the other? Or
what would you gain from one to the other? Because it’s clear that posters are not
easily made in ConTeXt. And that it’s much easier to typeset a book in ConTeXt
than it is in Scribus, for example.

Declarative maybe ...

So, there’s hierarchy. There’s direction. There’s an assumption about structure
being good or bad.

Yeah. Boxes, Glue. 9

9 Boxes, which are things that can be drawn on a page, and glue, which is invisible stretchy stuff that sticks boxes together. Mark C. Chu-Carroll. The Genius of Donald Knuth: Typesetting with Boxes and Glue, 2008

What is exciting in something like this is that placement is relative always.
Relative to a page, relative to a chapter, relative to itself, relative to what’s next
to it. Where in a canvas based software your page is fixed.

Right.

This is very different from a system where you make a change, then you compile
and then you look at it and then you go back into your code. So where there is a
larger distinction between output and action. It’s almost gestural ...

It’s like two different ways of having a conversation. Larry Wall has this really great metaphor. He talks about ‘ballistic design’. So when you’re doing
code, maybe he’s talking more about software design at this point, basically
it’s a ‘ballistic practise’ to write code. Ballistics comes from artillery. So you
shoot at a thing. If you hit it, you hit it. If you miss it, you change the
amount of gun powder, the angle. So code is very much a ‘ballistic practise’.
I think that filters into this difference in how the conversation works. And
this goes back to the agencies where you have to wait for the computer to figure it out. To come with its part into the conversation. You’re putting the code in and then the computer is like ok; this is what the code means and then is this what you wanted? Whereas with the WYSIWYG kind of interface the agency is distributed in a different way. The computer is just like ok, I’m a canvas; I’m just here to hold what you’re putting on and I’m not going to change it in any way or affect it in any way that you don’t tell me to. I mean it’s the same way but I ... is it just a matter of the compilation time? In one you’re sort of running an experiment, in another you’re just sort of painting.
If that’s a real enough distinction or if that’s ... you know ... it’s sort of ... I
mean I kind of see that it is like this. There is ballistics vs. maybe fencing
or something.

Fencing?

Fencing. Like more of a ...

Or wrestling?

Or wrestling.

When you said just sort of painting I felt offended. (laughs)

I’m sorry. I didn’t mean it like that.

Maybe back to wrestling vs. ballistics. Where am I and where is the machine?

Right.

I understand that there are lots of childish ways of solving this need to make the computer disappear. Because if you are not wrestling ... you’re dancing, you know.

Yeah.

But I think it’s interesting to see that ballistics, that the military term of shooting
at something, is the kind of metaphor to be used. Which is quite different than a
creative process where there is a direct feedback between something placed and the
responses you have.

Right.

And it’s not always about aiming, but also sometimes about trying and about
kind of subtle movements that spark off something else. Which is very immediate.
And needs an immediate connection to ... let’s say ... what you do and what you
get. It would be interesting to think about ways of talking about ‘what you see is what you get’ away from this assumption that it is always about those poor users that are not able to do it in code.

Right.

Because I think there is essential stuff that you can not do in a tool like this –
that you can do in canvas-based tools. And so ... I think it’s really a pity when
... yeah ... It’s often overlooked and very strange to see. There is not a lot of good
thinking about that kind of interaction. Like literal interaction. Which is also
about agency with the painter. With the one that makes the movement. Where
here the agency is very much in this confrontational relation between me aiming
and ...

So yeah, when we put it in those metaphors. I’m on the side with the
painting, because ...

But I mean it’s difficult to do a book while wrestling. And I think that’s why a
poster is very difficult to do in this sort of aiming sense. I mean it’s fun to do but it’s a strange kind of poster that you get.

You can’t fit it all in your head at once. It’s not possible.

No. So it’s okay to have a bit of delay.

I wondered to what extent, if it were updated in real time, all the changes
you’re making in the code, if compilation was instantaneous, how that would
affect the experience. I guess it would still have this ballistic aspect, because
what you are doing is ... and that’s really the side of the metaphor ... or
a metaphorical difference between the two. One is like a translation. The
metaphor of ok this code means this effect ... That’s very different from picking
a brush and choosing the width of the stroke. It’s like when you initialise
a brush in code, set the brush width and then move it in a circle with a
radius of x. It’s different than taking the brush in Scribus or in whatever
WYSIWYG tool you are gonna use. There is something intrinsically different about a translation from primitives to visual effect than this kind of
metaphorical translation of an interaction between a human and a canvas ...
kind of put into software terms.

But there is a translation from me, the human, to the machine, to my human eye
again, which is hard to grasp. Without wanting it to be made invisible somehow.
Or to assume that it is not there. This would be my dream tool that would
allow you to sense that kind of translation without losing the ... canvasness of the
canvas. Because it’s frustrating that the canvas has to not speak of itself to be able
to work. That’s a very sad future for the canvas, I think.

I agree.

But when it speaks of itself it’s usually seen as buggy or it doesn’t work. So that’s
also not fair to the canvas. But there is something in drawing digitally, which
is such a weird thing to do actually, and this is interesting in this sort of cyborgs
we’re becoming, which is all about forgetting about the machine and not feeling
what you do. And it’s completely a different world in a way than the ballistics of
ConTeXt, LaTeX or whatever typesetting platform.

Yeah, that’s true. And it’s something that my students were forced to confront, and it was really interesting, because of that supposed invisibility, or almost necessitated invisibility, of the software. As soon as they’re in Inkscape instead of Illustrator they go crazy. Because it’s like they know what they want
to do, but it’s a different mechanism. It’s the same underlying process which
itself is only just meant to give you a digital version of what you could easily
do on a piece of paper. Provided you have the right paints and stuff. So
perhaps it’s like the difference between moving from a brush to an air brush.
It’s a different ... interface. It’s a different engagement. There is a different
thing between the human and the canvas. You engage in this creative process where it’s like ok, we’ll now have an airbrush and I can play around to
see what the capacities are without being stuck in well I can’t get it to do
my fine lines the same way I can when I have my brush. It’s like when you
switch the software out from between the person and the canvas. It’s that
sort of invisibility of the interface and it’s intense for people. They actually
react quite negatively. They’re not gonna bother to learn this other software
because in the end they’re doing less. The reappearance of this software
... of software between them and their ideas is kinda too much. Whereas
people who don’t have any preconceived notions are following the tutorials
and they’re learning and they’re like ok, I’m gonna continue to play with this.
Because this software is starting to become more invisible.

But on a sort of theoretical level the necessitated invisibility, as you said it nicely, is
something I would always speak against. Because that means you hide something
that’s there. Which seems a stupid thing to do, especially when you want to find
a kind of more flexible relation to your tools. I want to find a better word for
describing that sort of quick feedback. Because if it’s too much in the way, then
the process stops. The drawing can not be made if I’m worried too much about
the point of my pencil that might break ... or the ... I don’t know ... the nozzle
being blocked.

Dismissing the other tools is ... I was kinda joking, but ... there is something sort of blocklike: Point. Move. This. But at the same time, like I
said, I wouldn’t do a cover in ConTeXt. Just like I probably wouldn’t try to
do something like a recreation of a Pre-Raphaelite painting in Processing or
something like that. There is just points where our metaphors break down.
And so ... It sounded sort of, ok, bottom-up über alles like always.

Ok, there’s still painters and there’s still people doing Pre-Raphaelite paintings
with Pre-Raphaelite tools, but most of us are using computers. So there should be
more clever ways of thinking about this.

Yeah. To borrow a quote from my old buddy Donald Rumsfeld: There are
the known knowns, the known unknowns and the unknown unknowns. That
actually popped into my head earlier because when we were talking about
the potentials of the software and the way that we interact and stuff, it’s like
we know that we don’t know ... other ways of organizing. We know that
there are, like there has to be, another way, whether it is a middle path between these two or some sort of ... Maybe it’s just tenth dimensional, maybe
it’s fourth dimensional, maybe it’s completely hypermodern or something.
Anyway. But the unknown unknowns ... It’s like the stuff that we can’t
even tell we don’t know about. The questions that we don’t know about
that would come up once we figure out these other ways of organising it.
That’s when I start to get really interested in this sort of thing. How do you
even conceive of a practise that you don’t know? And once you get there,
there’s going to be other things that you know you don’t know and have to
keep finding them. And then there’s gonna be things that you don’t know
you don’t know and they just appear from nowhere and ... it’s fun.

We discovered the work of Tom Lechner for the first time at
the Libre Graphics Meeting 2010 in Brussels. Tom traveled
from Portland to present Laidout, an amazing tool that he
made to produce his own comic books and also to work on
three dimensional mathematical objects. We were excited
about how his software represents the gesture of folding,
loved his bold interface decisions plus were impressed by the
fact that Tom decided to write his own programming framework for it. A year later, we met again in Montreal, Canada for the Libre Graphics Meeting 2011 where he presented a follow-up. With Ludivine Loiseau 1 and Pierre Marchand 2 , we finally found time to sit down and talk.

1 amateur bookbinder and graphic designer
2 artist/developer, contributing amongst others to PodofoImpose and Scribus

What is Laidout?

Well, Laidout is software that I wrote to lay out my cartoon books in an
easy fashion. Nothing else fit my needs at the time, so I just wrote it.

It does a lot more than laying out cartoons?

It works for any image, basically, and gradients. It does not currently do
text. It is on my todo list. I usually write my own text, so it does not really
need to do text. I just make an image of it.

It can lay out T-shirts?

But that’s all images too. I guess it’s two forms of laying out. It’s laying
out pieces of paper that remain whole in themselves, or you can take an
image and lay it out on smaller pieces of paper. Tiling, I guess you could
call it.

Can you talk us through the process of doing the T-shirt?

OK. So, you need a pattern. I had just a shirt that sort of fit and I
approximated it on a big piece of paper, to figure out what the pieces were
shaped like, and took a photograph of that. I used a perspective tool to
remove the distortion. I had placed rulers on the ground so that I could
remember the actual scale of it. Then once it was in the computer, I traced
over it in Inkscape, to get just the basic outline so that I could manipulate
further. Blender didn’t want to import it so I had to retrace it. I had to
use Blender to do it because that lets me shape the pattern, take it from
flat into something that actually makes 3D shapes so whatever errors were
in the original pattern that I had on the paper, I could now correct, make
the sides actually meet and once I had the molded shape, and in Blender
you have to be extremely careful to keep any shape, any manipulation that
you do to make sure your surface is still unfoldable into something flat. It is
very easy to get away from flat surfaces in Blender. Once I have the molded
shape, I can export that into an .off file which my unwrapper can import
and that I can then unwrap into the sleeves and the front and the back as
well as project a panoramic image onto those pieces. Once I have that, it
becomes a pattern laid out on a giant flat surface. Then I can use Laidout
once again to tile pages across that. I can export into a .pdf with all the
individual pieces of the image that were just pieces of the larger image that
I can print on transfer paper. It took forty iron-on transfer papers I ironed
with an iron provided to me by the people sitting in front of me so that
took a while but finally I got it all done, cut it all out, sewed it up and there
you go.

Could you say something about your interest in moving from 2D to 3D and back again? It seems everything you do is related to that?

I don’t know. I’ve been making sculpture of various kinds for quite a
long time. I’ve always drawn. Since I was about eighteen, I started making
sculptures, mainly mathematical woodwork. I don’t quite have access to a
full woodwork workshop anymore, so I cannot make as much woodwork as
I used to. It’s kind of an instance of being defined by what tools you have
available to you, like you were saying in your talk. I don’t have a woodshop,
but I can do other stuff. I can still make various shapes, but mainly out of
paper. Since I had been doing woodwork, I picked up photography I guess
and I made a ton of panoramic images. It’s kind of fun to figure out how
to project these images out of the computer into something that you can
physically create, for instance a T-shirt or a ball, or other paper shapes.

Is there ever any work that stays in the computer, or does it always need
to become physical?

Usually, for me, it is important to make something that I can actually
physically interact with. The computer I usually find quite limiting. You
can do amazing things with computers, you can pan around an image, that
in itself is pretty amazing but in the end I get more out of interacting with
things physically than just in the computer.

But with Laidout, you have moved folding into the computer! Do you
enjoy that kind of reverse transformation?

It is a challenge to do and I enjoy figuring out how to do that. In making
computer tools, I always try to make something that I can not do nearly as
quickly by hand. It’s just much easier to do in a computer. Or in the case
of spherical images, it’s practically impossible to do it outside the computer.
I could paint it with airbrushes and stuff like that but that in itself would
take a hundred times longer than just pressing a couple of commands and
having the computer do it all automatically.

My feeling about your work is that the time you spent working on the
program is in itself the most intriguing part of your work. There is of course a
challenge and I can imagine that when you are doing it like the first time you
see a rectangle, and you see it mimic a perspective you think wow I am folding
a paper, I have really done something. I worked on imposition too but more
to figure out how to work with .pdf files and I didn’t go this way of the gesture
like you did. There is something in your work which is really the way you wrote
your own framework for example and did not use any existing frameworks. You
didn’t use existing GUIs and toolboxes. It would be nice to listen to you about
how you worked, how you worked on the programming.

I think like a lot of artists, or creative people in general, you have to
enjoy the little nuts and bolts of what you’re doing in order to produce any
final work, that is if you actually do produce any final work. Part of that is
making the tools. When I first started making computer tools to help me
in my artwork, I did not have a lot of experience programming computers.
I had some. I did little projects here and there. So I looked around at the
various toolkits, but everything seemed really rigid. If you wanted to edit
some text, you had this little box and you write things in this little box and
if you want to change numbers, you have to erase it and change tiny things
with other tiny things. It’s just very restrictive. I figured I could either
figure out how to adapt those to my own purposes, or I could just figure
out my own, so I figured either way would probably take about the same amount of time, I guessed, in my ignorance. In the process, that’s not quite
been true. But it is much more flexible, in my opinion, what I’ve developed,
compared to a lot of other toolkits. Other people have other goals, so I’m
sure they would have a completely different opinion. For what I’m doing,
it’s much more adaptable.

You said you had no experience in programming? You studied in art school?

I don’t think I ever actually took computer programming classes. I grew
up with a Commodore 64, so I was always making letters fly around the
screen and stuff like that, and follow various curves. So I was always doing
little programming tricks. I guess I grew up in a household where that
sort of thing was pretty normal. I had two brothers, and they both became
computer programmers. And I’m the youngest, so I could learn from their
mistakes, too. I hope.

You’re looking for good excuses to program.

(laughs) That could be.

We can discuss at length about how actual toolkits don’t match your needs,
but in the end, you want to input certain things. With any recent toolkit, you
can do that. It’s not that difficult or time consuming. The way you do it, you
really enjoy it, by itself. I can see it as a real creative work, to come up with new
digital shapes.

Do you think that for you, the program itself is part of the work?

I think it’s definitely part of the work. That’s kind of the nuts and bolts
that you have to enjoy to get somewhere else. But if I look back on it, I
spend a huge amount of time just programming and not actually making
the artwork itself. It’s more just making the tools and all the programming
for the tools. I think there’s a lot of truth to that. When it comes time to
actually make artwork, I do like to have the tool that’s just right for the job,
that works just the way that seems efficient.

I think the program itself is an artwork, very much. To me it is also
a reflection on moving between 2D and 3D, about physical computation.
Maybe this is the actual work. Would you agree?

I don’t know. To an extent. In my mind, I kind of class it differently.
I’ve certainly been drawing more than I’ve been doing technical stuff like
programming. In my mind, the artwork is things that get produced, or a
performance or something like that. And the programming or the tools
are in service to those things. That’s how I think of it. I can see that ...
I’ve distributed Laidout as something in itself. It’s not just some secret tool
that I’ve put aside and presented only the artwork. I do enjoy the tools
themselves.

I have a question about how the 2D imagines 3D. I’ve seen Pierre and
Ludi write imposition plans. I really enjoy reading this, almost as a sort of
poetry, about what it would be to be folded, to be bound like a book. Why is
it so interesting for you, this tension between the two dimensions?

I don’t know. Perhaps it’s just the transformation of materials from
something more amorphous into something that’s more meaningful, somehow. Like in a book, you start out with wood pulp, and you can lay it out in
pages and you have to do something to that in order to instil more meaning
to it.

Is binding in any way important to you?

Somewhat. I’ve bound a few things by hand. Most of my cartoon books
ended up being just stapled, like a stack of paper, staple in the middle and
fold. Very simple. I’ve done some where you cut down the middle and lay
the sides on top and they’re perfect bound. I’ve done just a couple where
it’s an actual hand bound, hard cover. I do enjoy that. It’s quite a time
consuming thing. There’s quite a lot of craft in that. I enjoy a lot of hand
made, do-it-yourself activities.

Do you look at classic imposition plans?

I guess that’s kind of my goal. I did look up classic book binding
techniques and how people do it and what sort of problems they encounter.
I’m not sure if I’ve encompassed everything in that, certainly. But just the
basics of folding and trimming, I’ve done my best to be able to do the same
sort of techniques that have been done in the past, but which were done only manually. The
computer can remember things much more easily.

Imposition plans are quite fixed, you have this paper size and it works with
specific imposition plans. I like the way your tool is very organic, you can play
with it. But in the end, something very classic comes out, an imposition plan you
can use over and over, which gives a sort of continuity.

What’s impressive is the attention you put into the visualization. There are
some technical programs which do really big imposition stuff, but it’s always at the
printer. Here, you can see the shape being peeled. It’s really impressive. I agree
with Femke that the program is an artwork too, because it’s not only technical,
it’s much more.

How is the material imagined in the tool?

So far, not really completely. When you fold, you introduce slight twists
and things like that. And that depends on the stiffness of the paper and
the thickness of the paper and I’ve not adequately dealt with that so much.
If you just have one fold, it’s pretty easy to figure out what the creep is for
that. You can do tests and you can actually measure it. That’s pretty easy
to compensate for. But if you have many more folds than that, it becomes
much more difficult.

Are you thinking about how to do that?

I am.

That would be very interesting. To imagine paper in digital space, to give
an idea of what might come out in the end. Then you really have to work
your metaphors, I think?

A long time ago, I did a lot of T-shirt printing. Something that I did not
particularly have was a way to visualize your final image on some kind of shirt
and the same thing applies for book binding, too. You might have a strange
texture. It would be nice to be able to visualize that beforehand, as well
as the thickness of the paper that actually controls physical characteristics.
These are things I would like to incorporate somehow but haven’t gotten
around to.

You talked about working with physical input, having touchpads ... Can
you talk a bit more about why you’re interested in this?

You can do a lot of things with just a mouse and a keyboard. But it’s
still very limiting. You have to be sitting there, and you have to just control
those two things. Here’s your whole body, with which you can do amazing
things, but you’re restricted to just moving and clicking and you only have a
single point up on the screen that you have to direct very specifically. It just
seems very limiting. It’s largely an unexplored field, just to accept a wider
variety of inputs to control things. A lot of the multitouch stuff that’s been
done is just gestures for little tiny phones. It’s mainly for browsing, not
necessarily for actual work. That’s something I would like to explore quite a
lot more.

Do you have any fantasies about how these gestures could work for real?

There’s tons of sci fi movies, like ‘Minority Report’, where you wear these
gloves and you can do various things. Even that is still just mainly browsing.
I saw one, it was a research project by this guy at Caltech. He had made
this table and he wore polarized glasses so he could look down at this table
and see a 3D image. And then he had gloves on, and he could sculpt things
right in the air. The computer would keep track of where his hand is going.
Instead of sculpting clay, you’re sculpting this 3D mesh. That seemed quite
impressive to me.

You’re thinking about 3D printers, actually?

It’s something that’s on my mind. I just got something called the
Eggbot. You can hold spheres in this thing and it’s basically a plotter that
can print on spherical surfaces or round surfaces. That’s something I’d like
to explore some more. I’ve made various balls with just my photographic
panoramas glued onto them. But that could be used to trace an outline for
something and then you could go in with pens or paints and add more detail.
If you’re trying to paint on a sphere, just paint and no photograph, laying out
an outline is perhaps the hardest part. If you simplify it, it becomes much
easier to make actual images on spheres. That would be fun to explore.

I’d like to come back to the folding. Following your existing aesthetic, the
stiffness and the angles of the drawing are very beautiful. Is it important to you,
preserving the aesthetic of your programs, the widgets, the lines, the arrows ...

I think the specific widgets, in the end, are not really important to me
at all. It’s more just producing an actual effect. So if there is some better
way, more efficient way, more adaptable way to produce some effect, then it’s
better to just completely abandon what doesn’t work and make something
that’s new, that actually does work. Especially with multitouch stuff, a lot of
old widgets make no more sense. You have to deal with a lot of other kinds
of things, so you need different controls.

It makes sense, but I was thinking about the visual effect. Maybe it’s not
Laidout if it’s done in Qt.

Your visuals and drawings are very aesthetically precise. We’re wondering
about the aesthetics of the program, if it’s something that might change in the
future.

You mean would the quality of the work produced be changed by the
tools?

That’s an interesting question as well. But particularly the interface, it’s
very related to your drawings. There’s a distinct quality. I was wondering
how you feel about that, how the interaction with the program relates to the
drawings themselves.

I think it just comes back to being very visually oriented. If you have to
enter a lot of values in a bunch of slots in a table, that’s not really a visual
way to do it. Especially in my artwork, it’s totally visual. There’s no other
component to it. You draw things on the page and it shows up immediately.
It’s just very visual. Or if you make a sculpture, you start with this chunk
of stuff and you have to transform it in some way and chop off this or sand
that. It’s still all very visual. When you sit down at a computer, computers
are very powerful, but what I want to do is still very visually oriented. The
question then becomes: how do you make an interface that retains the visual
inputs, but that is restricted to the types of inputs computers need to have
to talk to them?

The way someone sets up his workshop says a lot about his work. The way
you made Laidout and how you set up its screen, it’s important to define a spot
in the space of the possible.

What is nice is that you made the visualisation so important. The windows
and the rest of the interface is really simple, the attention is really focused on
what’s happening. It is not like shiny windows with shadows everywhere, you feel
like you are not bothered by the machine.
At the same time, the way you draw the thickness of the line to define the
page is a bit large. For me, these are choices, and I am very impressed because I
never manage to make choices for my own programs. The programs you wrote,
or George Williams, make a strong aesthetic assertion like: This is good. I can’t
do this. I think that is really interesting.

Heavy page borders, that still comes down to the visual thing you end
up with, is still the piece of paper so it is very important to find out where
that page outline actually is. The more obvious it is, the better.

Yes, I think it makes sense. For a while now, I paid more attention than
others in Scribus to these details like the shape of the button, the thickness of the
lines, what pattern you choose for the selection, etcetera. I had a lot of feedback
from users like: I want this, this is too big and at some point you want to please
everybody and you don’t make choices. I don’t think that you are so busy with
what others think.

Are there many other users of the program?

Not that I know of (laughter). I know that there is at least one other
person that actually used it to produce a booklet. So I know that it is
possible for someone other than myself to make things with it. I’ve gotten
a couple of patches from people to not make it crash at various places but
since Laidout is quite small, I can just not pay any attention to criticism.
Partially because there isn’t any, and I have particular motivations to make
it work in a certain way and so it is easier to just go forward.

I think people that want to use your program are probably happy with this
kind of visualisation. Because you wrote it alone, there is also a consistency across
the program. It is not like Scribus, that has parts written by a lot of people so you
can really recognize: this is Craig (Bradney), this is Andreas (Vox), this is Jean
(Ghali), this is myself. There is nothing to follow.

I remember Donald Knuth talking about TeX and he was saying that
the entire program was written from scratch three times before its current
incarnation. I am sympathetic to that style of programming.

Start again.

I think it is a good idea, to start again. To come back to a little detail. Is there a file format for your imposition tool, to store the imposition plan? Is it a text or a binary format?

It is text-based, an indented file format, sort of like Python. I did
not want to use XML, every time I try to use XML there are all these
greater thans and less thans. It is better than binary, but it is still a huge
mess. When everything is indented like a tree, it is very easy to find things.
The only problem is to always input tabs, not spaces. I have two different
imposition types, basically, the flat-folding sheets and the three dimensional
ones. The three dimensional one is a little more complicated.

If you read the file, do you know what you are folding?

Not exactly. It lists what folds exist. If you have a five by five grid, it
will say Fold along this line, over in such and such direction. What it actually
translates to in the end, is not currently stored in the file. Once you are in
Laidout you can export into a PodofoImpose plan file.

Is this file just values, or are there keywords, is it like a text?

I try to make it pretty readable, like trimright or trimleft.

Does it talk about turning pages? This I find beautiful in PodofoImpose
plans, you can almost follow the paper through the hands of the program.
Turn now, flip backwards, turn again. It is an instruction for a dance.

Pretty much.

The text you can read in the PodofoImpose plans was taken from what Ludi
and me did by hand. One of us was folding the paper, and the other was writing
it into the plan. I think a lot of the things we talk about, are putting things from
the real world into the computer. But you are putting things from the computer
into the real world.

Can you describe again these two types of imposition, the first one being
very familiar to us. It must be the most frequently asked question on the
Scribus mailing list: How to do imposition. Even the most popular search
term on the OSP website is ‘Bookletprinting’. But what is the difference with
the plan for a 3D object? A classic imposition plan is also somehow about
turning a flat surface into a three dimensional object?

It is almost translatable. I’m reworking the 3D version to be able to
incorporate the flat folding. It is not quite there yet, the problem is the
connection between the pages. Currently, in the 3D version, you have a
shape that has a definitive form and that controls how things bleed across
the edges. When you have a piece of paper for a normal imposition, the
pages that are next to each other in the physical form are not necessarily
related to each other at all in the actual piece of paper. Right now, the piece
of paper you use for the 3D model is very defined, there is no flexibility.
Give me a few months!

So it is very different actually.

It is a different approach. One person wanted to do flexagons, it is sort
of like origami I guess, but it is not quite as complicated. You take a piece
of paper, cut out a square and another square, and then you can fold it and you end up with a square that is actually made up of four different sections. Then you can take the middle section, and you get another page and you can
keep folding in strange ways and you get different pages. Now the question
becomes: how do you define that page, that is a collection of four different
chunks of paper? I’m working on that!

We talk about the move from 2D to 3D as if these pages are empty. But
you actually project images on them and I keep thinking about maps, transitional objects where physical space is projected on paper which then becomes a
second real space and so on. Are you at all interested in maps?

A little bit. I don’t really want to because it is such a well-explored
field already. Already for many hundreds of years the problem is how do
you represent a globe onto a more or less two dimensional surface. You
have to figure out a way to make globe gores or other ways to project it and then glue it on to a ball for example. There is a lot of work done with that
particular sort of imagery, but I don’t know.

Too many people in the field!

Yes. One thing that might be interesting to do though is when you have
a ball that is a projection surface, then you can do more things, like overlays
onto a map. If you want to simulate earthquakes for example. That would
be entertaining.

And the panoramic images you make, do you use special equipment for
this?

For the first couple that I made, I made this 30-sided polyhedron that
you could mount a camera inside and it sat on a base in a particular way so
you could get thirty chunks of images from a really cheap point and shoot
camera. You do all that, and you have your thirty images and it is extremely
laborious to take all these thirty images and line them up. That is why I
made the 3D portion of Laidout, it was to help me do that in an easier
fashion. Since then I’ve got a fish-eyed lens which simplifies things quite
considerably. Instead of spending ten hours on something, I can do it in ten
minutes. I can take 6 shots, and one shot up, one shot down. In Hugin you
can stitch them all together.

And the kinds of things you photograph? We saw the largest rodent on
earth? How do you pick a spot for your images?

I am not really sure. I wander around and then photograph whatever
stands out. I guess some unusual configuration of architecture frequently
or sometimes a really odd event, or a political protest sometimes. The trick
with panoramas is to find an area where something is happening all over
the globe. Normally, on sunny days, you take a picture and all your image
is blank. As pretty as the blue sky is, there is not a lot going on there
particularly.

Panoramic images are usually spherical or circular. Do you take certain
images with a specific projection surface in mind?

To an extent. I take enough images. Once I have a whole bunch of
images, the task is to select a particular image that goes with a particular
shape. Like cubes there are few lines and it is convenient to line them up to
an actual rectangular space like a room. The tetrahedron made out of cones,
I made one of Mount St. Helens, because I thought it was an interesting
way to put the two cones together. You mentioned 3D printers earlier, and
one thing I would like to do is to extend the panoramic image to be more
like a progression. For most panoramic images, the focal point is a single
point in space. But when you walk along a trail, you might have a series of
photographs all along. I think it could be an interesting work to produce,
some kind of ellipsoidal shape with a panoramic image that flows along the
trail.
Back to Laidout, and keeping with the physical and the digital. Would
there be something like a digital papercut?
Not really. Maybe you can have an Arduino and a knife?
I was more imagining a well placed crash?

In a sense there is. In the imposition view, right now I just have a green bar to tell where the binding is. However, when you do a lot of folds, you usually want to do a staple. But if you are stapling and there is not an actual fold there, then you are screwed.


The following statements were recorded by Urantsetseg
Ulziikhuu (Urana) in 2014. She studied communication in
Istanbul and Leuven and joined Constant for a few months
to document the various working practices at Constant
Variable. Between 2011 and 2014, Variable housed studios
for Artists, Designers, Techno Inventors, Data Activists,
Cyber Feminists, Interactive Geeks, Textile Hackers, Video
Makers, Sound Lovers, Beat Makers and other digital creators who were interested in using F/LOSS software for
their creative experiments.

Urantsetseg Ulziikhuu Why do you think people should use and/or practice Open Source software? What is in it for you?

Claire Williams The knitting machine that I am using normally has a computer from the eighties. Some have scanners that are really old and usually do not work anymore; they became obsolete. If it wasn’t for Open Source, we couldn’t use these technologies anymore. Open Source developers decided that they should do something about these machines and found that it was not that complicated to connect them directly to computers. I think it is a really good example of how Open Source is important, because these machines are no longer produced, industry is not interested in producing them again, and they would have died without further use.
The idea that Open Source is about sharing is also important. If you try to do everything from zero, you just never advance. With Open Source, if somebody does something and you have access to what they do, you can take it further and take it in a different direction.

Michael Murtaugh I haven’t always used Open Source software. It started at the Piet Zwart Institute, where a decision was made by Matthew Fuller and Femke Snelting, who designed the program. They brought together a bunch of people that asked questions about how our tools influence practice, how they are used. And so, part of my process then was teaching in that program, and starting to use Free Software more and more. I should say, I had already been using one particular piece of Free Software, which is FFmpeg, a program that lets you work with video. So there again there was a kind of connection. It was by virtue of the fact that it was one of the only tools available that could take a video, pull out frames, work with lots of different formats; just an amazing tool. So it started with convenience. But the more I learned about the whole approach of Open Source, the more Open Source I started to use. I first switched from Mac OS X to dual booting, and now I am pretty much only using Open Source. Not exclusively, because I occasionally use online platforms that are not free, and some applications.
I am absolutely convinced that when you use these tools, you learn much more about the inner workings of things, about the design decisions that go into a piece of software, so that you actually understand at a very deep level, and this then lets you move between different tools. When tools change, or new things are offered, that deep learning helps you for the future. Whereas if you just focus on the specific particularities of one platform or piece of software, that is a bit fragile, and it will inevitably become obsolete when the software stops being developed or some new way of working comes about.

Eleanor Greenhalgh I use Open Source software every day, as I have Debian on my laptop. I came to it through anarchism – I don’t have a tech background – so it’s a political thing mainly. Not that F/LOSS represents a Utopian model of production by any means! As an artist it fits in with my interest in collaborative production. I think the tools we use should be malleable by the people who use them. Unfortunately, IT education needs to improve quite a lot before that ideal becomes reality.
Politically, I believe in building a culture which is democratic and malleable by its inhabitants, and F/LOSS makes this possible in the realm of software. The benefits as a user are not so great unless you are tech-savvy enough to really make use of that freedom. The software does tend to be more secure and so on, though I think we’re on shaky ground if we try to defend F/LOSS in terms of its benefits to the end user. Using F/LOSS has a learning curve, challenges which I put up with because I believe in it socially. This would probably be a different answer from, say, a sysadmin, someone who could see really concrete benefits of using F/LOSS.
Christoph Haag Actually I came from Open Content and alternative licensing to the technical side of using GNU/Linux. My main motivation right now is the possibility to develop a deeper relationship with my tools. For me it is interesting to create my own tools for my work, rather than to use something predefined, something everyone else uses. With Free Software this is easier: to invent tools. Another important point is that with Free Software and open standards it’s more likely that you will be able to keep track of your work. With proprietary software and formats, you are pretty much dependent on the decisions of a software company. If the company decides that it will not continue an application or format, there is not much you can do about it. This happened to users of FreeHand: when Adobe acquired their competitor Macromedia, they decided to discontinue the development of FreeHand in favour of their own product, Illustrator. You can sign a petition, but if there is no commercial interest, most probably nothing will happen. Let’s see what happens to Flash.

Christina Clar I studied sculpture, which is a very solitary way of working. Already through my studies, this idea of an artist sitting around in a studio somewhere, being by himself, just doing his work by himself, didn’t make sense to me. It is maybe true for certain people, but it is definitely not true for me today, the person I am. I always integrated other people into my work, or do collaborative work. I don’t really care about this ‘it is my work’ or ‘it is your work’; if you do something together, at some point the work exists by itself. For me, that is the greatest moment: it is just independent. It rejoins the authorship question, because I don’t think you can own ideas. You can put them out there and share them. It is organic, like things that can grow and become bigger and bigger, become something else that you couldn’t ever have thought of. It makes the horizon much bigger. It is a different way of working, I guess.
The obvious reason is that it is free, but the sharing philosophy is really at the core of it. I have always thought that when you share things, you do not get things back instantly, but you do get so much in another way, not in the way you expect. You put an idea out, use tools that are open, change them, put them out again. So there is a lot of back and forth of communication. I think that is super important. It is the idea of evolving together, not just by ourselves. I really do believe that we evolve much quicker together than with everybody trying to do things by him- or herself. I think it is a very European idea to get into this individualism, this thinking of doing things by myself, my thing. But I think we can learn a lot from Asia, just ways of doing, because there community is much more important.
John Colenbrander I don’t necessarily develop software or code, because I am not a software developer. But I would say I am involved in an analog way. I do use Open Source software, although I have to say I do not do much with computers. Most of my work is analog. But I do my research on the web. I am a user.
I started to develop an antipathy against large corporations, operating systems and software, and started to look for alternatives. Then you come to the Linux system and Ubuntu, which has a very user-friendly interface. I like the fact that behind the software I am using there is a whole community, who until now work without major financial interests and who develop tools for people like me. So now I am totally into Open Source software, and I try to use it as much as I can. My motivation would be that I want to get off the track of the big corporations who will always kind of lead you into consuming more of their products.

Urantsetseg Ulziikhuu What does Free Culture mean to you? Are you taking part in a ‘Free Culture Movement’?

Michael Murtaugh I’d like to think so, but I realised that it is quite hard. Only now am I seriously trying to really contribute back to projects, and I wouldn’t even say that I am an active contributor to Free Software projects. I am much more a user and part of the system. I am using it in my teaching and my work, but now I try to maybe release software myself in some way, or to create projects that people could actually use. I think it is another dimension of engagement. I haven’t fully realised it yet; so yes to the question whether I am contributing to Free Culture, but I could go a lot deeper.
John Colenbrander I am a big supporter of the idea of Free Culture. I think information should be available to people, especially to those who have little access to information. We live in the West, where we have access to information more or less, with physical libraries and institutions we can go to. Especially in Asia, South America and Africa this is very important. There is a big gap between those who have access to knowledge and those who don’t.
That’s a big field to explore: to be able to open up information to people who have very poor access to it. Maybe they are not even able to read or write, which already is a big handicap. So I think it is a big mission in that sense.

Urantsetseg Ulziikhuu Could Free Culture be seen as an opposition to commercialism?

Michael Murtaugh It is a tricky question. I think no matter what, if you go down the stack, in terms of software and hardware, if you get down to the deepest level of a computer, there is little free CPU design. So I think it is really important to be able to work in these kinds of hybrid spaces, to be aware of how free Free is, and to always look for alternatives when they are available. But to a certain degree, I think it is really hard to go for a total absolute. Or it is a decision: you can go absolute, but that may mean that you are really isolated from other communities. So it’s always a bit of a balancing act: how independent can you be, how independent do you want to be, how big does your audience or your community need to be. Those are a lot of different decisions. Certainly, when I am working in the context of an art school with design practitioners, it is not always possible to go completely independent, and there are lots of implications in terms of how you work and whom you can work with, and the printers you can work with. So it is always a little bit of a trade-off, but it is important to understand what the decisions are.


Eleanor Greenhalgh I think the idea of a Free Culture movement is very exciting and important. It has always gone on, but stating it in copyright-aware terms issues an important challenge to the ‘all rights reserved’ status quo. At the same time I think it has limitations, at least in its current form. I’m not sure that rich white kids playing with their laptops is necessarily a radical act. The idea and the intention are very powerful though, because it does have the potential to challenge the way that power – in the form of ‘intellectual property’ – is distributed.
Christoph Haag Copyright has become much more enforced over the last years than ever before. In a way, culture is being absorbed by companies trying to make money out of it. Free Culture developed as a counter-movement against this. When it comes to mainstream culture, you are most often reduced to a consumer of culture. Free Culture then is an obvious reaction: the idea of a culture where you have the possibility to engage again, to become active and create your own version, not just to consume content.

Urantsetseg Ulziikhuu How could Open Source software be economically sustainable, in a way that is beneficial for both developers/creators and users?

Eleanor Greenhalgh That’s a good question! A very hard one. I’m not involved enough in that community to really comment on its economic future. But it does, to me, highlight what is missing from the analysis in Free Culture discourse: the economic reality. It depends on where they (developers) work. A lot of them are employed by companies, so they get a salary. Others do it as a hobby. I’d be interested to get accurate data on what percentage of F/LOSS developers are getting paid, etc. In the absence of that data, I think it’s fair to say it is an unsolved problem. If we think that developers ‘should’ be compensated for their work, then we need to talk about capitalism. Or at least, about statutory funding models.


Michael Murtaugh It is interesting that you used both ‘sustainability’ and ‘economic viability’, and I think those are two things very often in opposition. I am doing a project now about publishing workflows and future electronic publishing forums, and that was one thing we looked at. There were several solutions on the market. One was a platform called ‘Editorial’, a very nice website that you could use to write texts collaboratively in Markdown, and which could then produce books in ePub format. After about six months of running, it closed down, as many platforms do. Interestingly, their sign-off message said: You have a month to get your stuff out of the website, and sorry, we have decided not to Open Source the project. As much as we loved making it, it was just too much work for us to keep it running. In terms of real sustainability, Open Source would of course have allowed them to work with anybody, even if it is just a hobby.

Claire Williams It is very much related to the passion of doing these things. Embroidery machines come with copyrighted software installed. The software itself is very expensive, around 1000, and the version for professionals costs 6000 to buy. Embroidery machines themselves are very expensive too. These software packages are very tight and closed; you even need a special USB key for patterns. And there are these two guys, software developers, who are trying to come up with a format that all embroidery machines could read. They take their time to do this, and I think in the end, if the project works out, they will probably get attention and probably get paid as well. Because instead of giving 1000 for copyrighted software, maybe you would be happy to give 50 to these people.


Date: Thu, 12 Sep 2013 15:50:25 +0200
From: FS
To: OSP

Dear OSP,

For a long time I have wanted to organise a conversation with you
about the place and meaning of distributed version control in OSP
design work. First of all because after three years of working with
Git intensely, it is a good moment to take stock. It seems that many
OSP methods, ideas and politics converge around it and a conversation discussing OSP practice linked to this concrete (digital) object
could produce an interesting document; some kind of update on what
OSP has been up to over the last three years and maybe will be in
the future. Second: Our last year in Variable has begun. Under the
header Etat des Lieux, Constant started gathering reflections and documents to archive this three-year working period. One of the things
I would like to talk about is the parallels and differences between a
physical studio space and a distributed workflow. And of course I am
personally interested in the idea of ‘versions’ linked to digital collaboration. This connects to old projects and ideas and is sparked again
by new ones revived through the Libre Graphics Research Unit and
of course Relearn.
I hope you are also interested in this, and able to make time for it. I
would imagine a more or less structured session of around two hours
with at least four of you participating, and I will prepare questions
(and cake).
Speak soon!
xF


How do you usually explain Git to design students?
Before using Git, I would work on a document. Let’s say a layout, and to
keep a trace of the different versions of the layout, I would append _01, _02
to the files. That’s in a way already versioning. What Git does is make that process somehow transparent, in the sense that it takes care of it for you. Or better, you have to make it take care of it for you. So instead of having all files visible in your working directory, you put them in a database, so you can go back to them later on. And then you have some commands to manipulate this history: to show, to comment, to revert to specific versions.
More than versioning your own files, it is a tool to synchronize your work
with others. It allows you to work on the same projects together, to drive
parallel projects.
It really is a tool to make collaboration easier. It allows you to see differences. When somebody proposes a new version of a file to you, it highlights what has changed. Of course this mainly works on the level of programming code.
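To make this concrete, here is a minimal sketch of that workflow in standard Git commands (the file name is just an example):

    # start tracking a layout project
    git init
    git add poster.sla
    git commit -m "first version of the poster"

    # record a new version instead of saving poster_02.sla next to it
    git commit -am "tried a narrower column grid"

    # browse and compare the recorded history
    git log --oneline
    git diff HEAD~1                      # what changed since the previous version
    git checkout HEAD~1 -- poster.sla    # bring back the earlier file

The point is the same as appending _01, _02, except that the old versions live in the database (the .git directory) rather than cluttering the working directory.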
Did you have any experience with Git before working with OSP?
Well, not long before I joined OSP, we had a little introduction to Mercurial,
another versioning software, at school in 2009. Shortly after I switched to
Git. I was working with someone else who was working with Git, and it was
so much better.
Alex was interested in using Git to make Brainch. 1 We wanted to make a web application to fork texts that are not code. That was our first use of Git.
I met OSP through Git in a way. An intern taught me the program and he
said: Eric, once you get it, you’ll get so excited! We were in the cafeteria of the art school. I thought it was really special, like someone was letting me in on a secret, and we were the only ones in the art school who knew about it. He taught me how to push and pull. I saw quickly how Git really is modeled on how culture works. And so I felt it was a really interesting, promising system. And then I talked about it at the Libre Graphics Meeting in 2010, and so I met OSP.
1 A distributed text editing platform based on Django and Git: http://code.dyne.org/brainch


I started to work on collaborative, graphic design related stuff when I was developing a font manager. I have been connected to two versioning systems and mainly used SVN. Git came well after; it was really connected to web culture, compared to Subversion, which is more software related.
What does it mean that Git is referred to as ‘distributed versioning’?
The first command you learn in Git is the clone command. It means that you make a copy of a project that is somehow autonomous. Contrary to Subversion, you don’t have this server-client architecture: every repository is in itself a potential server and client, meaning you can keep track of your changes offline.
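A sketch of what that autonomy looks like in practice (the repository URL is made up):

    # clone: a full, self-contained copy of the project, history included
    git clone git://example.org/osp/foundry.git
    cd foundry

    # record versions offline; no central server is involved
    git commit -am "adjusted the x-height"

    # only when exchanging work do two copies need to talk to each other
    git pull origin master
    git push origin master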
At some point, you decided to use ‘distributed versioning’ rather than a
centralized system such as Subversion. I remember there was quite some
discussion ...
I was not hard to convince. I had no experience with other versioning systems; I was just excited by the experience others had had with this new tool. In fact there was this discussion, but I don’t remember exactly the arguments between SVN and Git. From what I remember, Git was easier.
The discussion was not really about the nature of this tool. It was just: who would keep Git running for OSP? Because the problem is not the system in itself, it’s the hosting platform. We didn’t find any hosted platform which fitted our taste. The question was: do we set up our own server, and who is going to take care of it. At this time Alex, Steph and Ivan were quite excited about working with Git. I would have been excited to use Subversion instead, but I didn’t have the time to take care of setting it up and everything.
You decided not to use a hosted platform such as Gitorious or GitHub?
I guess we already had our own server and were hosting our own projects. But Pierre, you used online platforms to share code?
When I started developing my own projects it was kind of the end of
SourceForge. 2 I was looking for a tool more in the Free Software tradition.
2 SourceForge is a web-based source code repository. It was the first platform to offer this service for free to Open Source projects.


There was Gna, and even though the platform was crashing all the time, I felt it was in line with this purpose.
If I remember correctly, when we decided between Git and Subversion,
Pierre, you were also not really for it because of the personality of its main
developer, Linus Torvalds. I believe it was the community aspect of Git that
bothered you.

Well Git has been written to help Linus Torvalds receive patches for the
Linux kernel; it is not aimed at collaborative writing. It was more about
making it convenient for Linus. And I didn’t see a point in making my
practice convenient for Linus. I was already using Subversion for a while and it was really working great at providing an environment for working together with a lot of people and checking out different versions. Anything you expect from a versioning system was there; all elements for collaborative work were there. I didn’t see the point of changing to something that didn’t feel as comfortable, culturally. This question of checking out different directories of repositories was really important to me. At that time (Git has evolved a lot) it was not possible to do that. There were other technical aspects I was quite keen on. I didn’t see why I should go for Git, which was not offering the same amount of good stuff.

But then there is this aspect of distribution, and that’s not in Subversion. If some day somebody wants a complete copy of an OSP project, including all its history, they would need to ask us, or we would have to do something complicated to give it to them.

I was not really interested in this ‘spreading the whole repository’. I was
more concerned about working together on a specific project.

It feels like your habit of keeping things online has shifted. From making
an effort afterwards to something that happens naturally, as an integral
part of your practice.

It happened progressively. There is this idea that the Git repository is linked
to the website, which came after. The logic is to keep it all together and
linked, online and alive.

That’s not really true ... it was the dream we had: once we have Git, we share our files while working on them, and we don’t need to make this effort afterwards of cleaning up the sources to make them shareable. But it is not true. If we do not put in the effort to make it shareable, it remains completely opaque. It still requires an investment of time. I think it takes about 10% of a project’s time to make it readable from the outside afterwards.

Now, with the connection to our public website, you’re more conscious that all the files we use are directly published. Before, we had a Git web application that allowed someone to just browse repositories, but it was not visual, so it was hard to get into it. The Cosic project is a good example: every time I want to show the project to someone, I feel lost. There are so many files, and you really don’t know which ones to open.

Maybe, Eric, you can talk about ‘Visual Culture’?

Basically ‘Visual Culture’ was born out of this dream I talked about just now. The dream turns out not to be true, but it shapes our practice and helps us think about licensing and structuring and all those interesting questions. I was browsing through this Git interface that Stéphanie described, and thought it was a missed opportunity: here is this graphic design studio that publishes all its work while working, which has all kinds of consequences; but if you can’t see it, if you don’t know anything about computer programming, you have no clue what is going on. And also because it’s completely textual: a .sla file, for example, if you don’t know about Open Source, if you don’t know about Scribus, could as well be salad. It is clear that Git was made for text. The idea was to show all the information that is already there in a visual form. Because an image is an image, and type is a typeface, and it changes in a visual way. I thought it made sense for us to do. We didn’t have anyone writing posts on our blog, but we had all this activity in the Git repository.
It started to give a schematic view of our practice, and it renders the current activity visible; very exciting. But it is also very frustrating, because we have lots of ideas and very little time to implement them. So the ‘Visual Culture’ project is terribly behind compared to our imagination.

Take for example the foundry. Or the future potential of the ‘Iceberg’ folders. Or our blog, which is sorely missed sometimes. We have ways to fill all these functions with ‘Visual Culture’, but still no time to do it!
In a way you follow established protocols on how Open Source code is usually published: there should be a license, a README file ... But OSP also decided to add a special folder, which you called ‘Iceberg’. Is this a trick to make your repository more visual?

Yeah, because even if something is straightforward to visualise, it helps if you can make a small render of it. But most projects are an accumulation of files, like a webpage. The idea is that in the ‘Iceberg’ folder we can put a screenshot, or other images ...

We wanted the files that are visible to be not only the last files added. We wanted to be able to show the process. We didn’t want it to be a portfolio that just shows the final output; we wanted to show errors and try-outs. I think it’s not only related to Git, but also to visual layout. When you want to share software, we say release early, release often, which is really nice. But it’s not enough to just release; you need to make it accessible so other people can understand what they are reading. It’s like commenting your code, making it ... I don’t want to say ‘clean’ ... legible, using variable names that people can understand. Because sometimes, when we code just for ourselves, I use French variable names so that I’m sure they are not reserved words in the programming language. But then it is not accessible to many people. So stuff like that.
You have decided to use a tool that’s deeply embedded in the world of F/LOSS. So I’ve always seen your choice for Git both as a pragmatic choice and as a fan choice?

Like as fans of the world of Open Source?

Yes. By using this tool you align yourself, as designers, with people that
develop software.

I’m not sure. I join Pierre in his feelings towards Linus Torvalds, even though I have less anger at him. But let’s say he is not someone I especially like in his way of thinking. What I like very much about Git is the distributed aspect. With it you can collaborate without being aligned. While I think Linus Torvalds’ idea is very liberal and in a way a bit sad, this idea that you can collaborate without being aligned, without going through a permission system, is interesting. With Scribus for example: I never collaborated on it, it’s such a pain to go through the process. It’s good and bad. I like the idea of a community which makes decisions together; at the same time it is so hard to enter this community that you just don’t want to, and give up.
How does it feel, as a group of designer-developers, to adopt workflows,
ways of working, and also a vocabulary that comes from software development?

On the one hand it’s maybe a fan act. We like this movement of F/LOSS development, which is not always given the importance it has in the cultural world. It’s like saying: hey, I find you culturally relevant and important. But there’s another side to it. It’s not just a distant appropriation; it’s also the fact that software development is such a pervasive force. It is shaping the world so much that I want to take part in defining what these procedures are, what these ways of sharing and of doing things are. Because I also feel that if I, as a cultural actor from another field, take and appropriate these mechanisms and ways of doing, I will be able to influence what they are. So there is the fan act, and there’s also the act of trying to be aware of all the logic contained in these actions.

And from another side, in the world of graphic design it is also a way to affirm that we are different. That we are really engaged in doing this, and not only in designing nice pictures. That we really develop our own tools.

It is a way to say: hey, we’re not the kind of politically engaged designers who have a different political goal every half month and then do a project about it. It really impacts our ecosystem; we’re serious about it.

It’s true that, before we started to use Git, people asked: So you’re called
Open Source Publishing, but where are your sources? For some projects you
could download a .zip file but it was always a lot of trouble, because you needed
to do it afterwards, while you were already doing other projects.

Collaboration started to become a prominent part of the work; working
together on a project. Rather than, oh you do that and when you are finished
you send the file over and I will continue. It’s really about working together on
a project. Even if you work together in the same space, if you don’t have a
system to share files, it’s a pain in the ass.
After using it for a few years, would you say there are parts of Git where you do not feel at home?

In Git, and in versioning systems in general, there is that feeling that the latest version is the best. There is an idea of linearity: even though you can have branches, you still have an idea of linearity in the process.

Yes, that’s true. We did this workshop Please computer, let me design; the first time was in a French school, in French, and the second time for a more European audience, in English. We made a branch, but then you have the default branch – the English one – and you only see that one, while they are actually on the same level.

So the convention is to always show the main branch, the ‘master’?

In a way there is no real requirement in Git to have a branch called ‘master’. You can have a branch called ‘English’ and a branch called ‘French’. But it’s true that in all the visualization software we know (GitHub or Gitorious are ways to visualize the content of a Git repository), you need to specify which branch is shown by default. And if you don’t define it, it is ‘master’.
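On the command line that could look as follows, a sketch using the branch names from the workshop example:

    # two parallel versions of the same project
    git branch English
    git branch French

    git checkout French      # work on the French version
    git checkout English     # switch to the English one

    # 'master' is only a convention; on the server, the default branch
    # can be changed by pointing HEAD elsewhere:
    git symbolic-ref HEAD refs/heads/English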
For certain types of things, such as code and text, it works really well; for others, like when you are making a visual design, it’s still very hard to compare differences. If I make a poster, for example, I still make several files instead of branches, so I can see them together at once, without having to check out another branch. Even with websites, if I want to try a layout, I simply make a copy of the HTML and CSS, because I want to be able to test and compare them. It might be possible with branches, it’s just too complicated. Maybe the tools to visualize it are not there ... But it’s still easier to make copies and pick the one you like.

It’s quite heavy to go back to another version. Also working collaboratively is actually quite heavy. For example in workshops, or the ‘Balsamine’ project ... we were working together on the same files at the same time, and if you want to share your file with Git you first have to add your file, then commit, pull and push, which is four commands. And every time you commit you have to write a message. So it is quite long. While we were working on the .css for ‘Visual Culture’, we tried working in Etherpad, and one of us was copying the whole text file and committing.
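For reference, the four commands in question (file name and message are hypothetical):

    git add style.css
    git commit -m "narrower margins on the front page"
    git pull
    git push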

So you centralized in the end.

It’s more about third-party visual software. Take Etherpad, for example: it is a versioning system in itself. You could hook Etherpad into Git, and each letter you type could be a commit. It would make nonsense messages, but at the same time it would speed up the process of working together. We can imagine the same thing with Git (or any other collaborative working system) integrated into Inkscape: you draw, and every time you save ... At some point Subversion also worked as a WebDAV server, which meant any application could plug into it. Each time you saved your file, it would make a commit on the server. It worked pretty well for bringing new people into this system, because it was just exactly the same as saving: OpenOffice was an open WebDAV client, so it was possible to tell OpenOffice that the place where you save is a disk. It was just like saving, and it was committing.
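A rough sketch of that commit-on-every-save behaviour, improvised here with inotifywait from the inotify-tools package (directory and message are just examples, not how OSP actually worked):

    # watch a project directory and commit every time a file is saved
    while inotifywait -r -e close_write project/; do
        git -C project/ add -A
        git -C project/ commit -m "autosave $(date --iso-8601=seconds)"
    done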

I really agree. From the experience of working on a typeface together in Git with students, it was really painful. That’s because you are trying to version something that generates source code: a type design program generates source code. You are not writing it by hand, and if you then have two versions from the type design program, it already starts to create conflicts that are quite hard. It’s interesting to bring the two models together. Git is just an architecture for how to store your versions, so things could hook into it.

For example with Etherpad: I looked into its API the other day, and thinking about working together with Git, I’m not sure if having every Etherpad revision directly mapped to a Git revision would make sense if you work on a project ... but at the same time you could have every saved revision mapped to a Git revision. It’s clear Git is made for an asynchronous collaboration process. So there is Linus in his office, and there are patches coming in from different people. He also has the time to figure out which patch needs to go where. This doesn’t really work for Etherpad-style direct collaboration. For me it’s cool to think about how you could make these things work together. Now I’m working on this collaborative font editor, which does that in some sort of database. How would that work? It would not work if every revision went into Git. I was thinking you could save, or sort of commit, and that would put it in a Git repository, which you can pull and push. But if you want to have four people working together and they all start pulling, that doesn’t work with Git.

I never really tried Sparkleshare; maybe that could work? Sparkleshare makes a commit every time you save a document. In a way it works more like Dropbox: every time you save, it is synchronized with the server directly.

So you need to find a balance between the very conscious commits you
make with Git and the fluidness of Etherpad, where the granularity is
much finer. Sparkleshare would be in between?
I think it would be interesting to have this kind of Sparkleshare behaviour, but
only when you want to work synchronously.

So you could switch in and out of different modes?

Usually Sparkleshare is used by people who don’t want to get too involved in Git and its commands. So it is really transparent: I send my files, they are synchronized. I think it was really made for this kind of Dropbox behaviour. I think it would make sense only when you want to keep your hands on the process: to have this available only when you decide, OK, I go synchronous. Like you say, if you have a commit for every letter it doesn’t make sense.
It makes sense. A lot of what relates to versions in software development is meant to track bugs, to track programming choices.

I don’t know about you ... but in the way I have interacted with our Git repository since we started to work with it ... I almost never went into the history of a project. It just really never happened that I went back into this history to check out an old version.

I do!

A neat feature of Git is the bisect command. To find where it broke.

You can start from an old revision that you know works and then, checking out revisions step by step, track down the bug.
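For those who have not used it, a minimal bisect session looks roughly like this (the revision name is a placeholder):

    git bisect start
    git bisect bad               # the current version is broken
    git bisect good v1.0         # this older revision still worked
    # Git checks out a revision halfway in between; test it, then
    # answer with 'git bisect good' or 'git bisect bad' until Git
    # names the commit that introduced the breakage
    git bisect reset             # return to where you started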

Can you give a concrete example where that would be useful; I mean, not in code.

Not code, okay. That I don’t know.

In visual design, I think it never happens. It happens with websites, with tools, because there is a bug and you need to go back to see where it broke. But for a visual design I’m not sure.

It’s true, also because, as you said before, with .svg files or .sla files we often have several duplicates. I sometimes check those out. But it’s true that it is often related to merge problems. Or to a situation where you don’t know what to do, so you just check out, to go back to an earlier version.
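Going back to an earlier version of a single file, as described here, is a one-liner (file name hypothetical):

    # restore poster.sla as it was two commits ago,
    # without moving the rest of the working copy
    git checkout HEAD~2 -- poster.sla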

It would be interesting for me to really look at our use of Git and map some kind of tool on top of the versioning system. Because it’s not even versioning; it is also a collaborative workflow, and we should see what we mean by that. Just use some features of Git, or whatever provides the services we need, and really see what exactly we work with. And this thing where we want to see many versions at the same time, to compare them, seems important. It’s the kind of thing that could be built on top of a versioning system.
It is of course a bit strange that if you want to see different versions next
to each other you have to go back in time. It’s a kind of paradox, no?

But then you can’t see them at the same time.

Exactly, no.


Because there is no way to visualize your trip back in history.

Well, I think something you could all have an interesting discussion about is the question of exchange. Because now we are talking about the individual. We’ve talked about how it’s easier to contribute to Git-based projects, but to be accepted into an existing repository someone needs to say okay, I want it, which is like SVN. What is easier is to publish your whole Git repository online, with the only difference from the first version being that you added your change; but it means that in proposing a change you are already making a new cultural artifact. You are already putting a new something there. I find this a really fascinating phenomenon, because it has all kinds of interesting consequences. Of course we can look at it as the cold and liberal way of doing things, because the individual is at the center of this, because you are on your own. It’s your thing in the first place, and then you can see if it maybe becomes someone else’s thing too. So that has all kinds of coldness about it, and it leads to many abandoned projects, and maybe to a decrease of social activity around specific projects. But there is also an interesting side to it, where it actually resembles quite well how culture works in the first place. Because culture deals with a lot of redundancy, in the sense that we can deal with many kinds of very similar things. We can have Akzidenz Grotesk, Helvetica and Akkurat all at the same time, and they have some kind of weird cultural lineage thing going on between them.

Are there any pull requests for OSP?
We did have one.

Eric is right to ask about collaboration with others, not only how to work
internally in a group.

That’s why GitHub is really useful: it has the architecture to exchange changes. Because we have our own server, it’s quite private; it’s really hard to allow anyone to contribute to fonts, for example. So we got e-mails: Hey, here’s a new version of the font, I did some glyphs, but also changed the shape of the A. There we have two different things: new glyphs is one thing, we could say we take any new glyph. But changing the A, how do you deal with this? There’s a technical problem, well, not technical ...

An architectural problem?

Yeah, we won’t add everyone’s SSH key to the server, because it would be endless to maintain. But at the same time, how do you accept changes? And then, who decides which changes will be accepted?

For the foundry we decided to have a maintainer for each font project.

It’s the kind of thing we didn’t do well. We have this kind of administrative way of managing the server. Well, it’s a lot of small elements that all together make it difficult. Let’s say at some point we start to think we need to manage our repositories with something a bit more sophisticated than Gitolite. Then we could install something like Gitorious. We didn’t do it, but we could imagine rebuilding a kind of ecosystem where people have their own repositories and do anything we can imagine on this kind of hosting service. Gitorious is Free Software, so you can deploy it on your own server. But it is not trivial to do.
Can you explain the difference between Gitorious and GitHub?

Gitorious is, first of all, a free version: not a free version of Git, but of GitHub. One is free and one is not.

Meaning you cannot install GitHub on your own server.

Git is a storage back-end, and Gitorious and GitHub are web applications to interact with repositories and to manage them. GitHub is a program, and a company deploying that program to offer both a commercial service and a free-of-charge service. They have a lot of success with the free Git service, in a sense. And they make a lot of money providing the same service, exactly the same, except that you can have private space on the server. It’s quite convenient, because the tools are really good for managing repositories. As for Gitorious, I don’t exactly know what their business model is; they made all the source code that runs the platform Free Software. It means they offer somewhat less fancy features.

A bit less shiny?

Yeah, because they have less success and so less money to dedicate to the development of the platform. But still, it’s an easy-to-grasp web interface for managing repositories, which is quite cool. We could do that: install this kind of interface to allow more people to have their repositories on the OSP server. But here comes the difficult thing: we would need more resources to run a server that hosts a lot of repositories. Even now we sometimes have problems with the server, because it’s not a large server. Nobody at OSP is really a sysadmin, or has time to install and set up everything nicely, etc. And we would also have to work on the Gitorious web application to make it a bit more in line with our visual universe. Because now it’s really the kind of thing we cannot associate ourselves with.

Do you think ‘Visual Culture’ can leverage some of the success of GitHub?
People seem to understand and like working this way.

Well, it depends. We also meet a lot of people who come to GitHub and say: I don’t understand, I don’t understand anything of this! Because of its huge success, GitHub can put extra effort into visualization, and they have started to run some small projects. So they can do more than ‘Visual Culture’ can do.
And is this code available?

Some of their projects are Open Source.

Some of their projects are free. Even if we have some things going on in ‘Visual Culture’, we don’t have enough manpower to finalize the project. The GitHub interface is really specific, really oriented; they manage to do things like showing fonts and showing pictures, but I don’t think they can display .pdf. ‘Visual Culture’ is really a good direction, but it can become obsolete because we don’t have enough resources to work on it. GitHub is starting to cover a lot of needs, but always in their way of doing things, so that’s a problem.

I’m very surprised ... the quality of Git is that it isn’t centralized, and nowadays everything is becoming centralized on GitHub. I’m also wondering whether ... I don’t think we should start to host other people’s repositories; or maybe we should, I don’t know.

Yeah, I think we should.

You do or you don’t want to become a hosting platform?

No. What I think is nice about GitHub is of course the social aspect around sharing code: they provide comments, which is an extra layer on top of Git. I’m having fantasies about another group like OSP who would use Git and have their own server, instead of this big centralized system, but would still have ways to interact with us. But I don’t know how.
It would be interesting if it’s distributed without being disconnected.

If it were really easy to set up Git, or a versioning server, that would be fantastic. But I can remember, as a software developer, when I started to look for somewhere to host my code, setting up my own server was out of the question. Because of not having time: no time to maintain, no time to deploy, etc. At some point we need hosting platforms for ourselves. We have almost enough to run our own platform. But think of all the people who can’t afford it.
But in a way you are already hosting other people’s projects. Because there are quite a few repositories for workshops that do not actually belong to you.

Yeah, but we moved some of them to GitHub just to get rid of the pain of
maintaining these repositories.
We wanted the students to be independent. To really have them manage
their own projects.

GitHub is easier to manage than our own repository, which is still based on a lot of files.

For me, if we ever make this hosting platform, it should be something other than our own website. Because, like you say, it’s kind of centralized in the way we use it now. It’s all on the Constant server.

Not anymore?

No, the Git repositories are still on the Constant server.

Ah, the Git repositories still are. But they are synced with the OSP server. Still, I can imagine it would be really nice to have many instances of ‘Visual Culture’ for groups of people running their own repositories.
It feels a bit like early days of blogging.

It would be really, really nice for us to allow other people to use our services. I was also thinking of this because of the branching stuff, for two reasons. First, to make it easier for people to take advantage of our repository: branching our repository would be one click, just like on Gitorious or GitHub. So I have an account, and I like this project and want to change something; I just click on it, it is branched into my own account and I can start to work on it. That’s it, and it would be really convenient for people who would like to work with our font files, etc. And once we have all these things running on our server, we can think of a lot of ideas to promote our own dynamic on top of versioning systems. But now we’re really a bit stuck, because we don’t have the tools we would like to have. With the repositories as they are, it’s something really rigid.
It is interesting to see the limits of what actually can happen. But it is
still better than the usual (In)design practices?

We would like to test GitMX. We don’t know much about it, but we would like to use it for the pictures in high resolution, and for .pdfs. We thought about it when we were in Seoul, because we were putting pictures in a gallery, and we were like: ah, this gallery. We were wondering whether, if GitMX works well, it could be separated into different types of content. And then we could branch them into websites, and perhaps into pictures of the finalized work. In the end we have the ‘Iceberg’ with a lot of ‘in-progress’ pictures, but we don’t have any portfolio or book. Again because we don’t care much about this, but in the end we feel we miss it a bit.

A narration ...

... to have something to present. Each time we prepare a presentation, we need to start again to retrieve the tools and files, and to choose what we want to send for the exhibition.

It’s really important because at some point, working with Git, I can remember telling people ...
Don’t push images!
I remember.

The repository is there to share the resources. And that’s really where it
shines. And don’t try to put all your active files in it. At some point we miss
this space to share those files.
But an image can be a recipe. And code can be an artifact. For me the
difference is not so obvious.

It is not always so clear. Sometimes the cut-off point is decided by the weight of the file: if it is too heavy, we avoid Git. Another rule is: if it is easy to compile, leave it out of Git. Sometimes the logic is reversed: if we need something to be online, even if it is not a source but we simply need to share it, we put it in the Git. Some commits are also errors. The distinction has been quite organic until now, in my experience. The closer the practice gets to code, the cleaner the versioning process is.

There is also a kind of performative part of the repository. Where a
commit counts as a proof of something ...
When I presented OSP’s website, we had some remarks like: ah, it’s good, we can see what everybody has done, who has worked.

But strangely, so far there have not been many reactions from partners or clients regarding the fact that all the projects can be followed at any stage, even budget-wise. Mostly, I think, because they do not really understand how it works. And sometimes, it’s true, it came to my mind: should we really show our website to clients? Because they can check whether we are working hard, or whether this week we didn’t do shit ... I think it’s really based on trust and the type of collaboration you want with your client: actual collaboration, not a hierarchical relationship. So I think in the end it’s something we have to work on: building a healthy relationship, where you show the process but it’s not about control. The meritocracy of commits is well known, I think, on platforms like GitHub. I don’t think this is really considered in OSP at all, actually.

It supports a kind of self time-tracking that is nuanced and enriched by e-mail, calendar events, writing in Etherpads. It gives a feeling of where the activity is, without following it too closely. A feeling rather than surveillance or meritocracy.

I know that Eric ... because he doesn’t really keep track of his working hours, he made a script that looks into his commit messages to know when he worked on a project. Which is not always truthful, because sometimes you make a commit on files that you changed last week but forgot to commit. And a commit is a text message at a certain time, so it doesn’t tell you how much time you spent on the file.
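A minimal sketch of such a script, assuming it simply filters the log by author (the name is a placeholder):

    # list the days on which a given author committed to this project
    git log --author="Eric" --date=short --pretty=format:'%ad' | sort -u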

Although in the way you decided to visualize the commits, there is a sense of duration between one commit and the one before, so you have a sense of how much time passed in between. Are there ways you sometimes trick the system, to make things visible that might otherwise go missing?
In the messages we sometimes talk about things we tried that didn’t work. But it’s quite rare.

I kind of regret that I don’t write so much on the commits. At the beginning
when we decided to publish the messages on the homepage we talked about
this theater dialogue and I was really excited. But in the end I see that I
don’t write as much as I would like.
I think it’s really a question of the third-party programs we use. Our commit messages are like a dialogue on the website, but when you write a commit message you’re not at all in that interface. So you don’t answer to something. If we had the same kind of interface we have on the website, you would realize you can answer the previous commit message. You would have this sort of narrative thread, and it would work. We are in the middle: we have this feeling of a dialogue on one side, but when you work, you’re not on the website checking the history. Basically, it would be about making things really in line with what we want to achieve.
I commit just when I need to share the files with someone else. So I wait
until the last moment.

To push you mean?

No, to commit. And then I’ve lost track of what I’ve done and then I just
write ...

But it would be interesting to look at the different speeds of collaboration. Each might need another type of commit message.

But it’s true, I must admit that when I start working on a project I don’t read the last messages. And so you lose this dialogue, as you said. Because sometimes I say: Ludi is going to work on it. So I say: OK Ludi, it’s your turn now. But the thing is, if she said that to me I would not know, because I don’t read the commit messages.

I suppose that is something really missing from the Git client. When you pull, you update your working copy to synchronize with the server, and it just says which files changed and how many changes there were. But it doesn’t give you the story.

That’s what is missing when you pull. Instead of just showing which files have changed, it should show all the log messages since the last time you pulled.
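Git can in fact be coaxed into this; one common idiom, though not built into pull itself, is:

    git pull
    # show the messages of everything that just came in
    git log --reverse ORIG_HEAD..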

Your earlier point, about recipes versus artifacts: I have something to add that I forgot. I would reverse the question: what the versioning system considers to be a recipe, is a recipe. I mean, in this context ‘a recipe’ is something that works well within the versioning system, such as the description of your process to get somewhere. And I can imagine the Git community is trying to achieve exactly that: make it something that you can share easily.

But we had a bit of this discussion with Alex, for a reader we made. It is going to be published, so we have the website with all the texts, and the texts are all under a free license. But the publisher doesn’t want us to put the .pdfs online. I’m quite okay with that, because for me it’s a condition that we put the sources online. If you really want the .pdf, you can clone the repository and make it yourself in Scribus. It’s an example of not putting up the .pdf while giving you everything you need to make the .pdf yourself. For me it’s quite interesting to say: our sources are there. You can buy the book, but if you want the .pdf you have to make a small effort to generate it, and then you can distribute it freely. Of course the easiest way would be to offer the .pdf, but in this case we can’t, because the publisher doesn’t want us to.

But that distinction somehow undervalues the fact that layout, for example, is not just an executed recipe, no? I mean, there is this kind of grey area in design that is ... maybe not the final result, but also not a sort of executable code.
We see it with ‘Visual Culture’, for instance, because Git doesn’t make it easy to work with binaries. And the point of ‘Visual Culture’ is to make .jpegs visible, and all the kinds of graphical files we work with. So we don’t know how to decide whether we should put, for instance, .pdfs in the Git repository online. On the one hand it makes the repository less manageable with Git; on the other hand we want to make things visible on the website.
But it’s also storage space. If you want people to clone it, you don’t want an 8 gigabyte repository.
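One partial remedy, for what it is worth, is a shallow clone, which skips the weight of the history (repository URL made up):

    # fetch only the latest state, not the gigabytes of history
    git clone --depth 1 git://example.org/osp/reader.git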

I don’t know, because it’s not really what OSP is for, but you can imagine it. Just like Dropbox was made to easily share large files, or files in general, we can imagine that another company will set something up especially for graphic designers or the graphic industry, the way GitHub did something for the development industry. They will come up with solutions for this very problem.
I just want to say that I think, because we’re not a developer group, at the start the commit messages were a space where you would throw all your anger and frustration. We first published a Git log in the Balsamine program because we saw that. This was the first program we designed with ConTeXt, so we were manipulating code for layout. The commit messages were all really funny, because Pierre and Ludi come from a non-coding world, and it was really inspiring, and we decided to put it in the publication. Then we checked; Ludi said two kind of bad things about the client, but it was okay. Now I think we are more aware that it’s public; we pay attention not to say things we don’t mean ...

It’s not such an exciting space anymore as in the first half year?

It's often very formal and not very exciting, I think. But sometimes I put quite some effort into just making clear what I'm trying to share.

And there are also commits that you make for yourself. Because sometimes, even
if you work on a project alone, you still do a Git project to keep track, to have a
history to come back to. Then you write to yourself. I think it’s also something
else. I’ve never tried it.

It’s a lot to ask in a way, to write about what you are doing while you are
doing it.

I think we should pay more attention to the first commit of a project, and to the last, because it's really important to start the story and to end it. I speak about this 'end' because I feel overwhelmed by all these never-ended projects; I'm quite tired of it. I would like us to find a good way to archive projects which are not alive any more. Because the list of folders is still growing, and in a way that is okay, but a lot of projects are not active.

But it’s hard to know when is the last commit. With the Balsamine project it’s
quite clear, because it’s season per season. But still, we never know when it is the
last one. The last one could be solved by the ‘Iceberg’, to make the last snapshots
and say okay now we make the screenshots of the latest version. And then you close
it ... We wanted that the last one was Hey, we sent the .pdfs to the printer.
But actually we had to send it back another time because there was a mistake.
And then the log didn’t fit on the page anymore.
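
Such a closing gesture could even be recorded in Git itself. A sketch, with a hypothetical tag and project name:

    # close the story with a final, annotated marker ...
    git tag -a iceberg -m "Hey, we sent the .pdfs to the printer"
    # ... and freeze that state as a plain snapshot, outside of Git
    git archive --format=tar --prefix=balsamine/ iceberg > balsamine.tar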


At the Libre Graphics Meeting 2008, OSP sat down with Chris Lilley on a small patch of grass in front of the Technical University in Wroclaw, Poland. Warmed up by the early May sun, we talked about the way standards are made, how 'specs' influence the work of designers, programmers and managers, and how this process is opening up to voices from outside the W3C. Chris Lilley is trained as a biochemist, and specialised in the application of biological computing. He has been involved with the World Wide Web Consortium since the 1990s, headed the Scalable Vector Graphics (SVG) working group and currently looks after two W3C activity areas: graphics, including PNG, CGM and graphical quality, and fonts, including font formats, delivery, and availability of font software.

I would like to ask you about the way standards are made ... I think there’s a
relation between the way Free, Libre and Open Source software works, and
how standards work. But I am particularly interested in your announcement
in your talk today that you want to make the process of defining the SVG
standard a public process?
Right. So, there’s a famous quote that says that standards are like sausages.
Your enjoyment of them is improved by not knowing how they’re made. 1
And to some extent, depending on the standards body and depending on
what you’re trying to standardize, the process can be very messy. If you
were to describe W3C as a business proposition, it has got to fail. You’re
taking companies who all have commercial interests, who are competing and
you’re putting them in the same room and getting them to talk together and
agree on something. Oddly, sometimes that works! You can sell them the
idea that growing the market is more important and is going to get them
more money. The other way ... is that you just make sure that you get the
managers to sign, so that their engineers can come and discuss standards,

1 Laws are like sausages. It's better not to see them being made. Otto von Bismarck, 1815–1898

and then you get the engineers to talk and the managers are out of the way. Engineers are much more forthcoming, because they like to share what they're doing and talk on a technical level. The worst thing is to get the managers involved, and even worse is to get the lawyers involved. W3C does actually have all three in the process. Shall we do this work or not? is a managerial decision that's handled by the W3C advisory committee, and that's where some people say No, don't work on that area or We have patents or This is a bad idea or whatever. But often it goes through, and then the engineers basically talk about it. Occasionally there will be patents disclosed, so the W3C also has a process for that. The first things to be done are the 'charters'. The charter says, in broad scope, what the group is going to work on. As soon as you've got your first draft, that further defines the scope, but it also triggers what is called an 'exclusion opportunity', which basically gives the companies, I think, ninety days to either declare that they have a specific patent, say what its number is and exclude it, or not. And if they don't, they've just given a royalty-free licence to whatever is needed to implement that spec.
The interesting thing is that if they give the royalty-free licence they don’t
have to say which patents they’re licencing. Other standards organizations
build up a patent portfolio, and they list all these patents and they say what
you have to licence. W3C doesn't do that: unless they've excluded a patent, which means you have to work around it or something like that, then based on what the spec says, all the patents that are needed are given. The engineers
don’t have to care. That’s the nice thing. The engineers can just work away,
and unless someone waves a red flag, you just get on with it, and at the end
of the day, it’s a royalty-free specification.
But if you look at the SVG standard, you could say that it's been quite a bumpy road 2 ... What kind of work do you need to do to make a successful standard?

2 http://ospublish.constantvzw.org/news/whos-afraid-of-adobe-not-me-says-the-mozilla-foundation

Firstly, you need to agree on what you're building, which isn't always firm and sometimes it can change. For example, when SVG was started, the idea was that it would be just static graphics, and also that it would be animated

using scripts, because with dynamic HTML and whatever, this was ’98, we
were like: OK, we’re going to use scripting to do this. But when we put it
out for a first round of feedback, people were like No! No, this is not good
enough. We want to have something declarative. We don’t want to have to write
a script every time we want something to move or change color. Some of the
feedback, from Macromedia for example was like No, we don’t think it should
have this facility, but it quickly became clear why they were saying that and
what technology they would rather use instead for anything that moved or
did anything useful ... We basically said That’s not a technical comment, that’s
a marketing comment, and thank you very much.
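
The 'declarative' request can be made concrete with a few lines of SVG: the movement is written as markup, with no script attached (a minimal sketch, not taken from the spec itself):

    <svg xmlns="http://www.w3.org/2000/svg" width="200" height="100">
      <circle cx="20" cy="50" r="10" fill="blue">
        <!-- declarative animation: the movement is data, not code -->
        <animate attributeName="cx" from="20" to="180"
                 dur="3s" repeatCount="indefinite"/>
      </circle>
    </svg>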

Wait a second. How do you make a clear distinction between marketing and
technical comments?

People can make proposals that say We shouldn’t work on this, we shouldn’t
work on that, but they’re evaluated at a technical level. If it’s Don’t do it
like that because it’s going to break as follows, here I demonstrate it then that’s
fine. If they’re like Don’t do it because that competes with my proprietary
product then it’s like Thanks for the information, but we don’t actually care.
It’s not our problem to care about that. It’s your problem to care about
that. Part of it is sharing with the working group and getting the group
to work together, which requires constant effort, but it’s no different from
any sort of managerial or trust company type thing. There’s this sort of
encouragement in it that at the end of the day you’re making the world a
better place. You’re building a new thing and people will use it and whatever.
And that is quite motivating. You need the motivation because it takes a lot
longer than you think. You build the first spec and it looks pretty good and
you publish it and you smooth it out a bit, put it out for comments and you
get a ton of comments back. People say If you combine this with this with this
then that’s not going to work. And you go Is anyone really going to do that? But
you still have to say what happens. The computer still has to know what
happens even if they do that. Ninety percent of the work is after the first
draft, and it’s really polishing it down. In the W3C process, once you get
to a certain level, you take it to what is euphemistically called the ‘last call’.
This is a term we got from the IETF. 3 It actually means 'first call', because you never have just one. It's basically a formal round of comments. You log every single comment that's been made, you respond to them all, people can make an official objection if you haven't responded to a comment correctly, etcetera. Then you publish a list of the changes you've made on the basis of that.

3 The Internet Engineering Task Force, http://www.ietf.org/

What part of the SVG standardization process would you like to make public?

The part that I just said has always been public. W3C publishes specifications on a regular basis, and these are always public and freely available. The comments are made in public and responded to in public. What hasn't been public is the internal discussion within the group. Sometimes it can take a long time if you've got a lot of comments to process, or if there's a lot of argumentation in the group, people not agreeing on the direction to go. From the outside it looks like nothing is happening. Some people like to follow this at a very detailed level, and blog about it, and blablabla. Over time, more and more working groups have become public. The SVG group just recently got rechartered and it's now a public group. All of its minutes are public. We meet for ninety minutes twice a week on a telephone call. There's an IRC log of that, the minutes are published from it, and that's all public now. 4

4 Scalable Vector Graphics (SVG) Feedback Page: http://www.w3.org/Graphics/SVG/feedback.html

Could you describe such a ninety minute meeting for us?

There are two chairs. I used to be the chair for eight years or so, and then
I stepped down. We’ve got two new chairs. One of them is Erik Dahlström
from Opera, and one of them is Andrew Emmons from Bitflash. Both
are SVG implementing companies. Opera on the desktop and mobile, and
Bitflash is just on mobile. They will set out an agenda ahead of time and
say We will talk about the following issues. We have an issue tracker, we have
an action tracker, which is also now public. They will go through the actions, people saying I'm done, and discuss whether they're actually done or not. Particular issues will be listed on the agenda to talk about and agree on, and if we agree on something and the spec has to change as a result, someone will get an action to fold that change back into the spec. The spec is held in CVS, so anyone in the working group can edit it, and there is a commit log of changes, for when anyone accidentally breaks something or tramples on someone else's edit, or whatever - which does happen. And if a change came as the result of a public comment, then there will be a response back saying: we have changed the spec in the following way ... Is this acceptable? Does this answer your comment?

How many people take part in such a meeting?

In the working group itself there are about 20 members, and about 8 or so regularly turn up, every week, for years. You know, you lose some people over time: they get all enthusiastic, and after two years, when you are not done, they go off and do something else, which is human nature. But there have been people who have been going forever. That's what you actually need for a spec: a lot of stamina to see it through. It is a long-term process. Even when you are done, you are not done, because you've got errata, you've got revisions, you've got requests for new functionality to make it into the next version, and so on.

On the one hand you could say every setting of a standard is a violent process,
some organisation forcing a standard upon others, but the process you describe
is entirely based on consensus.

There’s another good quote. Tim Berners Lee was asked why W3C works
by consensus, rather than by voting and he said: W3C is a consensus-based
organisation because I say so, damn it. 5 That’s the Inventor of the Web,
you know ... (laughs) If you have something in a spec because 51% of the
people thought it was a good idea, you don’t end up with a design, you end
up with a bureaucratic type decision thing. So yes, the idea is to work by
consensus. But consensus is defined as: ‘no articulated dissent’ so someone
can say ‘abstain’ or whatever and that’s fine. But we don’t really do it on
a voting basis, because if you do it like that, then you get people trying to
5

Consensus is a core value of W3C. To promote consensus, the W3C process requires Chairs
to ensure that groups consider all legitimate views and objections, and endeavor to resolve
them, whether these views and objections are expressed by the active participants of the
group or by others (e.g., another W3C group, a group in another organization, or the general
public). World Wide Web Consortium. General Policies for W3C Groups, 2005. [Online; accessed 30.12.2014]

139

make voting blocks and convince other people to vote their way ... it is much
better when it is done on the basis of a technical discussion, I mean ... you
either convince people or you don’t.
If you read about why this kind of work is done ... you find different arguments, from enhancing global markets to in this way, we will create a better world for everyone. In Tim Berners-Lee's statements, these two are often mixed. If you look, for example, at the DIN standards, they are unambiguously put into the world to help and support business. With Web Standards and SVG, what is your position?

Yes. So, basically ... the story we tell depends on who we are telling it to and
who is listening and why we want to convince them. Which I hope is not as
duplicitous as it may sound. Basically, if you try to convince a manager that
you want 20% time of an engineer for the coming two years, you are telling
them things to convince them. Which is not untrue necessarily, but that is
the focus they want. If you are talking to designers, you are telling them how
that is going to help them when this thing becomes a spec, and the fact that
they can use this on multiple platforms, and whatever. Remember: when
the web came out, to exchange any document other than plain text was extremely difficult. It meant exchanging word processor formats, and you had to know what platform you were on and what version. The idea that you might get interoperability, and that the Mac and the PC could exchange characters outside ASCII, was just pie-in-the-sky stuff. When we
started, the whole interoperability and cross-platform thing was pretty novel
and an untested idea essentially. Now it has become pretty much solid. We
have got a lot of focus on disabled accessibility, and also internationalization
which is, if you like, another type of accessibility. It would be very easy for an organisation like W3C, which is essentially funded by companies joining it, companies that come from technological countries, to focus on only those countries and then produce specifications that are completely unusable in other areas of the world. Which still does sometimes happen. This is one of the useful things of the W3C: there is the internationalization review, the accessibility review, and nowadays also a mobile accessibility review, to make sure it does not just work on desktops.
Some organisations make standards basically so they can make money. Some of the ISO 6 standards, in particular the MPEG group: their business model is that you contribute an engineer for a couple of years, you build up a patent portfolio and you make a killing off licencing it. That is pretty much to keep out the people who were not involved in the standards process. Now, W3C takes quite the opposite view. The Royalty-Free License 7, for example, explicitly says: royalty-free to all. Not just the companies who were involved in making it, not just companies, but anyone. Individuals. Open Source projects. So, the funding model of the W3C is that members pay money, and that pays our salaries, basically. We have a staff of 60-odd, and that's where our salaries come from, which actually makes us quite different from a lot of other organisations. IETF is completely volunteer-based, so you don't know how long something is going to take. It might be quick, it might be 20 years, you don't know. ISO is largely a national-body organisation, but the national bodies are in practice companies who represent that nation. But in W3C, it's companies who are paying to be members. And therefore, when it started, there was this idea of secrecy: basically, giving them something for their money. That's the trick, to make them believe they are getting something for their money. A lot of the ideas for W3C came from the X Consortium 8, actually; it is the same people who did it originally. And there, the meat was ... the code. They would develop the code and give it to the members of the X Consortium three months before the public got it, and that was their business benefit. So that is actually where our 'three month rule' comes from. Each working group can work for three months, but then they have to go public, they have to publish. 'The heartbeat rule', we call it now. If you miss several heartbeats then you're dead. But at the same time, if you're making a spec and you're growing the market, then there's a need for it to be implemented. There's an implementation page where you encourage people to implement, you report back on the implementations,

you make a test suite, and you show that every feature in the spec for which there's a test is passed by at least two implementations. You're not showing that everyone can use it at that stage; you're showing that someone can read the spec and implement it. If you've been talking to a group of people for four years, you have a shared understanding with them, and it could be that the spec isn't understandable without that. The implementation phase lets you find out whether people can actually implement it just by reading the spec. And often there are changes and clarifications made at that point. Obviously one of the good ways to get something implemented is to have Open Source people do it, and often they're much more motivated to do it. For them it's cool when it is new: If you give me this new feature, it's great, we'll do it, rather than Well, that doesn't quite fit into our product plans until the next quarter and all that sort of stuff. Up until now, there hasn't really been a good way for the Open Source people to get involved. They can comment on specs, but they're not involved in the discussions. That's something we're trying to change by opening up the groups, to make it easier for an Open Source group to contribute on an ongoing basis if they want to. Right from the beginning, to the end where you're polishing the tiny details in the corner.

6 International Standards for Business, Government and Society. International Organization for Standardization (ISO), http://www.iso.org
7 Overview and Summary of W3C Patent Policy, http://www.w3.org/2004/02/05-patentsummary.html
8 The purpose of the X Consortium was to foster the development, evolution, and maintenance of the X Window System, a comprehensive set of vendor-neutral, system-architecture neutral, network-transparent windowing and user interface standards. http://www.x.org/wiki/XConsortium
I think the story of web fonts shows how an involvement of the Open Source
people could have made a difference.

When web fonts were first designed, essentially you had Adobe and Apple
pushing one way, Bitstream pushing the other way, both wanting W3C to
make their format the one and only official web format, which is why you
ended up with a mechanism to point to fonts without saying what format
was required. And then you had Netscape 4, which pointed off to a
Bitstream format, and you had IE4 which pointed off to this Embedded
Open Type (EOT) format. If you were a web designer, you had to have two
different tools, one of which only worked on a Mac, and one of which only
worked on PC, and make two different fonts for the same thing. Basically
people wouldn’t bother. As Håkon 9 mentioned the only people who do
actually use that right now really, are countries where the local language
9

Håkon Wium Lie proposed Cascading Style Sheets (CSS) in 1994.
http://www.w3.org/People/howcome/

142

is not well provided for by the Operating Systems. Even now, things like
WindowsXP and MacOSX don’t fully support some of the Indian languages.
But they can get it into web pages by using these embedded fonts. Actually
the other case where it has been used a lot, is SVG, not so much on the
desktop though it does get used there but on mobiles. On the desktop
you’ve typically got 10 or 20 fonts and you got a reasonable coverage. On a
mobile phone, depending on how high or low ended it is, you might have
a single font, and no bold, and it might even be a pixel-based font. And
if you want to start doing text that skews and swirls, you just can’t do that
with a pixel-based font. So you need to download the font with the content,
or even put the font right there in the content just so that they can see
something.
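
The two-formats problem looked roughly like this in a stylesheet of that era; the font and file names are placeholders:

    @font-face {
      font-family: 'Example';
      src: url('example.eot');                    /* picked up by IE (EOT) */
      src: url('example.ttf') format('truetype'); /* other browsers */
    }
    p { font-family: 'Example', sans-serif; }

The same face had to be produced twice, once per format, before one declaration could serve both browser families.
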
I don’t know how to talk about this, but ... envisioning a standard before
having any concrete sense of how it could be used and how it could change the
way people work ... means you also need to imagine how a standard might
change, once people start implementing it?
I wouldn’t say that we have no idea of how it’s going to work. It’s more a
case that there are obvious choices you can make, and then not so obvious
choices. When work is started, there’s always an idea of how it would fit in
with a lot of things and what it could be used for. It’s more the case that
you later find that there are other things that you didn’t think of that you
can also use it for. Usually it is defined for a particular purpose, and then you find that it can also do these other things.

Isn’t it so that sometimes, in that way, something that is completely marginal,
becomes the most important?

It can happen, yes.

For me, SVG is a good example of that. As I understood it, it was planned to be a format for the web, but as I see it today, it's more used on the desktop. On the Linux desktop most internals are using SVG for theming, and we are using Inkscape to make prints from SVG. On the other hand, browsers are really behind.

Browsers are getting there. Safari has got reasonably good support. Opera
has got very good support. It really has increased a lot in the last couple
of years. Mozilla Firefox less so. It’s getting there. They’ve been at it
for longer, but it also seems to be going slower. The browsers are getting
there. The implementations which I showed a couple of days ago, those
were mobile implementations. I was showing them on a PC, but they were
specially built demos. Because they’re mobile, it tends to move faster.

But you still have this problem that Internet Explorer is a slow adopter.

Yes, Internet Explorer has not adopted a lot of things. It's been very slow to do CSS. It hasn't yet done XHTML, although it has shipped with an XML parser since IE4. It hasn't done SVG. Now they've got their own thing ... Silverlight. It has been very hard to get Microsoft on board and get them doing things. Microsoft were involved in the early part of SVG, but getting things into IE has always been difficult. What amazes me to some extent is the fact that it's still used by about 60-70% of people. You look at what IE can do, and you look at what all the other browsers can do, and you wonder why. The thing is ... it is still a brake, and some technologies don't get used because people want to make sure that everyone can see them. So they go down to the lowest common denominator. Or they double-implement: implement something for all the other browsers, and implement something separate for IE, and then have to maintain two different things in parallel, tracking revisions and whatever. It's a nightmare. It's a huge economic cost, because one browser doesn't implement the right web stuff. (laughing, sighing)

My question would be: what advice could you give us? How could we push this adoption where we are working? Even if it is only getting the people of Firefox to adopt SVG?

Bear in mind that Firefox has this thing of Trunk builds and Branch builds and so on. For example, when Firefox 3 came out - well, the Beta is there - suddenly there was a big jump in the SVG stuff, because all of Firefox 2 was on the same branch as 1.5, and the SVG was basically frozen at that point. The development was ongoing, but you only saw it when 3 came out. There were a bunch of improvements there. The main missing features are the
animation and the web fonts, and both of those are being worked on. It's interesting, because both of those are in Acid 3. Often I see an acceleration of interest in getting something done because there's a good test. The Acid Test 10 is interesting because it's a single test for a huge slew of things all at once. One person can look at it, and it's either right or it's wrong, whereas the tests that W3C normally produces are very much like unit tests: you test one thing, and there are like five hundred of them, and you have to go through them one after another. There's a certain type of person who can sit through five hundred tests on four browsers without getting bored, but most people can't. There's a need for this sort of aggregative test, where the whole thing is all one: if anything is wrong, it breaks. That's what Acid is designed to do. If you get one thing wrong, everything is all over the place. Acid 3 was a submission-based process, like a competition; the SVG working group was there and put in several proposals for what should be in Acid 3, many of which were actually adopted. So there's SVG stuff in Acid 3.

So ... who started the Acid Test?

Todd Fahrner designed the original Acid 1 test, which was meant to exercise the tricky bits of the box model in CSS. It ended up like a sort of Mondrian diagram 11: red squares and blue lines and stuff. But there was big scope for the whole thing to fall apart into a train wreck if you got anything wrong. The thing is, a lot of web documents are pretty simple: they've got paragraphs and headings and stuff, so they weren't exercising the model very much. Once you got tables in there, they were doing it a little bit more. But it was really when you had stuff floated to one side, with things going around it, and that had something floated as well ... It was in that sort of case that it was all breaking, where people wouldn't get interoperability.
It was ... the Web Standards Project 12 who proposed this?
Yes, that’s right.
10 The Acid 3 test (http://acid3.acidtests.org) is comprehensive in comparison to more detailed, but fragmented, SVG tests: http://www.w3.org/Graphics/SVG/WG/wiki/Test_Suite_Overview#W3C_Scalable_Vector_Graphics_.28SVG.29_Test
11 Acid Test Gallery, http://moonbase.rydia.net/mental/writings/box-acid-test/
12 The Web Standards Project is a grassroots coalition fighting for standards which ensure simple, affordable access to web technologies for all. http://www.webstandards.org/

It didn’t come from a standards body.

No, it didn’t come from W3C. The same for Acid 2, Håkon Wium Lie was
involved in that one. He didn’t blow his own trumpet this morning, but
he was very much involved there. Acid 3 was Ian Hickson, who put that
together. It’s a bit different because a lot of it is DOM scripting stuff. It
does something, and then it inquires in the DOM to see if it has been done
correctly, and it puts that value back as a visual representation so you can
see. It’s all very good because apparently it motivates the implementors to
do something. It’s also marketable. You can have a blog posting saying we
do 80% of Acid Test. The public can understand that. The people who are
interested can go Oh, that’s good.
It becomes a mark of quality.

Yes, it’s marketing. It’s like processor speed in PCs and things. There are
so much technology in computers, so than what do you market it on? Well
it’s got that clock speed and it’s got this much memory. OK, great, cool.
This one is better than that one because this one’s got 4 gigs and that one’s
got 2 gigs. It’s a lot of other things as well, but that’s something that the
public can in general look at and say That one is better. When I mentioned
the W3C process, I was talking about the engineers, managers. I didn’t talk
about the lawyers, but we do have a process for that as well. We have a patent
advisory group conformed. If someone has made a claim, and it’s disputed
then we can have lawyers talking among themselves. What we really don’t
have in that is designers, end-users, artists. The trick is to find out how to
represent them. The CSS working group tried to do that. They brought in
a number of designers, Jeff Veen 13 and these sort of people were involved
early on. The trouble is that you’re speaking a different language, you’re
not speaking their language. When you’re having weekly calls ... Reading a
spec is not bedtime reading, and if you’re arguing over the fine details of a
sentence ... (laughing) well, it will put you to sleep straight away. Some of
the designers are like: I don’t care about this. I only want to use it. Here’s what
I want to be able to do. Make it that I can do that, but get back to me when it’s
done.
13 Jeff Veen was a designer at Wired magazine in those days. http://adaptivepath.com/aboutus/veen.php

That’s why the idea of the Acid Test is a nice breed between the spec and
the designer. When I was seeing the test this morning, I was thinking
that it could be a really interesting work to do, not to really implement it
but to think about with the students. How would you conceive a visual
test? I think that this could be a really nice workshop to do in a university
or in a design academy ...
It’s the kind of reverse-reverse engineering of a standard which could help
you understand it on different levels. You have to imagine how wild you
can go with something. I talk about standards, and read them - not before
going to bed - because I think that it’s interesting to see that while they’re
quite pragmatic in how they’re put together, but they have an effect on the
practice of, for example, designers. Something that I have been following with
interest is the concept of separating form and content has become extremely
influential in design, especially in web design. Trained as a pre-web designer,
I’m sometimes a bit shocked by the ease with which this separation is made.

That’s interesting. Usually people say that it’s hard or impossible, that you
can’t ever do it. The fact that you’re saying that it’s easy or that it comes
naturally is interesting to me.

It has been appropriated by designers as something they want. That's why it's interesting to look at the Web Standards Project, where designers really fight for a separation of content and form. I think that this is somehow making the work of designers quite ... boring. Could you talk a bit about how this is done?

It’s a continuum. You can’t say that something is exactly form or exactly
presentation because there are gradations. If you take a table, you’ve already
decided that you want to display the material in a tabular way. If it’s a real
table, you should be able to transpose it. If you take the rows and columns,
and the numbers in the middle then it should still work. If you’ve got
‘sales’ here and if you’ve got ‘regions’ there, then you should still be able to
transpose that table. If you’re just flipping it 90 degrees then you are using
it as a layout grid, and not as a table. That’s one obvious thing. Even then,
deciding to display it as a tabular thing means that it probably came from a
much bigger dataset, and you’ve just chosen to sum all of the sales data over
147

one year. Another one: you have again the sales data, you could have it as pie
chart, but you could also have it as a bar chart, you could have it in various other ways. You can imagine that what you would do is ship some XML that has that data, and then you would have a script or something which would turn it into an SVG pie chart. And you could have a bar chart, or you could also say show me only February. That interaction is one of the things that one can do, and arguably you're giving it a different presentational form. It's still very much a gradation; it's how much re-styleability remains. You can't ever have complete separation. If I'm describing a company, and [1] I want to do a marketing brochure, and [2] I want to do an annual report for the shareholders, and [3] I want to do an internal document for the engineering team, I can't have the same content over all those three and just put styling on it. The type of thing I'm doing is going to vary for those audiences, as will the presentation. There's a limit. You can't say: here's the überdocument, and it can be styled to be anything. It can't be. The trick is to not mingle content and presentation when you don't need to. When you do need to, you're already halfway down the gradient. Keep them as far apart as you can, delay it as late as possible. At some point they have to be combined. Design will have to go into the crafting of the wording, how much wording, what voice is used, how it's going to fit with the graphics and so on. You can't just slap random things together and call it design; it looks like a train wreck. It's a case of deferment. It's not ever a case of complete separation. It's a case of deferring it and not tripping yourself up. Just simple things like bolds and italics and whatever: put those in as emphasis, because you might choose to have your emphasized words done differently. You might have a different font, you might have a different way of doing it, you might use letter-spacing, etc. Whereas if you tag it in as italics, then you've only got italics, right? It's a simple example, but at the end of the day you're going to have to decide how that is displayed.
You mentioned print. In print, no one sees the intermediate result; you see ink on paper. If I have some Greek in there, and I've done that by actually typing Latin letters on the keyboard and putting a Greek font on it, and out comes Greek, nobody knows. If it's a book that's being translated, there might be some problems. The more you're shipping the electronic version around, the more it actually matters that you put in the Greek letters as Greek, because you will want to revise it. It matters that you have flowing text rather than text that has been hand-ragged, because when you put in the revisions, you're either going to have to re-rag the entire thing, or you can just say re-flow and fix it up later. Things like that.
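
The bolds-and-italics point reduces to a few lines of HTML and CSS: the markup records only that a word is emphasized, and the decision about how emphasis looks is deferred to the stylesheet (a minimal sketch):

    <p>Separation is a case of <em>deferment</em>.</p>

    /* emphasis rendered with letter-spacing instead of italics */
    em { font-style: normal; letter-spacing: 0.15em; }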

The idea of time, and the question of delay, is interesting: not how, but when you enter to fine-tune things manually. As a designer of books, you're always facing the question of when to edit, what, and on what level. For example, we saw this morning 14 that the idea of having multiple skins is really entering the publishing business as an idea of creativity. But that's not the point, or not the complete point. When is it possible to enter the process? That's something that I think we have to develop, to think about.

The other day there was a presentation by Michael Dominic Kostrzewa 15 that shocked me. He is now working for Nokia, after working for Novell, and he was explaining how designers and programmers were fighting each other instead of fighting the 'real villain', as he said, which was the managers. What was really interesting was how this division between content and style also maps onto a kind of political or socio-organizational divide within companies, where you need to assign roles, borders and responsibilities to different people. What was really frightening in the talk was that you understood that this division was encouraging people not to try to learn from each other's practice. At some point, the designer would come to the programmer and say: In the spec, this is supposed to be like this, and I don't want to hear anything about the kind of technical problems you face.
Designers as lawyers!

Yes ... and the programmer would say: OK, we respect the spec, but then don't expect anything else from us. This kind of behaviour, in the end, blocks a lot of exchange, instead of making a more creative approach possible.
14 Andy Fitzsimon: Publican, the new Open Source publishing tool-chain (LGM 2008). http://media.river-valley.tv/conferences/lgm2008/quicktime/0201-Andy_Fitzsimon.html
15 Michael Dominic Kostrzewa: Programmers hell: working with the UI designer (LGM 2008)

I read about this (and it was before skinning became more common): designers doing some multimedia things at Microsoft. You had designers, and then there were coders, and each of them hated the other. The coders thought the designers were idiots who lived in lofts and had found objects in their ears. The designers thought the programmers were a bunch of socially inept nerds who had no clue, never got out in sunlight and slept in their offices. And since they had that dynamic, they would never explain to each other ( ... )
(policeman arrives)

POLICEMAN:
Do you speak English?

Yes.

POLICEMAN:
You must go from this place because there’s a conference.

Yes, we know. We are part of this conference (shows LGM badge).

POLICEMAN:
We had a phone call that here’s a picnic. I don’t really see a picnic ...

We’re doing an interview.

POLICEMAN:
It looks like a picnic, and professors are getting nervous. You must go sit
somewhere else. Sorry, it is the rules. Have a nice day!


At the Libre Graphics Meeting 2008, OSP picks up a conversation that Harrison allegedly started in a taxi in Montreal a year earlier. We meet font designer and developer Dave Crossland in a noisy food court to speak about his understanding of the intertwined histories of typography and software, and about the master in type design at the Department of Typography at the University of Reading. Since the interview, a lot has happened: Dave finished his typeface Cantarell and moved on to consult for the Google Web Fonts project, commissioning new typefaces designed for the web. He is also currently offering lectures on typeface design with Free Software.

Harrison (H): 1, 2.

Ludivine Loiseau (LL), and now all: Hello Dave.

Dave Crossland (DC): Hellooo ...

Alright!

H: Well, thank you for taking a bit of time with us for this interview. The first thing is maybe to set a kind of context for your situation, your current situation: what you've done before, and why you are making fonts and these kinds of things.

DC: Oh yes, yeah. Well, I take it quite far back, to when I was a teenager. I was planning to do computer science at university, studying mathematics and physics in highschool. I needed some work experience, and I decided I didn't want to work with computers. So I dropped maths and physics, and I started studying art and design, and also socio-linguistics, in highschool. I was looking at going into Fine Arts, but I wasn't really too worried about whether I could get a job at the end of it, because I could get a job with computers if I needed one. So I studied that at my school for a one-year course after school: a foundation year, and the deal with that is that you study all the different art and design disciplines. Because in highschool you don't really have the specialities where you specifically study textile or photography; not every school has a darkroom, schools are not well equipped.

You get to experience all these areas of design, and within that we studied graphic design and motion graphics, and I found in this a good opportunity to bring together the computer things with the fine arts and visual arts aspects. Graphic design in my school was more about paper; it had nothing to do with computers. In art school, that was more the case. So I grew into graphic design.
Ordering coffee and change of background music: Oh yeah, African beats!

So, yes, I was looking at graphic design that was more computer-based than in art school. I wasn't so interested in regular illustration as graphic design. Graphic design has really got three purposes: to persuade people, that's advertising; to entertain people: movie posters, music album covers, magazine illustration; and to inform people: in England it's called 'information design', in the US it's called 'information architecture' ... structuring websites, information design. Obviously a big part of that is typography, so that's why I got interested in typography, via information design. I studied at Ravensbourne College in London; what I applied for was graphic information design. I started working at the IT department, and that really kept me going at that college; I wasn't so happy with the direction of the courses. The IT department there was really, really good, and I ended up switching to the interaction design course, because that had more freedom to do the kind of typographic work I was interested in.
So I ended up looking at Free Software design tools, because I became frustrated by the limitations of the Adobe software the college was using, just what everybody used. And at that point I realized what 'software freedom' meant. I had been using Debian since I was a teenager, but I hadn't really looked into the depth of what Free Software was about. I mean, back in the nineties Windows wasn't very good, but probably at that time, 2003-2004, MacOSX came out and it was getting pretty nice to use. I bought a Mac laptop without really thinking about it, and because it was a Unix I could use the software like I was used to. And I didn't really think about the issues with Free Software; MacOSX was Unix, so it was the same, I figured. But when I started to do my work I really ran up against the limitations of Adobe software, specifically in parallel publishing, which is when you have the same basic information that you want to communicate in different mediums. You might want to publish something as a .pdf, on the web, maybe also on your mobile phone, etc. And doing that with Adobe software back then was basically impossible. I was aware of Free Software design tools, and it was kind of obvious that even if they weren't pushed very far by then, they at least had the potential to be able to do this in a powerful way. So that's when I figured out what that issue with Free Software really meant: who's in control of the software, who decides what it does, who decides when it's going to support this feature or that feature. Because the features that I wanted, Adobe wasn't planning to add. So that's how I got interested in Free Software.
When I graduated I was looking for something I could contribute to in this area. And one of the Scribus guys, Peter Linnell, made an important post on the Scribus blog, saying: you know, the number one problem with Free Software design is fonts; it's dodgy fonts, with incorrect this, incorrect that, that have problems when printed as well ... And so, yeah, I felt: woa, I have a background in typography and I know about Free Software; I could make contributions in fonts. Looking into that area, I found that there are some postgraduate courses you can study in Europe. There are two: one at The Hague in The Netherlands, and one at Reading. They're quite different courses in their character, in how much they cost, how long they last and what level of qualification they offer, but they're both postgraduate courses which focus on typeface design and font software development. So if you're interested in that area, you can really concentrate for about a year and bring your skills up to a high professional level. I applied to the course at Reading, I was accepted, and I'm currently studying there part time, to work on Free Software fonts. So that's the full story of how I ended up in this area.
H: Excellent! Last time we met, you summarized in a very relevant way the history of font design software, which is a proof by itself that everything is related with fonts and these kinds of small networks. I would like you to summarize it again.

(laughing)

DC: Alright. In that whole journey of getting into this area of parallel publishing and automated design, I was asking around for people who worked in that area, because at that time not many people had worked in parallel publishing. It's a much bigger deal now, especially in the Free Software community, where we have Free Software manuals translated into

many languages, written in .doc and .xml and then transformed into print and web versions and other versions. But back then this was kind of a new concept; not many people worked on it. And so, asking around, I heard about the Department of Typography at the University of Reading. One of the lecturers there, actually the lecturer of the typeface design course, put me on to a designer in Holland, Petr van Blokland. He's a really nice guy, really friendly. I dropped him an e-mail as I was in Holland that year, just dropped by to see him, and it turned out he's not only involved in parallel publishing and automated design, but also in type design. For him there is really no distinction between type design and typography. It's kind of like a big building: you have the architecture of the building, but you can also go down into the bricks. It's like that with typography; type design is all these little pieces you assemble to create the typography out of. He's an award-winning typeface designer and typographer, and he was very actively involved in the early days of digital typography. He kind of explained to me the whole story of type design technology.

(Coffee delivery and jazz music)

So, the history of type design technology actually starts with Free Software, with Donald Knuth and his TeX. The TeX typesetting system has its own font software, or font system, called Metafont. Metafont is a font programming language: an algebraic programming language for describing letterforms. It really gets into the internal structure of the shapes. It is a very non-visual programming approach, where you basically use this programming language to describe with algebra how the shapes make up the letters. If you have a capital H, you've got essentially three lines: two vertical stems and a horizontal crossbar. So in algebra you can say that you've got one ratio, which is the height of the vertical lines, another ratio, which is the width between them, and another ratio, which is where the middle point of the crossbar sits between the top point and the bottom point. By describing all of that in algebra, you really describe the structure of that shape, and that gives you a lot of power, because it means you can trace a pen nib object over that skeleton to generate the final letterform. And so you can apply variations: you can rotate the pen nib, you can have different pen nib shapes, and you can have a lot of different typefaces out of that kind of source code. But that approach is not a visual approach; you have to take it with a mathematical mind, and that isn't something which graphic designers typically have as a strong part of their skill set.
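
The algebra described here can be sketched in a few lines of Python (a simplified illustration, not Metafont itself; the ratios are invented):

    # Skeleton of a capital H from three ratios, in the spirit of Metafont.
    STEM_HEIGHT = 1.0    # height of the two vertical stems
    STEM_DISTANCE = 0.6  # distance between the stems
    BAR_POSITION = 0.55  # where the crossbar sits between bottom and top

    def capital_h(size):
        """Return the three skeleton strokes of an H, scaled to `size`."""
        h = STEM_HEIGHT * size
        w = STEM_DISTANCE * size
        left = [(0.0, 0.0), (0.0, h)]                           # left stem
        right = [(w, 0.0), (w, h)]                              # right stem
        bar = [(0.0, BAR_POSITION * h), (w, BAR_POSITION * h)]  # crossbar
        return [left, right, bar]

    # 'Tracing a pen nib' means stroking each skeleton line with a nib
    # shape; changing the nib or the ratios yields a different typeface
    # from the same source, which is the idea described above.
    for stroke in capital_h(100):
        print(stroke)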

The next step was describing the outline of a typeface, and the guy who did this was working, I believe, at URW. He invented a digital typography system, or type design program, called Ikarus. The rumor is that it's called Ikarus because it crashed too much. Peter Karow is this guy. He was the absolutely unknown, real pioneer in this area. They were selling this proprietary software, driven by a tablet with a drawing pen for entering the points, and it used its own kind of spline-curve technology. This was very expensive; it ran on DMS computers, and URW was making a lot of money selling those minicomputers in, well, I guess the late 70s and early 80s. And then there was a new small home computer that came out, called the Apple Macintosh. This was quite important, because it was not only a personal computer: it had a graphical user interface, and also a printer, a LaserWriter, which was based on the Adobe PostScript technology. This was what made desktop publishing happen. I believe it was a Samsung printer revised by Apple, plus Adobe's PostScript technology. Those three companies, those three technologies, were what made desktop publishing happen. Petr van Blokland was involved in it, using the Ikarus software, developing it. And so he ported the program to the Mac: Ikarus M was the first font editor for personal computers. It was taken on by URW but never really promoted, because the Mac cost not a lot of money compared to those big expensive computers. So Ikarus M was not widely distributed. It's kind of an obvious idea - you know, you have those innovative computers doing graphical interfaces and laser printing - and several different people had several different ideas about how to employ that. Obviously you had John Warnock within Adobe, and at that point Adobe was a systems company: they made this PostScript system and its components, they didn't make any user applications. But John Warnock - and this is documented in the book on the Adobe story - really pushed within the company to develop Adobe Illustrator, which allowed you to interactively edit PostScript code and do vector drawings. That was the kind of illustration and graphic design which we mentioned earlier. The page layout sort of thing was taken care of by a guy called Paul Brainerd, whose company Aldus made PageMaker. That did similar kinds of things to Illustrator, but focused on page layout and typography: text layout rather than making illustrations. So you had Illustrator and PageMaker, and this was the beginning of the desktop publishing tool-chain.

H: When was it?

DC: This is in the mid-eighties. The Mac came out in 1984.

Pierre Huyghebaert (PH): Illustrator in 1986, I think.

DC: Yeah. And then the Apple LaserWriter, which is I believe a Samsung printer, came out in 1985, and I believe the first edition of Illustrator was in 1988 ...

PH: No, I think Illustrator 1 was in 1986.

DC: OK, if you read the official Adobe story book, it's fully documented. 1

H: It's interesting that it follows so quickly after the Macintosh.

Yes! That’s right. It all happened very quickly because Adobe and
Apple had really built with PostScript and the MacOS, they had the infrastructure there, they could build on top of. And that’s a common thing we
see played out over and over ... Things are developed quite slowly when they
are getting the infrastructure right, and then when the infrastructure is in
place you see this burst of activity where people can slot it together very
quickly to make some interesting things. So, you had this other guy called
Jim von Ehr and he saw the need for a graphical user interface to develop
fonts with and so he founded a small compagny called Altsys and he made a
program called Fontographer. So that became the kind of de-facto standard
font editing program.
DC

PH

used?

And before that, do you know what font design software Adobe designers

I don’t know. Basically when Adobe made PostScript for the Apple
LaserWriter then they had the core 35 PostScript fonts, which is about
a thousand families, 35 differents weights or variants of the fonts. And I
believe that those were from Linotype. Linotype developed that in collaboration with Adobe, I have no idea about what software they used, they
may have had their own internal software. I know that before they had
DC

1

Pamela Pfiffner. Inside the Publishing Revolution: The Adobe Story. Adobe Press, 2008

160

Illustrator they were making PostScript documents by hand like TeX, programming PostScript sourcecode. It might have been in a very low tech way.
Because those were the core fonts that have been used in PostScript.
So you had Fontographer, and this is, yeah, I mean, a GUI application for home computers to make fonts with. Fontographer made the early-90s David Carson graphic design posters possible, because it meant that anybody could start making fonts, not only people who were in the type design guild. All that David Carson kind of punk graphic design: it's really because of desktop publishing, and specifically because of Fontographer, because that allowed people to make these fonts. Previous printing technologies wouldn't let you make these kinds of fonts without extreme effort. I mean, a lot of the effects you can do with digital graphics you can't do without digital graphics; air-brushing and sophisticated effects like that can be achieved, but it's really a lot of effort.

So, going back to the guys from Holland: Petr has a younger brother called Erik, and he went to college at the Royal Academy of Art, the KABK, in The Hague, with Just van Rossum, who is the younger brother of Guido van Rossum, now quite famous because he's the guy who developed and invented Python. In the early 90s Jim von Ehr is developing Fontographer, and Fontographer 4 comes out, and Petr and Just and Erik managed to get a copy of the source code of Fontographer 3, which is the golden version that we used - like Quark, that was what we used throughout most of the 90s. And so they started adding things to it, to do scripting on Fontographer with Python, and this was called RoboFog. That was still used until quite recently, because it had features no one has ever seen anywhere else. The deal was that you had to get a Fontographer 4 license, and then you could get a RoboFog license for Fontographer 3. Then Apple changed the system architecture, which meant Fontographer 3 would no longer run on Apple computers. Obviously that was a bit of a damper on RoboFog. Pretty soon after that, Jim sold Fontographer to Macromedia. He and his employees had continued to develop Fontographer into Freehand; it went from a font drawing application into a more general-purpose illustration tool. So Macromedia bought Altsys for Freehand, because they were competing with Adobe at that time, and they didn't really have any interest in continuing to develop Fontographer. Fonts is a really obscure kind of area. As a proprietary software company, you are doing things to make a profit, and if the market is too small to justify your investment, then you just stop developing the software. Fontographer was shut down at that point.
PH: I think they paid one guy to maintain it and answer questions.

DC: Yeah. I think they even stopped actively selling it; you had to ask them to sell you a license. Fontographer stopped at that point, and there was no actively developed font editor. There were a few Windows programs, kind of shareware, for developing fonts, because at this time Apple and Microsoft had got fed up with paying Adobe's extortionate PostScript licensing fees and had developed their own font format, called TrueType. When Fontographer stopped, there was the question of which editor would become the predominant one, and so there was FontLab. This was developed by a guy called Yuri Yarmola, Russian originally I believe, and it became the primary proprietary type design tool.

The Python guys from Holland started using FontLab. They managed to convince the FontLab guys to include Python scripting support in FontLab; Python had become a major language for doing this kind of scripting. So FontLab added in Python scripting. And then different type designers and font developers started to use Python scripts to help them develop their fonts, and a few of the guys doing that decided to join up, and they created the RoboFab project, which took the ideas that had been developed for Robofog and reimplemented them with FontLab – so, RoboFab. This is now a Free Software package, under the MIT Python-style licence. So it is a Free Software licence, but without copyleft. It has been developed as a collaborative project; if you're interested in the development you can just join the mailing list. It's a very mature project, and the really beautiful thing about it is that they developed a font object model, so in Python you have a very clean and easily understandable object-oriented model of what a font is. It makes it very easy to script things. This is quite exciting, because it means you can start to do things which are just not really possible with the graphical design interface. The thing with fonts is that there is a scale; it is like architecture. You've got the designer of the building and the designer of the bricks. With a font it is the same: you have the designer who shapes each letter, and then you've got the character spacing, which makes what a paragraph will look like. A really good example of this is interpolation: if you have a very narrow version of a font and a very wide one, and you want to interpolate different versions between those two masters, you really want to do that in a script, and RoboFab makes this really easy to do within FontLab. The other important thing about RoboFab was that they developed UFO – I think it's the Unified Font Object, I'm not sure what the exact name is – an XML font format, which means that you can interchange font source data between different programs. Specifically, that means you can have a really good font interpolation program that can read and write that UFO XML format, and then you can have your regular type design font editor that will generate the font formats you actually use in a system. You can write your own tool for a specific task and push and pull the data back and forth. Some of these Dutch guys, especially Erik, have written really good interpolation tools. And, as a kind of thread in the story of fonts: remember that time when Fontographer was not being actively developed? There was George Williams from California, who was interested in digital typography and fonts, and he found that quite frustrating, so he said, Well, I'll write my own font editor. He wrote it from scratch. I mean, this is a great project.
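To make the font object model and the interpolation workflow he describes concrete, here is a minimal sketch in Python using RoboFab; the master file names are hypothetical, and the calls illustrate the style of the API rather than the designers' actual scripts:

    from robofab.world import OpenFont, NewFont

    narrow = OpenFont("MyFace-Narrow.ufo")   # hypothetical narrow master
    wide = OpenFont("MyFace-Wide.ufo")       # hypothetical wide master

    medium = NewFont()
    # build an instance halfway between the two masters
    medium.interpolate(0.5, narrow, wide)
    medium.save("MyFace-Medium.ufo")         # UFO is the XML source format

Because any tool that reads UFO can open the result, a script like this can live outside the font editor entirely – the push-and-pull workflow described above.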
LL

Can you tell us some details about your course?

DC
There are four main deliverables in the course, which you normally do in one year, twelve months. The big thing is that you do a professional-quality OpenType font, with extended pan-European Latin coverage, in regular and italic, maybe bold. You also do a complex non-Latin, in Arabic, Indic, maybe Cyrillic ... well, not really Cyrillic, because there are problems getting Cyrillic type experts from Russia to Britain ... or Greek, or any script with which you have a particular background. They don't mandate which software students use, and I was already used to FontForge, while pretty much all the other students were using FontLab. This font development is the main thing. The second thing is the dissertation, which goes up to 8,000 words – an academic dissertation for the master in typography. Then there is a smaller essay, which will be published on http://www.typeculture.com/academic_resource/articles_essays/, and it's
a kind of practice for writing the dissertation. Then you have to document your working process throughout the year: you have to submit your working files, your source files. Every single step is documented, and you have to write a small essay describing your process. And also, of course, apart from the type design, you make a font specimen – a very nice piece of design that shows off your font in use, as commercial companies do. All that takes a full, intense year. For British people the course costs about £3,000, for people in the EU it costs about £5,000, and about £10,000 for non-EU. Have a look at the website for details, but yes, it's very expensive.
LL

And did you also design a font?

DC

Yes, but I'm doing it part-time. Normally you do the typeface, and the year after you do the dissertation. For personal reasons I'm doing the dissertation first, in the summer, and next year I'll do the typeface, I think in July next year.

LL

Do you have an idea of which typeface you'll work on?

Yes. The course doesn’t specify which kind of typeface you have to
work on. But they really prefer a textface, a serif one, because it’s the most
complicate and demanding work. If you can do a high quality serif text
typeface design, you can do almost any typeface design! Of course, lots of
students do also a sans serif typeface to be read at 8 or 9 points, or even
for by example dictionaries at 6 or 7 points. Other students design display
typefaces that can be used for pararaphs but probably not at 9 points ...
DC

Femke Snelting (FS)

It looks like you are asked to produce quite a lot of documents. Are these documents published anywhere? Are they available to other designers?

DC

Yes, the website is http://www.typefacedesign.net and the teaching team encourages students to publish their essays, and some people have published their dissertations on the web, but it varies. Of course, being an academic dissertation, you can request it from the university.

FS

I'm asking because in various presentations the figure of the 'expert typographer' came up, and the role Open Source software could have in opening up this guild.

DC

Yeah, the course in The Hague is cheaper; the pound was quite high, so it was expensive to live in Britain during the last year, and the number of people able to produce high-quality fonts is pretty small ... And these courses are quite inaccessible to most people because they are so expensive; you have to be quite committed to follow them. The proprietary font editing software, even with a student discount, is also a bit expensive. So yes, Free and Open Source software could be an enabler. FontForge allows anybody to grab it on the Internet and start making fonts. But having the tools is just the beginning: you have to know what you're doing to design a typeface, and this is separate from font software techniques. As for books on the subject, there are quite a few, but none are really a full solution. There is www.typophile.org, a type design forum on the web, where you can post preliminary designs, but of course you do not get the kind of critical feedback you can get on a masters course ...

FS
We talked to Denis Jacquerye from the DejaVu project, and most of the people who collaborate on that project are not type designers, but people interested in having certain glyphs added to a typeface. We asked him if there is some kind of teaching going on, to be sure that the people contributing understand what they are doing. Do you see, let's say, a more open way of teaching typography starting to happen?

DC

Yeah, I mean, that is part of why the Free Software movement is going to branch out into the Free Culture movement. There is that website Freedom Defined (http://freedomdefined.org) that states that the principles of Free Software can apply to all other kinds of works. This isn't shared by everybody in the Free Software movement. Richard Stallman makes a clear distinction between three kinds of works: the ones that function like software – encyclopedias, dictionaries, text books that tell you how to make things, and text typefaces; art works like music and films; and works of opinion like scientific papers or political manifestos. He believes that different kinds of rights should apply to those different kinds of works. There is also a different view, in which anything in a computer that can be edited ought to be free like Free Software; that is certainly a position that many people take in the Free Software community. In the WikiMedia Foundation text books project you can see that, when more and more people from the Free Culture community are involved in typeface design, we will see more and more educational material. There will be a snowball effect.

PH

Dave, we are running out of time ...

DC

So just to finish about the FontForge Python scripting ... There is Python embedded in FontForge, so you can run scripts to control FontForge; you can add new features that maybe would be specific to your font. And then there is also a FontForge Python module, which means that you can type into a Python interpreter import fontforge, and if it doesn't give you an error then you can start to call FontForge functions, just like in the RoboFab environment. In the process of adding that, George kind of re-architected the FontForge source code, so instead of being one large program there is now a large C library, libfontforge, then a small C program for rendering, and also the Python module, a binding or interface to that C library. This means that if you are an application programmer it is very straightforward to make a new font editor in whatever language you want, using whatever graphics toolkit you want. So if you're a JDK guy or a GTK guy, or even if you're on Windows or Mac OS X, you can make a font editor that has all the functionality of FontForge. FontForge is a kind of engine for making font editors. This is quite exciting, because it means it's pretty straightforward for somebody to write a font editing program which is designed for, say, beginners.
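As a minimal sketch of the embedded scripting he describes – assuming a FontForge build with the Python module installed, and a hypothetical file name:

    import fontforge                        # FontForge's Python module

    font = fontforge.open("MyFont.sfd")     # hypothetical source file
    print(font.fontname)                    # inspect a font-level property
    for glyph in font.glyphs():             # walk the glyphs in the font
        print(glyph.glyphname, glyph.width)
    font.generate("MyFont.otf")             # compile an OpenType binary

The same calls work from FontForge's own script window or from a standalone Python interpreter, which is the point of splitting the program into libfontforge plus a Python binding.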
So, to come back to what we were just talking about in terms of educational materials, to get people new to typeface design to be confident in themselves: maybe they won't be at that professional level yet, but they will be pleased with their own work, and happy to work in a user interface where you feel like it's 2006, you know, with nice icons, nice windows, anti-aliasing and that kind of thing. I mean, there's nothing wrong with the FontForge interface. It is what it is. But it scares a lot of people away; people say that they don't like it. I think it is too scary, too different. I think we are going to see some exciting stuff in the next few years in the Free Software font editor space.

At the Libre Graphics Meeting 2008 in Wroclaw, just before
Michael Terry presents his project ingimp to an audience of
curious GIMP developers and users, we meet up to talk more
about ‘instrumenting GIMP’ and about the way Terry thinks
data analysis could be done as a form of discourse. Michael
Terry is a computer scientist working at the Human Computer
Interaction Lab of the University of Waterloo, Canada and his
main research focus is on improving usability in Open Source
software. We speak about ingimp, a clone of the popular image
manipulation programme GIMP, but with an important difference: ingimp allows users to record data about their usage into
a central database, and subsequently makes this data available to
anyone. This conversation was also published in the Constant
publication Tracks in electr(on)ic fields.
Maybe we could start this conversation with a description of the ingimp project
you are developing and why you chose to work on usability for GIMP?
So the project is 'ingimp', which is an instrumented version of GIMP; it
collects information about how the software is used in practice. The idea is
you download it, you install it, and then with the exception of an additional
start up screen, you use it just like regular GIMP. So, our goal is to be as
unobtrusive as possible to make it really easy to get going with it, and then
to just forget about it. We want to get it into the hands of as many people
as possible, so that we can understand how the software is actually used in
practice. There are plenty of forums where people can express their opinions
about how GIMP should be designed, or what’s wrong with it, there are
plenty of bug reports that have been filed, there are plenty of usability issues
that have been identified, but what we really lack is some information about
how people actually apply this tool on a day to day basis. What we want
to do is elevate discussion above just anecdote and gut feelings, and to say,
well, there is this group of people who appear to be using it in this way,
these are the characteristics of their environment, these are the sets of tools
they work with, these are the types of images they work with and so on, so
that we have some real data to ground discussions about how the software
is actually used by people. You asked me now why GIMP? I actually used
GIMP extensively for my PhD work. I had these little cousins come down
and hang out with me in my apartment after school, and I would set them
up with GIMP, and they would always start off with one picture: they would create a sphere, a blue sphere, and then they played with filters
until they got something really different. I would turn to them looking
at what they had been doing for the past twenty minutes, and would be
completely amazed at the results they were getting just by fooling around
with it. And so I thought, this application has lots and lots of power, I’d
like to use that power to prototype new types of interface mechanisms. So
I created JGimp, which is a Java based extension for the 1.0 GIMP series,
that I can use as a back-end for prototyping novel user interfaces. I think
that it is a great application, there is a lot of power to it, and I had already
an investment in its code base so it made sense to use that as a platform for
testing out ideas of open instrumentation.
What is special about ingimp is the fact that the data you generate is made by the very software you are studying. Could you describe how that works?
Every bit of data we collect, we make available: you can go to the website,
you can download every log file that we have collected. The intent really
is for us to build tools and infrastructure so that the community itself can
sustain this analysis, can sustain this form of usability. We don’t want to
create a situation where we are creating new dependencies on people, or
where we are imposing new tasks on existing project members. We want to
create tools that follow the same ethos as Open Source development, where
anyone can look at the source code, where anyone can make contributions,
from filing a bug to doing something as simple as writing a patch, where
they don’t even have to have access to the source code repository, to make
valuable contributions. So importantly, we want to have a really low barrier
to participation. At the same time, we want to increase the signal-to-noise
ratio. Yesterday I talked with Peter Sikking, an information architect working for GIMP, and he and I both had this experience where we work with
user interfaces, and since everybody uses an interface, everybody feels they
are an expert, so there can be a lot of noise. So, not only did we want to
create an open environment for collecting this data, and analysing it, but we
also want to increase the chance that we are making valuable contributions,
and that the community itself can make valuable contributions. Like I said,
there is enough opinion out there. What we really need to do is to better
understand how the software is being used. So, we have made a point from
the start to try to be as open as possible with everything, so that anyone can
really contribute to the project.
ingimp has been running for a year now. What are you finding?
I have started analysing the data, and I think one of the things that we
realised early on is that it is a very rich data set; we have lots and lots of
data. So, after a year we’ve had over 800 installations, and we’ve collected
about 5000 log files, representing over half a million commands and thousands of hours of the application being used. And one of the things
you have to realise is that when you have a data set of that size, there are so
many different ways to look at it that my particular perspective might not
be enough. Even if you sit someone down, and you have him or her use the
software for twenty minutes, and you videotape it, then you can spend hours
analysing just that twenty minutes of videotape. And so, I think that one of
the things we realised is that we have to open up the process so that anyone
could easily participate. We had the log files available, but we really didn't
have an infrastructure for analysing them. So, we created this new piece of
software called ‘StatsJam’, an extension to MediaWiki, which allows anyone
to go to the website and embed SQL-queries against the ingimp data set
and then visualise those results within the Wiki text. So, I’ll be announcing
that today and demonstrating that, but I have been using that tool now for
a week to complement the existing data analysis we have done. One of the
first things that we realized is that we have over 800 installations, but then
you have to ask, how many of those are really serious users? A lot of people
probably just were curious, they downloaded it and installed it, found that it
didn’t really do much for them and so maybe they don’t use it anymore. So,
the first thing we had to do is figure out which data points we should really pay attention to. We decided that a person should have saved an image,
and they should have used ingimp on two different occasions, preferably at
least a day apart, where they’d saved an image on both of the instances. We
used that as an indication of what a serious user is. So with that filter in
place, then the ‘800 installations’ drops down to about 200 people. So we
had about 200 people using ingimp, and looking at the data this represents
about 800 hours of use, about 4000 log files, and again still about half a million commands. So, it’s still a very significant group of people. 200 people
is still a lot, and that’s a lot of data, representing about 11000 images they
have been working on, there’s just a lot.
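A sketch of the 'serious user' filter just described, in Python; the record layout is an assumption for illustration, not the actual ingimp schema:

    from datetime import datetime, timedelta

    def is_serious(sessions):
        """sessions: list of (start_time, saved_image) tuples for one user."""
        save_times = sorted(t for t, saved in sessions if saved)
        # serious = saved an image in at least two sessions, a day or more apart
        return (len(save_times) >= 2
                and save_times[-1] - save_times[0] >= timedelta(days=1))

    # tiny worked example with made-up timestamps
    sessions = [(datetime(2008, 5, 1, 10, 0), True),
                (datetime(2008, 5, 3, 15, 0), True)]
    print(is_serious(sessions))   # True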
From that group, what we found is that use of ingimp is really short and
versatile. So, most sessions are about fifteen minutes or less, on average.
There are outliers, there are some people who use it for longer periods of
time, but really it boils down to them using it for about fifteen minutes, and
they are applying fewer than a hundred operations when they are working on
the image. I should probably be looking at my data analysis as I say this, but
they are very quick, short, versatile sessions, and when they use it, they use
less than 10 different tools, or they apply less than 10 different commands
when they are using it. What else did we find? We found that the two
most popular monitor resolutions are 1280 by 1024 and 1024 by 768. So,
those represent collectively 60% of the resolutions, and really 1280 by 1024
represents pretty much the maximum for most people, although you have
some higher resolutions. So one of the things that’s always contentious
about GIMP is its window management scheme and the fact that it has
multiple windows, right? And some people say, well you know this works
fine if you have two monitors, because you can throw out the tools on one
monitor and then your images are on another monitor. Well, about 10%
to 15% of ingimp users have two monitors, so that design decision is not
working out for most of the people, if that is the best way to work. These
are things I think that people have been aware of, it’s just now we have
some actual concrete numbers where you can turn to and say, now this is
how people are using it. There is a wide range of tasks that people are
performing with the tool, but they are really short, bursty tasks.
Every time you start up ingimp, a screen comes up asking you to describe what
you are planning to do and I am interested in the kind of language users invent
to describe this, even when they sometimes don’t know exactly what it is they are
going to do. So inventing language for possible actions with the software, has in
a way become a creative process that is now shared between interface designer,
developer and user. If you look at the ‘activity tags’ you are collecting, do you
find a new vocabulary developing?
I think there are 300 to 600 different activity tags that people register
within that group of ‘significant users’. I didn’t have time to look at all of
them, but it is interesting to see how people are using that as a medium
for communicating to us. Some people will say, Just testing out, ignore this!
Or, people are trying to do things like insert HTML code, to do like a
cross-site scripting attack, because, you have all the data on the website, so
they will try to play with that. Some people are very sparse and they say
‘image manipulation’ or ‘graphic design’ or something like that, but then
some people are much more verbose, and they give more of a plan, This
is what I expect to be doing. So, I think it has been interesting to see how
people have adopted that and what’s nice about it, is that it adds a really nice
human element to all this empirical data.
I wanted to ask you about the data, without getting too technical, could
you explain how these data are structured, what do the log files look like?

So the log files are all in XML, and generally we compress them, because
they can get rather large. And the reason that they are rather large is that we
are very verbose in our logging. We want to be completely transparent with
respect to everything, so that if you have some doubts or if you have some
questions about what kind of data has been collected, you should be able to
look at the log file, and figure out a lot about what that data is. That’s how
we designed the XML log files, and it was really driven by privacy concerns
and by the desire to be transparent and open. On the server side we take
that log file and we parse it out, and then we throw it into a database, so
that we can query the data set.
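The flavour of that pipeline, as a hedged Python sketch; the element and attribute names are invented for illustration, not the real ingimp log format:

    import gzip
    import xml.etree.ElementTree as ET

    # open one (hypothetical) compressed log file
    with gzip.open("ingimp-log.xml.gz") as f:
        tree = ET.parse(f)

    # tally command occurrences; <command name="..."/> is an assumed layout
    counts = {}
    for cmd in tree.getroot().iter("command"):
        name = cmd.get("name")
        counts[name] = counts.get(name, 0) + 1

On the server side, rows like these would then be inserted into a relational database so that the whole data set can be queried.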
Now we are talking about privacy ... I was impressed by the work you have done
on this; the project is unusually clear about why certain things are logged, and
other things not; mainly to prevent the possibility of ‘playing back’ actions so that
one could identify individual users from the data set. So, while I understand
there are privacy issues at stake I was wondering ... what if you could look at the
collected data as a kind of scripting for use? Writing a choreography that might
be replayed later?
Yes, we have been fairly conservative with the type of information that we
collect, because this really is the first instance where anyone has captured
such rich data about how people are using software on a day to day basis,
and then made all that data publicly available. When a company does
this, they will keep the data internally, so you don’t have this risk of someone outside figuring something out about a user that wasn’t intended to be
discovered. We have to deal with that risk, because we are trying to go about
this in a very open and transparent way, which means that people may be
able to subject our data to analysis or data mining techniques that we haven’t
thought of, and extract information that we didn't intend to be recording in our files, but which is still there. There are fairly sophisticated techniques where you can do things like look at audio recordings of typing and the timings between keystrokes, and then work backwards from the sounds to figure out which keys people are likely pressing. Just keyboard audio and keystroke timings alone often give enough information to reconstruct what people are actually typing. So we are always somewhat wary about how much information is in there. While it might be
nice to be able to do something like record people’s actions and then share
that script, I don’t think that that is really a good use of ingimp. That said,
I think it is interesting to ask, could we characterize people’s use enough, so
that we can start clustering groups of people together and then providing a
forum for these people to meet and learn from one another? That's something we haven't worked out yet; I think we have our work cut out for us
right now just to characterize how the community is using it.
It was not meant as a feature request, but as a way to imagine how usability
research could flip around and also become productive work.

Yes, totally. I think one of the things that we found when bringing people in to assess the basic usability of the ingimp software and the ingimp website,
is that people like looking at things like what commands other people are
using, what the most frequently used commands are, and part of the reason
that they like that, is because of what it teaches them about the application.
So they might see a command they were unaware of. So we have toyed with
the idea of providing not only the command name, but also a link from that command name to the documentation. I didn't have time to implement it, but certainly there are possibilities like that, you can imagine.

Maybe another group can figure something out like that? That’s the beauty of
opening up your software plus data set of course. Well, just a bit more on what
is logged and what not ... Maybe you could explain where and why you put the
limit and what kind of use you might miss out on as a result?

I think it is important to keep in mind that whatever instrument you use
to study people, you are going to have some kind of bias, you are going
to get some information at the cost of other information. So if you do a
video taped observation of a user and you just set up a camera, then you
are not going to find details about the monitor maybe, or maybe you are
not really seeing what their hands are doing. No matter what instrument
you use, you are always getting a particular slice. I think you have to work
backwards and ask what kind of things do you want to learn. And so the
data that we collect right now, was really driven by what people have done
in the past in the area of instrumentation, but also by us bringing people
into the lab, observing them as they are using the application, and noticing
particular behaviours and saying, hey, that seems to be interesting, so what
kind of data could we collect to help us identify those kinds of phenomena,
or that kind of performance, or that kind of activity? So again, the data that
we were collecting was driven by watching people, and figuring out what
information will help us to identify these types of activities. As I’ve said,
this is really the first project that is doing this, and we really need to make
sure we don't poison the well. If it happens that we collect some bit of information such that someone can later say, Oh my gosh, here is the person's file system, here are the names they are using for their files or whatever, then it's going to make the normal user population wary of downloading this type of instrumented application. The thing that concerns me most about Open Source developers jumping into this domain is that they might not be thinking about how they could potentially impact privacy.
I don’t know, I don’t want to get paranoid. But if you are doing it, then
there is a possibility someone else will do it in a less considerate way.
I think it is only a matter of time before people start doing this, because
there are a lot of grumblings about, we should be doing instrumentation, someone just needs to sit down and do it. Now there is an extension out for Firefox
that will collect this kind of data as well, so you know ...
Maybe users could talk with each other, and if they are aware that this
type of monitoring could happen, then that would add a different social
dimension ...

It could. I think it is a matter of awareness, really. When we bring people into the lab, we have them go to the ingimp website, download and install it, use it, and go check out the stats on the website, and then we ask questions like, what kind of data are we collecting? We have a lengthy consent agreement that details the type of information we are collecting and the ways your privacy could be impacted, but people don't read it.
So concretely ... what information are you recording, and what information are
you not recording?
We record every command name that is applied to a document, to an image.
Where your privacy is at risk is that if you write a custom script,
then that custom script’s name is going to be inserted into a log file. And so
if you are working for example for Lucas or DreamWorks or something like
that, or ILM, in some Hollywood movie studio and you are using ingimp
and you are writing scripts, then you could have a script like ‘fixing Shrek’s
beard’, and then that is getting put into the log file and then people are
going to know that the studio uses ingimp. We collect command names,
we collect things like what windows are on the screen, their positions, their
sizes, we take hashes of layer names and file names. We take a string and
then we create a hash code for it, and we also collect information about how long the string is, how many alphabetical characters, numbers, things like
that, to get a sense of whether people are using the same files, the same
layer names time and time again, and so on. But this is an instance where
our first pass actually left open the possibility of people taking those
hashes and then reconstructing the original strings from that. Because we
have the hash code, we have the length of the string, all you have to do is
generate all possible strings of that length, take the hash codes and figure
out which hashes match. And so we had to go back and create a new
scheme for recording this type of information where we create a hash and
we create a random number, we pair those up on the client machine but
we only log the random number. So, from log to log then, we can track if
people use the same image names, but we have no idea of what the original
string was. There are these little ‘gotchas’, things to look out for, that I
don’t think most people are aware of, and this is why I get really concerned
about instrumentation efforts right now, because there isn’t this body of
experience of what kind of data should we collect, and what shouldn’t we
collect.
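A sketch of the revised scheme he describes, contrasted with the naive one, in Python; the hash function and field names are assumptions for illustration, not the actual ingimp code:

    import hashlib
    import secrets

    # naive scheme: hash plus length leaks short strings to brute force,
    # since every candidate string of that length can be hashed and compared
    def naive_record(name):
        return {"hash": hashlib.sha1(name.encode()).hexdigest(),
                "length": len(name)}

    # revised scheme: pair each hash with a random token on the client
    # and log only the token; a reused name still maps to the same token
    _pairs = {}
    def safe_record(name):
        digest = hashlib.sha1(name.encode()).hexdigest()
        if digest not in _pairs:
            _pairs[digest] = secrets.token_hex(8)
        return {"token": _pairs[digest]}

With the revised scheme the server can still see that the same image name recurs from log to log, but has no way back to the original string.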

As we are talking about this, I am already more aware of what data I would allow
to be collected. Do you think by opening up this data set and the transparent
process of collecting and not collecting, this will help educate users about these
kinds of risks?
It might, but honestly I think the thing that would educate people the most is if there were a really large privacy error that got a lot of news, because then people would become more aware of it – and this is not to say that we want that to happen with ingimp. Right now, when we bring people in and we ask them about privacy, Are you concerned about privacy?, they say No, and we say Why? Well, they inherently trust us, but the fact is that Open Source also lends a certain amount of trust to it, because they expect that since it is Open Source, the community will in some sense police it and identify potential flaws with it.

Is that happening?
Are you in dialogue with the Open Source community about this?

No, I think probably five to ten people have looked at the ingimp code – realistically speaking, I don't think a lot of people have looked at it. Some of the GIMP developers took a gander at it to see how we could put this upstream, but I don't want it upstream, because I want it to always be opt-in, so that it can't be turned on by mistake.
You mean you have to download ingimp and use it as a separate program? It
functions in the same way as GIMP, but it makes the fact that it is a different
tool very clear.

Right. You are more aware, because you are making that choice to download
that, compared to the regular version. There is this awareness about that.
We have this lengthy text-based consent agreement that talks about the data we collect, but less than two percent of the population reads license agreements. And most of our users are actually non-native English speakers,
so there are all these things that are working against us. So, for the past
year we have really been focussing on privacy, not only in terms of how we
collect the data, but how we make people aware of what the software does.
We have been developing wordless diagrams to illustrate how the software
functions, so that we don’t have to worry about localisation errors as much.
And so we have these illustrations that show someone downloading ingimp,
starting it up, a graph appears, there is a little icon of a mouse and a keyboard on the graph, and they type and you see the keyboard bar go up, and
then at the end when they close the application, you see the data being sent
to a web server. And then we show snapshots of them doing different things
in the software, and then show a corresponding graph change. So, we developed these by bringing in both native and non-native speakers, having
them look at the diagrams and then tell us what they meant. We had to go
through about fifteen people and continual redesign until most people could
understand and tell us what they meant, without giving them any help or
prompts. So, this is an ongoing research effort, to come up with techniques
that not only work for ingimp but also for other instrumentation efforts, so
that people can become more aware of the implications.
Can you say something about how this type of research relates to classic usability
research and in particular to the usability work that is happening in Gimp?
Instrumentation is not new; commercial software companies and researchers have been doing instrumentation for at least ten years, probably ten to
twenty years. So, the idea is not new but what is new, in terms of the
research aspects of this, is how do we do this in a way where we can make
all the data open? The fact that you make the data open, really impacts your
decision about the type of data you collect and how you are representing it.
And you need to really inform people about what the software does. But I
think your question is ... how does it impact the GIMP’s usability process?
Not at all, right now. But that is because we have intentionally been staying off to the side, until we got to the point where we had an infrastructure,
where the entire community could really participate with the data analysis.
We really want to have this to be a self-sustaining infrastructure, we don’t
want to create a system where you have to rely on just one other person for
this to work.

What approach did you take in order to make this project self-sustainable?

Collecting data is not hard. The challenge is to understand the data, and I
don’t want to create a situation where the community is relying on only one
person to do that kind of analysis, because this is dangerous for a number of
reasons. First of all, you are creating a dependency on an external party, and
that party might have other obligations and commitments, and might have
to leave at some point. If that is the case, then you need to be able to pass the
baton to someone else, even if that could take a considerable amount of time
and so on. You also don’t want to have this external dependency, because
of the richness in the data, you really need to have multiple people looking
at it, and trying to understand and analyse it. So how are we addressing
this? It is through this StatsJam extension to the MediaWiki that I will
introduce today. Our hope is that this type of tool will lower the barrier
for the entire community to participate in the data analysis process, whether
they are simply commenting on the analysis we made or taking the existing
analysis, tweaking it to their own needs, or doing something brand new.

When I talk with members of the GIMP project here at the Libre Graphics Meeting, they start asking questions like, So how many people are doing
this, how many people are doing this and how many this? They’ll ask me while
we are sitting in a café, and I will be able to pop the database open and say, A
certain number of people have done this, or, no one has actually used this tool at
all. The danger is that this data is very rich and nuanced, and you can’t really
reduce these kinds of questions to an answer of N people do this; you have to
understand the larger context. You have to understand why they are doing
it, why they are not doing it. So, the data helps to answer some questions,
but it generates new questions. It gives you some understanding of how
the people are using it, but then it generates new questions of, Why is this
the case? Is this because these are just the people using ingimp, or is this
some more widespread phenomenon? They asked me yesterday how many
people are using this colour picker tool – I can’t remember the exact name –
so I looked and there was no record of it being used at all in my data set. So
I asked them when did this come out, and they said, Well it has been there at
least since 2.4. And then you look at my data set, and you notice that most of
my users are in the 2.2 series, so that could be part of the reasons. Another
reason could be, that they just don’t know that it is there, they don’t know
how to use it and so on. So, I can answer the question, but then you have
to sort of dig a bit deeper.
You mean you can’t say that because it is not used, it doesn’t deserve any attention?

Yes, you just can’t jump to conclusions like that, which is again why we
want to have this community website, which shows the reasoning behind
the analysis. Here are the steps we had to go through to get this result, so
you can understand what that means, what the context means, because if you
don’t have that context, then it’s sort of meaningless. It’s like asking, what
are the most frequently used commands? This is something that people
like to ask about. Well really, how do you interpret that? Is it the numbers
of times it has been used across all log files? Is it the number of people
that have used it? Is it the number of log files where it has been used at
least once? There are lots and lots of ways in which you can interpret this
question. So, you really need to approach this data analysis as a discourse,
where you are saying, here are my assumptions, here is how I am getting to
this conclusion, and this is what it means for this particular group of people.
So again, I think it is dangerous if one person does that and you come to
rely on that one person. We really want to have lots of people looking at it,
and considering it, and thinking about the implications.
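How much those readings of 'most frequently used' diverge is easy to see in code; a Python sketch over hypothetical parsed records of (user, log file, command):

    from collections import Counter

    records = [("u1", "log1", "crop"), ("u1", "log1", "crop"),
               ("u1", "log2", "crop"), ("u2", "log3", "blur")]

    # 1. total applications across all log files: crop=3, blur=1
    total = Counter(c for _, _, c in records)
    # 2. distinct users per command: crop=1, blur=1 – a tie
    by_user = Counter(c for _, c in {(u, c) for u, _, c in records})
    # 3. log files where the command appears at least once: crop=2, blur=1
    by_log = Counter(c for _, c in {(l, c) for _, l, c in records})

Three defensible counts, three different rankings of the same data – which is why the assumptions behind an analysis need to be spelled out.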
Do you expect that this will impact the kind of interfaces that can be done for
GIMP?
I don’t necessarily think it is going to impact interface design, I see it
really as a sort of reality check: this is how communities are using the
software and now you can take that information and ask, do we want to
better support these people or do we ... For example on my data set, most
people are working on relatively small images for short periods of time,
the images typically have one or two layers, so they are not really complex
images. So regarding your question, one of the things you can ask is, should
we be creating a simple tool to meet these people's needs? All these people are doing is cropping and resizing, fairly common operations, so should we
create a tool that strips away the rest of the stuff? Or, should we figure out
why people are not using any other functionality, and then try to improve
the usability of that? There are so many ways to use data I don’t really
know how it is going to be used, but I know it doesn’t drive design. Design
happens from a really good understanding of the users, the types of tasks
they perform, the range of possible interface designs that are out there, lots
of prototyping, evaluating those prototypes and so on. Our data set really
is a small potential part of that process. You can say, well according to this
data set, it doesn’t look like many people are using this feature, let’s not
focus too much on that; let's focus on these other features, or conversely,
let’s figure out why they are not using them ... Or you might even look at
things like how big their monitor resolutions are, and say well, given the size
of the monitor resolution, maybe this particular design idea is not feasible.
But I think it is going to complement the existing practices, in the best
case.

And do you see a difference in how interface design is done in free software projects,
and in proprietary software?
Well, I have been mostly involved in the research community, so I don’t have
a lot of exposure to design projects. I mean, in my community we are always
trying to look at generating new knowledge, and not necessarily at how to
get a product out the door. So, the goals or objectives are certainly different.
I think one of the dangers in your question is that you sort of lump a lot
of different projects and project styles into one category of ‘Open Source’.
‘Open source’ ranges from volunteer driven projects to corporate projects,
where they are actually trying to make money out of it. There is a huge diversity of projects that are out there; there is a wide diversity of styles, there
is as much diversity in the Open Source world as there is in the proprietary
world. One thing you can probably say, is that for some projects that are
completely volunteer driven like GIMP, they are resource strapped. There is
more work than they can possibly tackle with the number of resources they
have. That makes it very challenging to do interface design, I mean, when
you look at interface code, it costs you 50% or 75% of a code base. That
is not insignificant, it is very difficult to hack and you need to have lots of
time and manpower to be able to do significant things. And that’s probably
one of the biggest differences you see for the volunteer driven projects, it
is really a labour of love for these people and so very often the new things
interest them, whereas with a commercial software company developers are
going to have to do things sometimes they don’t like, because that is what
is going to sell the product.


In 2007, OSP met with venture communist Dmytri Kleiner
and his wife Franziska, an editor for a German publishing company, late at night in the bar Le Coq in
Brussels. Kleiner had just finished his lecture InfoEnclosure-2.0
at Verbindingen/Jonctions and we wanted to ask what his ideas
about peer production could mean for the practice of designers and typographers. Referring to Benjamin Tucker, Yochai
Benkler, Marcel Mauss and of course Karl Marx, Kleiner explains how to prevent leakage at the point of scarcity through
operating within a total system of worker owned companies.
Between fundamentals of media- and information economy, he
talks about free typography and what it has to do with nuts
and bolts, the problem of working with estimates and why the
people that develop Scribus should own all the magazines it
enables.

First of all we have to be clear, our own company is very small and
doesn’t actually earn enough money to sustain itself right now. We sustain
our company at this point by taking on other projects; for example we are
here for a project that has really little to do with Telekommunisten, where
we're helping a recruiting company in Canada; and I'm in the UK for a very different reason than Telekommunisten, doing independent software development for a private company. So we're still self-funding our company. We
haven’t yet got to a stage where our company can actually sustain itself from
our own peer production, which is our goal. But how we plan to realize
that goal, is through peer production. To start we can sketch out a simple economic model, to understand how the economics work. Economics
work with the so-called factors of production: you have land, labour and
capital. Land is natural resources, that which occurs naturally, that which
nobody produces, that just sort exists. Land, electromagnetic frequencies,
everything which naturally exists. Labour is work, something that people
do. Capital is what happens when you apply labour to land, and you create
products. Some of these products have to be consumed, and some of those
products are to be used in further production, and that's capital. So capital
is the result of labour applied to land that creates output that is used for
further production, and that’s tools, machines and so forth. This system
produces commodities which are consumed in the market. In this system
the dominating input in the production owns the final product, and all of
the actual value of the products is captured at that stage. So whoever sells
the product in the marketplace captures the full value of that product, the
full marginal value, or use value. All of the inputs to that process can never make any more than their own cost of reproduction, their own subsistence cost. So if, as a worker, you're selling your labour to somebody else who owns the product, you're never going to capture any more than your subsistence cost.
Could you make that sort of concrete?

Well, the reason that people need design is because there’s some product
that in the end requires design as an input. For instance, a simple case is
obviously a magazine, in which design is a major input. The value is always
going to be captured by the people selling the magazine. All of the inputs
to that magazine, including design, journalism, layout, administration, are
never going to capture more than their reproduction costs. So in order for
any group of workers to really capture the value of their labour, they have to
own the final product. Which means that they can’t just simply be isolated
in one field, like design. It means that the entire productive cycle has to be
owned collectively by the workers. The designers, together with the journalists, together with the administrators, have to own the magazine, otherwise
they can’t capture their full value. As a group of designers this is very difficult, because as a group of designers you’re only selling an input, you’re not
at the end owning a product. The only way to do this is by forming alliances
with other people, and not based on wages, not based on them giving you
an arbitrary amount of money for that input, which will never be higher
than reproduction cost, but based on owning together the final product. So
you contribute design, somebody else contributes journalism, somebody else
contributes administration and together you all own this magazine. Then
it is this magazine that is sold on the market that is your wage, the value
of the magazine on the market. That is the only way that you can capture
the marginal value of your labour. You have to sell the product, not the input, not labour. Marx talks about labour being itself a commodity, and that
means that you can never capture its marginal contribution to production,
you can only capture its reproduction cost. Which means what it would
cost to sustain a designer. A designer needs to eat, a designer needs a place
to live, to have a certain lifestyle to fit in the design community and that’s
all you get by selling your labour. You won’t get anymore because there is
no reason for the owner of the product to give you anymore. The only way
you can get more is if you own the product itself, collectively with the other
labour inputs. And I know that’s a bad answer, nobody wants to hear that
answer.
Haha!

The estimate is, at the start, an impossibility, because the whole point
of a creative project is that you’re doing something that hasn’t been done
before. And we have all struggled with this before. There are two things you
don’t know at the beginning of a contract. The first is how long it will
take and the second is what the criteria of being finished will be. You don’t
know either of those two things, and, since you don’t, determining the value
upfront of that is a complete guess. Which means that, when you agree to a
fixed-price term, you are agreeing to take on yourself the risk of the delivery
of the project. So it’s a transfer of risks. Of course the people that are buying
your labour as commodity want to put that risk back on you. They don’t
want to take the risk so they make you do that, because they can’t answer
the question of how much does it cost and how long it will take. They want
a guarantee of a fixed price and they want you to take all the risk. Which is
very unfair because it’s their product in the end; the end product is owned
by them and not by you. It’s a very exploitative relationship to force you
to take the risk for capitalizing their product. It’s a bad relationship from
the beginning. If you’re good at estimating and you know your work and
your limits and the kind of work you can do, you can make that work, and
make a living by being good at these estimates; but still, first of all you're
taking all the risk unfairly, and second you can’t make anything more than a
living. While if we’re going to build any kind of movement for social change
with these new forms of organization, we have to accumulate, because political power is an extension of economic power. So if we actually think
that our peer production communities are going to have political power and
ultimately change society, that can only happen to the degree that we can
accumulate. Which means capturing more than the reproduction costs of
our labour input, it means actually capturing the full value of our labour’s
products. The Benjamin Tucker quote I mentioned before is a good way to
keep it in mind. The natural wage of labour is its product. The natural wage
of labour isn’t 40 an hour, it isn’t some arbitrary number. The natural wage
of labour is its product.
In our case the product is making phone calls. And we don't offer our labour in the form of software development; we are putting together a collective that can do everything – develop the software and bring it to the market. It is actually the consumer making telephone calls who will pay for it. As I said, we are not actually making a sustainable living from it right now; we are only building this. We are still making most of our sustenance by selling our labour.
Yeah.

That's where we are starting from. But because we are going for a model where the end product is sold directly to the consumer, there is no mediation. There are no capitalist owners buying our labour, owning the product, and then selling the product for its value on the market.
We are selling the product directly to the consumers of the product, so there
is nothing in-between. And all of the workers that contribute to the making
of this product, whether they are programmers or into administration or
designers, together own this product and own this company. If you’re not
selling the product, then what you’re selling is behavioural control. If you’re
not paying for the magazine directly, it is paid for with the money coming
from lobbyists or from advertisers that want to control the behaviour of the
people perceiving that media, by making them buy some things or vote in a
certain way or have a certain image of a certain state department or the role
of the state. In the economic model where the actual magazine isn't being
sold, where the media is free, in the way television is free, the base of that
model is what Dallas Smythe calls ‘audience power’. Smythe is one of the
main writers about the political economy of communications, and this is
sort of referred to in his ‘audience commodity’ thing, which is very degraded
and unfundamental discourse, but it’s related. ‘Audience power’, ultimately,
is just behavioural control. There is money to be made by changing the
behaviours of others. And this is the fundamental source of media funding,
sometimes it is commercials to sell an actual product by ads and sometimes
it is more subtle, like legitimizing a political system or getting people to
think favourably about a party or a state department or a government.
All the artists and the designers of the poster and the people that come to the event, they have all kinds of motivations – use value. But the exchange value, where the money comes from, the people writing the cheques: what
they are buying is behavioural control: to be represented in this context, through their commercial or political or legitimation purposes. The state
has legitimation needs, the state needs to be something that is thought of as
positive by people. And it does this by funding things that give a legitimacy,
like art, culture, social services. What it is buying, is this legitimation. It is
behavioural control. When an advertiser sponsors an art show or an event
or a television program what they are buying is the chance to make people
buy their product. So it is not that every single person, every single artist
in the show was thinking about how to manipulate the audience. Not at all,
they are just making art ... But where the money comes from, what they
are actually selling on the market, is behavioural control. It is the so called
‘audience power’.
How do you think that changes the work itself?

It changes the way you work, a lot. There are so many restrictions
and limitations when you work on this model, on capital finance, because
the medium is constantly subverted and subjugated by the mediation – the mediation is the message, to coin a catchphrase. If you know that your
art show is being funded by a certain agency, you’re going to avoid talking
critically about that agency, because obviously that is going to deny you
funding further on. It’s clear that the sources of funding affect the actual
message that is delivered at the end. It’s not possible to have SONY Records
sponsor an art show that then tells you how SONY is evil. It is very unlikely
that it is going to be funded again, maybe you can trick them once, but it’s
not going to be sustainable. We were joking before about how my use of
anarchist and socialist terminology actually gets the most flak from other
people in my own field. That’s because they are trying to portray what we
do in Free Software development and peer production as being unpolitical.
When I say that no, it's actually quite political, and explain why, they
feel like I’m blowing their cover. Like I’m almost outing them as being
leftist radicals and they don’t want this image because they actually think
they can fool this system. Which I think is delusional, I don’t think you
can fool this system. But that’s a very clear example how it does actually
change the context and change the message. Because you are always self-conscious of how you're going to pay your rent and how you're going to pay
your bills. It’s impossible to separate yourself from this context and if the
funding is coming from these directions you’re always going to self-censor
and it’s going to affect what you talk about in your choices that you make.
What to present, what not to present, where to place the emphasis and where
not to place the emphasis, it will always be modified by the context you are
producing in. And if what you’re being paid for is essentially to make people
like SONY or make people like the state then it’s going to change the way
you present what you are doing.
Yochai Benkler used the term 'commons-based peer production' and of course took great pains to avoid talking about communism, and tried to limit this only to information production. He's very clear that for him this is not for real material production – because he's a liberal lawyer, working for a major university, in the States ... so this is how he presents his work.
But what commons-based production means is that the instruments of production are actually collectively owned, but controlled by the direct producers, which means that nobody can earn money simply by owning the instruments of production. You can only earn money by employing the instruments of production in actually making something. So, commons-based peer production: you have the common things, the instruments of production, land and capital; they are commonly controlled and commonly owned; the individual labour of peers is applied to that shared commons; and the results of that labour are then owned by the actual producers. None of the product is owned by people who simply own instruments of production. That is what is meant by commons-based peer production. But that's exactly what the anarchists and the socialists call communism. There is no actual difference. Communism, in the textbook example, is the stateless, property-less society. And that's what it means: commons-based peer production is a neologism, a modern way of saying communism, because for political reasons, post-war rhetoric, these words are verboten and you can't say them. So people invent new words, but they're saying
exactly the same things. The point is that producers require land and capital to produce. If certain private interest controls all of the access of direct
producers to land and capital, then those private interests can extract the
surplus value. Another great quote from Benjamin Tucker is whenever one
person earns without sweating ... ehm sorry, whenever one person earns without
sweating, another person sweats without earning and that’s fundamentally true.
If anybody is earning revenue simply by owning instruments of production,
that means that people actually producing are not capturing the value of
their labour. And that’s what commons-based peer production is. The idea
that we have a commons which is all of our property, nobody controls our
instruments of production, they’re all our property together. Each of us

have our labour and we apply that to the commons, and we produce something, and whatever we produce, that is ours. It's our own, provided that we are not taking anything away from anybody else, provided that we are not taking any exclusive control of the commons.
In the case of Free Software development, the Free Software itself is a commons. But things that you might make with Free Software are not part of the commons; they're your own. The problem with software itself is that because software is immaterial, and can therefore be reproduced at no cost, it also has no exchange value. So in order to convert it to exchange value you always have to apply other forms of property: land, capital, hard fixed property ... And so, as commons-based peer producers in the Yochai Benkler world, we have our little internal communism, but we can neither live in it nor feed ourselves with it. So in order to actually sustain ourselves, to actually capture our material subsistence, we then have to deal with people that own land and capital, fixed, scarce properties, and we have no leverage in that negotiation. The only thing we can get back from the people that consume the output of our labour is our reproduction costs and nothing more, while they continue to capture and accumulate the extra value. Again, how that applies to design is another thing; I don't think you can isolate one kind of worker from the overall thing. The point is you have to think of where the value is coming from, what are you really selling? Because you're not really selling design, design is an input. What are you really ...
What do you mean by 'design is an input'?

Design is an input. The average consumer doesn't buy design. Nobody goes to a store and says I'd like a design. They only want the design because they want another product that has design as an input. If you're making beer and you need a label, you find a designer to make the label. But what you're selling is beer, you're not selling design. So you always have to think about what you are really selling. What is the actual product that people are exchanging for? What is the source of the exchange value? And once you identify the source of the exchange value, you have to figure out how to create a direct relationship with all the other producers that are involved in the production cycle.
...

Seems incredibly difficult ...

If it was easy then capitalism would have been overthrown centuries ago


... You already own a magazine now, with a couple of people. The next person asks you to design a beer label ...
You have to own the beer factory!

... And I think next you should own the paper company that makes ...

And then you need people and say I know how to make design, I need
some people who know how to make beer. So then we have a beer factory.
And then you need people who drink the beer! Who’s going to make the
people that drink the beer?
Haha.

But wait, there must be a little bit of difference, a modified option to
this. For example ...

In the scenario of commons-based peer production it's not that the designers have to own the beer factory; it's just that there can't be any capitalist in the middle that owns the land. It's enough if the designers and the beer makers both own the land together and the capital together ...
So if the beer company is also worker-owned and you come to an arrangement ... Isn't that the idea of shares? Applying labour and therefore having shares in something ...

Yes, but it has to be equal. Shares in a capitalist system are unequal. That's the idea of copyfarleft. It's the idea of a public license that allows free use for non-alienated forms of production and denies free use for alienated forms of production. In the case of software, for instance, which is not the greatest application of copyfarleft but is a good example for understanding it, the software would be usable by a workers' cooperative for free, but a private corporation employing wage labour and private capital couldn't use it for free. They would have to either not use it at all or negotiate a different set of terms under which they could use it. So the question is how do we remove coercive property relationships. If you really have a situation of commons-based peer production, or communism, where there is no state, no property, the instruments of production are collectively owned and people just work together in a very kind of free way, then it could certainly work. But that's not the world we are living in, so we have to be defensive of our commons and how we produce in order for it to grow. We have to think about where the exchange value is and think about where the use
to think about where the exchange value is and think about where the use

value crosses into exchange value, and make sure that the point is within our boundary. If we can do that, that's enough. If we have a worker-owned design collective that works with a worker-owned beer company, that's as good as together owning a beer company. But only if they also live on land and in apartments that are also worker-owned, because otherwise the landlord will simply capture the value; you have to look for the point of leakage. Even with a workers' design company and a workers' beer company living in Brussels renting from capitalists, the people that own the apartments and the land will simply capture all the surplus value. The surplus value will always leak at the point of scarcity, so the system has to be complete, what Marcel Mauss calls a 'total system'. It has to be a total system; if it is not, if the entire cycle of production doesn't go through commons-based peer production hands, then it's going to leak at the first point of scarcity. Then whoever privately controls the one scarce resource through which all of this cycle of production goes will capture all the surplus value.
Again, back to our very basic model. The price of anything is its reproduction cost, so the price of something that is immaterial is zero. So, since the beginning of mechanical reproduction, property-based interest groups have tried to create artificial barriers to reproduction. When you have artificial barriers to reproduction, the immaterial assets start to behave like material assets; this is where copyright and intellectual property come from. It's the desire of property groups to make immaterial assets behave, price-wise, the same as material assets, and the only way to do that is to create barriers to reproduction.
Typography obviously comes from this culture, like a lot of other media culture. There are rules about how you can reproduce it, and that creates the opportunity for the owners of these things to capture exchange value, because the reproduction costs are no longer zero, because of artificial costs of reproduction. But the capitalists are not homogeneous; there's not just one group of capitalists, there are many different capitalists. Even though some make their living from typography, many more capitalists make their living by using typography, with typography as an input. From the point of view of those capitalists, the ones trying to restrict the reproduction of typography are a problem. So if they can hire their own staff and develop free typography with other companies, fine: they're not selling typography, that's just an input for them. It's like standardized nuts and bolts. At one time bolt-makers would make their nuts and bolts not fit, in the sense that if you wanted to use a nut from one company

and a bolt from another, you couldn't do so. They tried to create a barrier out of this, but since the nuts and bolts industry is not the biggest in capital, and capital itself needs nuts and bolts, the other companies got together and said wait a minute, let's just have standardized nuts and bolts; we don't want to make our money from nuts and bolts, we want to make our money downstream, from the products we make with nuts and bolts. Typography falls into the same system. I imagine most of the people that are creating free typography have their salary paid by companies that use typography, not companies that sell typography: companies that actually use typography in other production, whether it's publishing or whatever else they're making. So the reproduction costs of the typographers are paid for not by controlling the typography itself, but by employing it in production, by using it in another field. The people that are still trying to hold on to typography as a product, as an end product that they capture through intellectual property, are being pushed out.
In other areas this is not the case. If you look at the amount of money that publishing companies spend on QuarkXPress, that's not really a big deal. From their point of view, they could hire some programmers and make their own QuarkXPress and work with five other publishing companies, but the amount of money that they spend on QuarkXPress overall isn't that high ...
Haha.

So the same economy of scale doesn't apply. This is why commercial software is still hanging on in these niche markets where there isn't a broad enough market. It's not a broad enough input for its freedom to be supported by the users of it. Typography is a very general input. It's like a nut or a bolt, while QuarkXPress is pretty specific. Franziska was saying that in her publishing company all they really need is two copies, or maybe even one, of the software, and the whole company can work with it. They just go to the computer with it when they need to do the layout; overall it's not a huge cost. They don't need it every time they publish a book. Whereas if they had to pay for the font they used, and had to pay again every time they wanted to use a different font, that would be a problem. So they'd rather use a free font, and if that means hiring somebody to drop the pixels down for a new font once and then having it free forever, it can all make sense. That's why typography is different from software. And so the Scribus project has gone really far, but the reason

it's obscure is that, apart from the ideological case, they don't have a business case they can make for the publishers. Publishers want a piece of software that works, and if it costs $400 once, who cares? It doesn't really affect their business model. You have to make the case to the publishers that if you form an association of all the publishers and together develop some new Free Software to do publishing, that would be better and cheaper and faster. Then maybe eventually this case would be made and something like this would exist, but it's not like an operating system or a web browser, which is really used everywhere all the time and would be really inconvenient to pay for every time. If companies had to pay every single time they put a web browser on their computer, that would be very inconvenient for them. Even Microsoft doesn't dare to charge money for Internet Explorer, because they know people would just say Fuck off; they're not going to buy it. In more obscure areas, like publishing, 3D animation, film and video, it doesn't make so much of a difference. In those business models, for instance 3D animation, one of the biggest companies is Pixar. They make the movies! They don't make the software; they go all the way through the process and they make the movie! So they completely own everything. It makes sense for them because they capture the full value of their product in the end: they make the movies that their software enables them to make. And this would be a good model for peer production as well, except obviously they're a capitalist organization and they exploit wage labour. But basically, if Scribus really wanted to have a financial base, the people that develop Scribus would have to own a magazine that is enabled by Scribus. And if they can own the magazine that Scribus enables, then they can capture enough of that value to fund the development of Scribus, and it would actually develop very quickly and be very good, because that's actually a total system: right from the software to the design, to the journalism, to the editing, to the sale, to the capture of the value from the end consumer. But because it doesn't do that, they're giving Free Software away ... To whom? Where is the value captured? Where is the use value transferred into exchange value? It's this point that you have to get all the way to, and if you don't make it all the way there, even if you stop a mile short, in that mile all of the surplus value will be sucked out.


This conversation took place in Montreal on the last day of the Libre Graphics Meeting 2011. In the panel How to keep and make productive libre graphics projects?, Asheesh had responded rather sharply to a remark from the audience that only a very small number of women were present at LGM: Bringing the problem back to gender is avoiding the general problem that F/LOSS has with social inclusion. Another good reason to talk to him were the intriguing 'Interactive training missions' that he had been developing as part of the OpenHatch.org project. I wanted to know more about the tutorials he develops; why he decided to work on 'story manuals' that explain how to report a bug or how to work with version control. Asheesh Laroia is someone who realizes that most of the work that makes projects successful is hidden underneath the surface. He volunteered his technical skills for the UN in Uganda, the EFF, and Students for Free Culture, and is a developer on the Debian team. Today, he lives in Somerville, MA. He speaks about his ideas to audiences at international F/LOSS conferences.
The interactive training missions are really linked to the background of the OpenHatch project itself. I started working on it because, to my mind, one of the biggest reasons that people do not participate in Free Software projects is that they either don't know how or don't feel included. There is a lot you have to know to be a meaningful contributor to Free Software, and I think that one of the major obstacles to getting that knowledge, and I am maybe being a bit sloppy with the use of the term, is how to understand a conversation on a bug tracker, for example. This is not something you run into in college, learning computer science or any other discipline. In fact, it is an almost anti-academic type of knowledge. Bug tracker conversations are 'just people talking', a combination of a comment thread on a blog and actual planning documents. There are also tools like version control, which close to no one learns about in college. And there is something like the culture of participating in mailing lists and chatting on IRC ... what people will expect to hear and what people are expecting from you.
For people like me, who have been doing all these things for years, it feels very natural, and it is very easy to forget all the advantages I have in this regard. But a lot of the way people get to the point where I am now involves having friends that help out, like Hey, I asked what I thought was a reasonable question on this mailing list and I did not get any answer, or what they said wasn't very helpful. At this stage, if you are lucky, you have a friend that helps you stay in the community. If you don't, you fall away and think I'm not going to deal with this, I don't understand. So, the training missions are designed to give you the cultural experience and the tool familiarity in an automated way. You can stay in the community even when you don't have a friend, because the robot will explain to you what is going on.

So how do you ‘harvest’ this cultural information? And how do you bring it into
your tool?

There is some creative process in what I call 'writing the plot'; it is very linear. Each training mission is usually between three and fifteen minutes long, so it is OK to have them be linear. In writing the plot, you just imagine what it would take for a new contributor to understand not only what to do, but also what a 'normal community member' would know to do. The different training missions get this right to different extents.

How does this type of knowledge form, do you think? Did you need to become a kind of anthropologist of Free Software? How do you know you teach the right thing?

I spend a lot of time both working with and thinking about new contributors to Free Software. Last September I organized a workshop to teach computer science students how to get involved in Open Source. And I have also been teaching interpersonally, in small groups, for ten or eleven years. So I use the workshops to test the missions, and then I simply ask what works. But it is tough to evaluate the training missions through workshops, because the workshops are intended to be more interpersonal. I definitely had positive feedback, but we need more, especially from people that have been involved in the Free Software community for two or three years, because they understand what it feels like to be part of a community, but may still feel somewhat unsure about whether they have everything, and still remember what was confusing to learn.

I wasn't actually asking how successful the missions are in teaching the culture of Free Software ... I wanted to know how the missions learn from this culture?

So far, the plots are really written by me, in collaboration with others. We had one more recent contribution, on Git, written by someone called Mark Freeman who is involved in the OpenHatch project. It did not have so much community discussion, but it was also pretty good from the start. So I basically try to dump what is in my head?

I am asking you about this thinking about a session we once organized at Samedies, a women-and-Free-Software group from Brussels. We had invited someone to talk to us about using IRC on the command line, and she was discussing etiquette. She said: On IRC you should never ask permission before asking a question. This was the kind of cultural knowledge she was teaching us, and I was a bit puzzled ... you could also say that this lack of social interfacing on IRC is a problem. So why replicate that?

In Debian we have a big effort to check the quality of packages and to maintain that quality even if the developer goes away. It is called the 'Debian QA project', and there's an IRC channel linked to it called #debian-qa. Some of the people on that channel like to say hello to each other and pay attention when other people are speaking, and others said stop with all the noise. So finally, the people that liked saying hello moved to another channel: #debian-sayhi.

Meaning the community has made explicit how it wants to be spoken to?

The point I am trying to make here is that I agree with part of what you are saying, that these norms are actually flexible. What I am further saying is that these norms are actually being bent.

I would like to talk about the new mission on bug reporting you said you were working on, and how that is going. I find bug reports interesting because, if they're good, they mix observation and narration, which asks a lot from the imagination of both the writer and the reader of the report; they need to think themselves into each other's place: What did I expect would happen? What should have happened? What could have gone wrong? Would you say your interactive training missions are a continuation of this collective imaginary work?

A big part of that sort of imagination is understanding the kinds of things that could be reasonable. So this is where cultural knowledge comes in. If you program in C, or even if you just read about C, you understand that there is something called 'pointers' and something called 'segfaults', and if your program ends in that way, that is not a good thing and you should report a bug. This requires imagination on the side of the person filing the bug. The training missions give people practice in seeing these sorts of things and understanding how they could work; in building a mental model, even if it is fuzzy, that has enough of the right components so they can enter into a discussion and imagine what happened.
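
To make the observation-plus-narration structure concrete: a hypothetical report in the style discussed here (the program name, version and details are invented for illustration) might read:

    Title: editor 2.1 crashes with a segfault when opening old project files

    Steps to reproduce:
    1. Start editor 2.1 on Debian (amd64).
    2. File > Open, select a project file saved with version 1.x.

    What I expected: the file opens.
    What happened instead: the window closes immediately and the
    terminal prints 'Segmentation fault (core dumped)'.

    Notes: files saved with 2.x open fine, so this seems specific
    to the old format. gdb backtrace attached.

The report narrates expectation against outcome, which is exactly the imaginative work described above.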
Of course, when there are real issues such as groping at conferences, or making people feel unwelcome because they are shown slides of half-naked people that look like them ... that is actually a gender issue and it needs to be addressed. But the example I gave was: Where are the Indians, where are the Asians in our community? This is still a confusing question, but not an awkward one.

Why is it not awkward?

(laughs) As I am an Indian person ... you might not be able to tell from the transcription?
It is an easy thing to do, to make generalizations about categories of people based on visible characteristics. Even worse is to make generalizations about all the individual people in that class. It is really easy for people in the Free Software community to subconsciously think there are no women in the room 'because women don't like to program', while we know that is really not true. I like to bring up Indian people as an example because there are obviously a bunch of programmers in India ... the impression that they can't program can't be the reason they are excluded.

But in a way that is even more awkward?

Well, maybe I don’t feel it is that awkward because I see how to fix it, and I
even see how to fix both problems at the same time.

In Free Software we are not hungry for people in the same way that corporate hiring departments are. We limp along, and sometimes one or two or three people join our project per year as if by magic, and we don't know how and we don't try to understand how. Sometimes external entities such as Google Summer of Code cause many, many more to show up at the doorstep of our projects, but because they are so many, the projects don't gain any skills for how to grow. When I co-ran this workshop at the computer science department at the University of Pennsylvania on how to get involved in Open Source, we were flooded with applicants. They were basically all feeling enthusiastic about Open Source but confused about how to get involved. 35% of the attendees were women, and if you look at the photos you'll see that it wasn't just women we were diverse on; there were lots of types of people. That's the kind of diversity-neutral outreach we need. It is a self-empowerment outreach: 'you will be cooler after this, we teach you how to do stuff', and not 'we need you to do what we want you to do', which is the hiring kind of outreach.

And why do you think Free Software doesn't usually reach out in this way? Why does the F/LOSS community have such a hard time becoming more diverse?

The F/LOSS community has problems getting more people and being more diverse. To me, those are the same problem. If we would hand out flyers to people with a clear message saying, for example: here is this nice vector drawing program called Inkscape. Try it out, and if you want to make it even better, come to this session and we'll show you how. If you send out this invitation to lots of people, you'll reach more of them and you'll reach more diverse people. But the way we do things right now is that we leave notes on bug trackers saying: help wanted. The people that read bug trackers also know how to read mailing lists. To get to that point, they most likely had help from their friends. Their friends probably looked like them, and there you have a second- or third-degree diversity reinforcement problem. But leaving gender diversity and race diversity aside, it is such a small number of people!

So, to break that cycle you say there is a need to externalize knowledge ... like
you are doing with the OpenHatch project and with your project ‘Debian for
Shy People’? To not only explain how things technically work, but also how they
function socially?

I don't know about externalizing ... I think I just want to grow our community. But when I feel more radical, I'd say we should just not write 'How to contribute' pages anymore. Put a giant banner there instead, saying: This is such a fun project, come hang out with us on IRC ... every Sunday at 3PM. Five or ten people might show up, and you will be able to have an individual conversation. Quickly you'll cross a boundary ... where you are no longer externalizing knowledge, but simply treating them as part of your group.
The Fedora Design Bounties are a big shining example for me. Maírín Duffy has been writing blog posts about three times a year: We want you to join our community and here is something specific we want you to do. If you get it right, the prize is that you are part of our community. The person that you get this way will stick around, because he or she came to join the community.
And not because you sent a chocolate cake?

Not for the chocolate cake, and also not for the $5,000 that you get over the course of a Google Summer of Code project. So I question whether it is worth spending any time on a wiki page explaining 'How to contribute' when instead you could attract people one by one, with a 100% success rate.

Writing a 'How to contribute' page does force teams to reflect on what it takes to become part of their community?

Of course that is true. But compared to standing at a job fair talking to people about their resume, 'How to contribute' pages are anonymous, impersonal walls of text that are not necessarily meant to create communication. If we keep focusing on communicating at this scale, we miss out on the opportunity to make the situation better for the individual people that are likely to help us.

I feel that the Free Software community is quite busy with efficiency. When you emphasize the importance of individual dialogue, it sounds like you propose a different angle, even if, in the end, it has the desired effect of attracting more loyal and reliable contributors.

It is amazing how valuable patience is.

You talked about Paul, the guy that stuck around on the IRC channel saying hi to people and then only later started contributing patches, after having seen two or three people go through the process. You said: If we had implied that this person would only be welcome when he was useful ... we would have lost someone that would be useful in the future.

The obsession with usefulness is a kind of elitism. The Debian project leader once made this sort of half-joke where he said: Debian developers expect new Debian contributors to appear as fully formed, completely capable Debian developers. That is the same kind of elitism that speaks from You can't be here until you are useful. By the way, the fact that this guy was some kind of cheerleader was awesome. The number of patches we got because he was standing there being friendly was meaningful to other contributors, I am sure of it. The truth is ... he was always useful, even before he started submitting patches. Even borrowing the word 'useful' from the most extreme code-only definition, in the end he was useful by that definition. He had always been useful.

So it is an obsession with a certain kind of usefulness?
Yes.

It is nice to hear you bring up the value of patience. OSP uses the image of a frog as their logo, a reference to the frog from the fairy tale of the frog and the princess. Engaging with Free Software is a bit like kissing a frog; you never know whether it will turn into a prince before you have dared to love it! To OSP it is important not to expect that things will go the way you are used to ... A suspension of disbelief?

Or hopefulness! I had a couple of magic moments ... one of the biggest magic moments for me was when, as a high school student, I e-mailed the Linux kernel list and got a response! My file system was broken, and the fsck tools were crashing. So I was at the end of what I could do, and I thought: let's ask these amazing people. I ended up in a discussion with a maintainer who told me to submit this bug report, and use these dump tools ... I did all these things, and compiled the latest version from version control because we had just submitted a patch to it. By the end of the process I had a working file system again. From that moment on I thought: these magic moments will definitely happen again.

If you want magic moments, then streamlining the communication with your community might not be your best approach?

What do you mean by that?

I was happy to find a panel on the program of LGM that addressed how this community could grow. But then I felt a bit frustrated by the way people were talking about it. I think the user and developer communities around Libre Graphics are relatively small, and all people actually ask for is dialogue. There seems to be lots of concern about how to connect, and what tools to use for that. The discussion easily drifts into self-deprecating statements such as 'our website is not up to date' or 'we should have a better logo' or 'if only our documentation were better'. But all of this seems more about putting off or even avoiding the conversation.
Yes, in a way it is. I think that 'conversations' are the best, biggest thing that F/LOSS has to offer its users, in comparison with proprietary software. But a lot of the behavioural habits we have within F/LOSS, and also as people living in North America, are derived from what we see corporations doing. We accept these as our personal strategies because we do not know any alternatives. The more I say about this, the more I sound like a hippie, but I think I'll have to take the risk (laughs).
If you go to the Flash website, it tells you the important things you need to know about Flash, and then you click download. Maybe there is a link to a complex survey that tries to gather data en masse from untold millions of users. I think that any randomly chosen website of a Libre Graphics project will look similar. But instead it could say, when you click download or run the software ... we're a bunch of people ... why don't you come talk to us on IRC?
There are a lot of people that are not in the conversation because nobody ever invited them. This is why I think about diversity in terms of outreach, not in terms of criticizing existing figures. If in some alternate reality we would want to build a F/LOSS community that consists of 90% women and 10% men, I bet we could do it. You just start with finding a college student at a school that has a good Computer Science program ... she develops a program with a bunch of her friends ... she puts up flyers in other colleges ... You could do this because there are relatively so few programmers in the world busy with developing F/LOSS that you can almost handpick the diversity makeup of your community. Between one and a thousand ... you could do that. There are billions of people on this planet, and the number of people not doing F/LOSS is enormous. Don't wring your hands about 'where are the women'. Just ask them to join and that will be that!

Tying the story to data

In the summer of 2010, Constant commissioned artist and researcher Evan Roth to develop a work of his choice, and to make the development process available in some way. He decided to use a part of his fee as prize money for The GML-Recorder Challenge, inviting makers to propose an Open Source device 'that can unobtrusively record graffiti motion data during a graffiti writer's normal practice in the city'. In three interviews that took place in Brussels and Paris within a period of one and a half years, we spoke about the collaborative powers of the GML standard, about contact points between hacker and graffiti cultures, and about the granularity of gesture.
Based on conversations between Evan Roth (ER), Femke Snelting (FS), Peter Westenberg (PW), Michele Walther (MW), Stéphanie Villayphiou (SV), John Haltiwanger (JH) and momo3010.
Brussels, July 2010
ER: So what should we talk about?

FS: Can you explain what GML stands for?

ER: GML stands for Graffiti Markup Language. 1 It is a very simple file format designed for amateur programmers. It is a way to store graffiti motion data. I started working with graffiti writers, combining graffiti and technology, back in New York, in 2003. In graduate school, my thesis was on graffiti analysis, on writing software that could capture graffiti writers' gestures, to archive their motion data. Back then I was saving the data in an x-y-time array; I was calling them .graph files, and I sensed there was something interesting about the data, the visualization of motion data, but I had never opened up the project at that time.

1 Graffiti Markup Language (.gml) is a universal, XML-based, open file format designed to store graffiti motion data (x and y coordinates and time). The format is designed to maximize readability and ease of implementation, even for hobbyist programmers, artists and graffiti writers. http://www.graffitimarkuplanguage.com
About a year ago I released the second part of the project, of which the source code was open but the dataset wasn't. A friend of mine named Theo, 2 who also collaborated with me on the L.A.S.E.R. Tag project, 3 brought up the .graph file again, and how we could bring back the file format as a way to connect all these different applications: Graffiti Analysis, 4 L.A.S.E.R. Tag, EyeWriter 5 ... So I worked with Theo Watson, Chris Sugrue 6 and Jamie Wilkinson, 7 and other people, to develop Graffiti Markup Language. It is a simple set of guidelines, basically an .xml file format that saves x-y-time data, but does it in a way that is very specifically related to graffiti: so there is a drip tag, and there are tags related to the size of the brush and to how many strokes you have: is it one stroke or two strokes or three strokes.
The main idea is: how do you archive the motion of graffiti, and not just the way graffiti looks? There are a lot of people photographing graffiti, making documentaries etc., but there hasn't yet been a way to archive graffiti as code.
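
For readers who have never opened one, a minimal .gml file along these lines illustrates the idea. This sketch is reconstructed from the description above and from the spec published at graffitimarkuplanguage.com; exact element names and the optional fields (brush settings, drips, keywords) may differ between GML versions:

    <gml spec="1.0">
      <tag>
        <header>
          <client>
            <name>GraffitiAnalysis</name>
          </client>
        </header>
        <drawing>
          <stroke>
            <pt><x>0.00</x><y>0.00</y><t>0.00</t></pt>
            <pt><x>0.12</x><y>0.31</y><t>0.08</t></pt>
            <pt><x>0.25</x><y>0.34</y><t>0.15</t></pt>
          </stroke>
          <stroke>
            <pt><x>0.40</x><y>0.10</y><t>0.90</t></pt>
            <pt><x>0.52</x><y>0.44</y><t>1.02</t></pt>
          </stroke>
        </drawing>
      </tag>
    </gml>

Each stroke is one pen-down-to-pen-up movement; the second stroke records that the pen was lifted in between, and the t values preserve the rhythm of the gesture.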
FS: What do you mean, 'archive in terms of code'?

ER: There hasn't been a programmatic way to archive graffiti. So this is like taking a gesture and trying to boil it down to a set of coordinate points that people can either upload or download. It is a sort of midpoint between writers and hackers. Graffiti writers can download the software, with how-to guides for how to do this; they can digitize their tags and upload them to an open database. The 000000book site 8 hosts all this data, and some people are writing software for this.

2 Theo Watson http://www.theowatson.com
3 In its simplest form, L.A.S.E.R. Tag is a camera and laptop setup, tracking a green laser point across the face of a building and generating graphics based on the laser's position, which then get projected back onto the same building with a high-power projector. http://graffitiresearchlab.com/projects/laser-tag
4 Graffiti Analysis is a digital graffiti blackbook designed for documenting more than just ink. http://graffitianalysis.com
5 The EyeWriter is a low-cost eye-tracking system originally designed for the paralyzed graffiti artist TEMPT. The EyeWriter system uses inexpensive cameras and Open Source computer vision software to track the wearer's eye movements. http://www.eyewriter.org
6 Chris Sugrue http://csugrue.com
7 Jamie Wilkinson http://www.jamiedubs.com

FS: So there are three parts: the GML standard, software to record and play, and then there is the data itself – all of it is 'open' in some way. Could you go through each of them and talk about how they produce uploads and downloads?

ER: Right. It starts with Graffiti Analysis. It is software written in C++ using OpenFrameworks, an Open Source platform designed by artists for visual applications. Right now you can download the recorder app, and from that you can generate your own .gml files. And from there you can upload these files into the playback app. In the beginning that was the only Open Source side of the project. Programmers could also make new applications based on the software, which also happened. Last night we met Stéphane Buellet, 9 who is developing a calligraphy analysis project, and he used Graffiti Analysis as a starting point. I find it exciting when that happens, but more often people take the file format as a starting point and use it as a jumping-off point for making their own work.
Second was the database. We had this file format that we loosely defined, and I worked with Jamie to develop the 000000book site. It is pretty nuts-and-bolts, but you can click 'upload', select your own .gml files, and they will play back in the browser. People have developed their own playback mechanisms, which are some of the first Open Source collaborations that happened around .gml files. There is a user account and you can upload files; people have made image renderers, Flash players, SVG players. Golan Levin has developed an application that converts a .gml file into an AutoCAD format. The 000000book site is basically where graffiti writers connect to developers.
In the middle, between Graffiti Analysis and the database, is the Graffiti Markup Language itself, which I think will have its own place on the web. But sometimes I see it as one project. One of my interests is in archiving graffiti, and all of these things are ways of doing that. It is interesting how these three things work together. In terms of an Open Source development model, it has been producing results I hadn't seen when I just released source code.

8 http://000000book.com. Pronounced 'Black Book': 'A black book is a graffiti artist's sketchbook. Often used to sketch out and plan potential graffiti, and to collect tags from other writers. It is a writer's most valuable property, containing all or a majority of the person's sketches and pieces. A writer's sketchbook is carefully guarded from the police and other authorities, as it can be used as material evidence in a graffiti vandalism case and link a writer to previous illicit works.' Wikipedia, Glossary of graffiti, 2014. [Online; accessed 5.8.2014]
9 Stéphane Buellet, Camera Linea http://www.chevalvert.fr/portfolio/numerique/camera-linea
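
All the renderers and players mentioned here start from the same trivial operation: pulling the timed points out of the XML. A minimal sketch of that step in Python, using only the standard library (the element names follow the sample .gml sketched earlier, so they are an assumption rather than a quote from any particular player):

    import xml.etree.ElementTree as ET

    def load_gml_strokes(path):
        # Parse a .gml file into a list of strokes,
        # each stroke a list of (x, y, t) floats.
        root = ET.parse(path).getroot()
        strokes = []
        for stroke in root.iter('stroke'):
            pts = [(float(pt.findtext('x')),
                    float(pt.findtext('y')),
                    float(pt.findtext('t')))
                   for pt in stroke.iter('pt')]
            if pts:
                strokes.append(pts)
        return strokes

    for i, stroke in enumerate(load_gml_strokes('tag.gml')):
        print('stroke %d: %d points, %.2fs of motion'
              % (i, len(stroke), stroke[-1][2] - stroke[0][2]))

Everything a playback app adds, like line drawing, easing and drips, builds on this list of timed points.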
FS: How do you do that, develop a standard for graffiti?

ER: We started by looking at Graffiti Analysis and L.A.S.E.R. Tag, the apps that were using graffiti motion data. From those two projects I had a lot of experience of meeting graffiti writers as a user base. When you meet with them, they tell you right away what pieces of the software they think are missing. So from talking with them we developed a lot of the features that are now in GML, like brushes, drips, line thickness. Some people had single-line tags and some people had multi-line tags, so that issue came up; because GML tracks both drawing and non-drawing motion, we knew the file format needed to talk about pen up and pen down. I was also interested in the connection points between lines.
We tried to keep it very stripped down. From the beginning we knew that the people who would participate as developers or anonymous contributors were not going to be the same people that would develop a Linux kernel. They are students, people just getting into programming or visual programming. We wanted people to be able to double-click a .gml file, and then everything should verbally make sense: it is Begin stroke. End stroke. Anyone with basic programming skills should be able to figure out what's going on.

FS: Did you have any moment where you had to decide: this does not belong to graffiti, or: this might be more for calligraphy tracking?

ER: The only thing that has to be in there is x-y-time data, with some information on drawing and not drawing; everything else is bonus. So if you load an .xml file structured like that, compliant apps will load it in. On top of that there are features that some apps will want and others not. Keywords, for example, are a functionality that we are still developing applications for. It is there, but we are looking for how to use it.

FS: Did you ever think about this standard as a way to define a discipline?

ER: (laughs) I think in the beginning it was a very functional conversation. We had apps running this data, and I don't think we were thinking of defining graffiti when we were writing the format. But looking back, it is interesting to think about.
Graffiti has a lot of privacy issues related to it too, right? So we did discuss what it would mean to start recording geo-located data. There are different interests in graffiti. There is an interest in the visuals and in deconstructing characters. Another group is interested in it because it is a sport, more of a performance art. For that type of interest it is more important to know exactly where and when it happened, because a rooftop in New York is different from a studio in the basement of someone's house. But if someone realized this data resulted from an illegal action, and wanted to tie it back to someone, then it starts to be like a surveillance camera. What happens when someone is caught with a laptop with all this data?
FS: Your desire to archive, is it also about producing new work?

ER: I see graffiti writers as hackers. They use the city in the same way that hackers use computer systems: they find ways of using a system to make it do things it wasn't intended to do. I am not sure graffiti writers see it this way, but I am in this position where I have friends that are hackers, playing around with digital structures online, and other friends that are into graffiti writing, and to me those two camps are doing the most interesting things right now. But these are two communities that hardly overlap. One of the interests I have is making these two groups of people hang out more. I was physically the person bridging these two groups; I was the nerd person meeting the graffiti writers, talking to them about software and having this database. Now it is not about my personal collection anymore; it is about making a handshake between two communities, making them run off with each other and have fun, as opposed to me having to be there all the time to make introductions.

FS: Is GML about the distribution of signature? I mean: the gestures of a specific person can now be reproduced by a larger community. How does that work?

ER: This is an interesting conversation we should have with the graffiti writers. A tag might be something they have been writing for more than 25 years, and that will be very personal to them; the way they write it is because they've written it a million times. So on the one hand it

is super-personal, but on the other hand a lot of graffiti writers have no
problem sharing this data. To them it is just another tag. They feel like,
I have written this tag a billion times and so when you want to keep one of
them, it is no big deal.
I don't think the conversation has gotten as involved as it could have. You set something in motion and cross your fingers, hoping that everyone plays nice and things go well, and so far that is what has been happening. But you are dealing with people that are uploading something that is super personal to them, and I'd be curious to see what happens in the future.
The graffiti taxonomy project that I have been doing involves a lot of photos of graffiti. It is a visual study based on characters; I am shooting thousands of photos of graffiti, and I don't have an opportunity to meet with all these writers to ask them if it is OK. So I get e-mails from writers once in a while saying Hey, you used a photograph of one of my tags, and usually it is them feeling out what my intentions are and where I am coming from.
It has taken a long time to gain the trust of the community I am working with. Usually, when I am able to explain what I am doing, and that everything is released openly and meant to be completely free, the people I have managed to talk to are OK with it and understand it, so far at least. Still, when people see something they've made being used by other people, a lot of times a red flag is raised, and I am assuming more red flags are going to go up.
FS: If you upload a .gml file, can you insert a licence?

ER: Not yet. Right now there is not even a 'private mode' on the 000000book site; if you upload, everything is public. There are a lot of interesting issues with respect to the licence that I have been reluctant to deal with yet. Once you start talking too much about it, you will scare off people on either side of the fence. I think that will have to happen at some point, but for now I have decided to refer to it as an 'open database', and I hope that people will play nicely, like I said.

FS: But just imagine, what kind of licence would you need?

ER: It might make more sense to go for a media-related licence than for a code licence. Creative Commons licences would lend themselves easily to this. People could choose non-commercial or pure public domain. Does that make sense?


FS: Well, yes, but if you look at the objects that people share, we're much closer to code than to a video file?

ER: Functionally it is code. But would a graffiti writer know what GPL is?

PW: I am interested in the apprentice system you were talking about earlier. Like a young writer learning from someone else they admire. The GML notation of x-y-time might help someone to learn as well. But would you ever really copy someone else's tag?

ER: One of the reasons I think graffiti writing has this history of apprenticeship is that you don't really have a chance to learn otherwise. You don't turn on the TV and see someone doing it. You only see how it is written if you see other people actually do it. That was one of the original reasons I started doing graffiti research: having met with graffiti writers, I thought, it is a dance, it is as much about motion as it is about how the final image is constructed. You can come to a much better understanding of how it is made, as opposed to just seeing a photograph of it.

PW: If you want to learn from the person writing, you would need to see more than just the trace of a pen?

ER: Someone's tag might look completely different if they had six seconds to make it; they make different decisions. In the first version of the Graffiti Analysis project I had one camera tracking the pen, another camera behind the hand, and another so you could see the full body. But there was something about tracking just the pen tip that I liked. It is an easier point of entry for dealing with the motion data than having three different video feeds.

FS: Maybe it is more about metadata? Not a question of device or application, but about space for a comment.

ER: Maybe in the keywords there will be something like: Rooftop. Brooklyn. Arrested. The most interesting part is often the stories that people tell afterwards anyway. So it is an interesting idea, how to tie the story to the data. It is a design problem too. Historically, graffiti has been documented many times by outsiders. The movie Style Wars 10 is a good example of an epic documentary made by outsiders that became insiders. Also, the people that have been documenting most of the graffiti are not necessarily graffiti writers. Graffiti has a history of documentarians entering into the community and playing a role, but sharing the stories is something writers do internally, not as much with outsiders. How do you figure out a way to get graffiti writers to document their stories in the .gml files themselves, or is it going to take outsiders? How does the format facilitate that?

10 Style Wars. Tony Silver, 1983. http://www.stylewars.com

FS: Do you think the availability of a project like GML can have an impact on the way graffiti is learned? If data becomes available in a community that operates traditionally through apprenticeships and person-to-person sharing, what does it do?

ER: I am interested in Open Source culture being influenced by graffiti, and I am interested in Open Source culture influencing graffiti as well. On the big-picture level, I would love it if the graffiti community got interested in these ideas and had more of a skill-sharing knowledge base. KATSU, 11 someone I worked with in New York, has acquired a lot of knowledge about how to make tools for graffiti, and he initially wasn't so much into sharing them, because graffiti writers tend to save that knowledge for themselves so that their tags are always bigger and better (laughs). Talking to him, I think I convinced him to write tutorials on how to make some of these tools. On the street art side there is Mark Jenkins; 12 he has this technique of making 3D objects that exist within the city, and we had a lot of conversations too.
There are many ways tech circles and Open Source circles can come together with people that are making things outside, with their hands. I think graffiti can learn from that. In the end people would be making more things outside, which would be a good thing.

11 KATSU http://www.flickr.com/search/?q=graffiti+katsu
12 Mark Jenkins tapesculptures http://tapesculpture.org

FS: In a way typography has a similar culture of apprenticeship. Some people enjoy spreading knowledge, and others resist in the name of quality control.

ER: Interesting. I think the work I am doing is such a tangent! In general, for something that is decidedly against the rules, the culture of writing graffiti often has a rigid structure. To people in that community, what

I do is a blip on their radar. I am honored when I get to meet graffiti writers and they are interested in what I am doing, but I don't think it will change anything in what is, in some ways, a very strict system. And I don't want that either. I like the fact that they found a way to make spraypaint and markers change the way each city in the world looks. They have the tools they need. Digital projectors will not change that. Graffiti writers still like to see their names projected at big scales in new ways, but it is not something they really need (laughs).

FS: And the other way around? How does graffiti have an influence on Open Source communities?

ER: For the people on the technology side, it is an easy jump from thinking about hacking software systems to thinking about making things outside. I see that with the Free Art and Technology Group 13 that I help run. When they start thinking about projects in the city, it takes little to come up with great ideas. I also see it in the class I teach, Urban Hacking. There is already a natural overlap.

FS: What connects the two?

ER: It is really about the idea of hacking. The first assignment in the class is not to make anything, but simply to identify systems in the city: what are the elements that repeat, and which ones can you slip into? It has been happening in graffiti forever. Graffiti in New York in the eighties was to me a hack, a way to have giant paintings circulating in the city ... There is a lot of room to explore there.

FS: Your experience with the Blender community 14 did not sound like an easy bridge?

ER: Recently I released a piece of software that takes a .gml file and translates it into an .stl file, which is a common 3D format. So you can basically take a graffiti gesture and import it into software like Blender. I used Blender because I wanted to highlight this tool, because I want these communities to talk to each other. So I was taking a tag that was created in the streets of Vienna, pulling it into Blender, and in the end I was exporting it to something that could be 3D printed, to become something physical. The video that I posted online intentionally showed screenshots from Blender, and it ended up on one of the bigger community sites. I only saw it when my cousin, who is a big Blender user, e-mailed me the thread. There were about a hundred dedicated Blender users discussing the legitimacy of graffiti in art and how their tools are used; 15 pretty interesting, but also pretty conservative.

13 The Free Art and Technology (F.A.T.) Lab is an organization dedicated to enriching the public domain through the research and development of creative technologies and media. Release early, often and with rap music. http://fffff.at
14 Blender is a free Open Source 3D content creation suite. http://www.blender.org
15 http://www.blendernation.com/2010/07/09/blender-graffiti-analysis
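
As a rough idea of what such a conversion involves (a sketch of the general technique, not of Roth's actual tool): each 2D stroke can be extruded into a vertical ribbon of triangles and written out as ASCII STL, a plain-text format that Blender imports directly. The element names again follow the sample .gml structure assumed earlier:

    import xml.etree.ElementTree as ET

    def gml_to_stl(gml_path, stl_path, height=0.05):
        # Extrude every stroke into a vertical ribbon,
        # two triangles per line segment.
        root = ET.parse(gml_path).getroot()
        triangles = []
        for stroke in root.iter('stroke'):
            pts = [(float(p.findtext('x')), float(p.findtext('y')))
                   for p in stroke.iter('pt')]
            for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
                a, b = (x0, y0, 0.0), (x1, y1, 0.0)        # bottom edge
                c, d = (x1, y1, height), (x0, y0, height)  # top edge
                triangles += [(a, b, c), (a, c, d)]        # split the quad
        with open(stl_path, 'w') as f:
            f.write('solid gml_tag\n')
            for tri in triangles:
                f.write('  facet normal 0 0 0\n    outer loop\n')
                for x, y, z in tri:
                    f.write('      vertex %f %f %f\n' % (x, y, z))
                f.write('    endloop\n  endfacet\n')
            f.write('endsolid gml_tag\n')

    gml_to_stl('tag.gml', 'tag.stl')

A real exporter would compute facet normals and give the ribbon some thickness, but the file format itself is no more complicated than this.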
FS: Why do you think the Blender community responded in that way?

ER: It doesn't surprise me that much. Graffiti is hard to accept, especially when we are talking about tags. The only reason we might be slightly surprised to hear people in the Open Source community react that way is that intellectual property doesn't always translate to physical property. Writing your name on someone's door is something people universally don't like. I understand. For me the connection makes sense, but just because you make Open Source doesn't mean you'll be interested in graffiti or street art, or vice versa. I think if I went to a Blender conference and gave a talk where I explained where I see these things overlap, I could make a better case than the three-minute video they reacted to.

FS: What about Gesture Markup Language instead of Graffiti Markup Language?

ER: Essentially GML records x-y-time data. If you talk about what it functionally does, it is probably more related to gesture than to graffiti; there is nothing at the core specifically related to graffiti. But I am interested in branding it in relation to graffiti, and in getting people to talk about Open Source where it is traditionally not talked about. To me that is interesting. It is a way to get people excited about open data, and to popularize ideas about Open Source.

FS: Would you be OK with it getting more popular in non-graffiti circles?

ER: I am super excited when I see it used in bizarre places. I'll keep using it for graffiti, but someone e-mailed me that they were upset that it only tracks one point; there hasn't been a need to track multiple tags at once. They wanted to use it to track juggling, but how do you track multiple balls in the air? I keep calling it Graffiti Markup Language because I think it is a good story.


PW: What's the licence on GML?

ER: We haven't really entered into that. Why would you need a licence on a file format?

FS: It would prevent anyone from owning the standard.

ER: That sounds good. Actually it would be interesting for the project if someone tried to licence it. Legal things matter, but for the things I do, I am most of all interested in getting the idea across.

FS: I am interested in the way GML stems from a specific practice; how it is different from and similar to large, legal, commercial, global standardization practices. And related: how can GML connect to other standard practices? Could it be RDF compliant?

PW: Gesture recognition to help out the police?

FS: Or maps of places that are in need of some graffiti? How do you link GML to other types of data?

It is hard for me to imagine something. But one thing is interesting
for example, how GML is used in the EyeWriter project. It has not
so much to do with gesture, but more with how you would draft in a
computer. TEMPT is plotting points, so the time data might not be so
interesting but because it is in the same format, the community might
pick it up and do something with it. All the TEMPT data he writes with
his eyes and it is uploaded to the 000000book site automatically. That
allowed another artist called Benjamin Gaulon 16 who I now know, but
didn’t know at the time, to use it with his Print Ball project. He took the
tag data from a paralyzed graffiti writer in Los Angeles and painted it on
a wall in Dublin. Eye-movement translated into a paint-ball gun ... that
is the kind of collaboration that I hope GML can be the middle-point
for. If that happens, things can start to extrapolate on either end.
FS

You talked about posting a wish-list and being surprised that your
wishes were fulfilled within weeks. Why do you think that a project like
EyeWriter, even if it interests a lot of people, has a hard time gathering
collaborators, while something much more general like GML seems to be
more compelling for people to contribute to?
16 Benjamin Gaulon, Print Ball: http://www.eyewriter.org/paintball-shooting-robot-writes-tempt1-tag


ER

I’ll answer that in a second, but you reminded me of something
else: because EyeWriter was GML based, a lot of the collaborations
that happened with people outside of the project were GML related,
not EyeWriter related. So we did have artists like Ben and Golan take
data drawn by TEMPT and do completely different things which made
TEMPT a collaborator with them in a way. The software allowed him to
share his work in a format that allowed other people to work with him.
The wish-list came out of the fact that I was working on a graffiti related
project that had a lot of use but not a lot of innovation. Not so many
people were using it in ways I wasn’t expecting, which is something you
always hope for, of course. By saying: Here are the things I really would like to
happen, things started to happen. I have been surprised how that drove
momentum. Something similar I hope will happen to the work we will
do together in the next months too!

FS

What are you planning to do?

ER

We are planning to make a dedicated community page for the graffiti
markup language which is one of the three points of the triangle. The
second step would be a new addition to the wish-list, a challenge with a
prize associated to it which seems funny. The project I’d like to concentrate on is making the data collection easier so that graffiti writers can be
more active in the upload sense. Taking the NASA development model:
Can you get into orbit on this budget?
FS

How is that different from the way you record graffiti motion at the
moment?
ER

If I go out with a graffiti writer, I’m stuck standing with a laptop and
a camera facing the wall and then the graffiti writer needs to have a really
bright light attached to the writing device which is a bit counter-intuitive
when you are trying to do something without being seen (laughs). It
could be infrared by the way, that could be the first step but then security
cameras would still pick it up. The design I am focusing momentum on is
a system that’s easier. A system that can work without me there, without
having to have a laptop there. The whole idea is that it would be a natural
way to get good data, to document graffiti without a red-head holding a
laptop following you around the whole time!


Paris, December 2010
FS

How is it to be the sole jury member?

ER

I tried to get another jury-member on there actually. Do you know
Limor Fried? She runs Adafruit Industries. 17 I really like her work. She
works with her partner Phil Torrone who runs Make Blog. 18 I invited
her to be the second jury-member because she makes Open Source hardware kits; this is her full-time thing. She is very smart and has a lot of
background in making DIY kits that people actually build. She is also
very straightforward and very busy, so she wrote back and said: this is
too much work. No.
So ... yeah, I am the only jury member. Hmmm.

SV

Is the contest already over?

ER

It is not over. It was easy to launch; I tried to make it coincide with the
launch of the website and there were a couple of things going on at the
same time. The launch helped spread the word about this file format, and
people making projects, and vice versa.
FS

Did you have any proposals that came close to meeting the challenge?
Did you consider giving out the prize?
ER

No.
There are a couple of people that got really close. The interesting thing
that is happening with the challenge is something that is also happening
to other high barrier projects: You end up speaking to the people you already work with the most. I have a hard time figuring out to some extent
what is really happening, but the things I hear about people making progress
come from people that are close to me. It reminds me of the EyeWriter project,
where the people that dip their toes into this are already in the friend
group, or one level removed. They are pretty high level programmers.
I didn’t really think that actual money would be such an incentive but
more that it would make the challenge feel serious, more in the sense
of an organization that has some kind of club behind it. If you solved
one of the design problems posed by the Mozilla community you could receive

kudos from the community, but if you solved one of my projects, you
don’t really get kudos from my community, do you?
Having the money associated makes it this big thing. At Ars Electronica
and so on, it got people talking about it and so it is out there. That
part worked. Beyond that it has been a bit hard to keep the momentum.
Friends and colleagues send me ideas and ask me to look at things, but
people I don’t know are hard to follow; I don’t think they are publishing
their progress. There is a hackerspace in Porto that has been working on
it, so I see on their blog and Twitter that they are having meetings about
this and are working on it.

17 Limor Fried, Adafruit Industries http://www.adafruit.com
18 Phillip Torrone, Makezine http://makezine.com/pub/au/Phillip_Torrone
FS

Don’t you think having only one prize produces a kind of exclusivity? It
seems logical not to publish your notes?

ER

Maybe. Kyle 19 has been thinking up ways to do it and I know he
wanted to use an optical mouse, and then a friend, Michael 20, has been
using sensors, and he ran into a software problem but had the hardware
problem more or less solved. And then Kyle, a software expert, has been
running into hardware problems and so I kind of introduced them to each
other over e-mail so I don’t know if they are working on it together.
FS

Would you consider splitting the prize?

ER

I don’t care, but I don’t know if the candidates would consider splitting the prize! I know Michael has already spent a lot of money because
he has been buying Arduinos and other hardware. He wants to make
a cheap version to solve the problem and then make another one that
costs 150 on top of the price limitation to make it easier to use. He is
spending a bunch of money so even if he wins, it is going to get him only
out of the hole and he will not have much left.
Actually, Golan 21 had an idea for an iPhone app that he wants to make
but I am not sure it solves it.

FS

Why don’t you think his app will solve it?

ER

He is really interested in making something where you do not need
to meet with the graffiti writer. His idea was that if you could take a
photo of it on the wall, and then with your finger you guide it for how it

was written. It has an algorithm for image processing and that combined
with your best guess of how it was written would be backed out in motion
data. But it is faked data.

19 Kyle McDonald http://kylemcdonald.net
20 Michael Auger http://lm4k.com
21 Golan Levin http://www.flong.com
FS

That is really interesting!

ER

Yes it is and I would love it if he would make it but I am not going to
let him win with it (laughs). I understand why he wants to do it; especially
if you are not inside the graffiti community, your only experience is what
you see on the wall and you don’t know who these people are and it is
going to be almost impossible to ever get data for those tags. If you don’t
have access to that community you are never going to get the tag of the
person that you really want. I like the idea that he is thinking about
getting some data from the wall as opposed to getting it from the hand.
FS

Learning by copying. Nowhere near solving the challenge, but interesting. At OSP 22 we were discussing the way designers are invited into
Open Source software by way of contests. Troy James Sobotka 23 got angry
and wrote: We want to be part of this community, we don’t want to compete
for it.
ER

With the EyeWriter project, we were thinking a lot about that; how
to spur development. I think I would not have done a competition with
the EyeWriter. Making it fun, that is what makes it happen. If it would
be a really serious amount of money, with people scraping at each other,
fighting each other ...
For me, the fact that there is prize money makes something that is already
ridiculous in itself even more funny. To have prize money for such a small
community of people that are interested in coding and in graffiti. I’m not
seriously thinking that we can spur development with this kind of money.
To use the EyeWriter as an example, we’ve had money infusions from
awards mostly and we had to think about how we could use that money
to get from point A to point B. That’s also a project where we had very

definable design goals of what we wanted to reach, especially between the
first version and where we are now with the second version.

22 OSP (Open Source Publishing) is a graphic design collective that uses only Free, Libre and Open Source software. http://ospublish.constantvzw.org
23 The very notion of Libre / Free software holds cooperation and community with such high regard you would think that we would be visionary leaders regarding the means and methods we use to collaborate. We are not. We seem to suffer from a collision of unity with diversity. How can we more greatly create a world of legitimate discussion regarding art, design, aesthetic, music, and other such diverse fields when we are so stuck on how much more consistent a damn panel looks with tripe 22 pixel icons of a given flavour? http://www.librescope.com/975/spec-work-and-contests-part-two
FS

How did that work?

ER

We are not talking about a ton of money here, 10 to 20.000, and
we tried to get as far as we could. We got almost no work done between
the meetings in LA but if we flew in, it was OK to take a week out of
our schedules and really hammer at it. We were trying to think how we
could do the same thing for people that we wanted to work with and who
we had met in conferences. So that is how we thought of spending that
money.
The other way we use money in the EyeWriter project is that we buy
people kits. We know a few people that are interested in hacking on it
but they don’t have the hardware. Not that they are so expensive, but
Zach wants to buy twenty or thirty unpackaged kits and he has interns
working with him in New York helping to build them. So we have these
systems ready so as soon as someone wants to get hacking on it, we can
mail them a working system that they can just plug in and they don’t
have to waste their time ordering all these parts from all these websites
all over China. And when they are done, they just send it back.
FS

You talked about some things in the challenge that worked and some
that didn’t.
ER

I think the forum is the obvious thing that did not work. I have
friends working on OpenFrameworks, it is headed primarily by Zach and
Theo. When you see that forum, it is very involved. It is a deep system,
with many different libraries and lots of code flying around. GML is really
not large enough.
I think what makes sense for this project is when I post news about the
project, I see it ripple in Google Alerts. For people working on it, having
a place where these things show up is already a lot. The biggest success
is the project space, to see all the projects happening.

FS

What happened on the site since we talked?

ER

A project I like is kml2gml 24 for example. It is done by a friend from
Tokyo. He was gathering GPS data riding his bike around various cities,
and building up a font based on his path. I like projects like this, where

someone takes a work that is already done and just writes an application
to convert the data into another format. To see him riding his bike played
back in GML was really nice. It is super low barrier to entry, he already
did all the hard work. I like that there is now a system for piping very
different kinds of data through GML.

24 Yamaguchi Takahiro http://www.graffitimarkuplanguage.com/kml2GML
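A hedged sketch of the kind of conversion a project like kml2GML performs; this is not the actual project code. KML GPS coordinates are read, normalized into GML's 0.0-1.0 drawing space, and given evenly spaced timestamps, since a KML coordinates list carries none.

```python
import xml.etree.ElementTree as ET

KML_NS = "{http://www.opengis.net/kml/2.2}"

def kml_to_points(kml_path):
    """Assumes the file contains at least one <coordinates> element."""
    root = ET.parse(kml_path).getroot()
    coords = []
    for node in root.iter(KML_NS + "coordinates"):
        for entry in node.text.split():
            lon, lat = map(float, entry.split(",")[:2])
            coords.append((lon, lat))
    # Normalize the GPS track into 0.0-1.0 space, keeping aspect ratio,
    # and spread time evenly over the points.
    lons = [c[0] for c in coords]
    lats = [c[1] for c in coords]
    span = max(max(lons) - min(lons), max(lats) - min(lats)) or 1.0
    last = max(len(coords) - 1, 1)
    return [((lon - min(lons)) / span, (lat - min(lats)) / span, i / last)
            for i, (lon, lat) in enumerate(coords)]
```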
FS

But it could also work the other way around?

ER

Yeah. This is maybe a tangent but depending on how someone solves
the GML challenge ... I was discussing this with Mike (the person that is
developing the sensor based version). He was thinking that if you would
turn on his system, and leave it on for a whole night of graffiti writing,
you would have the gestural data plus the GPS data. You could make
a .gml file that is tracking you down the street, and zoom in when you
start making the tag. Also you would get much more information on
3D movement, like tilt and when the pen is picking up and going down.
Right now all I am getting is a 2D view through video data. I am really
keeping my fingers crossed. But he ran into trouble though.

FS

Like what?

ER

I have my doubts about using these kinds of sensors, because ‘drift’ is
a problem. When you start using these sensors too long, it tends to move
a little bit. I think he is working within a 0.25 inch margin of error right
now, which is right on the edge. If you are recording someone doing a
big piece, this is not going to ruin my day too much but if you record a
little tag than it is a problem.
The other problem is that you need to orient the system before you start
tagging. It needs to know what is up and down, you have to define your
plane of access. I don’t really understand this 100% but he thinks he can
still fit it all within the ten second calibration requirement, he’s thinking
that each time you come to a wall, you tap once, you tap twice and tap a
third time to define what plane you are writing on and that calibrates the
3D space. Once you have that calibration done, you can start writing. It
is not as easy as attaching a motion sensor. The problem is hard.
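A sketch of the geometry behind the three-tap calibration described above: three tapped points define the writing plane, and the plane's unit normal is what allows later 3D sensor readings to be projected into flat 2D tag data. This illustrates the idea only, not the candidate's implementation.

```python
def plane_normal(p1, p2, p3):
    """Each point is an (x, y, z) tap position; returns the unit normal."""
    u = [p2[i] - p1[i] for i in range(3)]
    v = [p3[i] - p1[i] for i in range(3)]
    # Cross product of the two in-plane vectors gives the plane normal.
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(c * c for c in n) ** 0.5  # zero only if the taps are collinear
    return [c / length for c in n]

# Three taps on a wall lying in the x-y plane:
print(plane_normal((0, 0, 0), (1, 0, 0), (0, 1, 0)))  # [0.0, 0.0, 1.0]
```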
FS

So you need to touch the wall before writing on it, feeling out the
playing field before starting! It is like working on a tablet; to move from
actual movement to instruction; navigation blends into the action of drawing
itself.

ER

I like that!


SV

The guy using the iPhone did not use it as a sensor at all?

ER

Theo was interested in using the iPhone to record motion data in
GML, but also to save the coordinates so you could tie it into Google
Earth or something but he had trouble with the sensitivity of the sensor.
Maybe it is better now but you needed to draw on a huge scale for one
letter. You could not record anything small.
FS

But it could be nice if you could record with a device that is less conspicuous.
ER

I know. I have just been experimenting with mounting cameras on
spray-cans. A tangent to GML, but related. It is not data, but video.
FS

What do you think is the difference between recording video, and
recording data? You mentioned that you wanted to move away from documenting the image to capturing movement. Video is somehow indirect
data?
ER

Video is annoying in that it is computationally expensive. In Brazil 25
I have been using the laptop but the data is not very precise.
Kyle thinks he might be able to back out GML data from videos. This
might solve the challenge, depending on how many cameras you need and
how expensive they are. But so far I have not heard back from him. He
said it needs three different cameras all looking at the wall. I mean: talk
about computationally expensive! He likes video-processing, he knows
some Open Source software that can look for similar things and knows
how to relate them. To me it seems more difficult than it needs to be
(laughs).
FS

It is both overcomplicated and beautiful, trying to reverse engineer
movement from the image.
ER

I am getting more into video myself. I get more enjoyment from capturing the data than from the projections, like what most people associate
with my work.

FS

Why is it so much more interesting to capture, rather than to project?

ER

In part because it stays new, I’ve been doing those projections for a
while now and I know what happens at these events. For a while it was
very new, we just did it with friends, to project on the Brooklyn bridge

for example. Now it has turned into these events where everyone knows
in advance, instead of just showing up at a certain time at a set corner.
It has lost a lot of its magic and power.
Michele and I have done so many of these projections and we sort of
know what to expect from it, what questions people will ask. When I
meet with graffiti writers, that almost always feels new to me. When we
went to Brazil, we intentionally tried to not project anything but to spend
as much time as possible with writers. Going out with graffiti writers to
me always feels right.

25 Graffiti Analysis: Belo Horizonte, Brazil 2010 http://vimeo.com/16997642

FS

Is the documentation an excuse to be taken along, or is the act of
documenting itself interesting to you?

ER

To me documentation is interesting. I don’t know where all of this
is going right now, I am just trying to get the footage; I put these pieces
together showing all this movement but I don’t really know what the final
project is. It is more about collecting data so I am interested in having
video, audio and GML that can be synced up, and the sound from these
microphones is something to do something with later. This is research
for me. I like the idea of having all this data related to a 10 second gesture.
I am thinking that in the future we can do interesting things with it. I
am even thinking about how the audio could be used as a signal to tell
you what is drawing and what is not drawing. It is a really analog way of
doing it, but in that way you don’t need a button where you are getting
true and false statements for what is drawing and what is not drawing;
you can just tell by the sound:
tfffpt ... tfffpt.
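A sketch of the analog trick described here: treating the spraypaint hiss itself as the pen-down signal by thresholding loudness frame by frame. It assumes mono samples as floats between -1.0 and 1.0; the threshold value is invented and would need tuning per microphone.

```python
def drawing_frames(samples, frame_size=1024, threshold=0.05):
    """Yield True while the can is spraying, False while it is not."""
    for i in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[i:i + frame_size]
        # Root-mean-square loudness of this frame of audio.
        rms = (sum(s * s for s in frame) / frame_size) ** 0.5
        yield rms > threshold
```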

FS

You can hear the space, and also the surface.

ER

I got started doing this because I love graffiti and this is a way to
get closer to it again. Like getting back out to the streets and having
very personal relationships to the graffiti writers and talking to them,
and having them give feedback. I think that is how the whole challenge
started. It didn’t start because I was projecting, but because I was out on
the street and testing the capture, having graffiti writers nearby when it
is happening. It feels like things are progressing that way.
FS

Are you thinking of other ways of capturing? You talk about capturing
movement, but do you also archive other elements? Do you take notes,
pictures? What happens to the conversations you are having?


ER

I have been missing out on that piece. It is a small amount of time
we have, and I am already trying to get so much. I am setting up a
camera that shoots straight video from a tripod, I am capturing from the
laptop and I am also screencasting the application, my head is spinning.
One reason I screwed up this footage in the beginning is because with all
these things going on I forget to turn on some things. Maybe someone
will solve this challenge.

FS

Are you actually an embedded anthropologist?

ER

In the back of my head I am thinking this will become a longer documentary. I like to experiment with documentation, whether that is in
code or with video. I do think that there is this interesting connection
between documentation and graffiti and how these two things overlap.
I am always thinking about documentation. The graffiti writer that was
in Vienna 26 showed me a video that was amazing. It was him and a
friend going out on a sunny day at 15:30 in the afternoon with two head
mounted cameras, bombing an entire train and you hear the birds singing
and you only experience it by these two videos that are linked. There are
interesting constraints: your hands are already full, you don’t want people’s faces on camera so the head-mounted cameras were smart. Unless
you walk in front of a mirror (laughs).

FS

Is it related to the dream of ‘self documenting code’?

ER

I like that. Even doing the challenge is in a way a reflection on this,
how I am fighting to get GML back to the streets somehow, it has a
natural tendency to get closer to the browser, to the screen, and my job
is to get it back to the street. It is so sexy and fun and flashy and that is
important too. My job is to keep the graffiti influence on it as large as the
other part.

FS

Is any of this reflected in the standard itself?

ER

I haven’t looked at the standard for a while now.

FS

I was thinking again about live coding and notation. Simon Yuill 27
describes notation as a shared space that allows collaboration but also defines
the end of a collaboration.
26 momo3010 http://momo1030.com
27 Simon Yuill. All problems of notation will be solved by the masses. Mute Magazine, 2008


ER

Maybe using an XML-like structure was a bad idea? Maybe if I had
started with a less code-based set of rules? If the files were raw video,
it would encourage people to go outside more often? By picking XML
I am defining where the thing heads in a way. I think I am OK in the
role of fighting that tendency. It is not just a problem in GML but with a
lot of work I have been doing with graffiti and technology and even way
back with Graffiti Analysis, before GRL (Graffiti Research Lab), the idea
was always to keep the research very close to the people doing graffiti. I
was intentionally working with people bombing a lot and not with graffiti
celebrities. I wanted to work with whose tag was on my mailbox, whose
tag I see a million times when I walk down the street. Since then
a lot has happened, like with more popular projects such as L.A.S.E.R.
Tag, and it goes almost always further away from graffiti. Maybe that is
a function of technology. Technology, or the way it is now, will always
drift towards entertainment uses, commercial uses.
FS

Do you think a standard can be subversive? You chose XML because it
is accessible to amateur programmers. But it is also a very formal standard,
and so the interface between graffiti writers and hackers is written in the
language of bureaucracy.

ER

(laughs) I thought that there was something funny with that. People
that know XML and the web, they get the joke that something so rigid
and standardized is connected to writing your name on the wall. But to
be honest, it was really just a pragmatic choice.

SV

It reminds me of an interview 28 with François Chastanet who wrote a
book 29 about tagging in Los Angeles. He explains that the Gothic lettering
is inspired by administrative papers!
FS

I am wondering whether you’re thinking about the standard itself as
a space for hacking?
ER

Graffiti is somehow coded in itself. Do you mean it would be interesting
to think how GML could be coded in a way for graffiti writers, not for
coders?
There would be more space for that when more people start to program at
a younger age? When it is more common knowledge. If I would start to do

that now, I would quickly lose my small user-base. I love that idea though;
the way XML is programmed fits very much to the way you program for the
web. But what if it was playing more with language, starting from graffiti
which is very coded?
When I was in college, I was always thinking about how to visualize
motion in print. I was looking for ways people had developed languages
for different ways of writing.

28 Interview with François Chastanet http://www.youtube.com/watch?v=ayPcaGVKJHg
29 François Chastanet, Cholo writing: Latino gang graffiti in Los Angeles. Dokument, 2009
SV

Maybe you could look at the Chinese methods for teaching writing,
because the order of the strokes is really important. If you make the stroke
from bottom to top, and not from top to bottom, it is wrong.
ER

A friend in Hong Kong, MC Yan, loves the Graffiti Analysis project
because it shows the order in which he is writing and he likes to play
with that. So he writes words in different order than people are used to
and so it changes the meaning. People can not only watch the final result,
but also the order which is an interesting part of the writing process. The
brush, the angle, direction: depicting motion!
In the beginning of the Graffiti Analysis Research project I was very
against projection, because I felt that was totally against the idea of graffiti. I was presenting all of these print ideas and the output would be
pasted back into the city because I was against making an impermanent
representation of the data. In the end Zach said, you are just fighting this
because you have a motion project and you want to project motion and
then I said alright, I’ll do a test. And the tests were so exciting that I felt
OK with it.
FS

In what way does GML bridge the gap between digital drawing and
hand writing? Could you see a sort of computer-aided graffiti? Could you
see computation enter graffiti?
ER

Yeah. When you are in a controlled environment, in a studio, it is
easy but the outdoors part always trips me up. That is why the design
constraints get interesting, playing in real time with what someone is
writing. I think graffiti writers would be into that too. How to develop
a style that is unique enough to stand out in an existing canon is already
hard enough. This could give someone an edge.

I think the next challenge I’d like to run is about recreating the data
outside. I’ve been thinking about these helicopters with embedded wireless

cameras, have you seen them? The obvious thing to me would be uploading
a .gml file to one of these helicopters that is dripping paint on a rooftop.
Scale is so important, so going bigger is always going to be better.
Gigantic rooftop tags could be a way to tie it back to the city, give it a
reason? I am thinking of ways to get an edge back to the project. The
GML-challenge is already a step into that direction; it is not about the
prettiest screensaver. To ask people to design something that is tying back
to what graffiti is, which is in a way a crime.
I think fixing the data capture is the right place to start, the next one could
be about making marks in the city. Like: the first person to recreate this
GML-tag on the roof of this building, that would be fun. The first person
that could put this ‘Hello World’ tag onto the Brooklyn bridge and get a
photo of it gets the prize. That would get us back to the question of how
we leave marks on the surface of the city.
FS

When you capture data of an individual writer in a certain standard,
it ends up as typography?
ER

That’s another trend that happens when designers look at graffiti, and
I’ve fallen into this too sometimes, you want to be able to make fonts out of
it. People have done this actually; there’s a project in New York where they
met with pretty influential graffiti writers and asked them to write in boxes,
the whole alphabet, and I think there’s something interesting there.
The alphabet that you saw the robot write was drawn by TEMPT with the
EyeWriter and what he did was a little bit smarter than other attempts by
graffiti writers to make fonts. He intentionally picked a specific style, the
Cholo style, and the format is very tall, vertically oriented, angled. That
style is less about letter connections and pen-flow. What graffiti has developed into, and especially tags, is very much about how it is written and
the order of the letters. When TEMPT picked this style he made a smart
decision that a lot of people miss when you make a font, you miss all the
motions and the connections.
SV

What if a programmer could put this data in a font, and generate
alternating connections?
ER

That kind of stuff is interesting. It would help graffiti writers to design
tags maybe?
To get my feet wet, I designed a tag once, and it was so not-fun to write!
I was thinking about a tag that would look different and that would fit

235

Tying the story to data

into corners, I was interested in designing something that wasn’t curved;
that would fit the angles of the city, hard edges. So I had forgotten all
my research about drafting and writing. I think I stopped writing in part
because the tag I picked wasn’t fun to write. For a font to work like writing,
it is not just about possible connections between lines. You’d need another
level in the algorithm, the way the hand likes to move.
FS

It would be a good algorithm to dream up. It was beautiful to see a
robot write TEMPT’s letters by the way.
ER

When TEMPT saw the robot writing for the first time, his reaction was
all about the order of how the letters were constructed. The order is I think
defined by the way he dropped the points in with the EyeWriter software.
When he was writing with his eyes, he ended up writing in the same way
as he would have written with his hands. When he saw the video with the
robot, it freaked him out because he was like: That’s how my hand moved
when I did that tag!


The Graffiti Markup Field Recorder challenge

An easily reproducible DIY device that can unobtrusively record graffiti motion data during a graffiti writer’s normal practice in the city. 30
Project Description and Design Requirements:



The GML Field Recorder Challenge is a DIY hardware and software solution for unobtrusively recording graffiti motion data during a graffiti writer’s
normal practice in the city. The winning project will be an easy to follow
instruction set that can be reproduced by graffiti writers and amateur technologists. The goal is to create a device that will document a night of graffiti
bombing into an easily retrievable series of Graffiti Markup Language (.gml)
files while not interfering with the normal process of writing graffiti. The
solution should be easy to produce, lightweight, cheap, secure, and require
little to no setup and calibration. The winning design solution will include
the following requirements listed below:
Material costs for the field device must not exceed 300

ER

300 even felt expensive to me. How can this be a tool that is really
accessible? If it goes over a certain price point, it is not the kind of thing
that people can afford to make. It is a very small community, a lot of the
people that are going to have enough interest to build this are not going
to have a background in engineering, and are probably not even a part of
the maker scene that we know. The audience here might not be people
that are hanging out on Instructables. I wanted to make sure that the
price point meant that people could comfortably take a gamble to make
something for the first time. But I also did not want to make it so small
that the design would be impossible.

30 GML-recorder challenge as published on: http://www.graffitimarkuplanguage.com/challenges




Computers and equipment outside of the 300 can be used for non-field activities (such as downloading and manipulating data captured in-field), but at the time of capture a graffiti writer should have no more than 300 worth of equipment on him or herself.

ER

I was trying to think of how the challenge could be gamed ... I did not
want to get into a situation where we were getting stressed out because some
smart hacker found a hole in the brief, and bought a next generation iPhone
that somehow just worked. I didn’t want to force people to buy expensive
equipment. This line was more about covering our own ass.



The graffiti writer must be able to activate the recording function alone (i.e., without assistance from anyone else).
FS

Are you going to be out of work soon?

ER

Thinking selfishly, I screw up on documentation a lot because I have
too many hats. When I’m going out doing this, I am carrying a laptop, a
calibration set up, I also have one video-camera on me that is just documenting, I have another one on a tripod, and I am usually screen capturing
the software as it processes the video-footage because it tells another story.
I screw up because I forget to hit stop or record. If the data-capture just
works, I can go have fun getting good video-footage.
FS

What if it had to be operated by more than one person? It is nice
how the documentation now turns the act of writing into a performance-for-one.
ER

If you record alone, the data becomes more interesting and mysterious,
right? I mean, no one else has seen it. Something captured very privately,
then gets potentially shared publicly and turned into things that are very
different. I also thought: you don’t want to be dependent on someone else.
It is a lot to ask, especially if you are doing something illegal.




Any setup and/or calibration should be limited to 10
seconds or less.

ER

This came out of me dealing with the current system. It feels wrong
that it takes ten to fifteen minutes to get it running. Graffiti is not meant
to be that way. This speaks to the problem of the documentation infringing on the writing process, which ideally wouldn’t happen. The longer
the set-up takes, the more it is going to influence the actual writing. It is
supposed to be a fly on the wall.

FS

Does it scale? Does a larger piece allow a longer calibration time?

ER

That’s true. But I think this challenge is really about recording tags.

All hardware should be able to be easily concealed within a coat with large pockets.

ER

A hack to get around that would have been to design a jacket with ten
gallon pockets!
I put it there again, to make the device not be intrusive. A big part of graffiti
writing is about gaining entry and you limit where you can go depending on
how much equipment you have. How bulky it is, what walls you can get up,
what holes you can get through.



The winning solution should be discreet and not draw any added attention to the act of graffiti writing.
ER

It’s part of the same issue, but this one also came out of me going
out and trying to capture with a system that requires you to attach
a flashlight to a graffiti implement. I didn’t want anyone solving the
problem and then, step one is: ‘Attach a police siren to a spraypaint can’.




The resulting solution should be able to record at least
10 unique GML tags of approximately 10 seconds each in
length in one session without the need for connecting
to or using additional equipment.

ER

I wasn’t thinking this was going to be an issue in terms of memory storage, but maybe in terms of memory management. I did not want the
graffiti writer to behave as if he was on vacation with a camera that could take
only three photos. I wanted to make sure they were not making decisions
on what they were writing based on how much memory they had.



All data recorded using the field recorder should be
saved in a secure and non-incriminating fashion.

ER

(laughs) If I had to do that one again, I would have put that in the Bonus
category actually. That’s a difficult question to ask. What does secure
mean? It seems a bit unfair, because it doesn’t fit into the way graffiti is
currently documented. There’s not a lot of graffiti writers that currently
are shooting encrypted photos and videos, right?
But whatever bizarre format comes out from the sensor will help. I don’t
think that the NYPD will have time or make the effort to parse it. They’d
just have a file with a bunch of numbers. Time stamped GPS coordinates
would be more dangerous.

FS

What would count as proof?

ER

In most cases it is hard to convict someone on the basis of a photo
of a tag that you would tie to another tag. For good reasons, because if it
is a crew name for example, all of a sudden you are pinning one tag on a
person that could have been written by twenty people. This came up in
a trial in DC when an artist named BORF got arrested. He had written
his name everywhere, completely crushed DC and his trial was a big deal.
This issue came up and they argued that BORF was a collective, not an
individual. Who knows if that’s true, there were a lot of people around
him, but how do you really know?

FS

GML could help balance the load?

ER

You mean it would not be just the image of a tag but more like signing
at the bank?


FS

I mean that if you copy and distribute your data, the chance is small
that you can link it to an individual.



The winning design will have some protection in the event
that the device falls into the wrong hands.

ER

This again should probably have been a bonus item. Wouldn’t it be
awesome if you could go home and log in and flip a one to a zero and the
evidence goes up in smoke?
One graffiti writer friend told me: If the police comes, just smash the camera
as hard as you can! It’s a silly idea, but it shows that they are thinking
about it.

FS

Edible SD cards?

ER

That would be a good idea!

Data should be able to be captured from both spray cans
and markers.
ER

Yes.

FS

Are you prepared for tools that do not exist yet?

ER

That was kind of what I was thinking there. Markers are about direct
contact, spraypaint is in free space. If it works in those two situations, you
should theoretically be able to tie it to anything, even outside of graffiti. If
it was too much about spraypaint, it would be harder for someone to strap
it to a skateboard.




System should be able to record writing on various surfaces and materials.

ER

It is something you can easily forget about. When you are developing
something in the studio and it works well against a white wall, and then
when you go out in the city you realize that brick is a really weird
surface. Or even writing on glass, or on metal or on other reflecting
surfaces that could screw up your reading. It is there as a reminder for
people that are not thinking about graffiti that much. The street and the
studio are so different.



Data should be captured at 30 points per second minimum.

ER

I was assuming that lots of people were going to use cameras, and
I wanted to make sure they were taking enough data points. With other
capturing methods it is probably not such a problem. Even at 30 points per
second you can start to see the facets if you zoom in, so anything less is not
ideal.
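A small sketch of how the 30-points-per-second requirement could be checked on a recorded stroke, with points as (x, y, t) tuples and t in seconds, as in the earlier GML examples.

```python
def meets_rate(points, minimum=30.0):
    """True if no gap between consecutive points exceeds 1/minimum seconds."""
    times = [t for _, _, t in points]
    return all(b - a <= 1.0 / minimum for a, b in zip(times, times[1:]))
```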



The recording system should not interfere with the writer’s movements in any way (including writing, running and climbing).

ER

So this is where Muharrem is going to run into trouble. His solution
interferes. Not that much if you are just working in front of your body
space. But the way most writers write is that they are shuffling their feet
a lot, moving down the wall. Should it have said: Graffiti writer should
retain access to feet functionality? This point should be at the top almost.
FS

To me it feels strange, your emphasis on the tool blending into the
background. You could also see Muharrem’s solution as an enhancing device,
turning the writer into a tapdancer?
ER

I want to have on record: I love his solution! There’s a lot in his
design that is ‘making us more aware’ of what’s happening in the creation
of a tag. One thing that he is doing that is not in the specs, is that he is


logging strokes, like up and down. When you watch him using it, you
can see a little light going from red to green when the finger goes on
and off the spraypaint can. When you watch graffiti, it is too small of a
movement to even notice but when you are seeing that, it adds another
level of understanding of how they are writing.


All motion data should be saved using the current GML standard. 31
ER

Obvious.

All aspects of the winning design should be able to be
reproduced by graffiti writers and amateur technologists.

ER

It wouldn’t be exciting if only ten people can make this thing. This
tool should not be just for people that can make NASA qualified soldering
connections. Ideally it should not have any soldering. I always thought of
a soldering iron like a huge barrier point. I’m all for duct-taped electrical
connections.

FS

There’s nothing about weather-resistant in the challenge. You’re not
thinking about rain, are you?
ER

A lot of paint stops working in rain too.
I think what you get from this brief though is that the whole impetus for
this project is about me trying to steer the ship that clearly wants to go
into another direction, back to my interest in what graffiti is rather than
anything that people might find aesthetically pleasing. It is not about
‘graffiti influenced visuals’.
31 http://graffitimarkuplanguage.com/spec




All software must be released Open Source. All hardware must include clear DIY instructions/tutorials. All media must be released under an Open Content licence that promotes collaboration (such as a Free Art License or Creative Commons ShareAlike License).

ER

I didn’t want it to be too specific, but there had to be some effort into
making it open.



The recording must be an unobtrusive process, allowing the graffiti writer to concentrate solely on the act of writing (not on recording). The act of recording should not interfere with the act of graffiti writing.

ER

I’ve been through situations where the process gets so confusing that
you can’t keep your head straight and juggle all the variables. Your eyes
and ears are supposed to tell you about who’s coming around the corner.
Is there traffic coming or a train? There are so many other things you
need to pay attention to rather than: Is this button on?
The whole project is about getting good data. As soon as you force people
to think too much about the capture process, I think it influences when
and how they are writing.

Bonus, but not required:


Inclusion of date, time and location saved in the .gml
file.

ER

Yes. Security-wise that is questionable, but the nerd in me would just
love it. You could get really interesting data about a whole night of writing.
You could see a bigger story than just that of a single tag. How long did it
take to gain entry? How long were they hiding in the bushes? These things
get back to graffiti as a performance art rather than a form of visual art.


Paris, November 2011
FS

Last time we had contact we discussed how to invite Muharrem to
Brussels 32 . But now on the day of the deadline, it seems there are new
developments?

ER

I think in terms of the actual challenge, the main update is that since
we extended the deadline and made another call, I got an e-mail right on
the deadline today from Joshua Noble 33 with a very solid and pretty smart
proposal that seems to solve (maybe unfortunately for Muharrem) a bit
more of the design spec. It does it for cheaper and does it in a way that I
think is going to be easier to make also.
His design solution is using an optical mouse and he changed the sensors
so it has a stronger LED. He uses a modified lens on top of a plastic lens
that comes on top of a mouse, so that it can look at a surface that is a set
distance away. It has another sensor that looks at pitch, tilt and orientation,
but he is using that only to orient, the actual data gets recorded through the
mouse. It can get very high resolution, he is looking at up to a millimeter I
guess.
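A hedged sketch of the principle behind the optical-mouse approach, not Joshua Noble's actual design: the mouse sensor reports relative (dx, dy) movements, which are accumulated into absolute positions and timestamped into GML-style (x, y, t) points. The read_delta function is hypothetical, standing in for whatever hardware interface provides the sensor readings.

```python
import time

def track(read_delta, duration=10.0):
    """read_delta() is assumed to return the sensor's latest (dx, dy)."""
    x = y = 0.0
    start = time.time()
    points = []
    while time.time() - start < duration:
        dx, dy = read_delta()
        # Integrate relative movements into an absolute position.
        x += dx
        y += dy
        points.append((x, y, time.time() - start))
    return points
```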
FS

Muharrem’s solution seems less precise?

ER

I think he gets away with more because his solution is only for spraypaint
and once you are writing on that scale, even if you are off a few centimeters,
it might not ruin the data. If you look at the data he is getting, it actually
looks very good. I don’t think he has any numbers on the actual resolution
he is getting but if you were using his system with a pen, I think it would
be a different case. I like a lot of his solution too, it is an interesting hack.
It is funny that two of the candidates for the prize are both mouse hacks.
One is hacking a mechanical mouse and the other an optical mouse.

FS

It goes from drawing on a screen, to drawing on a wall?

JH

And back again!

ER

Yes. When I first was working on graffiti related software, the whole
reason I was building Graffiti Analysis as a capture application was


because I did not want to hand graffiti writers a mouse (laughter). I had
done all this research into graffiti and started to be embedded in the
community and I knew enough about the community that if you were
going to ask them to take part in something that was already weird, you
could not give them a mouse and expect any respect on the other end
of that conversation. They respect their tools, so the reason I was using camera-input was because I wanted to have a flexible system where
they could bring in anything and I could attach a device to it. Now I am
coming back to mice finally.

32 By early October 2011 no winning design-solution had been entered, besides a proposal from Muharem Yildirim that came more than halfway. We decided to use the prize money to fly Muharrem from Phoenix (US) to Brussels (BE) and document his project in a worksession as part of the Verbindingen/Jonctions 13 meetingdays. http://www.vj13.constantvzw.org
33 Joshua Noble http://www.thefactoryfactory.com/gmlchallenge/
FS

Now the deadline has passed, do you think the passage from wishlist to
contest worked out?

ER

I think it was a good experiment, I am not sure how clever it was. To
take a piece of culture that a lot of people don’t even look at, or look at
it and think it is trash, to invest all this time and research and software
expertise into it makes people think about the graffiti practice and what
it actually is. The cash prize does something similar. It attaches weight
to something that most people don’t even care about. Even having the
name of an organization like Constant attached to it is showing that I am
really serious about this. In that sense it is different than a wishlist.
I just read the Linus Torvalds 34 biography, and I liked his idea that ‘fun’
is part of innovation, right? In a programming sense, it is scratching a
personal itch. The attachment of a prize is more to underline the fun
aspect than anything else.
FS

I am still puzzled about GML and how it is on the one hand stimulating
collaboration and sharing, and then it comes back to the proud individual
that wants to show off. It is kind of funny actually that now two people are
winning the prize.

ER

I understand what you mean.

FS

Also in F/LOSS, under the flag of ‘Open’ and ‘Free’ there is a lot of
competition. Do you feel that kind of tension in your work?
ER

Even ‘Open’ and ‘Free’ are in competition!
In a project like White-Glove Tracking for example, the most popular
video was not made by me and did not have my name on it, but personally I

still felt a part of it. I think when you are working in open systems, you
take pride when a project has wings. It is maybe even a selfish act. It is
the story of me receiving some art funding and realizing that I am not the
best toolmaker for the job. Who ever manages to win the prize gets all
the glory, but I’m still going to feel awesome about it.

34 Torvalds, Linus; David Diamond (2001). Just For Fun: The Story of an Accidental Revolutionary. New York: HarperCollins.

FS

I have been reading the interview that Kyle McDonald did with Anton
Marini 35 and at some point he talks about being OK with sharing code and
libraries, but when it is too much of a personal style, then it is hard to share.
ER

Yes, I thought that was an interesting point. I’ve been in similar conversations on listservs with artists in the OpenFrameworks, Processing
and visual programming communities. What are the open pieces? It
makes sense to share libraries, but if I make a print from a piece of code,
do I then have to share the exact source and app for how that exact print
was made? What does it mean when I am investing money in a print, and
it is a limited series but I’m sharing the code? The art world is still based
on scarcity and we’re interested in computers that are copy-machines.
I see both sides of the argument and I am still trying to see how I fit
into it. It gets trickier when you are asked to release a piece rather than
a tool. If you are an Open Source artist and you make a toolset, that is
easier to share because people use that to make their own things. But
then an artist gets asked: how come I can’t get the file of that print? I
think that is a really hard question.

FS

But isn’t the tool often the piece, and vice versa?

ER

I agree. And I haven’t solved that question yet. Lately I’ve been a lot
less excited about running workshops for example. A lot of the people
that want to take part in the workshops are actually the opposition. Often
they own a club and they want to install a cool light-show or they are into
viral marketing. I never know which way to go with that. It depends on
what side of the curve of frustration I am on at that moment.
JH

Earlier you brought up the contrast between people that were more
visually invested and others that are more interested in the performance
aspect. I wanted to hear a bit more about the continuum in the culture and
how GML fits into that?
35 Anton Marini: Some personal projects of mine, for example specific effects and ‘looks’ that I have a personal attachment to, I don’t release. https://github.com/kylemcdonald/SharingInterviews/blob/master/antonmarini.markdown


ER

My focus has been on tags, this one portion of graffiti. I do think
there could be cool uses for more involved pieces. It would be great if
someone else would come in and do that, because it is a part of graffiti that
I haven’t studied that much. I would not even be able to write a specs sheet for it; it requires a lot of different things when you paint these
super-involved murals, when you have an hour or more time on your
hands a lot more things come into play. Color, nozzles, nozzle changes
and so on.

JH

Z-axis becomes important?

ER

Yes, and your distance from the wall, a lot of other things my brain
isn’t wrestling with. I think tags are always fundamental, even if they are
painting murals that take three days to paint, somewhere in their graffiti
education they start with the tags. You’re still going to be judged by the
community based on how you sign your name on the blackbook.
Graffiti is funny because it is almost conservative in terms of how a successful graffiti writer is viewed, and it is reflected in how graffiti looks in
some way the same all over the world. In some way it is a letdown, to travel
from Brooklyn to Paris to Brussels and see it all look the same, but I think it
stems from the fact that the community is so tight-knit. But at the end
of the day it always comes back to the tag.
In terms of the performance, in a tag the relationship between form and
function is really tight. The way your hand moves and how the tag actually looks on the wall is dictated by the gesture you are making. In a piece
where you have three hours, that tight synchronization isn’t there. With
a tag, every letter looks the way it does because that’s how it needs to be
drawn, because it needs to be connected to this other letter. There’s a
lot of respect for writers that do one-liners, and even if your tag has more
than one line, a good graffiti writer often has a one-line version. If you
don’t have to pick up the pen it is a really economical stroke.

JH

It is almost like hacking the limitations of gesture.

ER

It is a very specific design requirement. How to write a name that is
interesting to think about and to look at, you have to do it in 5 seconds,
you have to do it in one line, you have to do it on each type of surface.
On top of that, you have to do it a million times, for twenty years.
JH

In Seattle they call a piece that stays up for a longer time a ‘burner’. I
was connecting that to an archival practice of ephemera. It is a self-agreed

upon archival process, and it means that the piece will not be touched, even
for years.

ER

Graffiti has an interesting relationship to archiving. On the one hand,
many graffiti writers think: Now that tag’s done, but I’ve got another
million of them. While others do not want people painting over them,
the city or other graffiti writers. Also if a tag has been up there for a few
years, it acquires more reverence and it is even worse when it is painted
over.
But I think that GML is different, it is really more similar to a photo of
the tag. It is not trying to be the actual thing.
FS

Once a tag is saved in GML, what can be done with the data?

ER

I am myself reluctant to take any of these tags that I’ve collected and
do anything with it at all without talking closely to whoever’s tag it is,
because it is such an intimate thing. In that sense it is strange to have
an open data repository and to be so reluctant to use it in a way that is
looking at anyone too specifically.
The sculpture I’ve been working on is an average from a workshop; sixteen different graffiti writers merged into one. I don’t want to take advantage of any one writer. But this has nothing to do with the licence,
it is totally a different topic. If someone uploads to the 000000book site,
legally anyone should be able to do anything that they can do under the
Creative Commons licence that’s on the site but I think socially within
the community, it is a huge thing.
JH

There must be some social limits to referentiality. Like beat jacking for
DJs or biting rhymes for MCs, there must be a moment where you are not
just paying homage, but stealing a style.
ER

I’ve seen cases where both parties have been happy, like when Yamaguchi
Takahiro used some GML data from KATSU and piped it into Google
Maps, so he was showing these big KATSU tags all over the earth which
was a nice web-based implementation. I think he was doing what a graffiti writer does naturally: Get out there and make the tag bigger but in
different ways. He is not taking KATSU-data from the database without
shining light back on him.
FS

GML seems very inspired by the practice of Free Software, but at the
same time it reiterates the conventional hierarchies of who are supposed to

use what ... in which way ... from who. For me the excitement with open
licences is that you can do things without asking permission. So, usage
can develop even if it is not already prescribed by the culture. How would
someone like me, pretty far removed from graffiti culture ever know what I
am entitled to do?

ER

I have my reasons for which I would and would not use certain pieces
of data in certain contexts, but I like the fact that it is open for people
that might use it for other things, even if I would not push some of those
boundaries myself.
FS

Even when I am sometimes disappointed by the actual closedness of
F/LOSS, at least in theory, through its licensing and refusal to limit who is
entitled and who is not, it is a liberating force. It seems GML is only half
liberating?
ER

I agree. I think the lack of that is related to the data. The looseness of
its licence makes it less of an invitation in a sense. If the people that put
data up there would sit down and really talk about what this means, when
they would really walk through all the implications of what it means to
public domain a piece, that would be great. I would love that. Then you
could use it without having to worry about all the morality issues and
people’s feelings. It would be more free.
I think it would be good to do a workshop with graffiti writers where
beyond capturing data, you reserve an hour after the workshop to talk to
everybody about what it would mean to add an open licence. I’ve done
workshops with graffiti writers and I talked to everyone: Look, I am
going to upload this tag up to this place where everyone can download them
after the workshop, cool? And they go cool. But still, even then, do I really
feel comfortable that they understand what they’ve gotten into? Even if
someone has chosen a ShareAlike licence, I would be nervous I think.
Maybe I am putting too much weight on it. People outside Free Software
are already used to attaching Creative Commons licences to their videos.
Maybe I am too close to graffiti. I still hold the tag as primal!

JH
It is interesting to be worried about copyright on something that is
illegal, things you can not publicly claim ownership of.

FS
Would you agree that standards are a normalizing practice, that in a
way GML is part of a legalizing process?

ER
For that to happen, a larger community would have to get involved. It
would need to be Gesture Markup Language, and a community other than
graffiti writers would need to get involved.

FS
Would you be interested in legalizing graffiti?

ER
No. That’s why I stopped doing projections.

JH
Not legal forms of graffiti, but more like the vision of KRS-One of
the Hip Hop city, 36 where graffiti would obviously be legal. Does that
fundamentally change the nature of graffiti?

36	KRS-One Master Teacher. An Introduction to Hip Hop.
	http://www.krs-one.com/#!temple-of-hip-hop/c177q

ER
To me it is just not graffiti anymore. It is just painting. It changes what
it is. For me, its power stems from it being illegal. The motion happens
because it is illegal.

JH
In a sense, but there is also the calligraphic aspect of it. In Brooklyn,
a lot of the building owners say: yeah, throw it up and those are some
of the craziest pieces I know of, not from a tag-standpoint, but more as
complex graffiti visuals.

ER
I am always for de-criminalization. I don’t think anyone should go to
jail over a piece of paint that you could cover over in 5 seconds. And that
KRS-One city you mentioned would be cool to see.

JH
It is his Temple of Hip Hop, the idea to build a city of Hip Hop
where the entire culture can be there without any external repression.
It’s a utopian ideal obviously.

ER
Of course I would like to see that. If nothing else, you would totally
level the playing field between us and the advertisers. The only ones that
would get up messages in the city would be the ones with more time on
their hands.

JH
At the risk of stretching coherency, Hip Hop and Free Software
are both global insurgent subcultures that have emerged from being kind
of thrown away as fads and then become objects of pondering in multinational boardrooms. So I was hoping to open you up to riff on that:
zooming out, GML is a handshake point between these two cultures, but
GML is a specific thing within this larger world of F/LOSS and graffiti
in the larger world of hiphop. What other types of contact points might
there be? Do you see any similarities and differences?

ER
For me, even beyond technology and beyond graffiti it all boils down to
this idea of the hack that is really a phenomenon that has been going on
forever. It’s taking this system that has some sort of rigidity and repeating
elements and flipping it into doing something else. I see this in Hip Hop,
of course. The whole idea of sampling, the whole idea of turning a playback
device into a musical instrument, the idea of touching the record: all of
these things are hacks. We could go into a million examples of how graffiti
is like hacker culture.
In terms of that handshake moment between the two communities, I think
that is about realizing that it’s not about the code and in some sense it’s not
about the spraypaint. There’s this empowering idea of individual small actors
assuming control over systems that are bigger than themselves. To me, that’s
the connection point, whether it’s Hip Hop or rap or programming.
The similarities are there. I think there are huge differences in those communities too. One of them is this idea of the hustler from Hip Hop: the
idea of hustling doesn’t have anything to do with the economy of gift-giving. The idea that Jay-Z has popularized in Hip Hop and that rap music
and graffiti have at their core has to do with work ethic, but there’s also a
kind of braggadocio about making it yourself and attaining value yourself
and it definitely comes back to making money in the end. The idea of being
‘self-made’ in a way is empowering but I think that in the Open Source
movement or the Free Software movement the idea of hustling does not apply. It’s not that people don’t hustle on a day to day basis. You disagree with
me?

JH
It’s interesting because the more you were talking, the more I was
not sure of whether you were speaking about Hip Hop or Free Software
or maybe even more specifically the Open Source kind of ideological development. You have people like David Heinemeier Hansson who developed Ruby on Rails and basically co-opted an entire programming
language to the point where you can’t mention Ruby without people
thinking of his framework. He’s a hustler du jour: this guy’s been in
Linux Journal in a fold-out spread of him posing with a Lamborghini or
something. Talk about braggadocio! You get into certain levels or certain
dynamics within the community where it’s really like pissing contests.

ER
I like that, I think there’s something there. At the instigation of the
Open Source Initiative, though: like Linus ‘pre-stock option’, sitting in his
bedroom not seeing the sun for a year and hacking and nerding out. To me
they are so different, the idea of making this thing just for fun with a kind
of optimistic view on collaboration and sharing. I know it can turn into
money, I know it can turn into fame, I know it can turn into Lamborghinis
but I feel like where it’s coming from is different.

JH
I agree, that’s clearly a distinction between the two. They are not
coming from the same thing. But for me it’s also interesting to think
about it in terms that these are both sort of movements that have at times
been given liberational trappings, people have assigned liberatory powers
to these movements. Statistically the GPL is considerably more popular
than the Open Source licences, but I don’t know if you sat everybody
down and took a poll which side they would land on, whether they were
more about making money than they were about sharing. Are people
writing blogposts because they really want to share their ideas or because
they want to show how much cooler they are?

ER
You’re totally right and I think people in this scene are always looking
for examples of people making money, succeeding, good things coming to
people for reasons that aren’t just selflessness. People that are into Open
Source usually love to be able to point to those things, that this isn’t some
purely altruistic thing.

JH
Maybe you could take some of the hustle and turn it into something
in the Free Software world, mix and match.

ER
I think this line of inquiry is an interesting one that could be the
subject of a documentary or something. These communities seem very
different until you start finding things that at their core are really similar.

JH
It would be so interesting to have a cribs moment with some gangsta
or rapper who came from that, and he’s sort of showing off his stuff and
he has this machismo about him. Not necessarily directly misogynistic
but a macho kind of character and then take a nerd and have them do the
same.

FS
Would they really be so different?

JH
Obviously some rappers and some nerds, I mean that’s one of the
beauties – I mean it’s a global movement, you can’t help but have diversity
– but if we’re just speaking in generalizations?

FS
There’s a lot of showing off in F/LOSS too.

JH
Yeah, and there’s a lot of chauvinism. And when you said that self-made thing, that’s the Free Software idea number one.

ER
I think that part is a direct connection.

JH
And they’re coming from two completely different strata, from a
class-based analysis which is absent from a lot of discussion. Even on
that level, how to integrate them to me is a political question to some
degree.

ER
Right.

FS
Will any features of GML ever be deprecated?

ER
Breaking currently existing software? I hope not.

FS
Basically I’m asking for your long-term vision?

ER
When the spec was being made of course it wasn’t just me, it was a
group of people debating these things and of course nobody wants things
to break. The idea was that we tried to get in as many things as we could
think of and have the base stay kind of what it was with the idea that you
could add more stuff into it. It’s easy enough to do, of course it’s not a
super-rigid standard. If you look at what the base .gml file is, the minimum
requirements for GML to compile, it’s so, so stripped down. As long as it
just remains time/x-y-z, I don’t think that’s going to change, no.
But I’m also hoping that I’m not gonna be the main GML developer. I’m
already not, there’s already people doing way more stuff with it than I am.
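
For readers unfamiliar with the format, here is a minimal sketch of what such a base .gml file could look like – a single stroke recorded as time-stamped x/y/z points. The element names follow published GML examples and may differ in detail from the current spec, which remains the authoritative reference:

    <gml spec="1.0">
      <tag>
        <drawing>
          <stroke>
            <pt><x>0.0</x><y>0.0</y><z>0.0</z><t>0.00</t></pt>
            <pt><x>0.1</x><y>0.2</y><z>0.0</z><t>0.04</t></pt>
            <pt><x>0.3</x><y>0.5</y><z>0.0</z><t>0.09</t></pt>
          </stroke>
        </drawing>
      </tag>
    </gml>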

FS
How does it work when someone proposes a feature?

ER
They just e-mail me (laughs). But right now there hasn’t been a ton
of that because it’s such a simple thing; once you start cramming too much
into it, it starts feeling wrong. But all it’s gonna take is for someone to make
a new app that needs something else and then there will be a reason to
change it, but I think the change will always be adding, not removing.

The following text is a transcription of a talk by and conversation with Denis Jacquerye in the context of the Libre
Graphics Research Unit in 2012. We invited him in the
context of a session called Co-position where we tried to
re-imagine layout from scratch. The text-encoding standard Unicode, and moreover Denis’ precise understanding of the many cultural and political path-dependencies
involved in the making of it, felt like an obvious place
to start. Denis Jacquerye is involved in language technology, software localization and font engineering. He’s
been the co-lead of the DéjàVu Font project and works
with the African Network for Localization (ANLoc) to remove language limitations that exist in today’s technology.
Denis currently lives in London. This text is also available
in Considering your tools. 1 A shorter version has been
published in Libre Graphics Magazine 2.1.

1	Considering your tools: a reader for designers and developers. http://reader.lgru.net

This presentation is about the struggle of some people to use typography
in their languages, especially with digital type because there is quite a complex set of elements that make this universe of digital type. One of the
basic things people do when they want to use their languages, they end up
with this type of problem down here, where some characters are shown,
some aren’t, or they don’t match within the font, because one font
has one of the characters they need and another one doesn’t. Like
for example when a font has the capital letter but not the corresponding
lowercase letter. Users don’t really know how to deal with that, they just
try different fonts and when they’re more courageous, they go online and
find how to complain about those to developers – I mean font designers or
engineers. And those people try to solve those problems as well as they
can. But sometimes it’s pretty hard to find out how to solve them. Adding
missing characters is pretty easy but sometimes you also have language
requirements that are very complex. Like here for example, in Polish, you
have the ogonek, which is like a little tail that shows that a vowel is nasalized. Most fonts actually have that character, but for some languages, people
are used to having that little tail centred, which is quite rare to see in a font.
So when font designers face that issue, they have to make a choice whether
they want to go with one tradition or another, and whichever way they go,
they only cater to some of those people. Also you have problems of spacing
things differently, like a stacking of different accents – called diacritics or
diacritical marks. Stacking this high up often ends up on the line above, so
you have to find a solution to make it less heavy on a line, and then in some
languages, instead of stacking them, they end up putting them side by side,
which is yet another point where you have to make a choice.
But basically, all these things are based on how type is represented on computers. You used to have simple encodings like ASCII, the basic Western
Latin alphabet where each character was represented by a single byte. The character could be displayed with different fonts, with different styles, but these
encodings could not meet the requirements of different people. And then they made different encodings, because there were a lot of different requirements and it’s
technically impossible to fit them all in ASCII.
Often they would start with ASCII and then add the specific requirements
but soon they ended up having a lot of different standards because of all the
different needs. So one single byte of representation would have different
meanings and each of these meanings could be displayed differently in fonts.
But old webpages are often using old encodings. If your browser is not
using the right encoding you would have gibberish displayed because of this
chaos of encodings. So in the late eighties, they started thinking about
those problems and in the nineties they started working on Unicode: several
companies got together and worked on one single unifying standard that
would be compatible with all the pre-used standards or the new coming
ones.
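
As a minimal illustration (in Python; the byte value and encodings are just convenient examples), the same single byte means different things in different legacy encodings, which is exactly the ambiguity Unicode set out to resolve:

    raw = b"\xe4"
    print(raw.decode("latin-1"))   # 'ä' in Western European Latin-1
    print(raw.decode("cp1251"))    # 'д' in Cyrillic Windows-1251
    print("ä".encode("utf-8"))     # b'\xc3\xa4' – one unambiguous code point, U+00E4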
Unicode is pretty well defined: you have a universal code point to identify a character, and then that character can be displayed with
different glyphs depending on the font or the style selected. With that
framework, when you need to have the proper character displayed, you have
to go to the code point in a font editor, change the shape of the character and
it can be displayed properly. Then sometimes there’s just no code point for
the character you need because it hasn’t been added, it wasn’t in any existing
standard or nobody has ever needed it before or people who needed it just
used old printers and metal type.
So in this case, you have to start to deal with the Unicode organization itself.
They have a few ways to communicate, like the public mailing list, and
recently they also opened a forum where you can ask questions about the
characters you need as you might just not find them.
In most operating systems, you have a character map application where you
can access all the characters, either all the characters that exist in Unicode or
the ones available in the font you’re using. And it’s quite hard to find what
you need, as it’s most of the time organized with a very restrictive set of
rules. Characters are just ordered in the way they’re ordered within Unicode,
using their code point order: for example, capital A is 41 in hexadecimal, and then B is 42,
etc. The further you go in the alphabet the further you go in the Unicode
blocks and tables, and there are a lot of different writing systems ... Moreover,
because Unicode is sort of expanding organically – work is done on one
script, and then on another, then coming back to previous scripts to add
things – things are not really in a logical or practical order. Basic Latin is all
the way up there, and further on, you have Latin Extended-A, Latin
Extended Additional, Latin Extended-B, C and D. Those are actually quite far
apart within Unicode, and each of them can have a different setup: for
example, here you have a capital letter that is just alone, and here you have
a capital letter and a lowercase letter. So when you know the character you
want to use, sometimes you would find the uppercase letter but you’d have
to keep looking for the corresponding lowercase.
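
A quick sketch of that code point order (Python; the sample characters are arbitrary):

    import unicodedata
    print(hex(ord("A")))           # 0x41 – Basic Latin
    print(hex(ord("Ą")))           # 0x104 – Latin Extended-A
    print(unicodedata.name("Ą"))   # LATIN CAPITAL LETTER A WITH OGONEK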
Basically when you have a character that you can’t find, people from the
mailing list or the forum can tell you if it would be relevant to include it
in Unicode or not. And if you’re very motivated, you can try to meet the
inclusion criteria. But for a proper inclusion, there has to be a formal
proposal using their template with questions to answer, you also have to
provide proof that the characters you want to add are actually used or how
they would be used.

The criteria are quite complicated because you have to make sure that this is
not a glyphic variant (the same character but represented differently). Then
you also have to prove the character doesn’t already exist because sometimes
you just don’t know it’s a variant of another one; sometimes they just want
to make it easier and claim it’s a variant of another one even though you
don’t agree. For example, making sure it’s not just a ligature as sometimes
ligatures are used as a single character, sometimes they exist for aesthetic
reasons. Eventually you have to provide an actual font with the character so
that they can use it in their documentation.
How long does it take usually?

It depends, as sometimes they accept it right away if you explain your request
properly and provide enough proof, but they often ask for revisions to the
proposals and then it can be rejected because it doesn’t meet the criteria.
Actually those criterias have changed a bit in the past. They started with
Basic Latin and then added special characters which were used: here for example is the international phonetic alphabet but also all the accented ones ...
As they were used in other encodings, and as Unicode initially wanted to
be compatible with everything that already existed, they added them. Then
they figured they already had all those accented characters from other encodings so they’re also going to add all the ones they know are used even
though they were not encoded yet. They ended up with different names because they had different policies at the beginning instead of having the same
policy as now. They added here a bunch of Latin letters with marks that
were used for example in transcription. So if you’re transcribing Sanskrit for
example, you would use some of the characters here. Then at some point
they realized that this list of accented characters would get huge, and that
there must be a smarter way to do this. Therefore they figured you could
actually use just parts of those characters as they can be broken apart: a
base letter and marks you add to it. You may have a single character that
can be decomposed canonically into the letter B and a combining dot above,
and you have the character for the dot above in the block of the diacritical
marks. You have access to all the diacritical marks they thought were useful
at some point. At that point, when they realized they would end up having
thousands of accented characters, they figured that this way they can
have just any possibility, so from now on, they’re just going to say: if you
want to have an accented character that hasn’t been encoded already, just
use the parts that can represent it. Then in 1996, some people for Yoruba,
a spoken language in Nigeria, made a proposal to add the characters with
diacritics they needed and Unicode just rejected the proposal as they could
compose those characters by combining existing parts.
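
This decomposition mechanic is easy to see in Python (U+1E02 is an arbitrary example):

    import unicodedata
    precomposed = "\u1E02"                    # Ḃ – B with dot above, one code point
    decomposed = unicodedata.normalize("NFD", precomposed)
    print([hex(ord(c)) for c in decomposed])  # ['0x42', '0x307'] – B + combining dot above
    print(unicodedata.normalize("NFC", "B\u0307") == precomposed)  # True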
Weren’t the elements they needed already in the toolbox?

Yes, the encoding parts are there, meaning it can be represented with
Unicode, but the software didn’t handle them properly, so it made more
sense to the Yoruba speakers to have the characters encoded in Unicode.

So you could type, but you’d need to type two characters of course?

Yes, the way you type things is a big problem. Because most keyboards
are based on old encodings where you have accented characters as single
characters, so when you want to do a sequence of characters, you actually
have to type more, or you’d have to have a special keyboard layout allowing
you to have one key mapped to several characters. So that’s technically
feasible but it’s a slow process to have all the possibilities. You might have
one which is very common, so developers end up adding it to the keyboard
layouts or whatever applications they’re using, but not when other people
have different needs.
There is a lot of documentation within Unicode, but it’s quite hard to find
what you want when you’re just starting, and it’s quite technical. Most of it
is actually in a book they publish at every new version. This book has a few
chapters that describe how Unicode works and how characters should work
together, what properties they have. And all the differences between scripts
are relevant. They also have special cases trying to cater to those needs that
weren’t met or the proposals that were rejected. They have a few examples
in the Unicode book: in some transcription systems they have this sequence
of characters or ligature; a t and a s with a ligature tie and then a dot above.
So the ligature tie means that t and s are pronounced together and the dot
above is err ... has a different meaning (laughs). But it has a meaning! But
because of the way characters work in Unicode, applications actually reorder
it: whatever you type in, it’s reordered so that the ligature tie ends up being
moved after the dot. So you always have this representation because you
have the t, there should be the dot, and then there should be the ligature tie
and then the s. So the t goes first, the dot goes above the t, the ligature tie
goes above everything and then the s just goes next to the t. The way they
explain how to do this is: you type the t, the ligature tie, and then a
special diacritical mark that prevents any kind of reordering, then you can
add the dot and then you can do the s. So this kind of use is great as you
have a solution, it’s just super hard because you have to type five characters
instead of ... well ... four (laughs). But still, most of the libraries that are
rendering fonts don’t handle it properly and then even most fonts don’t
plan for it. So even if the fonts did anyway the libraries wouldn’t handle it
properly. Then there are other things that Unicode does: because of that
separation between accents and characters and then the composition, you
can actually normalize how things are ordered. This sequence of characters
can be reordered into the pre-composed one with a circumflex or whatever;
you have combining marks in the normalized order. All these things have
to be handled in the libraries, in the application or in the fonts.
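
A sketch of both behaviours in Python – the reordering and the mark that blocks it. The tie (U+0361) has combining class 234 and the dot above (U+0307) class 230, so normalization moves the dot in front of the tie; U+034F COMBINING GRAPHEME JOINER (class 0) is, presumably, the ‘special diacritical mark’ meant here, as it is the standard way to block that reordering:

    import unicodedata
    typed = "t\u0361\u0307s"                            # t + tie + dot + s
    print(unicodedata.normalize("NFD", typed))          # the dot jumps before the tie
    kept = "t\u0361\u034F\u0307s"                       # five characters instead of four
    print(unicodedata.normalize("NFD", kept) == kept)   # True – order preserved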
The documentation of Unicode itself is not prescriptive, meaning that the
shapes of the glyphs are not set in stone. So you can still have room to
have the style you want, the style your target users want. For example
if we have different glyphs: Unicode has just one shape and it’s the font
designer’s choice to have different ones. Unicode is not about glyphs, it’s
really about how information is represented, not how it’s displayed. Or you have
two characters displayed as a ligature: it is actually encoded as one character
because of previous encodings. But if ever there would be a new case, Unicode
wouldn’t encode the ligature as a single character.

So all this information is really in a corner there. It’s quite rare to find fonts
that actually use this information to provide for the needs of the people who
need specific features. One of the ways to implement all those features is
with TrueType/OpenType, and there are also some alternatives like Graphite,
which is a subset of a TrueType/OpenType font. But then, you need your
applications to be able to handle Graphite. So eventually the real unique
standard is TrueType/OpenType. It’s pretty well documented and very technical, because it allows you to do many things for many different writing systems.
But it’s slow to update, so if there’s a mistake in the actual specifications of
OpenType, it takes a while before they correct it and before that correction shows up in your application. It’s quite flexible, and one of the big
issues is that it has its own language code system, meaning that some identified languages just can’t be identified in OpenType. One of the features in
OpenType is managing language environment. If I’m using Polish, I’d want
this shape; if I’m using Navajo, I’d want this shape. That’s very cool because you can make just one font that’s used by Polish speakers and Navajo
speakers without them worrying about changing fonts as long as they specify the language they’re using. But you can’t use this feature for languages
which aren’t in the OpenType specifications, as it has a different way of
describing languages than Unicode. It’s really frustrating because you can
find all the characters in Unicode, not organized in a practical way: you have
to look all around the tables to find the characters that may be used by one
language, and then you have to look around for how to actually use them.
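
To give an idea, this is roughly what such a language-specific substitution looks like in OpenType feature syntax (a hedged sketch: ‘PLK’ and ‘NAV’ are the OpenType tags for Polish and Navajo, while the glyph names are hypothetical):

    languagesystem latn dflt;
    languagesystem latn PLK;   # Polish
    languagesystem latn NAV;   # Navajo

    feature locl {
        script latn;
        language NAV;
        # substitute a centred-ogonek variant for Navajo readers
        sub aogonek by aogonek.navajo;
    } locl;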
There is a real lack of awareness within the font designer community. Because
even when they might add all the characters you need, they might just not
add the positioning, so for example you have a ... when you combine with a
circumflex, it doesn’t position well, because most of the font designers still
work with the old encoding mindset where you have one character for one
accented letter. Sometimes they just think that following the Unicode
blocks is good enough. But then you have problems where, as you can see
in the Basic Latin charts at the beginning, the capital is in one block and
its lowercase in a different block. And then they just work on one block,
they just don’t do the other one because they don’t think it’s necessary; but
yet, the two cases of the same letter are there, so it would make sense to have
both. It’s hard because there are very few connections between the Unicode
world, people working on OpenType libraries, font designers and the actual
needs of the users.

At the beginning of the presentation you went for the code point of the characters,
all your characters are subtitled by their code points; it’s kind of the beauty of
Unicode to name everything, every character.
Those names are actually quite long. One funny thing about this. Unicode
has the policy of not changing the names of the characters, so they have an
errata where they realized that oh, we shouldn’t have named this that, so here’s
the actual name that makes sense, and the real name is wrong.

Pierre refers to the fact that in the character mappings each of the glyphs
also has a description. And those are sometimes so abstract and poetic that
this was a start of a work from OSP, the Dingbats Liberation Fest, to try
to re-imagine what shapes would belong to those descriptions. So ‘combining
dot above’ that’s the textual description of the code point. But of course there
are thousands of them so they come up with the most fantastic gymnastics ...
So when people come into a project like DéjàVu, they have to understand
all that to start contributing. How does this training, teaching, learning
process take place?

Usually most people are interested in what they know. They have a specific
need and they realize they can add it to DéjàVu, so they learn how to play
with FontForge. After a while, what they’ve done is good and we can use
it. Some people end up adding glyphs they’re not familiar with. For example we had Ben doing Arabic: it was mostly just drawing and then asking
for feedback on the mailing list; then we got some feedback, we changed
some things, eventually released it, getting more feedback (laughs) because
more people complained ... So it’s a lot of just drawing what you can from
resources you can find. It’s often based on other typefaces therefore sometimes you’re just copying mistakes from other typefaces ... So eventually it’s
just the feedback from the users that’s really helpful because you know that
people are using it, trying it, and then you know how to make it better.

(Type) designer Pedro Amado is amongst many other things
initiator of TypeForge 1 , a website dedicated to the development
of ‘collaborative type’ with Open Source tools. While working
as design technician at FBAUP 2 , he is about to finish an MA
with a paper on collaborative methods for the creation of art
and design projects. When I e-mailed him in 2006 about open
font design and how he sees that developing, he responded with
a list of useful links, but also with:
Developing design teaching based on
Open Source is one of my goals, because
I think that is the future of education.

This text is based on the conversation about design, teaching
and software that followed.

1	http://www.typeforge.net/
2	http://www.fba.up.pt/

You told me you are employed as ‘design technician’ ... what does that
mean?

It means that I provide assistance to teachers and students in the Design
Department. I implemented scanning/printing facilities for example, and
currently I develop and give workshops on Digital Technologies – software
is a BIG issue for me right now! Linux and Open Source software are slowly
entering the design spaces of our school. For me it has been a ‘battle’ to
find space for these tools. I mean – we could migrate completely to OSS
tools, but it’s a slow progress. Mainly because people (students) need (and
want) to be trained in the same commercial applications as the ones they
will encounter in their professional life.
How did Linux enter the design lab? How did that start?

It started with a personal curiosity, but also for economical reasons. Our
school can’t afford to acquire all the software licenses we’d like. For example, we can’t justify to pay approx. 100 x 10 licenses, just to implement
the educational version of Fontlab on some of our computers; especially because this package is only used by a part of our second year design students.
You can imagine what the total budget will be with all the other needs ... I
personally believe that we can find everything we need on the web. It’s a
matter of searching long enough! So this is how I was very happy to find
Fontforge. An Open Source tool that is solid enough to use in education
and can produce (as far as I have been able to test) almost professional results in font development. At first I couldn’t grasp how to use it under X 3
on Windows, so one day I set out to try and do it on Linux ... and one thing
lead to another ...

What got you into using OSS? Was it all one thing leading to another?

Wow ... can’t remember ... I believe it had to do with my first experiences
online; I don’t think I knew the concept before 2000. I mean I’ve started
using the web (IRC and basic browsing) in 1999, but I think it had to do
with the search of newer and better tools ...
I think I also started to get into it around that time. But I think I was
more interested in copyleft though, than in software.

Oh ... (blush) not me ... I got into it definitely for the ‘free beer’ aspect!
By 2004 I started using DTP applications on Linux (still in my own time)
and began to think that these tools could be used in an educational context,
if not professionally. In the beginning of 2006 I presented a study to the
coordinator of the Design Department at FBAUP, in which I proposed to
start implementing Open Source tools as an alternative to the tools we were
missing. Blender for 3D animation, FontForge for type design, Processing
for interactive/graphic programming and others as a complement to proprietary packages: GIMP, Scribus and Inkscape to name the most important
ones. I ran into some technical problems that I hope will be sorted out
soon; one of the strategies is to run these software packages on a migration
basis – as the older computers in our lab won’t be able to run MacOS 10.4+,
we’ll start converting them to Linux.
3	Cygwin/X is a port of the X Window System to the Cygwin API layer for the Microsoft
	Windows family of operating systems. Cygwin/X: X Windows – on Windows! http://x.cygwin.com/, 2014. [Online; accessed 5.8.2014]

I wanted to ask you about the relation between software and design.
To me, economy, working process, but also aesthetics are a product of
software, and at the same time software itself is shaped through use. I
think the borders between software and design are not so strictly drawn.

It’s funny you put things in that perspective. I couldn’t agree more.
Nevertheless I think that design thinking prevails (or it should) as it must
come first when approaching problems. If the design thinking is correct,
the tools used should be irrelevant. I say ‘should’ because in a perfect environment we could work within a team where all tools (software/hardware)
are mastered. Rarely this happens, so much of our design thinking is still
influenced by what we can actually produce.

Do you mean to say that what we can think is influenced by what we
can make? This would work for me! But often when tools are mastered,
they disappear in the background and in my opinion that can become a
problem.

I’m not sure if I follow your point. I agree that the border between design
and software is not so strict; nevertheless, I don’t agree that economy, process
and aesthetics are a product of software. As you’ve come to say, what we think
is influenced by what we can make ... this is an outside observation ...
A technique is produced inside a culture,
therefore one’s society is conditioned by
its techniques. Conditioned, not determined. 4

Design, like economics and software, is a product of culture. Or is it
the other way around? The fact is that we can’t really tell what comes first.
Culture is defined by and defines technology. Therefore it’s more or less
simple to accept that software determines (and is determined by) its use.
This is an intricate process ... it kind of goes roundabout on itself ...
4	Pierre Lévy. Cyberculture (Electronic Mediations). University of Minnesota Press, 2001

And where does design fit in in your opinion? Or more precisely:
designers?

Design is a cultural aspect. Therefore it does not escape this logic. Using
a practical standpoint: Design is a product of economics and technology.
Nevertheless the best design practices (or at least the ones that have endured
the test of time) and the most renowned designers are the ones that can
escape the economic and technological boundaries. The best design
practices are the ones that are not products of economics and technology
... they are kind of approaching a universal design status (if one exists). Of
course ... it’s very theoretical, and optimistic ... but it should be like this ...
otherwise we’ll stop looking for better or newer solutions, and we’ll stop
pushing boundaries and design as technology and other areas will stagnate.
On the other hand, there is a special ‘school’ of thought manifested through
some of the Portuguese Design Association members, saying that the design
process should lead the process of technological development. Henrique
Cayate (I think it was in November last year) said that design should lead the
way to economy and technology in society. I think this is a bit far-fetched ...

Do you think software defines form and/or content? How is software
related to design processes?
I think these are the essential questions related to the use of OSS. Can
we think about what we can make without thinking about process? I believe
that in design processes, as in design teaching, concepts should be separated
from techniques or software as much as possible.
To me, exactly because techniques and software are intertwined, software matters and should offer space for thinking (software should therefore not be separated from design). You could also say: design becomes
exceptionally strong when it makes use of its context, and responds to it
in an intelligent way. Or maybe I did not understand what you meant by
being ‘a product of ’. To me that is not necessarily a negative point.
Well ... yes ... that could be a definition of good design, I guess. I think
that as a cultural product, technique can’t determine society. It can and
will influence it, but at the same time it will also just happen. When we talk
about Design and Software I see the same principle reflected. Design being
the ‘culture’ or society and software being the tools or techniques that are
developed to be used by designers. So this is much the same as Which came
first? The chicken, or the egg? Looking at it from a designer’s (not a software
developer’s) point of view, the tools we use will always condition our output.
Nevertheless I think it’s our role as users to push tools further and let developers know what we want to do with them. Whether we do animation on
Photoshop, or print graphics on Flash that’s our responsibility. We have to
use our tools in a responsible way. Knowing that the use we make of them
will eventually come back at us. It’s a kind of responsible feedback.
Using Linux in a design environment is not an obvious choice. Most
designers are practically married to their Adobe Suite. How come it is
entering your school after all?

Very slowly! Linux is finally becoming valuable for the Design/DTP area, as
it has long been in the Internet/Web and programming areas. But you
can’t expect GIMP to surpass Photoshop. At least not in the next few years.
And this is the reality. If we can, we must train our students to use the
best tools available. Ideally all tools available, so they won’t have problems
when faced with a tool professionally. The big question is still how we,
besides teaching students theory and design processes (with the help of free
tools), help them to become professionals. We also have to teach them
how to survive a professional relationship with professional tools like the
Adobe Suite. As I am certain that Linux and OSS (or F/LOSS) will be
part of education’s future, I am certain of its coexistence alongside
commercial software like Adobe’s. It’s only a matter of time. Being certain
of this, the essential question is: How will we manage to work in parallel in
both commercial and free worlds?

Do you think it is at all possible to ‘survive’ on other tools than the
ones Adobe offers?

Well ... I seem not to be able to dedicate myself entirely to these new
tools ... To depend solely on OSS tools ... I think that is not possible, at
least not at this moment. But now is the time to take these OSS tools
and start to teach with them. They must be implemented in our schools.
I am certain that sooner or later this will be common practice throughout
European schools.
Can you explain a bit more, what you mean by ‘real world’?

Being a professional graphic designer is what we call the ‘real world’ in
our school. I mean, having to work full time doing illustration, corporate
identity, graphic design, etc., to make a living, deliver on time to clients and
make a profit to pay the bills by the end of the month!

Do you think OSS can/should be taught differently? It seems self-teaching is built into these tools and the community around them. It means
you learn to teach others in fact ... that you actually have to leave the
concept of ‘mastering’ behind?
I agree. The great thing about Linux is precisely that – as it is developed
by users and for users – it is developing a sense of community around it, a
sense of given enough eyeballs, someone will figure it out.
Well, that does not always work, but most of the time ...

I believe that using Open Source tools is perfect to teach, especially
first year students. Almost no one really understands what the commands
behind the menus of Photoshop mean, at least not the people I’ve seen in
my workshops. I guess GIMP won’t resolve this matter, but it will help
them think about what they are doing to digital images. Especially when
they have to use unfamiliar software. You first have to teach the design
process and then the tool can be taught correctly, otherwise you’ll just be
teaching habits or tricks. As I said before, as long as design prevails and not
the tool/technique, and you teach the concepts behind the tools in the right
way, people will adapt seamlessly to new tools, and the interface will become
invisible!

Do you think this means you will need to restructure the curriculum?
I imagine a class in bugreporting ... or getting help online ...

mmhh ... that could be interesting. I’ve never thought about it in that
way. I’ve always seen bugreporting and other community driven activities
280

as part of the individual aspect of working with these tools ... but basically
you are suggesting to implement an ‘Open Source civic behavior class’ or
something like that?

Ehm ... Yes! I think you need to learn that you own your tools, meaning
you need to take care of them (i.e. if something does not work, report it)
but at the same time you can open them up and get under the hood ...
change something small or something big. You also need to learn that
you can expect to get help from other people than your tutor ... and that
you can teach someone else.

The aspect of taking responsibility, this has to be cultivated – a responsible use of these tools. About changing things under the hood ... well, this I
think will be more difficult. I think there is barely space to educate people to hack their own tools, let alone getting under the hood and modifying
them. But you are right that under the OSS communication model, the
peer review model of analysis, communication is getting less and less hierarchical. You don’t have to be an expert to develop new or powerful tools or
other things ... A peer-review model assumes that you just need to be clever
and willing to work with others. As long as you treat your collaborators
as peers, whether or not they are more or less advanced than you, this will
motivate them to work harder. You should not disregard their suggestions
and reward them with the implementations (or critics) of their work.

How does that model become a reality in teaching? How can you
practice this?

Well ... for example use public communication/distribution platforms
(like an expanded web forum) inside school, or available on the Internet;
blog updates and suggestions constantly; keep a repository of files; encourage the use of real time communication technologies ... as you might have
noticed, this is almost the formula used in e-learning solutions.
And also often an argument for cutting down on teaching hours.

That actually is and isn’t true. You can and will (almost certainly) have
less and less traditional classes, but if the teachers and tutors are dedicated,
they will be more available than ever! This will mean that students and
teachers will be working together in a more informal relationship. But it
can also provoke an invasion of the personal space of teachers ...
It is hard to put a border when you are that much involved. I am
just thinking how you could use the community around Open Source
software to help out. I mean ... if the online teaching tools would be
open to others outside the school too, this would be the advantage. It
would also mean that as a school, you contribute to the public domain
with your classes and courses.

That is another question. I think schools should contribute to public
domain knowledge. Right now I am not sharing any of the knowledge
about implementing OSS on a school like ours with the community. But
if all goes well I’ll have this working by December 2006. I’m working on
a website where I can post the handbooks for workshops and other useful
resources.
I am really curious about your experiences. However convinced I am
of the necessity to do it, I don’t think it is easy to open education up to
the public, especially not for undergraduate education.

I do have my doubts too. If you look at it from a commercial perspective,
students are paying for their education ... should we share the same content
to everyone? Will other people explore these resources in a wrong way?
Will it really contribute to the rest of the community? What about profit?
Can we afford to give this knowledge away for free, I mean, as a school this
is almost our only source of income? Will the prestige gained, be worth
the possible loss? These are important questions that I need to think more
about.

OK, I will be back with you in 6 months to find out more! My last question ... why would you invest time and energy in OSS when you think
good designers should escape economical and technological boundaries?
If we invest energy on OSS tools now, we’ll have the advantage of already
being savvy by the time they become widely accepted. The worst case scenario would be that you’ve wasted time perfecting your skills or learned a
new tool that didn’t become a standard ... How many times have we done
this already in our life? In any way, we need to learn concepts behind
the tools, learn new and different tools, even unnecessary ones in order to
broaden our knowledge base – this will eventually help us think ‘out of the
box’ and hopefully push boundaries further [not so much as escaping them].
For me OSS and its movement have reached a maturity level that can prove
its own worth in society. Just see Firefox – when it reached general user
acceptance level (aka ‘project maturity’ or ‘development state’), they started
to compete directly with MS Internet Explorer. This will happen with the
rest (at least that’s what I believe). It’s a matter of quality and doing the
correct broadcast to the general public. Linux started almost as a personal
project and now it’s a powerhouse in programming or web environments.
Maybe because these are areas that require constant software and hardware
attention it became an obvious and successful choice. People just modified it
as they needed it done. Couldn’t this be done as effectively (or better) with
commercial solutions? Of course. But could people develop personalized
solutions to specific problems in their own time frame? Probably not ... But
it means that the people involved are, or can resort to, computer experts.
What about the application of these ideas to other areas? The justice department of the Portuguese government (Ministério da Justiça) is for example
currently undergoing a massive informatics (as in the tools used) change –
they are slowly migrating their working platform to an Open Source Linux
distribution – Caixa Mágica (although it’s maintained and given assistance
by a commercial enterprise by the same name). By doing this, they’ll cut
costs dramatically and will still be able to work with equivalent productivity
(one hopes: better!). The other example is well known. The Spanish region of Extremadura looked for a way to cut costs on the implementation
of information technologies in their school system and developed their own
Linux Distro called Linex – it aggregates the software bundle they need,
and best of all has been developed and constantly tweaked by them. Now
Linux is becoming more accessible for users without technical training, and
is in a WYSIWYG state of development, so I really believe we should start
using it seriously so we can try and test it and learn how we can use it in
our everyday life (for me this process has already started ... ). People aren’t
stupid. They’re just ‘change resistant’. One of the aspects I think that will
get peoples’ attention will be that a ‘free beer’ is as good as a commercial
one.

August 2006. One of the original co-conspirators of the
OSP adventure is the Brussels graphiste going under the
name Harrisson. His interest in Open Source software
flows with the culture of exchange that keeps the off-centre music scene alive, as well as with the humanist
tradition persistently present in contemporary typography. Harrisson’s visual frame of reference is eclectic and
vibrant, including modernist giants, vernacular design,
local typographic culture, classic painting, drawing and
graffiti. Too much food for one conversation.

FS
You could say that ‘A typeface is entirely derivative’, but others argue that maybe
the alphabet is, but not the interpretations of it.

H
The main point of typography and ownership today is that there is a blurred
border between language and letters. So: now you can own the ‘shape’ of
a letter. Traditionally, the way typographers made a living was by buying
(more or less expensive) lead fonts, and with this tool they printed books
and got paid for that. They got paid for the typesetting, not for the type.
That was the work of the foundries. Today, thanks to the digital tools, you
can easily switch between type design, type setting and graphic design.

FS
What about the idea that fonts might be the most ‘pirated’ digital object possible?
Copying is much more difficult when you’ve got lead type to handle!

H
Yes, digitalisation changed the rules. Just as .mp3 changed the philosophy
of music. But in typography, there is a strange confrontation between this
flux of copied information, piracy and old rules of ownership from the past.

FS
Do you think the culture of sharing fonts changed? Or: the culture of distributing
them? If you look at most licences for fonts, they are extremely restrictive. Even
99% of free fonts do not allow derivative works.

H
The public good culture is paradoxically not often there. Or at least the
economical model of living with public good idea is not very developed.
While I think typography, historically, is always seen as a way to share
knowledge. Humanist stuff.

The art and craft of typeface design is
currently headed for extinction due to the
illegal proliferation of font software,
piracy, and general disregard for proper
licensing etiquette. 1

H
Emigré ... Did they not live from the copyrights of fonts?!

FS
You are right. They are like a commercial record company. Can you imagine
what would happen if you would open up the typographic trade – to
‘Open Source’ this economy? Stop chasing piracy and allow users to embed,
study, copy, modify and redistribute typefaces?

H
Well we are not that far from this in fact. Every designer has at least 500
fonts on their computer, not licenced, but copied because it would be
impossible to pay for!

FS
Even the distribution model of fonts is very peer-to-peer as well. The reality
might come close, but font licences tell a different story.
I believe that we live in an era where
anything that can be expressed as bits
will be. I believe that bits exist to
be copied. Therefore, I believe that any
business-model that depends on your bits
not being copied is just dumb, and that
lawmakers who try to prop these up are like
governments that sink fortunes into protecting people who insist on living on the
sides of active volcanoes. 2

1	http://redesign.emigre.com/FAQ.php
2	Cory Doctorow in http://craphound.com/bio.php


FS
I am not saying all fonts should be open, but it is just that it would be interesting
when type designers were testing and experimenting with other ways of developing
and distributing type, with another economy.

H
Yes, but fonts have a much more reduced user community than music or
bookpublishing, so old rules stay.

FS
Is that it? I am surprised to see that almost all typographers and foundries take the
‘piracy is a crime’ side on this issue. While typographers are early and enthusiastic
adopters of computer technology, they have not taken much from the collaborative
culture that came with it.

H
This is the ‘tradition’ typography inherited. Typography was one of the
first laboratories for fractioning work for efficiency. It was one of the first
modern industries, and has developed a really deep culture where it is not
easy to set doubts in. 500 years of tradition and only 20 years of computers.
The complexity comes from the fact it is influenced by a multiple series of
elements, from history and tradition to the latest technologies. But it is
always related to an economic production system, so property and ‘secrets-of-the-trade’ have a big influence on it.

FS
I think it is important to remember how the current culture of (not) sharing fonts
is linked to its history. But books have been made for quite a while too.

H
Open Source systems may be not so much influencing distribution, licences
and economic models in typography, but can set original questions to this
problematic of digital type. Old tools and histories are not reliable anymore.

FS
Yes, with networked software it is rather obvious that it is useful to work together.
I try to understand how this works with respect to making a font. Would that
work?

H
Collaborative type is extremely important now, I think. The globalisation of
computer systems sets the language of typography in a new dimension. We
use computers in Belgium and in China. Same hardware. But language is
the problem! A French typographer might not be the best person to define
a Vietnamese font. Collaborativity is necessary! Pierre Huyghebaert told me
he once designed an Arabic font when he was in Lebanon. For him, the
font was legible, but nobody there was able to read it.

FS
But how would you collaborate then? I mean ... what would be the reason for
a French typographer to collaborate with one from China? What would that
bring? I’m imagining some kind of hybrid result ... kind of interesting.

H
Again, sharing. We all have the idea that English is the modern Latin,
and if we are not careful the future of computers will result in a language
reductionism.

FS
What interests me in Open Source is the potential for ‘biodiversity’.

H
I partially agree, and the Open Source idea contradicts the reductionist
approach by giving more importance to local knowledge. A collaboration
between an Arabic typographer and a French one can be to work on tools
that allow both languages to co-exist. LaTeX permits that, for example.
Not QuarkXpress!

FS
Where does your interest in typography actually come from?

H
I think I first looked at comic books, and then started doodling in the
margins of schoolbooks. As a teenager, I used to reproduce film titles such
as Aliens, Terminator or other sci-fi high-octane typographic titles.

Basically, I’m a forger! In writing, you need to copy to understand. That’s an
old necessity. If you use a typeface, you express something. You’re putting
drawings of letters next to each other to compose a word/text. A drawing
is always emotionally charged, which gives color (or taste) to the message.
You need to know what’s inside a font to know what it expresses.

FS
How do you find out what’s inside?

H
By reproducing letters, and using them. A Gill Sans does not have the same
emotional load as a Bodoni. To understand a font is complicated, because
it refers to almost every field in culture. The banners behind G.W. Bush
communicate more than just ‘Mission Accomplished’. Typefaces carry a
‘meta language’.

FS
It is truly embedded content.

H
Exactly! It is still very difficult to bridge the gap between personal emotions
and programming a font. Moreover, there are different approaches, from
stroke design to software that generates fonts. And typography is standardisation. The first digital fonts are drawn fixed shapes, letter by letter,
‘outstrokes’. But there is another approach where the letters are traced by
the computer. It needs software to be generated. In Autocad, letters are
‘innerstroke’ that can vary in weight. LettError’s Beowolf 3 is also an example of that kind of approach. An interesting way to work, but the font depends
on the platform it goes with. Beowolf only works on OS9. It also set the
question of copyright very far. It’s a case study in itself.

FS
So it means, the font is software in fact?

H
Yes, but the interdependence between font and operating systems is very
strong, contrary to a fixed format such as TrueType. For printed matter,
this is much more complicated to achieve. There are in-between formats,
such as Multiple Master Technology for example. It basically means that
you have 2 shapes for 1 glyph, and you can set an ‘alternative’ shape between
the 2 shapes. At Adobe they still do not understand why it was (and still
is) a failure ...
3	Beowolf by Just van Rossum and Erik van Blokland (1989). Instead of recreating a fixed
	outline or bitmap, the Randomfont redefines its outlines every time they are called for.
	http://letterror.com/writing/is-best-really-better


The Metapolator Universe by Simon Egli (2014)

FS
I really like this idea ... to have more than one master. Imagine you own one
master and I own the other and then we adjust and tweak from different sides.
That would be real collaborative type! Could ‘multiple’ mean more than one you
think?

H
It is a bit more complicated than drawing a simple font in Fontographer or
Fontforge. Pierre told me that the MM feature is still available in Adobe
Illustrator, but that it is used very seldomly. Multiple Master fonts are also
a bit complicated to use. I think there were a lot of bugs first, and then you
need to be a skilled designer to give these fonts a nice render. I never heard
of an alternative use of it, with drawing or so. In the end it was probably
never a success because of the software dependency.
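
The interpolation behind Multiple Masters is simple enough to sketch in a few lines of Python (the outlines here are hypothetical; real masters interpolate whole glyph outlines point by point):

    def interpolate(master_a, master_b, t):
        """Blend two outlines point by point; t=0 gives master A, t=1 master B."""
        return [
            ((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(master_a, master_b)
        ]

    light = [(0, 0), (10, 0), (10, 50)]   # a few points from a light master
    bold = [(0, 0), (14, 0), (14, 50)]    # the same points, drawn bolder
    print(interpolate(light, bold, 0.5))  # a ‘medium’ instance halfway in between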

FS
While I always thought of fonts as extremely cross-media. Do you remember which
classic font was basically the average between many well-known fonts? Frutiger?

H
Fonts are Culture Capsules! It was Adrian Frutiger. But he wasn’t the only
one to try ... It was a research for the Univers font I think. Here again we
meet this paradox of typography: a standardisation of language generating
cultural complexity.

Univers. That makes sense. Amazing to see those examples
together. It seems digital typography got stuck at some
point, and I think some of the ideas and practices that are
current in Open Source could help break out of it.
Yes of course. And it is almost virgin space.

In 2003 the Danish government released ‘Union’, a
font that could be freely used for publications concerning
Danish culture. I find this an intrigueing idea, that a font
could be seen as some kind of ‘public good’.

Univers by Adrian Frutiger (1954)

Union by Morten Rostgaard Olsen (2003)

H: I am convinced that knowledge needs to be open ...
(speaking as the son of a teacher here!). One medium
for knowledge is language, and its atoms are letters.

FS: But if information wants to be free, does that mean that
design needs to be free too? Is information possible
without design?

H: This is why I like books. Because it’s a mix between
information and beauty – or can be. Pfff, there is nothing without design
... It is like: is there something without language, no?



This interview about the practice of OSP was carried out by
e-mail between March and May 2008. Matthew Fuller writes
about software culture and has a contagious interest in technologies
that exceed easy-fit solutions. At the time, he was
David Gee Reader in Digital Media at the Centre for Cultural
Studies, Goldsmiths College, University of London, and had
just edited Software Studies: A Lexicon 1 and written Media
Ecologies: Materialist Energies in Art and Technoculture 2 and
Behind the Blip: Essays on the Culture of Software. 3

OSP is a graphic design agency working solely with Open Source software. This
surely places you currently as a world first, but what exactly does it mean in
practice? Let’s start with what software you use?

There are other groups publishing with Free Software, but design collectives
are surprisingly rare. So much publishing is going on around Open Source
and Open Content ... someone must have had the same idea? In discussions
about digital tools you begin to find designers expressing concern over the
fact that their work might all look the same because they use exactly the
same Adobe suite and as a way to differentiate yourself, Free Software could
soon become more popular. I think the success of Processing is related
to that, though I doubt such a composed project will ever make anyone
seriously consider Scribus for page layout, even if Processing is Open Source.
1 Matthew Fuller. Software Studies: A Lexicon. The MIT Press, 2008
2 Matthew Fuller. Media Ecologies: Materialist Energies in Art and Technoculture. The MIT Press, 2007
3 Matthew Fuller. Behind the Blip: Essays on the Culture of Software. Autonomedia, 2003


OSP usually works between GIMP, 4 Scribus 5 and Inkscape 6 on Linux distributions and OSX. We are fans of FontForge, 7 and enjoy using all kinds
of commandline tools, psnup, ps2pdf and uniq to name a few.
How does the use of this software change the way you work, do you see some
possibilities for new ways of doing graphic design opening up?

For many reasons, software has become much more present in our work; at
any moment in the workflow it makes itself heard. As a result we feel a bit
less sure of ourselves, and we have certainly become slower. We decided to
make the whole process into some kind of design/life experiment; that is one
way to keep it interesting for ourselves when we figure out how to convert a
file, or have yet another discussion with a printer about which ‘standard’ to
use. Performing our practice is as much part of the project as the actual books,
posters, flyers etc. we produce.
One way a shift of tools can open up new ways of doing graphic design is
that it makes you immediately aware of the ‘resistance’ of digital material. At
the point where we can’t make things work, we start to consider formats,
standards and other limitations as ingredients for creative work. We are
quite excited for example about exploring dynamic design for print in SVG,
a by-product of our battle with converting files from Scalable Vector Graphics
into Portable Document Format.
Free Software allows you to engage on many levels with the technologies
and processes around graphic design. When you work through its various
interfaces, stringing tools together, circumventing bugs and/or gaps in your
own knowledge, you understand there is more to be done than contributing
code in C++. It is an invitation to question assumptions of utility, standards
and usability. This is exactly the stuff design is made of.
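
As an aside, the ‘dynamic design for print in SVG’ mentioned above can be
pictured with a small sketch: because SVG is a plain-text format, a printable
page can be written directly from a script and then converted to PDF. The
grid and sizes below are made up for illustration; this is not an OSP
production file.

    import random

    # Generate one A4-ish page of circles whose radii vary on every run.
    cells = []
    for row in range(11):
        for col in range(8):
            r = random.uniform(2, 11)  # the 'dynamic' parameter
            cells.append(f'<circle cx="{15 + col * 26}" cy="{15 + row * 26}" '
                         f'r="{r:.1f}" fill="black"/>')

    svg = ('<svg xmlns="http://www.w3.org/2000/svg" '
           'width="210mm" height="297mm" viewBox="0 0 210 297">\n'
           + '\n'.join(cells) + '\n</svg>\n')

    with open('poster.svg', 'w') as handle:
        handle.write(svg)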

Following this, what kind of team have you built up, and what new competencies
have you had to develop?

The core of OSP is five people, 8 and between us we mix, amongst others, typography, layout, cartography, webdesign, software development, drawing,
4 image manipulation
5 page layout
6 vector editing
7 font editor
8 Pierre Huyghebaert, Harrisson, Yi Jiang, Nicolas Malevé and me


programming, open content licensing and teaching. Around it is a larger
group of designers, a mathematician, a computer scientist and several Free
Software coders that we regularly exchange ideas with.
It feels we often do more unlearning than learning; a necessary and interesting
skill to develop is dealing with incompetence – what else can it be than
a loss of control? In the meantime we expand our vocabulary so we can fuel
conversations (imaginary and real life) with people behind GIMP, Inkscape,
Scribus etc.; we learn how to navigate our computers using commandline
interfaces as well as KDE, GNOME and others; we find out about file formats
and how they sometimes can and often cannot speak to each other;
how to write manuals and interact with mailing lists. The real challenge is
to invent situations that subvert strict divisions of labour while leaving space
for the kind of knowledge that comes with practice and experience.
Open fonts seem to be the beginning of a big success; how do they fit into the
working practices of typographers, or the material with which they work?

Type design is an extraordinary area where Free Software and design naturally
meet. I guess this area of work is what kernel coding is for a Linux
developer: only a few people actually make fonts but many people use them
all the time. Software companies have been inconsistent in developing proprietary
tools for editing fonts, which has made the work of typographers
painfully difficult at times. This is why George Williams decided to develop
FontForge, and release it under a BSD license: even if he stops being interested,
others can take over. FontForge has gathered a small group of fans
who, through this tool, stay in contact with a more generous approach to
software, characters and typefaces.
The actual material of a typeface has long since migrated from poisonous
lead into sets of ultra-light vector drawings, held together in complicated
kerning systems. When you take this software-like aspect as a starting point,
many ways to collaborate (between programmers and typographers; between
people speaking different languages) open up, as long as you let go of the
uptight licensing policies that apply to most commercial fonts. I guess the
image of the solitary master passing on the secret trade to his devoted pupils
does not sit very well with the invitation to anyone to run, copy, distribute,
study, change and improve. How open fonts could turn inside out the patriarchal
guild system that has been carefully preserved in the closed world of
type design is obviously of interest as well.
Very concretely, computer users really need larger character sets that allow
for communication between, let’s say, Greek, Russian, Slovak and French.
These kinds of vast projects are so much easier to develop and maintain in
a Free Software way; the DejaVu font project shows that it is possible to
work with many people spread over different countries modifying the same
set of files with the help of versioning systems like CVS.
But what it all comes down to probably ... Donald Knuth is the only person
I have seen both Free Software developers and designers wear on their T-shirts.

The cultures around each of the pieces of software are quite distinct. People
often lump all F/LOSS development into one kind of category, whereas even in
the larger GNU/Linux distros there is quite a degree of variation, but with the
smaller more specialised projects this is perhaps even more the case. How would
you characterise the scenes around each of these applications?

The kinds of applications we use form a category in themselves. They are
indeed small projects, so ‘scene’ fits them better than ‘culture’. Graphics
tools differ from archetypal Unix/Linux code- and language-based projects
in that Graphical User Interfaces obviously matter, and because they are used
in a specialised context outside their own developers’ circle. This is interesting
because it makes F/LOSS developer communities connect with other
disciplines (or scenes?) such as design, printing and photography.
A great pleasure in working with F/LOSS is to experience how software
can be done in many ways; each of the applications we work with is alive
and particular. I’ll just portray Scribus and Inkscape here because from the
differences between these two I think you can imagine what else is out there.
The Scribus team is rooted in the printing and pre-press world and naturally
their first concern is to create an application that produces reliable output.
Any problem you might run in to at a print shop will be responded to
immediately, even late night if necessary. Members of the Scribus team are
a few years older than average developers and this can be perceived through
the correct and friendly atmosphere on their mailing list and IRC channel,
and their long term loyalty to this complex project. Following its more
industrial perspective, the imagined design workflow built in to the tool is

linear. To us it feels almost pre-digital: tasks and responsibilities between
editors, typesetters and designers are clearly defined and lined up. In this
view on design, creative decisions are made outside the application, and the
canvas is only necessary for emergency corrections. Unfortunately for us,
who live off testing and trying, Scribus’ GUI is a relatively underdeveloped
area of a project that otherwise has matured quickly.
Inkscape is a fork of a fork of a small tool initially designed to edit vector
files in SVG format. It stayed close to its initial starting point and is in a way
a much more straightforward project than Scribus. Main developer Bryce
Harrington describes Inkscape as a relatively unstructured coming and going
of high-energy collective work; much work is done through a larger group of
people submitting small patches, and its developer community is not very
tightly knit. Centered around a legible XML format primarily designed
for the web, Inkscape users quickly understand the potential of scripting
images, and you can find a vibrant plug-in culture even if the Inkscape code
is less clean to work with than you might expect. Related to this interest
in networked visuals is the involvement of Inkscape developers in the Open
Clip Art project and ccHost, a repository system which allows you to upload
images, sounds and other files directly from your application. It is also no
surprise that Inkscape implemented a proper print dialogue only very late,
and still has no way to handle CMYK output.
There’s a lot of talk about collaboration in F/LOSS development, something
very impressive, but often when one talks to developers of such software there is
a lot to discuss about the rather less open ways in which power struggles over the
meaning or leadership of software projects are carried out by, for instance, hiding
code in development, or by only allowing very narrowly technical approaches to
development to be discussed. This is only one tendency, but one which tends to
remain publicly under-discussed. How much of this kind of friction have you
encountered by acting as a visible part of a new user community for F/LOSS?

I can’t say we feel completely at home in the F/LOSS world, but we have not
encountered any extraordinary forms of friction yet. We have been allowed
the space to try our own strategies at overcoming the user-developer divide:
people granted interviews, accepted us when we invited ourselves to speak
at conferences and listened to our stories. But it still feels a bit awkward,
and I sometimes wonder whether we ever will be able to do enough. Does

constructive critique count as a contribution, even when it is not delivered
in the form of a bug report? Can we please get rid of the term ‘end-user’?
Most discussions around software are kept strictly technical, even when
there are many non-technical issues at stake. We are F/LOSS enthusiasts
because it potentially pulls the applications we use into some form of public
space where they can be examined, re-done and taken apart if necessary; we
are curious about how they are made because of what they (can) make you
do. When we asked Andreas Vox, a main Scribus developer, whether he saw
a relation between the tool he contributed code to and the things that were
produced with it, he answered: Preferences for work tools and political preference
are really orthogonal. This is understandable from a project-management
point of view, but it makes you wonder where else such a debate should take
place.
The fact that, compared to proprietary software projects, only a very small
number of women are involved in F/LOSS makes apparent how openness
and freedom are not simple terms to put into practice. When asked whether
gender matters, the habitual answer is that opportunities are equal, and from
that point a constructive discussion is difficult. There are no easy solutions,
but the lack of diversity needs to be put on the roadmap somehow, or as a
friend asked: Where do I file a meta-bug?
Visually, or in terms of the aesthetic qualities of the designs you have developed,
would you say you have managed to achieve anything unavailable through the
output of the Adobe empire?

The members of OSP would never have come up with the idea to combine
their aesthetics and skills using Adobe, so that makes it difficult to do a
‘before’ and ‘after’ comparison. Or maybe we should call this an achievement
of Free Software too?
Using F/LOSS has made us reconsider the way we work, and sometimes this
is visible in the design we produce, more often in the commissions we take
on or the projects we invest in. Generative work has become part of our
creative suite and this certainly looks different than a per-page treatment;
also, deliberate traces of the production process (including printing and
pre-press) add another layer to what we make.
Of all smaller and larger discoveries, the Spiro toolkit that Free Software
activist, Ghostscript maintainer, typophile and Quaker Raph Levien develops
must be the most wonderful. We had taken Bézier curves for granted,
and never imagined that the way they are mathematically defined would matter
that much. Instead of working with fixed anchor points and starting from
straight lines that you first need to bend, Spiro is spiral-based, and vectors
suddenly have a sensational flow and weight. From Pierre Bézier writing his
specification as an engineer for the Renault car factory to Levien’s Spiro,
digital drawing has changed radically.
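
For readers who want to see what ‘mathematically defined’ means here: a
cubic Bézier segment is just a weighted average of four control points. The
sketch below, with arbitrary example points, evaluates one segment; it does
not implement Spiro, whose spiral-based solver is exactly what makes it feel
so different.

    def cubic_bezier(p0, p1, p2, p3, t):
        """Evaluate one cubic Bézier segment at parameter t in [0, 1]."""
        u = 1 - t
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        return (x, y)

    # A straight control polygon 'bent' into a curve: sample eleven points.
    samples = [cubic_bezier((0, 0), (30, 100), (70, 100), (100, 0), i / 10)
               for i in range(11)]
    print(samples)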

You have a major signage project coming up, how does this commission map across
to the ethics and technologies of F/LOSS?

We are right in the middle of it. At this moment ‘The Pavilion of Provisionary
Happiness’, celebrating the 50th anniversary of the Belgian World Exhibition,
is being constructed out of 30,000 beer crates right under the Brussels
Atomium. That’s a major project done the Belgian way.
We have developed a signage system, or actually a typeface, which is defined
through the strange material and construction work going on on site. We
use holes in the facade that are in fact handles of beer crates as connector
points to create a modular font that is somewhere between Pixacao graffiti
and Cuneiform script. It is actually a play on our long fascination with
engineered typefaces such as DIN 1451; mixing universal application with
specific materials, styles and uses – this all links back to our interest in Free
Software.
Besides producing the signage, OSP will co-edit and distribute a modest
publication documenting the whole process; it makes legible how this temporary yellow cathedral came about. And the font will of course be released
in the public domain.
It is not an easy project but I don’t know how much of it has to do with
our software politics; our commissioners do not really care and also we have
kept the production process quite simple on purpose. But by opening our
sources, we can use the platform we are given in a more productive way; it
makes us less dependent because the work will have another life long after
the deadline has passed.
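
The modular principle behind the crate-handle typeface can be sketched in a
few lines: every glyph is a set of strokes drawn between a fixed grid of
connector points. The grid and the two letterforms below are invented for
illustration; OSP’s actual font is defined by the real positions of the
handles on the facade.

    # A 3x5 grid of connector points, standing in for the crate-handle holes.
    GRID = {(c, r): (c * 10, r * 10) for c in range(3) for r in range(5)}

    # Each glyph is a list of strokes; each stroke is a pair of grid coordinates.
    LETTERS = {
        'L': [((0, 0), (0, 4)), ((0, 4), (2, 4))],
        'T': [((0, 0), (2, 0)), ((1, 0), (1, 4))],
    }

    def glyph_to_svg(letter):
        """Render one glyph as SVG line elements between grid points."""
        lines = []
        for a, b in LETTERS[letter]:
            (x1, y1), (x2, y2) = GRID[a], GRID[b]
            lines.append(f'<line x1="{x1}" y1="{y1}" x2="{x2}" y2="{y2}" '
                         'stroke="black" stroke-width="4"/>')
        return '\n'.join(lines)

    print(glyph_to_svg('L'))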
On this project, and in relation to the seeming omnipresence in F/LOSS of the
idea that this technology is ‘universal’, how do you see that in relation to fonts,
and their longer history of standards?
303

That is indeed a long story, but I’ll give it a try. First of all, I think the idea
of universal technology appears to be quite omnipresent everywhere; the
mix-up between ubiquitousness and ‘universality’ is quickly made. In Free
Software this idea gains force only when it gets (con)fused with freedom
and openness, and when conditions for access are kept out of the discussion.
We are interested in early typographic standardization projects because their
minimalist modularity brings out the tension between generic systems and
specific designs. Ludwig Goller, a Siemens engineer who headed the Committee
for German Industry Standards in the 1920s, stated that For the typefaces of
the future neither tools nor fashion will be decisive. His committee supervised
the development of DIN 1451, a standard font that should connect economy
of use with legibility, and enhance global communication in service of
the German industry. I think it is no surprise that similar phrasing can be
found in W3C documents; the idea to unify the people of the world through
a common language re-surfaces, and has the same tendency to negate materiality
and specificity in favour of seamless translation between media and
markets.
Type historian Ellen Lupton brought up the possibility of designing typographic systems that are accessible but not finite nor operating within a
fixed set of parameters. Although I don’t know what she means by using the
term ‘open universal’, I think this is why we are attracted to Free Software:
it has the potential to open up both the design of parameters as well as their
application. Which leads to your next question.
You mentioned the use of generative design just now. How far do you go into
this? Within the generative design field there seem to be a couple of tendencies: one
that is very pragmatic, simply about exploring a space of possible designs through
parametric definition in order to find, select, breed from and tweak a good
result that would not necessarily be imaginable otherwise, the other being more
about the ineffable nature of the generative process itself, something vitalist. These
tendencies are not of course exclusive, but how are they inflected or challenged in
your use of generative techniques?

I feel a bit on thin ice here because we are only starting to explore the area and we
are certainly not deep into algorithmic design. But on a more mundane level
... in the move from print to design for the web, ‘grids’ have been replaced by
‘templates’ that interact with content and context through filters. Designers

have always been busy with designing systems and formats, 9 but stepped in
to manipulate singular results if necessary.
I referred to ‘generative design’ as the space opening up when you play
with rules and their affordances. The liveliness and specificity of the work
results from various parameters interfering with each other, including the
ones we can get our hands on. By making our own manipulations explicit,
we sometimes manage to make other parameters at play visible too. Because
at the end of the day, we are rather bored by mysterious beauty.

One of the techniques OSP uses to get people involved with the process and the
technologies is the ‘Print Party’, can you say what that is?

‘Print Parties’ are irregular public performances we organise when we feel
the need to report on what we discovered and where we’ve been; as anti-heroes
of our own adventures we open up our practice in a way that seems
infectious. We make a point of presenting a new experiment, of producing
something printed and also something edible on site each time; this mix of
ingredients seems to work best. ‘Print Parties’ are how we keep contact with
our fellow designers who are interested in our journey but sometimes have
difficulty following us into the exotic territory of BoF, Version Control and
GPL3.

You state in a few texts that OSP is interested in glitches as a productive force in
software, how do you explain this to a printer trying to get a file to convert to the
kind of thing they expect?
Not! Printing has become cheap through digitization and is streamlined to
the extreme. Often there is literally no space built in to even have a second
look at a differently formatted file, so to state that glitches are productive
is easier said than done. Still, those hiccups make processes tangible, especially
at moments you don’t want them to interfere.
For a book we are designing at the moment, we might partially work by
hand on positive film (a step now also skipped in file-to-plate systems). It
makes us literally sit with pre-press professionals for a day, and hopefully we
can learn better where to intervene and how to involve them in the process.
To take the productive force of glitches beyond predictable aesthetics means

9 it really made me laugh to think of Josef Müller-Brockmann as a vitalist


most of all a shift of rhythm – to affect other levels than the production
process itself. We gradually learn how our ideas about slow-cooking design
can survive the instant need to meet deadlines. The terminology is a bit
painful, but to replace ‘deadline’ by ‘milestone’, and ‘estimate’ by ‘roadmap’,
is already a beginning.

One of the things that is notable about OSP is that the problems that you encounter are also described, appearing on your blog. This is something unusual
for a company attempting to produce the impression of an efficient ‘solution’.
Obviously the readers of the blog only get a formatted version of this, as a performed work? What’s the thinking here?

‘Efficient solutions’ is probably the last thing we try to impress with, though
it is important for us to be grounded in practice and to produce for real
under conventional conditions. The blog is a public record of our everyday
life with F/LOSS; we make an effort to narrate through what we stumble
upon because it helps us articulate how we use software, what it does to us
and what we want from it; people that want to work with us are somehow
interested in these questions too. Our audience is also not just prospective
clients, but includes developers and colleagues. An unformatted account,
even if that was possible, would not be very interesting in that respect; we
turn software into fairytales if that is what it takes to make our point.
In terms of the development of F/LOSS approaches in areas outside software,
one of the key points of differentiation has been between ‘recipes’ and ‘food’, bits
and atoms, genotype and phenotype. That is that software moves the kinds of
rivalry associated with the ownership and rights to use and enjoy a physical object
into another domain, that of speed and quality of information, which network
distribution tends to mitigate. This is also the same for other kinds of
data, such as music, texts and so on. (This migration of rivalry is often glossed
over in the description of ‘goods’ being ‘non-rivalrous’.) Graphic Design however
is an interesting middle ground in a certain way in that it both generates files of
many different kinds, and, often but not always, provides the ‘recipes’ for physical
objects, the actual ‘voedingstof’ (nutrient), such as signage systems, posters, books, labels and
so on. Following this, do you circulate your files in any particular way, or by
other means attempt to blur the boundary between the recipe and the food?

We have just finished the design of a font (NotCourier-sans), a derivative of
Nimbus Mono, which is in turn a GPL’ed copy of the well-known Courier
typeface that IBM introduced in 1955. Writing a proper licence for it
opened up many questions about the nature of ‘source code’ in design, and
not only from a legalist perspective. While this is actually relatively simple
to define for a font (the source is the object), it is much less clear what it
means for a signage system or a printed book.
One way we deal with this is by publishing final results side by side with
ingredients and recipes. The raw files themselves seem pretty useless once the
festival is over and the book printed, so we write manuals, stories, histories.
We also experiment with using versioning systems, but the software available
is only half interesting to us. Designed to support code development,
changes in text files can be tracked up to the minutest detail, but unless you
are ready to track binary code, images and document layouts function as
black boxes. I think this is something we need to work on, because we need
better tools to handle multiple file formats collaboratively, and some form
of auto-documentation to support the more narrative work.
On the other hand, manuals and licences are surprisingly rich formats if you
want to record how an object came into life; we often weave these kinds
of texts back into the design itself. In the case of NotCourier-sans we will
package the font with a pdf booklet on the history of the typeface – mixing
design genealogy with suggestions for use.
I think the blurring of boundaries happens through practice. Just like
recipes are linked in many ways to food, 10 design practice connects objects
to conditions. OSP is most of all interested in the back-and-forth between
those two states of design; rendering their interdependence visible and testing
out ways of working with it rather than against it. Hopefully both the food
and the recipe will change in the process.

10 tasting, trying, writing, cooking


This brief interview with Ludivine Loiseau and Pierre Marchand
from OSP was made in December 2012 by editor and designer
Manuel Schmalstieg. It unravels the design process of Aether9,
a book based on the archives of a collaborative adventure exploring the danger zones of networked audio-visual live performance. The text was published in that same publication.
Can you briefly situate the collective work of Open Source Publishing
(OSP)?

OSP is a working group producing graphic design objects using only
Libre and/or Open Source software. Founded in 2006 in the frame of the
arts organisation Constant 1 , the OSP caravan consists today of a dozen
individuals of different backgrounds and practices.
How long have you been working as a duo, and as a team in OSP?
3 to 4 years.

And how many books have you conceived?

As a team, it’s our first ‘real’ book. We previously worked together on a
somewhat similar project of archive exploration, but without printed material in
the end. 2
Similar in the type of content or in the process?

The process: we developed scripts to ‘scrape’ the project archives, but its output
was more abstract; we collected the fonts used in all the files and produced a graph
from this process. These archives weren’t structured, so the exploration was less
linear.
You rapidly chose TeX/ConTeXt as a software environment to produce
this book. Was it an obvious choice given the nature of the project, or did you
hesitate between different approaches?

The construction of the book focused on two axes/threads: chronology
and a series of ‘trace-route’ keywords. Within this approach of reading and
navigation using cross-references, ConTeXt appeared as an appropriate tool.
1 http://www.constantvzw.org
2 http://www.ooooo.be/interpunctie/


The world of TeX 3 is very intriguing, in particular for graphic designers.
It seems to me that it is always a struggle to push back the limits of what is
‘intended’ by the software.
ConTeXt is a constant fight! I wouldn’t say the same about other TeX
system instances. With ConTeXt, we found ourselves facing a very personal
project, because composition decisions are hardcoded to the liking of the package’s
main maintainer. And when we clash with these decisions, we are in the strange
position of using a tool while not agreeing with its builder.
As a concrete example, we could mention the automatic line spacing
adjustments. It was a struggle to get it right on the lines that include
keywords typeset with our custom ‘traced’ fonts. ConTeXt tried to do better,
and was increasing the line height of those words, as if it wanted to avoid
collisions.
Were you ever worried that what you wanted to obtain was not doable?
Did you reject some choices – in the graphic design, the layout, the structure
– because of software limitations?
Yes. Opting for a two-column layout appeared to be quite tough when
filling in the content, as it introduced many gaps. At some point we decided
to narrow the format to a single column. To obtain the two-column
layout in the final output, the whole book was recomposed during the pdf
construction, through OSPImpose.
This allowed us to make micro-adjustments at the end of the production
process, while introducing new games, such as shifting the images on double pages.
What is OSPImpose?
It’s a re-writing of a pdf imposition program that I wrote a couple of years ago
for PoDoFo.
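
OSPImpose itself builds on PoDoFo, a C++ library; as a rough picture of
what imposition does, here is a sketch using the unrelated Python library
pypdf, placing two pages of a hypothetical input file side by side on each
sheet. A real imposition scheme would also reorder the pages so they come
out right after folding and binding; the shifting of images on double pages
mentioned above happens in this same recomposition step.

    from pypdf import PdfReader, PdfWriter, Transformation

    reader = PdfReader('single_pages.pdf')  # hypothetical one-column input
    writer = PdfWriter()

    w = float(reader.pages[0].mediabox.width)
    h = float(reader.pages[0].mediabox.height)

    # 2-up imposition: each new sheet is twice as wide as a source page.
    for i in range(0, len(reader.pages), 2):
        sheet = writer.add_blank_page(width=2 * w, height=h)
        sheet.merge_transformed_page(reader.pages[i],
                                     Transformation().translate(0, 0))
        if i + 1 < len(reader.pages):
            sheet.merge_transformed_page(reader.pages[i + 1],
                                         Transformation().translate(w, 0))

    with open('imposed.pdf', 'wb') as handle:
        writer.write(handle)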
Again regarding ConTeXt: this system was used for other OSP works
– notably for the book Verbindingen/Jonctions 10; Tracks in electr(on)ic
fields. 4 Is it currently the main production tool at OSP?
It’s more like an in-depth initiation journey!
But it hasn’t become a standard in our workflow yet. In fact, each
new important book layout project raises anew the question of the
3 a typesetting system written in 1978 by Donald Knuth
4 distinguished by the Fernand Baudin Prize 2009


tool. Scribus and LibreOffice (spreadsheet) are also part of our book making
toolbox.
During our work session with you at Constant Variable, we noticed
that it was difficult to install a sufficiently complete TeX/ConTeXt/Python
environment to be able to generate the book. Is Pierre’s machine still the only
one, or did you manage to set it up on other computers?

Now we all have similar setups, so it’s a generalized generation. But it’s true
that this represented a difficulty at times.
The source code and the Python scripts created for the book are publicly
accessible on the OSP Git server. Would these sources be realistically reusable?
Could other publication projects use parts of the code? Or, without
any explicit documentation, would it be highly improbable?

Indeed, the documentation part is still on the to-do list. Yet a large part
of the code is quite directly reusable. The code makes it possible to parse different
types of files. E-mails and chat-logs are often found in project archives. Here the
Python scripts allow ordering them according to date information, and will
automatically assign a style to the different content fields.

The code itself is a documentation source, as much on concrete aspects, such
as e-mail parsing, as on a possible architecture, on certain coding motifs, etc.
And most importantly, it constitutes a form of common experience.
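
As a rough idea of what such a parsing script can look like (the real Aether9
scripts are on the OSP Git server; this stdlib-only sketch and the file name
are invented): read an mbox archive, keep only messages with a usable date,
sort them chronologically, and tag each field with a style name for the
typesetting stage.

    import mailbox
    from email.utils import parsedate_to_datetime

    def parse_archive(path):
        entries = []
        for msg in mailbox.mbox(path):
            try:
                date = parsedate_to_datetime(msg['Date'])
            except (TypeError, ValueError):
                continue  # skip messages without a parseable date
            entries.append({
                'date': date,
                'author': msg.get('From', 'unknown'),  # styled as a margin note
                'subject': msg.get('Subject', ''),     # styled as a keyword line
                'body': msg.get_payload(),             # styled as body text
            })
        return sorted(entries, key=lambda e: e['date'])

    for entry in parse_archive('aether9.mbox'):
        print(entry['date'], '|', entry['author'], '|', entry['subject'])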
Do you think you will reuse some of the general functions/features of
archive parsing for other projects?
Hard to say. We don’t have anything in view that is close to the Aether9
project. But for sure, if the need for such treatment comes up again, we’ll retrieve
these software components.
Maybe for a publication/compilation of OSP’s adventures.

Have there been ‘revelations’, discoveries of unsuspected Python/ConTeXt
features during this development?

I can’t recall having this kind of pleasure. The revelation, at least from
my point of view, happened in the very rich articulation of a graphical intention enacted in programming objects. It remains a kind of uncharted territory,
exploring it is always an exciting adventure.

Three fonts are used in the book: Karla, Crimson and Consola Mono.
Three pretty recent fonts, born in the webfonts context I believe. What
considerations brought you to this choice?
Our typographical choices and research led us towards fonts with
different style variations. As the textual content is quite rich and spreads
over several layers, it was essential to have variation possibilities. Also, each
project brings the opportunity to test new fonts, and we opted for recently
published fonts, available amongst others in the Google font directory.
Yet Karla and Crimson aren’t fonts specifically designed for web
usage. Karla is one of the rare libre grotesque fonts, and its other specificity
is that it includes Tamil glyphs.
Apart from the original glyphs specially created for this book, you drew the
Ç glyph that was missing from Karla ... Is it going to be included in its official
distribution?
Oh, that’s a proposal for Jonathan Pinhorn. We haven’t contacted him
yet. For the moment, this cedilla has been snatched from the traced variant
collections.
Were there any surprises when printing? I am thinking in particular of
your choice of a colored ink instead of the usual black, or to the low res quality
(72dpi) of most of the images.
At the end of the process, the spontaneous decision to switch to blue ink was
a guaranteed source of surprise. We were confident that it wouldn’t destroy the
book, and we surely didn’t take too many risks since we were working with low-res
images. But we weren’t sure how the images would react to such an offense. It
was a great surprise to see that it gave the book a very special radiance.
What are your next projects?
We are currently operating as an invited collective at the Valence Academy
of Fine Arts in the frame of a series of workshops named ‘Up pen down’.
We’re preparing a performance for the Balsamine theatre 5 on the topic of
bootstrapping. In April we will travel as a group to Madrid for the LGRU 6 and
the LGM 7 . We also continually work on ‘Co-position’, a project for building
a post-Gutenberg typographical tool.
5 http://www.balsamine.be/
6 http://lgru.net/
7 the international Libre Graphics Meeting: http://libregraphicsmeeting.org/2013/


Performing Libre Graphics

In April 2014 I traveled from Leipzig to the north of
Germany to meet with artist Cornelia Sollfrank. It was
right after the Libre Graphics Meeting, and the impressions from the event were still very fresh. Cornelia had
asked me for a video interview as part of Giving what you
don’t have, 1 a series of conversations about what she refers
to as ‘complex copyright-critical practices’. She was interested in forms of appropriation art that instead of claiming
some kind of ‘super-user’ status for artists, might provide
a platform for open access and Free Culture not imaginable elsewhere. I’ve admired Cornelia’s contributions to
hacker culture for long. She pioneered as a cyberfeminist
in the 1990s with the hilarious and intelligent net-art piece
Female Extension 2 , co-founded Old Boys Network 3 and
developed seminal projects such as the Net Art Generator.
The opportunity to spend two sunny spring days with her
intelligence, humour and cyberfeminist wisdom could not
have come at a better moment.
What is Libre Graphics?

Libre Graphics is quite a large ecosystem of software tools; of people, people
that develop these tools but also people that use these tools; practices, like
how do you work with them, not just how do you make things quickly and
in an impressive way, but also how these tools might change your practice;
and cultural artifacts that result from it. It is all these elements coming
together that I would call Libre Graphics. The term ‘libre’ is chosen deliberately.
1 http://postmedialab.org/GWYDH
2 http://artwarez.org/femext/content/femextEN.html
3 http://www.obn.org/


It is slightly more mysterious than the term ‘free’, especially when it turns up
in the English language. It sort of hints that there is something different,
something done on purpose. And it is also a group of people that are
inspired by Free Software culture, by Free Culture, by thinking about how
to share both their tools, their recipes and the outcomes of all this. Libre
Graphics goes in many directions. But it is an interesting context to work
in, that for me has been quite inspiring for a few years now.

The context of Libre Graphics

The context of Libre Graphics is multiple. I think that is part of why I am excited
about it, and also part of why it is sometimes difficult to describe it in a short
sentence. The context is design, and people that are interested in design, in
creating visuals, animation, videos, typography ... and that is already multiple
contexts, because each of these disciplines has its own histories,
and their own types of people that get touched by them. Then there is
software, people that are interested in the digital material. They say, I am
excited about raw bits and the way a vector gets produced. And that is a
very, almost formal, interest in how graphics are made. Then there are people
that do software. They’re interested in programming, in programming
languages, in thinking about interfaces, and thinking about ways software
can become a tool. And then there are people that are interested in Free
Software. How can you make digital tools that can be shared, but also,
how can that produce processes that can be shared? From Free Software activists
to people that are interested in developing specific tools for sharing design
and software development processes, like Git or Subversion, those kinds of
things. I think the multiple contexts are really special and rich in Libre
Graphics.

Free Software culture

Free Software culture, and I use the term ‘culture’ because I am interested
in, let’s say, the cultural aspect of it, and this includes software. For me
software is a cultural object. But I think it is important to emphasize this,

because it easily turns into a technocentric approach, which I think is important
to stay away from. Free Software culture is the thinking that, when
you develop technology, and I am using technology in the sense that it is
cultural as well to me, deeply cultural, you need to take care as well of sharing
the recipes for how this technology has been developed. This produces
many different other tools, ways of working, ways of speaking, vocabularies,
because it changes radically the way we make and the way we produce
hierarchies. It means for example, if you produce a graphic design artifact,
that you share all the source files that were necessary to make it; but you
also share as much as you can, descriptions or narrations of how it came to
be, which does include maybe how much was paid for it, where difficulties
were in negotiating with the printer; and what elements were included, because
a graphic design object is usually a compilation of different elements;
what software was used to make it, and where it might have resisted. The
consequence of taking Free Software culture seriously in a design context
is that you care about all these different layers of the work, all the
different conditions that actually made the work happen.

Free Culture

The relationship of Libre Graphics to Free Culture is not always that
explicit. For some people it is enough to work with tools that are released
under a GPL, an open content licence, and there it stops; their work
will even be released under proprietary licences. For others, it is important to
make the full circle and to think about what the legal status is of the work
they release. That is the more general one. Then, Free Culture: we can use
that very loosely, as in ‘everything that is circulating under conditions that
it can be reused and remade’. That would be my position. Free Culture
of course also refers to a very specific idea of how that would work,
namely Creative Commons. For myself Creative Commons is problematic,
although I value the fact that it exists and has really created a broader
discussion around licences in creative practices. I value that. But the distinction
Creative Commons makes for almost all the licences they promote,
between commercial and non-commercial work, and as a consequence, between
professional and amateur work, I find very problematic. Because
I think one of the most important elements of Free Software culture for me,

is the possibility for people from different backgrounds, with different skill
sets, to actually engage with the digital artifacts they’re surrounded with.
Making this lazy separation between commercial and non-commercial,
which especially in the context of the web as it is right now is not really
easy to hold up, seems really problematic. It creates an illusion of clarity
that actually makes more trouble than clarity. So I use Free Culture
licences, I use licences that are more explicit about the fact that anyone can
use whatever I produce in any context. Because I think that is where the
real power of Free Software culture is. What makes Free Software licences,
and all the licences that are around them, interesting – and I think it is
interesting that there are many different types – is that they have a viral
power built in. So if you apply a Free Software licence to, for example, a
typeface, it means that someone else, even someone else you don’t know, has
the permission, and doesn’t have to ask for a permission, to reuse the typeface,
to change it, to mix it with something else, to distribute it and to sell it.
That is one part, that is already very powerful. But the real secret of such
a licence is that, once this person re-releases the typeface, they need to keep
that same licence; it propagates across the network and that is where it
is really powerful.

Free tools

It is important to use tools that are released under conditions that allow
me to look further than their surface. For many reasons. There is an ethical
reason. It is very problematic I think, as a friend explained last week, to feel
that you’re renting a room in a hotel. That is often the way practitioners
nowadays relate to their tools. They have no right to move the furniture.
They have no right to invite friends to their hotel room. They have to check
out at eleven, etc. It is a very sterile relationship to the tools. That is one
part. The other is that there is little way to come into contact with the
cultural aspects of the tools. Something that I suspected before starting
to use Free Software tools for my practice, and that has been continuously
exciting for almost ten years now, is, let’s say, all the other elements
around it. The way people organize themselves in conferences, mailing lists,
the kinds of communication that happens, the vocabularies, the histories,
the connections between different disciplines ... And all that is available to

look at, to work with, to come into contact with; to speak to the people that
make these tools and ask them: why is it like this and not like that? And it
seems obvious to me that artists want to have that kind of layered relationship
with their tools, and not just accept whatever comes out of the next-door
shop. I have a very different, almost physical experience of these
tools, because I can enter on many levels. That makes them part of my
practice, not just means to an end. I really can take them into my practice.
That I find interesting, as an artist and as a designer.

Artifacts

The outcomes of this type of practice are different, or at least, let’s say, in
the kind of work I make, try to make and the people I like to work with.
There is obviously also groups of people that would like to do Hollywood
movies with those tools. That is kind of interesting, that that happens.
For me somehow the technological context or conditions that made a work
possible, will always occur in the final result. So, that is one part. And
the other is that the product is never the end. It means that in whatever
way source materials will be released, will be made available, it means that
a product is always the beginning of another product, either by me or by
other people. Those are two things that you can always see in the kind
of works we make when we do libre-graphics-my-style. When we make a
book, for example, what is already different, is when we start the process, it
is not yet defined what tool we will use. There is a whole array of tools you
can choose from. I mean, books are basically text on paper, and there are
many ways to arrive at that output. For one book we did a few years ago,
we decided for the first time, because we had never used this tool before,
to use TeX, a typesetting system that was developed by Donald Knuth in the
context of academic publishing and has been around as an almost mythological
solution for perfect typesetting. We were curious whether
we could use that system that is developed in a very specific context for an
art catalog that we wanted to make. We had to learn how to use this tool,
which meant that we somehow had to learn the vocabulary, understand its
sort of perspective; things that were possible or not, get used to the kind of
humor that is quite terrible in these manuals; accept that certain things that
we thought would be easy, were actually not easy at all; and then understand

how we could use the things that were popping up or not working or that
were different, how we could use them in our advantage. The final result
is a book that is slightly strange, because there are some mistakes that have
been left in, deliberately or by accident sometimes. The book contains an
extensive description of how it was made. Both visually, like it explains the
technical details of how it was made, but also the description of that learning
process. Another example of how tools, practice and outcomes are somehow
connected – but also the whole politics around it, because often these projects
are also ways of teasing out how licences, practice and tools interact – is
a project called ‘Sans Guilt’. It is a play on ‘Gill Sans’, a
famous classic typeface that is claimed to be owned by a company called
Monotype. According to our understanding, they have no right to actually
claim this typeface as such, but through their communication they do
so. OSP was invited to work in an art academy in London, where they had
a lead version. And we decided to play with the typeface. The typeface OSP
released has many different versions, not versions as in bold, light etc. but
it has different levels of ‘licencing risk’. One is a straight scan of the prints
that were made at that workshop. Another version is more guilty, in the
sense that it is an extraction from a .pdf using the Monotype Gill. Another
is a redrawn version that takes the matrix, the spacing of a Monotype Gill,
but combines it with a redrawn example. All different variations of this font
touch on different elements of licencing problems that might occur with
typefaces. We sent our experiment to Monotype, because we wanted to hear
from them what they thought. After a few months we received a letter from
a lawyer saying, would you please identify yourself. We decided to write
back as we are, which is, 25 people from 20 different countries with stable
and unstable addresses. This long list is probably why we never heard
anything again, and ‘Sans Guilt’ is still available from our website under an
open font licence. What is also important: the typeface is different, in the
sense that the specimen is not so much about showing off how beautiful it will
look in any context, but has the description of the process, the motivation
of why we did it, the letter we sent to Monotype, the response we got, ...
The whole packaging of the font becomes then a way of speaking about all
these layers that are in our practice.


Libre fonts

A very exciting part of Libre Graphics is the Libre Font movement, which
is strong and has been strong for a long time. Fonts are the basic building
blocks of how graphics come to life. When you type something, it is there.
And the fact that that part of the work is free, is important on many levels.
Things you often don’t think about when you speak English and you stay
within a limited character set, is that, when you live in let’s say India, the
language you speak is not available as a digital typeface, meaning that when
you want to produce a book in the tools that are available or publish it
online, your language has no way of expressing itself. That has to do with
commercial interests, laws, ways the technical infrastructure has been built.
By understanding that it is important that you can express yourself in the
language and with the characters you need, it is also obvious that that part
needs to be free. Fonts are also interesting because they exist on many
levels. They exist in your system; they’re almost software because they’re
quite complicated objects; they appear on your screen, they are there when you
print a document; they are there all the time. We consider the alphabet
totally accessible and available, and having the alphabet at our disposal a
universal right. So it is about ‘freeing the A’, you know. That’s quite a
beautiful energy. I think that has made the Libre Font movement very
strong. Something that has happened the last years and brings up new
problems and potential areas to work on, is fonts available for the web.
Web fonts have really exploded the number of free fonts available. Before,
fonts were always, let’s say, when they were used, tied to a document, and
there was some kind of fantasy that you could hold them, you could
somehow contain them, licence them and keep them in check. With the
web that idea has gone. And many people have decided to liberate their
fonts to be able to make them usable for a website. Because if you think
about it, if you use a font on a website, it means that it has to be able to
travel everywhere. Everyone has to be able to look at what the font does,
but it is not just an output. It is not just an endpoint. The font is active,
it means it is available. In theory, any font that appears on the web is both
display and program. By displaying the page, you need to run the font.
That means the font needs to be available as a source and as a result. That
means you have to publish your font. This has really created a big boom in
the last few years in Free Fonts, because that is the easiest way to deal with
that problem: allow people to download these fonts, but in a way that keeps
authorship clear, that keeps genealogy clear, and also propagates then the
possibility of making new fonts based on someone else’s work.

Free artifacts / open standards

It took me a while to figure this out. For me it was obvious that if you would
use Free Software, you would produce free artifacts. It seems obvious, but it
is not at all the case. There is full-fledged commercial production happening
with these tools. But one thing that keeps the results, the outcomes of these
projects freer than most commercial tools, is that there is really an emphasis
on open document formats. That is extremely important, because first of
all, it is very obvious that the documents that you produce with the tool,
should not belong to the software vendor. They are yours. And to be able
to own your own documents, you need to be able to inspect how they’re
produced. I know many tragic stories of designers that lost documents
because they could never open them again. There is really an emphasis
and a lot of work on making sure that the documents produced from these
tools remain ‘inspectable’, are documented, that either you can open them
in another tool or could develop a tool to have these files available for you.
It is really part and parcel of Free Software culture that you care about
what generates your artifact, but also about the materiality of your artifact. Open
standards are important. Or maybe let’s say it is important that file formats
are documented and can be understood. What is interesting to see is that in
this whole Libre Graphics world there is also a strong tradition of reverse
engineering, document activism, I would call it. They claim: documents need
to be free, and we will risk breaking the law to be able to understand how non-free
documents actually are constructed. They are really working on trying to
understand non-free documents, to be able to read them and to be able to
develop tools for them, that they can be reused and remade. The difference
between a free and a non-free document: for an InDesign file, for example,
which is the result of a commercial product, there is no documentation
available of how the file works. This means that the only way to open the
document is with that particular program. It means there is a connection
between that what you’ve made and the software you used to produce it. It
also means that if the software updates or the licence runs out, you will not
have access to your own file. It means it is fixed. You can never change it
and you can never allow anyone else to change it. An open document format
has documentation. That means that not only is the software that created it
available, and in that way you can understand how it was made, but also
there is independent documentation available that whenever a project, like
a software, doesn’t work anymore, or is too old to be run, or you don’t have

it available, you have other ways of understanding the document and being
able to open it and reuse and remake it. What is important is that around
these open formats a whole ecosystem exists of tools to inspect, to
create, to read, to change, to manipulate these formats. I think it is very
easy to see how around InDesign files this culture does not exist at all.

Sharing practice / re-learn

This way of working changes the way you learn, and therefore the way you
teach. And as many of us have understood the relation between learning
and practice, we’ve all been somehow involved in education. Many of us are
teaching in formal design or art education. And it is very clear how those
traditional schools are really not fit for the type of learning and teaching that
needs to happen around Libre Graphics. One of the problems we run into, is
the fact that validation systems are really geared towards judging individuals.
And our type of practice is always multiple. It is always about things that
happen with many people. And it is really difficult to inspire students to
work that way, and at the same time know that at the end of the day, they’ll
be judged on what they produced as an individual. In traditional education
there is always a separation between teaching technology and practice. You
have, in different ways, the studio practice, and then you have the
workshops. And it is very difficult to make conceptual connections between
the two. We end up trying to make that happen, but it is clearly not made
for that. And then there is the problem of hierarchies between tutor and
student, that are hard to break in formal education, just because the setup is,
even in very informal situations, that someone comes to teach and someone
else comes to be taught. And there is no way to truly break that hierarchy,
because that is the way a school works. For years we are thinking about how
to do teaching differently or how to do learning differently, and last year, for
the first time, we organized a summer school, as a kind of experiment
to see if we could learn and teach differently. The title, the name of the
school, is Relearn. Because this sort of relearning, for yourself but also for
others, through teaching while learning, seems to have become a really good
methodology.
If I say ‘we’, that’s always a bit uncomfortable, because I like to be clear about
who that is, but when I’m speaking here, there are many ‘wes’ in my mind.
There is a group of designers called OSP. They started in 2006 with
the simple decision not to use any proprietary software anymore for their
work. And from that this whole set of questions and practices and methods
developed. Right now, that’s about twelve people working in Brussels,
having a design practice. I am lucky to be an honorary member of this group.
I’m in close contact with them, but I’m not actively working with the design
group. Another ‘we’, an overlapping ‘we’, is Constant, an association for
arts and media active in Brussels since 1996. Or 1997 maybe. Our interest
is more in mixing Copyleft thinking, Free Software thinking and feminism.
In many ways that intersects with OSP, but they might phrase it in a different
way. Another ‘we’ is the Libre Graphics community, which is an even
more uncomfortable ‘we’. Because it includes engineers that would like to
conquer the world ... and small hyper-intelligent developers that creep out
of their corners to talk about the very strange worlds they’re creating. Or
typographers that care about universal typefaces, or ... I mean there are many
different people involved in that world. I think for this conversation, the ‘wes’ are: OSP, Constant and the Libre Graphics community,
whatever that is.

Libre Graphics annual meeting Leipzig 2014

We worked on a Code of conduct, which is something that seems to appear
in Free Software or tech conferences more and more. It comes a bit from a
US context. We have started to understand that the fact that Free Software
is free doesn’t mean that everyone feels welcome. For a long time there have been,
and there still are, large problems with diversity in this community. The
excitement about freedom has led people to think that those who were not
there would probably not want to be there, and therefore had no role to be
there. For example, there are not a lot of women active in Free
Software, a lot less than in proprietary software, which is quite painful if
you think about it. It has to do with this sort of cyclical effect: because
women are not there, they will probably not be interested, and because they’re
not interested, they might not be capable, or feel capable, of being active. So they
might not belong. There is also a very brutal culture of harassment, of
racist and sexist language, of using imagery that is, let’s say, unacceptable,
and that needs to be dealt with. Over the last two years, I think, documents
like Codes of conduct have started to come up from feminists that are active
in this world, like Geek Feminism or the Ada Initiative, as a way to deal
with this. And what it does, is it describes ... it is slightly pompous, in the
sense that it describes your values. But it is a way to acknowledge the fact
that these communities have a problem with harassment, first. That they
explicitly say we want diversity, which is important. That it gives very clear
and practical guidelines for what someone who feels harassed can do, whom
he or she can speak to, and what the consequences will be. Meaning that
it takes away the burden, at least as much as possible, from someone who is
harassed to actually defend the gravity of the case.

Art as integrative concept

For me calling myself an artist is useful, very useful. I’m not busy with,
let’s say, the institutional art context. That doesn’t help me at all. But
what does help me is the figure of the artist, the kinds of intelligences that
I sort of project on myself and that I use from others and my colleagues, past
and contemporary. Because it allows me to not have too many ... to be able
to define my own context and concepts, without forgetting practice. And I
think art is one of the rare places that allows this. Not only allows it, but
actually rigorously asks for it. It really wants me to be explicit about my
historical connections, my way of making, my references, my choices, that
are part of the situation I build. And the figure of the artist is a very useful
toolbox in itself. And I think I use it more than I would have thought. It
allows me to make these cross-connections in a productive way.


The making of Conversations was on many levels a process of dialogue, between people, processes, and systems.
Xavier Klein and Christoph Haag were as much involved
in editorial decisions as they were in creating an experimental platform that would allow us to produce a publication in a way true to the content of the conversations
it would contain. In August 2014 we discussed the ideas
behind their designs and the status of the systems they
were developing for the book that you are reading right
now.
I wanted to ask you Xavier, how did you end up in Germany?
It’s a long story, so I’ll make it short. I benefited from the Leonardo program, a
scholarship to do an internship abroad. So I searched for graphic design studios
that use Open Source and Free Software. I asked OSP first, but they said No.
I didn’t know LAFKON at that time, and a friend told me: Hey, there is this
graphic design studio in Germany, so I asked and they said Yes. So I was
happy. ( laughs)
How did you start working on this book?

I thought it would be nice to have a project during Xavier’s stay in Augsburg
with a specific outcome. Something going beyond pure experimentation.
So I asked Constant if there were any projects that needed to be worked on.
And I’m really happy with the Conversations publication, because it is a
good mixture. There is the technical experiment, how you would approach
something like this using Free Software. And there is the editing side.
To read all these opinions and reflections. It’s really interesting from the
content side, at least for me – I don’t dare to speak for Xavier. So that’s
basically how it started.
You developed a constellation of tools that together produce the book.
Can you explain what the elements are, how this book is made?

We decided in the beginning to use Etherpad for the editing. A lot of
documentation during Constant events was done with Etherpad and I found
its very direct access to editing quite inspiring. Earlier this year we prepared a
workshop for the Libre Graphics Meeting, where we’d have a transformation
from Etherpad pages to a printable .pdf. The idea was to somehow separate
the content editing and the rendering. Basically I wanted to follow some
kind of ‘pull logic’. At a certain point in the process, there is an interface
where you can pull out something without the need to interfere too much
with the inner workings of this part. There is the stable part, the editing on
the Etherpad, and there is something that can be more experimental and
unstable, which transforms the content again into a stable, printable version. I
tried to create a custom markdown dialect, meant to be as simple as possible.
It should be reduced to some elements, the elements that are actually needed.
For example, if we have an interview, what is required from the content side?
We have text and changing speakers. That’s more or less the most important
information.
So on the first level, we have this simple format and from there the transformation process starts. The idea was to have a level, where basically anybody,
who knows how to use a text editor, can edit the text. But at the same
time it should have more layers of complexity. It actually can get quite
complex during the transformation process. But it should always have this
level where it’s quite simple. So just text and, for example, this one markup
element for ‘ok, now the speaker changes’.
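
As a rough sketch of the idea (the marker and the macro name are made up here, not the project’s actual syntax): suppose a speaker change is written on the pad as a line like ‘---> FS’. A transformer then needs only one substitution to turn it into a LaTeX macro, and everything else passes through as plain text.

    #!/bin/sh
    # transform.sh: read pad text on stdin, write a LaTeX body on stdout.
    # A line "---> NAME" (hypothetical marker) becomes \speaker{NAME};
    # every other line passes through unchanged.
    # usage: ./transform.sh < pad.txt > body.tex
    sed -e 's/^---> \(.*\)$/\\speaker{\1}/'
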
In the beginning we experimented with different tools, basically small
scripts to perform all kinds of layout tasks. Xavier for example prepared a
hotglue2svg converter. After that, we thought, why don’t we try to connect
different approaches? Not only the very strict markdown to TeX to
.pdf transformations, but to think about under which circumstances you
would actually prefer a canvas-based approach. What can you do on a canvas
that you can’t do, or that is much harder, with a markup language?
It seems you are developing an ad hoc markup language? Is that related to
what you wrote in the workshop description for Operating Systems: 1 Using
operating systems as a metaphor, we try to imagine systems that are both
structured and open?

Yes. The idea was to have these connected/disconnected parts. So you have
the part where the content is edited in collaboration and you have the transformer script running separately on the individuals’ computers. For me this
1 http://libregraphicsmeeting.org/2014/program/

solved in a way the problem of stability. You can use a quite elaborate,
reliable software like Etherpad and derive something from it without going
to its inner workings. You just pull the content from it, without affecting
the software too much. And you have the part where it can get quite
experimental and unreliable, without affecting all collaborators. Because the
process runs on your own computer and not on the server.
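
As a sketch of this pull logic, assuming a pad on a public Etherpad instance (host and pad name here are made up): Etherpad serves a plain-text export of a pad at /p/<padname>/export/txt, so fetching the current state is a one-liner, and everything after that happens locally.

    #!/bin/sh
    # pull.sh: fetch the current state of a pad as plain text.
    PAD="conversations"                 # hypothetical pad name
    HOST="https://pad.example.org"      # hypothetical Etherpad host
    curl -s "$HOST/p/$PAD/export/txt" > "$PAD.txt"
    # from here on, any transformer can run on $PAD.txt
    # without touching the server.
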
The markup concept comes from the documentation of a video streaming
workshop in Linz. There we wanted to have the possibility to write the
documentation collaboratively during the workshop and we needed also to
solve problems like How about the inclusion of images? That is where the first
markup element came from, which basically was just a specific line of
text, which indicates ‘here should be this/that image’. If this specific line
appears in the text during the transformation process, it triggers an action
that will look for a specific file in the repository. If the image exists, it will
write the matching macro command for LaTeX. If the image is not in the
repository, it will do nothing. The idea was that the creation of the .pdf
should happen anyway, even though somebody’s repository might not be at
the latest state and a missing image would otherwise prevent LaTeX from
rendering the document. It should also ignore errors, for example if someone
mistypes the name of the image or the command. It should not stop the process, but
produce a different output, e.g. without the image.
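
A sketch of that tolerant behaviour, assuming the marker is a line like ‘IMAGE: filename’ (hypothetical syntax) and images live in an images/ directory of the repository:

    #!/bin/sh
    # render-images.sh: turn IMAGE markers into LaTeX macros, but only
    # when the file actually exists, so the .pdf is produced anyway.
    # usage: ./render-images.sh < pad.txt > body.tex
    while IFS= read -r line; do
      case "$line" in
        "IMAGE: "*)
          img="images/${line#IMAGE: }"
          # write the matching macro only if the image is in the
          # repository; otherwise do nothing and keep going
          [ -f "$img" ] && printf '\\includegraphics{%s}\n' "$img"
          ;;
        *)
          printf '%s\n' "$line"
          ;;
      esac
    done
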
Why do you think the process should not stop when there’s an error? Why is
that so important?

For me it was important to ensure some kind of feedback, even if there might
be ‘errors’ in the output. Not just ‘not work’. It can be really frustrating
when the first thing you have to do is to find and solve a problem – which
can be quite hard with this sort of unprofessional scripts – before anything
happens at all. So at a certain point at least something should
appear, even if it’s not necessarily the way it was originally intended. Like
a tolerance for errors, which would still produce something, maybe
different from what you expected. But it should produce ‘something’.
You imagine a kind of iterative development that we know from working with
code, that allows you to keep different versions, that keeps flowing in a way.
For example, this specific markup format. It’s basically markdown and
I wanted some more elements, like footnotes and the option to include
citations and comments. I find it quite handy, when you write software,
335

that you have the possibility to include comments that are not part of the
actual output, but part of the working process. I also enjoy this while
writing text (e.g. with LaTeX), because I can keep comments or previous
versions or drafts. So I really have my working version and transform this
to some kind of output.
But back to the etherpash workshop. Commands are basically comments
that will trigger some action, for example the inclusion of a graphic or
changing the font or anything. These commands are referenced in a separate
file, so everybody can have different versions of the commands on their own
machine. It would not affect the other people. For example, if you wanted
to have a much more elaborate GRAFIK command, you could write it and
use it within your transformer of the document, or you could introduce new
commands that are written on the main pad, but would be ignored by
other people, because they have a different reference file. Does this make
sense?
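
As a sketch of the mechanism (all names here are hypothetical, not the actual etherpash code): commands are comment lines on the pad, and the transformer looks them up as Bash functions in a local reference file, so each collaborator can redefine them, or simply not have them.

    #!/bin/bash
    # transformer.sh: execute pad comments of the form "% COMMAND arg ..."
    # by calling a Bash function of the same name, if the local reference
    # file defines it; unknown commands are silently ignored.
    . ./commands.sh 2>/dev/null    # each user keeps their own version
    while IFS= read -r line; do
      case "$line" in
        "% "*)
          set -- ${line#"% "}            # split "COMMAND arg ..." into words
          if [ "$(type -t "$1")" = function ]; then
            "$@"                         # run e.g. GRAFIK logo.pdf
          fi
          ;;
        *) printf '%s\n' "$line" ;;
      esac
    done
    # a local commands.sh might contain, for example:
    #   GRAFIK() { printf '\\includegraphics{%s}\n' "$1"; }
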
Yes. In a way, there are a lot of grey zones. There are elements that are
global and elements that are local; elements can easily go parallel, and none
of the commands actually always has the same output for everyone.

They can, but they do not need to. You can stick to the very basic version
that comes directly from the repository. You could use this version to create
a .pdf in the ‘original’ way, but you can easily change it on different levels.
You can change the Bash commands that are triggered by the transformer
script, you can work on the LaTeX macros or change the script itself. I
found it quite important to have different levels of complexity. You may go
deeper, but you do not necessarily have to. The Etherpad content is the very
top level. You don’t have to install a software on your computer, you can
just open up a browser and edit the text. So this should make the access to
collaboration easier. Because with a lot of experimental software you spend a
lot of time just getting it to run. Most often you have a very steep learning
curve, and I found it interesting to separate this learning curve in a way. So
you have different layers and if you really want to reconfigure on a deep level,
you can, but you do not necessarily have to.
I guess you are talking about collaboration across different levels of complexity, where different elements can transform the final outcome. But if you
take the analogy of CSS, or let’s say a Content Management System that
generates HTML, you could say that this also creates divisions of labour. So
rather than making collaboration possible, it confines people to different

files. How do you think your systems invite people to take part in different
levels? Are these layers porous at all? Can they easily slip between different
roles, let’s say an editor, a typographer and a programmer?
Up to a certain extent it’s like a division of labour. But if you call it a
separation of tasks, it makes definitely sense for me. It can be quite hard, if
you have to take over responsibility for everything at the same time. So it
makes sense for me, also for collaboration, to offer this separation. Because
it can be good to have the possibility not to have to deal with the whole
system and everything at the same time. You should be able to do so, but
you should not necessarily have to. I think this is important, because a lot
of frustration regarding Free Software systems comes from the necessity to
go to the deep level at an early stage. I mean it’s an interesting problem.
The promise of convenience is quite hard, because most times it does not
really work. And it’s also fine that it doesn’t really work. At the same time
it’s frightening for people to get into it and so I think, it’s good to do this
step by step and also to have an easy top level opportunity to go into, for
example, programming. This is also a thing I became really interested in.
The principle of the commandline to ‘extend usage into programming’. 2
You do not have to have a development environment and then you compile
software and then you have software, but you have this flexible interface for
your daily tasks. If you really need to go a deeper level, you can, at least with
Free Software. But you don’t have to ... compile your kernel every time.

Not every time! What I find interesting about your work is that you prefer not
to conceal any layers. References, commands, markup hint at the existence
of other layers, and the potential to go somewhere else. I wanted to ask you
about your fascination or interest in something as ‘old school’ as Bash scripting.
Why is it so interesting?

Maybe at first it’s a bit of a fascination for the obscure. That normally,
as a graphic designer, you wouldn’t think of using the commandline for your
work. When I started to use GNU/Linux, I’d try to stay away from the
terminal. Which is basically, as I realised pretty soon, not possible. 3 At some
point, Bash scripting became really fascinating, because of the possibility to
use automation to correct or add functionalities. With the commandline
it’s easy to automate repetitive tasks, e.g. you can write a small script that
2 Florian Cramer. (echo echo) echo (echo): Command Line Poetics, 2007
3 let’s say hard

creates a separate .svg file for each layer in a .svg file 4 , converts these
separate .svg files to .pdf files 5 and combines the .pdf files into a multipage
.pdf 6 . Just by collecting commands you’d normally type on your commandline
interface. So in this case, automation helps to work around the missing
multipage support in inkscape. Not by changing the application itself, but
by plugging something ‘on top’ of it. I like to think of the Bash as glue
between different applications. So if we have a look now at the setup for
the conversations publication, we may see that Bash makes it really easy to
develop your own configurations and setups. I actually thought about preferring
the word ‘setup’ to ‘writing software’ ...
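
The layer workaround described here could look roughly like the sketch below. It rests on assumptions: the layer ids layer1..layer3 are made up, the inkscape commandline is a pre-1.0 version that still accepts --export-pdf, and inkscape’s --export-id stands in for the sed step mentioned in the interview.

    #!/bin/sh
    # layers2pdf.sh: work around inkscape's missing multipage support
    # by exporting each layer as a single-page .pdf, then concatenating.
    for id in layer1 layer2 layer3; do
      inkscape --export-id="$id" --export-id-only \
               --export-pdf="$id.pdf" poster.svg
    done
    # glue the single pages together into one multipage document
    pdftk layer1.pdf layer2.pdf layer3.pdf cat output poster-multipage.pdf
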

Are you saying you prefer setup ‘over’ configuration?

Setup or configuration of software ‘over’ actually writing software. Because
for me it’s often more about connecting different applications. For example,
here we have a browser-based text editor, from which the content is automatically
pulled and transformed via text-transform tools and then rendered
as a .pdf. What I find interesting is that the scripts in between may actually
be not very stable, but connect two stable parts. One is the Etherpad,
where the export function is taken ‘as is’ and you’ve got the final state of a
.pdf. In between, I try to have this flexible thing, that just needs to work
at this moment, in my special case. I mean certain scripts may reach quite
an amount of stability, but not necessarily. So it’s very good to have this
fixed state at the end.

You mean the .pdf?

I mean the .pdf, because ... These scripts are quite personal software and
so I don’t really think about other users besides me. For me it’s a whole
different subject to go to the usability level. That’s maybe also a reason for
the open state of the scripts. It would not make much sense – if I want to
have the opportunity for other people to make use of these things – to have
black boxes. Because for this, they are much too fragile. They can be taken
over, but there is no promise of ... convenience? 7 And it’s also important
for myself, because the setups are really tailored to a specific use case and
4 using sed, stream editor for filtering and transforming text
5 using inkscape on the commandline
6 using pdftk
7 ... distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. Free Software Foundation. GNU General Public License, 2007


therefore more or less temporary. So I need to be able to read and adapt
them myself.
I know that afterwards you usually provide a description of how the collage
was made. You publish the scripts, and sketches and intermediary outcomes.
So it seems that usability is more in how you give access to the process rather
than the outcome. Or would you say that software is the outcome?

Actually for me the process is the more interesting part of the work. A lot of
the projects are maybe more like a proof of concept than finished pieces of
software. I often reuse parts of these setups or software pieces, so it’s more
a collection of ‘How to do something’ than really a finished thing that’s now
suitable to produce this or that.
I’m just wondering, looking at your designs, if you would like that layering,
this instability, to be somehow legible in the .pdf or the printed object?

I don’t think that this instability is really legible. Because in the process
there’s a certain point where definitive decisions are taken. It’s also part of
the concept. You make decisions and those make the final state of the object
what it is. And if you want to get back to the more flexible part, then you
would really have to go back. So I don’t actually think that it is legible in
the final output, at first sight, that it is based on a very fluid working
process. And for me that’s quite ok. It’s also important for me – because
I tend not to do so – to take a decision at a certain point. But that’s not
necessarily the ultimate decision and therefore it’s also important to keep
the option open to redefine ... ‘the thing’.

What you’re saying is that you can be decisive in your design decisions because
the outcome could also be another one. You could always regenerate the .pdf
with other decisions.
Yes. For example, I would regenerate the .pdf with the same decisions,
another person maybe would take different decisions. But that’s one step
before the final object. For example, if we do not talk about the .pdf, but
we actually talk about the book, then it’s very clear, that there are decisions,
that need to be taken or that have been taken. And actually I like the feeling
of convenience when things get finished. They are done. Not configurable
forever.

( laughs) That’s convenient, if things get done!

For this specific book, you have made a few decisions, for example your selection of fonts is particular.
Xavier, can you say something about the typography of Conversations?

Huuumn yep, for the typographic decisions ... in the beginning we searched for
fancy fonts, but in a way came back to using very classic fonts, or rather one classic
font. So the Junicode 8 for the text and the OCR-A 9 for anything else. Because
we decided to focus on testing different ways of doing layout, and to use the fonts as a
way to keep a certain continuity between the parts. We thought this could be more
interesting than to show that we can find a lot of beautiful, fancy fonts.

So in the beginning, we thought about having a different font for every
speaker, but sooner or later we realised that it would be good to have
something that keeps the whole thing together. Right now, these are the two
fonts. The Junicode, which is a font for medievalists, and the OCR-A,
which is an optical character recognition font from the early age of computer
technology. So the hypothesis was to have this combination – a very
classical typeface inspired by the 16th century and a typeface optimized for
machine reading – that maybe will produce an interesting clash of two different
approaches, while at the same time providing a continuous element
throughout the book. But that still has to be proven in the final layout.

I find it interesting that both fonts in their own way are somehow conversational. They are both used in situations where one system needs to talk to
another.

Yeah, definitely in a way. They are both optimised for a special usage, which,
by the way, isn’t the usage in our case. One for the display of medieval
texts, where you have to have a lot of different signs and ligatures and ... that’s
the Junicode. The other one, the OCR-A, is optimized to be legible by
machines. So those are two different directions of conversation. And they’re
both Free and Open Source fonts ...
And for the layout? How are the divider pages going to be constructed?

For the divider pages, it’s an application ‘Built with Processing’, done by
Benjamin 10 . In a way, it’s a different approach, because it’s a software with
an extensive Graphical User Interface, with a lot of options. So it’s different
8 http://junicode.sourceforge.net/
9 http://sourceforge.net/projects/ocr-a-font/
10 Stephan

from the very modular, connective approach. There we decided to have this
software, which is directly controlled by the controller, the person who uses
it. And again, there is this moment of definitive decision. Ok, this is exactly
how I want the title pages to look. And then they are put in a fixed state.
At the same time, the software will be part of the repository, to be usable
as a tool. So it’s a very ... not a ‘very classic’ ... approach. To write ‘your’
software for ‘your’ very specific use case. In a more monolithic way ...
Just to add this: in this custom markdown dialect, I decided at one point
to include a command, INCLUDEPAGES, where you can provide
a .pdf file via a URL to be included in the document. So the .pdf may
be stored anywhere, as long as it is accessible over the internet. I found
this an interesting opportunity for collaboration. Because if somebody does
not want to stick to the grid given by the LaTeX configuration or to this
kind of working in general, this person could create a .pdf, store it online,
reference it and the file will be included. This can be a very disconnected
way of contributing to the final book. And that’s also a thing we’re now
trying to test ourselves. Because in the beginning we developed a lot of
different little scripts, for example the hotglue2svg converter. And right
now we’re trying to extend this. For example, to create one interview in
Scribus and include the .pdf made with Scribus. To also test different
approaches ourselves.
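
A sketch of how such an INCLUDEPAGES line might be expanded during transformation (the marker syntax is hypothetical): fetch the referenced .pdf and hand it to the LaTeX pdfpages package, so the preamble would need \usepackage{pdfpages}.

    #!/bin/sh
    # includepages.sh: expand "INCLUDEPAGES: <url>" lines into LaTeX.
    # usage: ./includepages.sh < pad.txt > body.tex
    mkdir -p incoming
    while IFS= read -r line; do
      case "$line" in
        "INCLUDEPAGES: "*)
          url="${line#INCLUDEPAGES: }"
          file="incoming/$(basename "$url")"
          wget -q -O "$file" "$url"      # the .pdf may be stored anywhere
          printf '\\includepdf[pages=-]{%s}\n' "$file"
          ;;
        *) printf '%s\n' "$line" ;;
      esac
    done
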
This book will both be a collage and have an overall, predefined structure
provided by the layout engine?

I’m trying to make pragmatic use of the functionalities of LaTeX, which is
used for the final compiling of the .pdf. So, for example, ready-made
.pdf files included in the final document are also referenced in the table of
contents.

Can you explain that again ?

Separate .pdfs that are included in the final document will be referenced
in the table of contents. We can still make use of the automatic generation
of page numbers in the table of contents, so there it goes together. There
are certain limits; for example, since the .pdfs are more like finished
documents, indexing will probably not work. Because even if you can extract
references from the .pdf, I haven’t found a reliable way so far to find out the
page number. There you also realise that you can do much
more with the plain text sources than you can do with a finished document.

But I think that’s ok. In this case you wouldn’t have a keyword reference
to the .pdf, while it’s still in the table of contents ...
What if someone would want to use one of these interviews for something else?
How could this book becoming source for an another publication?
That’s also an advantage of the quite simple source format on the Etherpad.
It can be easily converted to e.g. simple markdown, just by a little script.
I found this quite important – because at this point we’re putting quite an
amount of work into the preparation of the texts – not to end up with a format
that is not parseable. I really wanted to keep the documents transformable
in an easy way. So now you could just have a ~fiveliner that will pull the text
from the Etherpad and convert it to simple markdown or to HTML.
Wonderful.

If you have a more or less clean source format, then it’s in most cases easy
to convert it to different formats. For example, the Evan Roth interview
you provided as a ConTeXt file: with some text manipulation, it was
easy to do the transformation to our Etherpad markup. It would be
harder if the content were stored as an Open Office document, but still feasible.
.pdf in a way is the worst case, because it’s much harder to extract usable
content again, depending on the creator. So I think it’s important to keep
the content in a readable and understandable source format.
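
The ‘~fiveliner’ mentioned above might look like this, reusing the hypothetical pad address and speaker marker from the earlier sketches:

    #!/bin/sh
    # pad2md.sh: pull the pad and convert it to simple markdown.
    curl -s "https://pad.example.org/p/conversations/export/txt" |
      sed -e 's/^---> \(.*\)$/**\1:**/' > conversations.md
    # and onwards to HTML, e.g. with pandoc:
    pandoc -f markdown -t html conversations.md -o conversations.html
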

Xavier, what is going to happen next?

Right now, I’m the guy who tests with Scribus and Inkscape. But I don’t know if
that’s the answer to your question.

I was just curious because you have a month to work on this still, so I was
wondering ... are there other things you are testing or trying ?

Yeah, I think I want to finish the hotglue2svg.sh, I mean it’s my first
Bash program, I want to raise my baby. ( laughs) But right now I’m trying to
find different ways of doing layout. The first one is the one with the big squares, the
big unicode characters and all the arrows. So it’s very complicated, but it’s the
attempt to find another way to express a conversation in text.

Can you say more about that ?

Because in the beginning, my first try was to keep the ‘life’ of a conversation in
the text with some things like indentation, or with graphic things, like the choice
of the unicode characters. To see if this can be a way to express a conversation. Because
it’s hard to do it with programming stuff, so we’re using GUI-based software.

It comes back a bit to the question of what you are doing differently if you work
with direct visual feedback. So you don’t try to reduce the content to get
it through a logical structure. Because that’s in a way how the markdown
to LaTeX transformation is doing it. You set certain rules, which may in
special cases be soft rules, but you really try to establish a logical structure,
have a set of rules and apply them. For me, it’s also an interesting question.
If you think of grid-based graphic design, where you try to introduce a set
of rules in the beginning and then keep to them for the rest of the project, that’s
in a way a very obvious case for computation. Where you just apply a set of
rules. You are confronted a lot with this application of rules in daily graphic
design. And this is also a way of working you learn during your studies.
Stick to certain logical or maybe visual grids. And so now the question is:
what’s the difference if you do a really visual layout? Do you deal differently
with the content, does it make sense, or, if you’re just always coming back
to a certain grid, might you as well do it by computation? That’s
something that we wanted to find out. What advantage do you really gain
from having a canvas-based approach throughout the layout process?
In a way the interviews are very similar, because it’s always people speaking,
but at the same time each of the conversations is slightly different. So in what
way is the difference between them made legible: through the same set of rules
or by making specific rules for each of them?
If you do the layout by hand you can take decisions that would be much
harder to translate into code. For example, how to emphasize certain parts
of the text or the speaker. You’re much closer to the interpretation of the
content. You’re not designing the ruleset but you are really working on the
visual design of the content ... The point why it’s interesting to me is that
working as a designer you quite often get reduced to this visual design of the
content, while at the same time it may make sense in a lot of cases. So it’s an
evaluation of these different approaches. Do you design the ruleset or do you
design the final outcome? And I think it has both advantages and disadvantages.


Colophon

In conversation with: Agnes Bewer, Alexandre Leray, An Mertens, Andreas Vox, Asheesh
Laroia, Carla Boserman, Christina Clar, Chris Lilley, Christoph Haag, Claire Williams, Cornelia
Sollfrank, Dave Crossland, Denis Jacquerye, Dmytri Kleiner, Eleanor Greenhalgh,
Eric Schrijver, Evan Roth, Femke Snelting, Franziska Kleiner, George Williams, Gijs de Heij,
Harrisson, Ivan Monroy Lopez, John Haltiwanger, John Colenbrander, Juliane De Moerlooze,
Julien Deswaef, Larisa Blazic, Ludivine Loiseau, Manuel Schmalstieg, Matthew Fuller, Michael
Murtaugh, Michael Terry, Michele Walther, Miguel Arana Catania, momo3010, Nicolas Malevé,
Pedro Amado, Peter Westenberg, Pierre Huyghebaert, Pierre Marchand, Sarah Magnan, Stéphanie
Vilayphiou, Tom Lechner, Urantsetseg Ulziikhuu, Xavier Klein

Concept, development and design: Christoph Haag, Xavier Klein, Femke Snelting

Editorial team: Thomas Buxó, Loraine Furter, Maryl Genc, Pierre Huyghebaert, Martino Morandi
Transcriptions: An Mertens, Boris Kish, Christoph Haag, Femke Snelting, George Williams, Gijs
de Heij, ginger coons, Ivan Monroy Lopez, John Haltiwanger, Ludivine Loiseau, Martino Morandi,
Pierre Huyghebaert, Urantsetseg Ulziikhuu, Xavier Klein
Chapter opener: Built with petter by Benjamin Stephan
-> http://github.com/b3nson/petter

Tools: basename, bash, bibtex, cat, Chromium, cp, curl, dpkg, egrep, Etherpad, exit,
ftp, gedit, GIMP, ghostscript, Git, GNU coreutils, grep, ImageMagick, Inkscape, Kate, man,
makeindex, meld, ne, pandoc, pdflatex, pdftk, Processing, python, read, rev, Scribus,
sed, vim, wget
Fonts: Junicode by Peter S. Baker, OCR-A by John Sauter

Source Files:
Texts, fonts and pdf: http://conversations.tools
Software: https://github.com/lafkon/conversations
Published by: Constant Verlag (Brussels, January 2015)
ISBN: 9789081145930

Copyright (C) Constant 2014
Copyleft: This work is free. You may copy, distribute and modify
it according to the terms of the Free Art License (see appendix)
This publication is made possible by the Libre Graphics Community, through the financial support
from the European Commission (Libre Graphics Research Unit) and the Flemish authorities.

Printed in Germany.

http://www.online-druck.biz


Acid Test, 145–147
Activism, 302, 320, 326
Adafruit, 225
Adobe Illustrator, 66, 101, 159–161, 292
Adobe InDesign, 15, 16, 19, 326, 327
Adobe PageMaker, 16, 17, 159, 160
Adobe Photoshop, 279, 280
Adobe Systems, 8, 24, 101, 142, 156,
157, 159–162, 279, 291,
297, 302
Algorithm, 227, 236
Amado, Pedro, 275
Anthropology, 41, 202, 232
AOL Inc., 25
Apple Inc., 8, 23, 24, 142, 159–162
Application Programming Interface, 118,
276
Arana Catania, Miguel, 88
Arduino, 83, 226
Artist, 7–9, 17, 73, 99–101, 146, 190,
191, 213–215, 223, 224,
240, 247, 319, 323, 329

Bézier, Pierre, 303
Baker, Peter S., 351
Barragán, Carlos, 88
Beauty, 14, 23, 32, 47, 55, 59, 78, 81,
162, 176, 230, 236, 268,
293, 305, 324, 325, 340
Benkler, Yochai, 187, 192, 193
Bewer, Agnes, 37
Blanco, Chema, 90
Blazic, Larisa, 7
Blender, 55, 72, 221, 222, 276
Blokland, Petr van, 158, 159
Body, 39, 77, 135, 141, 146, 178, 219,
242
Boserman, Carla, 86
Bradney, Craig, 13, 16, 80
Brainch, 110
Brainerd, Paul, 159
Brussels, 3, 37, 71, 187, 195, 203, 213,
245, 248, 287, 303, 328,
351
Buellet, Stéphane, 215
Bug, 17, 23, 25, 66, 119, 171, 172, 201,
203–205, 292, 298, 302

Bugreport, 280
Bush, George W., 290
Buxó, Thomas, 351

Canvas, 13, 17, 57, 58, 63, 65, 66, 301,
334
Carson, David, 161
Cayate, Henrique, 278
Chastanet, François, 233
Clar, Christina, 99
Colenbrander, John, 99
Collaboration, 3, 7, 9, 57, 100, 101, 109–
112, 116–120, 126, 127,
160, 162, 203, 213, 215,
223, 224, 232, 244, 246,
253, 275, 289, 290, 292,
301, 311, 334, 336, 337,
341
Commandline Interface, 39, 59, 298,
299, 336–338, 342, 351
Commons, 192–194
Communism, 187, 192–194
computer department, 275
Constant, 3, 99, 109, 124, 137, 171, 213,
246, 283, 311–313, 328,
333, 334, 351
ConTeXt, 42, 47–55, 57–62, 66, 67, 103,
127, 128, 155, 181, 182,
191, 192, 261, 276, 278,
300, 304, 311–314, 320–
324, 328, 329, 342
Contract, 189
coons, ginger, 351
Copyleft, 162, 276, 328
Creative Commons, 18, 27, 218, 244,
249, 250, 321
Crossland, Dave, 29, 92, 155, 351
CSS, 53, 54, 116, 117, 142, 144–146, 336

Dahlström, Erik, 138
Dance, 64, 81, 219
de Heij, Gijs, 351
de Moerlooze, Juliane, 37
Debian, 3, 37, 38, 40, 41, 100, 156, 201,
203, 205, 207
Designer, 3, 7–9, 16, 17, 23, 28, 99, 114,
115, 135, 140, 142, 146,

147, 149, 150, 155, 158,
160, 163, 164, 174, 187–
190, 193, 194, 227, 235,
261, 262, 266, 267, 275,
278, 279, 282, 288, 292,
297, 299–301, 304, 305,
311, 323, 326, 328, 343
Desktop Publishing, 9, 61, 159–161,
276, 279
Deswaef, Julien, 88
Developer, 3, 7–9, 13–15, 17, 19, 23, 40,
47, 49, 54, 55, 58, 59, 71,
74, 99, 102, 104, 105, 112,
115, 123, 128, 135, 149,
150, 155, 162, 166, 171,
174, 177, 179, 183, 190,
196, 201, 203, 204, 207,
208, 213, 215, 216, 225,
233, 235, 254, 261, 265,
279, 299–302, 306, 328,
337
Documentation, 27, 43, 51, 52, 54, 55,
57, 60, 176, 208, 230–232,
238, 239, 264–266, 307,
313, 326, 334, 335
Dropbox, 118, 128
Duffy, Maírín, 206

Education, 8, 42, 43, 100, 165, 166, 248,
275, 276, 279, 282, 327
Efficiency, 41, 43, 75, 78, 206, 289, 297,
306
Egli, Simon, 292
Ehr, Jim von, 160, 161
Emmons, Andrew, 138
Encoding, 24, 261, 262, 264–267
ePub, 105
Etherpad, 117, 118, 334–336, 338, 342,
351
EyeWriter, 214, 223–225, 227, 228, 235,
236
Farhner, Todd, 145
Feminism, 37, 41, 328, 329
Firefox, 144, 177, 283
Flash, 101, 208, 215, 279

FontForge, 23, 25–27, 29, 30, 32,
165, 166, 268, 276,
298, 299
FontLab, 28, 162, 163, 276
Fontographer, 24, 160–163, 292
Free Art License, 244, 351, 354
Free Culture, 7, 8, 13, 102–104,
201, 319–322
Freeman, Mark, 203
Fried, Limor, 225
FrontPage, 25
Frutiger, Adrian, 293
Fuller, Matthew, 297
Fun, 14, 15, 49, 57, 65, 67, 72, 78,
217, 227, 232, 235,
238, 246, 253
Furter, Loraine, 351


Gaulon, Benjamin, 223
Genc, Maryl, 351
Gender, 9, 47, 48, 201, 204, 205, 302
Ghali, Jean, 80
GIMP, 171, 172, 174, 179–183, 276, 279,
280, 298, 299, 351
Git, 57, 109–121, 123–125, 127–129,
203, 313, 320, 351
GitHub, 7, 111, 116, 120–124, 126, 128
Gitorious, 111, 116, 121, 122, 124
Glitch, 305
Glyph, 31, 48, 120, 121, 165, 262, 266,
268, 291, 314
Gnu General Public License, 219, 253,
305, 321, 338
Goller, Ludwig, 304
Google Summer of Code, 205, 206
Graphic Design, 7, 9, 111, 113, 115, 116,
119, 156, 159, 161, 162,
175, 227, 280, 287, 297,
298, 306, 311, 312, 321,
333, 343
Graphical User Interface, 14, 29, 73,
159–161, 300, 301, 340,
343
Greenhalgh, Eleanor, 90, 99
Haag, Christoph, 99, 333, 351

Hagen, Hans, 47–50, 55, 56
Haltiwanger, John, 47, 213, 351
Hannemeier Hansson, David, 252
Harrington, Bryce, 301
Harrison, 155, 187, 287
Hello World, 235
Hickson, Ian, 146
HTML, 24–27, 48, 52–54, 116, 137,
138, 141, 149, 175, 319,
336, 342
Hugin, 82
Huyghebaert, Pierre, 48, 58, 109, 135,
155, 289, 298, 351

Imposition, 73, 75, 76, 80, 81, 83, 312
Infrastructure, 27, 50, 160, 172, 173,
180, 325
Inkscape, 66, 72, 117, 143, 205, 276,
298–301, 338, 342, 351
Internet Explorer, 142, 144, 197, 283
Internet Relay Chat, 19, 138, 203, 206,
208, 276, 300
iPhone, 226, 230, 238
IT Department, 8, 156, 275
Jacquerye, Denis, 165, 261
Jay-Z, 252
Jenkins, Mark, 220
Joint Photographic Experts Group, 128
Juan Coco, Mireia, 94

Karow, Peter, 159
KATSU, 220, 249
Kerning, 31, 52, 299
Kish, Boris, 351
Klein, Xavier, 333, 351
Kleiner, Dmytri, 187
Kleiner, Franziska, 187
Knuth, Donald, 51, 54, 80, 158, 300,
312, 323
Kostrzewa, Michael Dominic, 149
KRS-One, 251

Labour, 183, 187–190, 192–194, 197,
299, 336, 337
LAFKON Publishing, 333

Laidout, 71–73, 75, 78–80, 82, 83
Laroia, Asheesh, 201
LaTeX, 49–51, 60, 66, 290, 335, 336,
341, 343
Laughing, 23, 25–27, 31, 38, 56, 64,
74, 79, 139, 144, 146, 189,
194, 196, 204, 208, 216,
220, 221, 224, 227, 230,
232, 233, 240, 246, 254,
265, 266, 268, 305, 333,
339, 342
Lawyer, 136, 146, 149, 192, 324
Lechner, Tom, 71
Lee, Tim Berners, 139
Leray, Alexandre, 109
Levien, Raph, 302
Libre Fonts, 196, 275, 287, 299, 324, 325
Libre Graphics Meeting, 3, 7, 8, 13,
23, 71, 110, 135, 149, 150,
155, 171, 181, 201, 208,
314, 319, 328, 334
Libre Graphics Research Unit, 3, 109,
261, 314, 351
Lilley, Chris, 135
Linnell, Peter, 13, 17, 18
Loiseau, Ludivine, 71, 109, 155, 311,
351
Lua, 50–52, 59, 60
Lupton, Ellen, 304
Müller Brockman, Joseph, 305
Macromedia, 24, 101, 137, 161
Magnan, Sarah, 109
Mailing list, 40, 41, 47, 50, 162, 202,
205, 263, 299, 300, 322
Malevé, Nicolas, 135, 261
Mansoux, Aymeric, 8
Manual, 43, 51, 56, 60, 61, 157, 201, 299,
307
Marchand, Pierre, 58, 71, 109, 261, 311,
351
Marini, Anton, 247
Markdown, 52, 53, 105, 247, 334, 335,
341–343
Markup, 52, 53, 213–215, 222, 224, 237,
251, 334, 335, 337, 342
Marx, Karl, 187, 188

Mathematics, 26, 37, 39, 40, 42, 43, 71,
72, 155, 158
Mauss, Marcel, 187, 195
MediaWiki, 173, 181
Mercurial, 110
Meritocracy, 126
Mertens, An, 37, 351
Metafont, 158
Microsoft, 16, 18, 24, 25, 56, 57, 144,
150, 162, 197, 276, 283
Monotype, 324
Monroy Lopez, Ivan, 111, 171, 351
Morandi, Martino, 351
Moskalenko, Oleksandr, 13
Multiple Master, 291, 292
Murtaugh, Michael, 99
Netscape, 142

Open Font Library, 27
OpenOffice, 117, 342
Opera, 138, 144
OSP, 3, 57, 81, 109–112, 114, 120, 122–
126, 128, 135, 155, 187,
207, 227, 268, 287, 297,
298, 302, 303, 305–307,
311–313, 324, 328, 333
OSPimpose, 312
Otalora, Olatz, 94

Pérez Aguilar, Ana, 94
PDF, 14, 18, 52, 72, 73, 122, 128, 129,
156, 298, 307, 312, 324,
334–336, 338, 339, 341,
342
Peer production, 187, 189, 191, 192, 194,
195, 197, 288
PfaEdit, 25
Pinhorn, Jonathan, 314
Piracy, 15, 287–289
Pixar, 197
Plain Text, 80, 140, 341
Podofoimpose, 71, 80, 81
Police, 150, 179, 215, 223, 239–241
PostScript, 18, 24, 25, 27, 159–162
Printing, 14, 15, 17, 18, 23, 24, 53, 72,
76, 77, 83, 103, 129, 148,

158–161, 223, 234, 247,
263, 275, 279, 298, 300–
302, 304, 305, 314, 324,
325
Problems, 28, 39, 42, 43, 47, 48, 80–82,
104, 111, 121, 122, 128,
137, 144, 157, 187, 193,
195, 196, 201, 203, 205,
217, 219, 226, 229, 233,
239, 242, 265, 277, 289,
300, 327, 329, 335, 337
processing.org, 67, 247, 276, 297, 340
Public Domain, 218, 221, 250, 282, 303
Qt, 78
QuarkXpress, 15, 161, 196, 290

Recipe, 125, 127, 128, 306, 307, 320, 321
Relearn Summerschool, 109, 327
Release early, release often, 114, 221
Robofog, 161
Robofont, 161
Rossum, Just van, 161
Roth, Evan, 213

Safari, 144
Samedies, 37, 40, 203
Sauter, John, 351
Schmalstieg, Manuel, 311
Schmid, Franz, 13, 15, 17
Schrijver, Eric, 109
Scribus, 13–19, 57, 61, 62, 65, 71, 79–81,
113, 115, 128, 157, 187,
196, 197, 276, 297–302,
313, 341, 342, 351
Scribus file, 113, 119
Sexism, 40, 328
Shakespeare, William, 23, 25, 26
Sikking, Peter, 172
Smythe, Dallas, 190
Snelting, Femke, 3, 297, 319, 351
Sobotka, Troy James, 227
Sollfrank, Cornelia, 319
SourceForge, 111
Sparkleshare, 118
Spencer, Susan, 92

Stable, 51, 58, 324, 334, 335, 338
Stallman, Richard, 165
Standards, 17, 101, 135, 136, 138, 140,
141, 145–147, 223, 250,
262, 291, 293, 298, 303,
304, 326
Stephan, Benjamin, 340, 351
Stroke, 65, 214, 216, 234, 243, 248, 291
Subtext, 47, 53, 54, 56, 57, 61
Sugrue, Chris, 214
SVG, 119, 135, 136, 138, 140, 143–145,
148, 215, 298, 301, 334,
338, 341
SVN, 111, 112, 117, 120, 320
Telekommunisten, 187
TEMPT, 214, 223, 224, 235, 236
Terry, Michael, 171
TeX, 47–49, 51, 52, 55, 59, 60, 80, 158,
161, 312, 323, 334
Torrone, Phil, 225
Torvalds, Linus, 112, 114, 115, 118, 246,
252
Tschichold, Jan, 52, 57
Tucker, Benjamin, 187, 189, 192
Typesetting, 24, 51–55, 57, 60, 61, 66,
158, 287, 323
Typography, 3, 9, 16, 24, 48, 51, 53,
61, 117, 155–159, 161–
165, 187, 195, 196, 220,
235, 261, 276, 287–291,
293, 298–300, 304, 314,
340
Ubuntu, 102
Ulziikhuu, Urantsetseg, 99, 351
Undocumented, 50
Unicode, 23, 24, 26, 27, 48, 261–268,
342, 343
Universal Font Object, 163
Unstable, 324, 334
User, 3, 9, 13–17, 19, 25, 32, 37, 47, 49,
50, 52, 54–56, 58, 64, 79,

100–102, 104, 141, 146,
159, 160, 166, 171–177,
179, 181, 182, 196, 208,
215, 222, 261, 266–268,
279, 280, 283, 288, 289,
300–302, 319, 338, 340
Utopia, 100, 251

Veen, Jeff, 146
Version Control, 7, 57, 109–112, 116–
119, 123–125, 127, 144,
149, 201, 202, 207, 264,
300, 305, 307
Vilayphiou, Stéphanie, 109, 213
Visual Culture, 113, 114, 117, 122, 124,
128
Vox, Andreas, 13, 80, 302, 351

Wall, Larry, 63
Walther, Michele, 213
Warnock, John, 159
Watson, Theo, 214
Westenberg, Peter, 187, 213
What You See Is What You Get, 25, 61–
65, 283
Wilkinson, Jamie, 214
Williams, Claire, 99
Williams, George, 14, 23, 79, 299, 351
Wishlist, 246
Wium Lie, Håkon, 142, 146
Workflow, 52, 53, 60, 105, 109, 115, 119,
298, 300, 312
World Wide Web Consortium, 135–142,
145, 146, 304
XML, 52, 80, 144, 148, 158, 163, 175,
213, 214, 216, 233, 234,
301
Yildirim, Muharrem, 242, 245
Yuill, Simon, 232

Free Art License 1.3. (C) Copyleft Attitude, 2007. You can make reproductions and distribute this license verbatim (without any changes). Translation: Jonathan Clarke, Benjamin
Jean, Griselda Jung, Fanny Mourguet, Antoine Pitrou. Thanks to framalang.org
PREAMBLE

The Free Art License grants the right to freely
copy, distribute, and transform creative works
without infringing the author’s rights.
The Free Art License recognizes and protects
these rights. Their implementation has been
reformulated in order to allow everyone to use
creations of the human mind in a creative manner, regardless of their types and ways of expression.
While the public’s access to creations of the human mind usually is restricted by the implementation of copyright law, it is favoured by
the Free Art License. This license intends to
allow the use of a work’s resources; to establish
new conditions for creating in order to increase
creation opportunities. The Free Art License
grants the right to use a work, and acknowledges the right holder’s and the user’s rights and
responsibility.
The invention and development of digital technologies, Internet and Free Software have
changed creation methods: creations of the
human mind can obviously be distributed, exchanged, and transformed. They make it possible to produce common works to which everyone can
contribute to the benefit of all.
The main rationale for this Free Art License
is to promote and protect these creations of
the human mind according to the principles
of copyleft: freedom to use, copy, distribute,
transform, and prohibition of exclusive appropriation.
DEFINITIONS

“work” either means the initial work, the subsequent works or the common work as defined
hereafter:
“common work” means a work composed of the
initial work and all subsequent contributions to
it (originals and copies). The initial author is
the one who, by choosing this license, defines
the conditions under which contributions are
made.
“Initial work” means the work created by the
initiator of the common work (as defined
above), the copies of which can be modified by
whoever wants to.
“Subsequent works” means the contributions
made by authors who participate in the evolution of the common work by exercising the
rights to reproduce, distribute, and modify that
are granted by the license.
“Originals” (sources or resources of the work)
means all copies of either the initial work or any
subsequent work mentioning a date and used

by their author(s) as references for any subsequent updates, interpretations, copies or reproductions.
“Copy” means any reproduction of an original
as defined by this license.
OBJECT

The aim of this license is to define the conditions under which one can use this work freely.
SCOPE

This work is subject to copyright law. Through
this license its author specifies the extent to
which you can copy, distribute, and modify it.
FREEDOM TO COPY (OR TO MAKE
REPRODUCTIONS)

You have the right to copy this work for yourself, your friends or any other person, whatever
the technique used.
FREEDOM TO DISTRIBUTE, TO
PERFORM IN PUBLIC

You have the right to distribute copies of this
work; whether modified or not, whatever the
medium and the place, with or without any
charge, provided that you: attach this license
without any modification to the copies of this
work or indicate precisely where the license can
be found, specify to the recipient the names of
the author(s) of the originals, including yours
if you have modified the work, specify to the
recipient where to access the originals (either
initial or subsequent). The authors of the originals may, if they wish to, give you the right to
distribute the originals under the same conditions as the copies.
FREEDOM TO MODIFY

You have the right to modify copies of the originals (whether initial or subsequent) provided
you comply with the following conditions: all
conditions in article 2.2 above, if you distribute
modified copies; indicate that the work has
been modified and, if it is possible, what kind
of modifications have been made; distribute the
subsequent work under the same license or any
compatible license. The author(s) of the original work may give you the right to modify it
under the same conditions as the copies.
RELATED RIGHTS

Activities giving rise to authors’ rights and
related rights shall not challenge the rights
granted by this license. For example, this is the
reason why performances must be subject to the
same license or a compatible license. Similarly,
integrating the work in a database, a compilation or an anthology shall not prevent anyone
from using the work under the same conditions
as those defined in this license.
INCORPORATION OF THE WORK

Incorporating this work into a larger work that
is not subject to the Free Art License shall not
challenge the rights granted by this license. If
the work can no longer be accessed apart from
the larger work in which it is incorporated, then
incorporation shall only be allowed under the

condition that the larger work is subject either
to the Free Art License or a compatible license.
COMPATIBILITY

A license is compatible with the Free Art License provided: it gives the right to copy, distribute, and modify copies of the work including for commercial purposes and without any
other restrictions than those required by the
respect of the other compatibility criteria; it
ensures proper attribution of the work to its
authors and access to previous versions of the
work when possible; it recognizes the Free Art
License as compatible (reciprocity); it requires
that changes made to the work be subject to the
same license or to a license which also meets
these compatibility criteria.
YOUR INTELLECTUAL RIGHTS

This license does not aim at denying your author’s rights in your contribution or any related
right. By choosing to contribute to the development of this common work, you only agree to
grant others the same rights with regard to your
contribution as those you were granted by this
license. Conferring these rights does not mean
you have to give up your intellectual rights.
YOUR RESPONSIBILITIES

The freedom to use the work as defined by
the Free Art License (right to copy, distribute,
modify) implies that everyone is responsible for
their own actions.
DURATION OF THE LICENSE

This license takes effect as of your acceptance
of its terms. The act of copying, distributing,
or modifying the work constitutes a tacit agreement. This license will remain in effect for as
long as the copyright which is attached to the
work. If you do not respect the terms of this
license, you automatically lose the rights that
it confers. If the legal status or legislation to
which you are subject makes it impossible for
you to respect the terms of this license, you may
not make use of the rights which it confers.
VARIOUS VERSIONS OF THE LICENSE

This license may undergo periodic modifications to incorporate improvements by its authors (instigators of the Copyleft Attitude
movement) by way of new, numbered versions.
You will always have the choice of accepting the
terms contained in the version under which the
copy of the work was distributed to you, or alternatively, to use the provisions of one of the
subsequent versions.
SUB-LICENSING

Sub-licenses are not authorized by this license.
Any person wishing to make use of the rights
that it confers will be directly bound to the authors of the common work.
LEGAL FRAMEWORK

This license is written with respect to both
French law and the Berne Convention for the
Protection of Literary and Artistic Works.

Constant
Mondotheque: A Radiated Book
2016


Mondotheque::a radiated book / un livre irradiant / een irradiërend boek

Index
• Mondotheque::a radiated book/un livre irradiant/een
irradiërend boek
◦ Property:Person (agents + actors)
◦ EN Introduction
◦ FR Préface
◦ NL Inleiding
• Embedded hierarchies
◦ FR+NL+EN A radiating interview/Un entrevue irradiant/Een irradiërend gesprek
◦ EN Amateur Librarian - A Course in Critical Pedagogy TOMISLAV MEDAK &
MARCELL MARS (Public Library project)
◦ FR Bibliothécaire amateur - un cours de pédagogie critique TOMISLAV MEDAK
& MARCELL MARS







◦ EN A bag but is language nothing of words MICHAEL MURTAUGH
◦ EN A Book of the Web DUSAN BAROK
◦ EN The Indexalist MATTHEW FULLER
◦ NL De Indexalist MATTHEW FULLER
◦ FR Une lecture-écriture du livre sur le livre ALEXIA DE VISSCHER

• Disambiguation
◦ EN An experimental transcript SÎNZIANA PĂLTINEANU
◦ EN+FR LES UTOPISTES and their common logos/et leurs logos communs
DENNIS POHL





◦ EN X = Y DICK RECKARD
◦ EN+NL Madame C/Mevrouw C FEMKE SNELTING
◦ EN A Pre-emptive History of the Google Cultural Institute GERALDINE JUÁREZ
◦ FR Une histoire préventive du Google Cultural Institute GERALDINE JUÁREZ
◦ EN Special:Disambiguation

• Location, location, location
◦ EN From Paper Mill to Google Data Center SHINJOUNG YEO
◦ EN House, City, World, Nation, Globe NATACHA ROUSSEL
◦ EN The Smart City - City of Knowledge DENNIS POHL
◦ FR La ville intelligente - Ville de la connaissance DENNIS POHL
◦ EN The Itinerant Archive
• Cross-readings
◦ EN Les Pyramides
◦ EN Transclusionism
◦ EN Reading list
◦ FR+EN+NL Colophon/Colofon
Last Revision: 2·08·2016


Property:Person
Meet the cast of historical, contemporary and fictional people that populate La
Mondotheque.

[In the printed book, these pages hold a grid of captioned portrait photographs. Among the people named in the captions: Paul Otlet, Henri La Fontaine, Cato van Nederhasselt, Mathilde Lhoest, Wilhelmina Coops, Jean Delville, Georges Lorphèvre, W.E.B. Du Bois, Blaise Diagne, Le Corbusier, Annie Besant, Jiddu Krishnamurti, André Canonne, Elio Di Rupo, Jean-Claude Marcourt, Vint Cerf, Eric E. Schmidt, Thierry Geerts, Sylvia Van Peteghem, Stéphanie Manfroid, Delphine Jenart, Yves Bernard, Femke Snelting, Alexia de Visscher, Nicolas Malevé, Michael Murtaugh, Dennis Pohl, Dick Reckard, Sînziana Păltineanu, Marcell Mars, and many unidentified women.]

Introduction
This Radiated Book started three years ago with an e-mail from the Mundaneum archive
center in Mons. It announced that Elio Di Rupo, then prime minister of Belgium, was about
to sign a collaboration agreement between the archive center and Google. The newsletter
cited an article in the French newspaper Le Monde that dubbed the Mundaneum 'Google
on paper'[1]. It was our first encounter with many variations on the same theme.
The former mining area around Mons is also where Google has installed its largest
datacenter in Europe, a result of negotiations by the same Di Rupo[2]. Due to the re-branding
of Paul Otlet as ‘founding father of the Internet’, Otlet's oeuvre finally started to receive
international attention. Local politicians wanting to transform the industrial heartland into a
home for The Internet Age seized the moment and made the Mundaneum a central node in
their campaigns. Google — grateful for discovering its posthumous francophone roots — sent
chief evangelist Vint Cerf to the Mundaneum. Meanwhile, the archive center allowed the
company to publish hundreds of documents on the website of Google Cultural Institute.
While the visual resemblance between a row of index drawers and a server park might not
be a coincidence, it is something else to conflate the type of universalist knowledge project
imagined by Paul Otlet and Henri La Fontaine with the enterprise of the search giant. The
statement 'Google on paper' acted as a provocation, evoking other cases in other places
where geographically situated histories are turned into advertising slogans, and cultural
infrastructures pushed into the hands of global corporations.
An international band of artists, archivists and activists set out to unravel the many layers of
this mesh. The direct comparison between the historical Mundaneum project and the mission
of Alphabet Inc[3] speaks of manipulative simplification on multiple levels, but to de-tangle its
implications was easier said than done. Some of us were drawn in by misrepresentations of
the oeuvre of Otlet himself, others felt the need to give an account of its Brussels roots, to re-insert the work of maintenance and caretaking into the his/story of founding fathers, or joined
out of concern with the future of cultural institutions and libraries in digital times.
We installed a Semantic MediaWiki and named it after the Mondotheque, a device
imagined by Paul Otlet in 1934. The wiki functioned as an online repository and frame of
reference for the work that was developed through meetings, visits and presentations[4]. For
Otlet, the Mondotheque was to be an 'intellectual machine': at the same time archive, link
generator, writing desk, catalog and broadcast station. Thinking the museum, the library, the
encyclopedia, and classificatory language as a complex and interdependent web of relations,
Otlet imagined each element as a point of entry for the other. He stressed that responses to
displays in a museum involved intellectual and social processes that were different from
those involved in reading books in a library, but that one in a sense entailed the other[5]. The
dreamed capacity of his Mondotheque was to interface scales, perspectives and media at the
intersection of all those different practices. For us, by transporting a historical device into the
future, it figured as a kind of thinking machine, a place to analyse historical and social
locations of the Mundaneum project, a platform to envision our persistent interventions
together. The speculative figure of Mondotheque enabled us to begin to understand the
situated formations of power around the project, and allowed us to think through possible
forms of resistance. [6]
The wiki at http://mondotheque.be grew into a labyrinth of images, texts, maps and semantic
links, tools and vocabularies. MediaWiki is a Free software infrastructure developed in the
context of Wikipedia and comes with many assumptions about the kind of connections and
practices that are desirable. We wanted to work with Semantic extensions specifically
because we were interested in the way The Semantic Web[7] seemed to resemble Otlet's
Universal Decimal Classification system. At many moments we felt ourselves going down
rabbit-holes of universal completeness, endless categorisation and the nausea of scale. It made
the work at times uncomfortable, messy and unruly, but it allowed us to do the work of
unravelling in public, mixing political urgency with poetic experiments.
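
As an aside for readers who have not seen a semantic wiki at work: the sketch below, a minimal illustration rather than any actual Mondotheque tooling, shows how such semantic links can be queried programmatically through the standard Semantic MediaWiki 'Ask' API. The endpoint URL and the category queried are assumptions.

```python
import requests

API = "https://www.mondotheque.be/api.php"  # assumed endpoint

def ask(query: str) -> dict:
    """Run a Semantic MediaWiki #ask query and return the result pages."""
    params = {"action": "ask", "query": query, "format": "json"}
    response = requests.get(API, params=params, timeout=10)
    response.raise_for_status()
    return response.json().get("query", {}).get("results", {})

# Hypothetical query: every page annotated as a Person.
for title in ask("[[Category:Person]]"):
    print(title)
```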
This Radiated Book was made because we wanted to create a moment, an incision into that
radiating process, one that allowed us to invite many others to look at the interrelated materials
without the need to provide a conclusive document. As a salute to Otlet's ever-expanding
Radiated Library, we decided to use the MediaWiki installation to write, edit and generate
the publication, which explains some of the welcome anomalies on the very pages of this
book.
The four chapters that we propose each mix fact and fiction, text and image, document and
catalogue. In this way, process and content are playing together and respond to the specific
material entanglements that we encountered. Mondotheque, and as a consequence this
Radiated book, is a multi-threaded, durational, multi-scalar adventure that in some way
diffracts the all-encompassing ambition that the 19th century Utopia of Mundaneum stood
for.
Embedded hierarchies addresses how classification systems, and the dream of their universal
application, actually operate. It brings together contributions that are concerned with
knowledge infrastructures at different scales, from disobedient libraries and institutional
practices of the digital archive to meta-data structures and indexing as a pathological condition.
Disambiguation dis-entangles some of the similarities that appear around the heritage of Paul
Otlet. Through a close reading of seemingly similar biographies, terms and vocabularies, it relocates ambiguity to other places.

Location, location, location is an account of geo-political layers at work. Following the
itinerant archive of Mundaneum through the capital of Europe, we encounter local, national
and global Utopias that in turn leave their imprint on the way the stories play out. From the
hyperlocal to the global, this chapter traces patterns in the physical landscape.
Cross-readings consists of lists, image collections and other materials that make connections
emerge between historical and contemporary readings, unearthing possible spiritual or
mystical underpinnings of the Mundaneum, and transversal inclusions of the same elements in
between different locations.
The point of modest operations such as Mondotheque is to build the collective courage to
persist in demanding access to both the documents and the intellectual and technological
infrastructures that interface and mediate them. Exactly because of the urgency of the
situation, where the erosion of public institutions has become evident, and all forms of
communication seem to feed into neo-liberal agendas eventually, we should resist
simplifications and find the patience to build a relation to these histories in ways that make
sense. It is necessary to go beyond the current techno-determinist paradigm of knowledge
production, and for this, imagination is indispensable.

Paul Otlet, design for Mondotheque (Mundaneum archive center, Mons)
Last Revision: 2·08·2016

1. Jean-Michel Djian, Le Mundaneum, Google de papier, Le Monde Magazine, 19 December 2009
2. "On several occasions it was a close call, because it was agreed that at the slightest hitch on this point, Google would stop everything." Libre Belgique, 27 April 2007
3. Sergey and I are seriously in the business of starting new things. Alphabet will also include our X lab, which incubates new
efforts like Wing, our drone delivery effort. We are also stoked about growing our investment arms, Ventures and Capital, as
part of this new structure. Alphabet Inc. will replace Google Inc. as the publicly-traded entity (...) Google will become a wholly-owned subsidiary of Alphabet. https://abc.xyz/
4. http://mondotheque.be
5. The Mundaneum is an Idea, an Institution, a Method, a Body of work materials and Collections, a Building, a Network. Paul
Otlet, Monde (1935)
6. The analyses of these themes are transmitted through narratives -- mythologies or fictions, which I have renamed as "figurations"
or cartographies of the present. A cartography is a politically informed map of one's historical and social locations, enabling the
analysis of situated formations of power and hence the elaboration of adequate forms of resistance. Rosi Braidotti, Nomadic
Theory (2011)
7. Some people have said, "Why do I need the Semantic Web? I have Google!" Google is great for helping people find things, yes!
But finding things more easily is not the same thing as using the Semantic Web. It's about creating things from data you've
compiled yourself, or combining it with volumes (think databases, not so much individual documents) of data from other sources
to make new discoveries. It's about the ability to use and reuse vast volumes of data. Yes, Google can claim to index billions of
pages, but given the format of those diverse pages, there may not be a whole lot more the search engine tool can reliably do.
We're looking at applications that enable transformations, by being able to take large amounts of data and be able to run models
on the fly - whether these are financial models for oil futures, discovering the synergies between biology and chemistry researchers
in the Life Sciences, or getting the best price and service on a new pair of hiking boots. Tim Berners-Lee interviewed in
Consortium Standards Bulletin, 2005 http://www.consortiuminfo.org/bulletins/semanticweb.php

Embedded hierarchies

A radiating interview / Un entrevue irradiant / Een irradiërend gesprek
Stéphanie Manfroid and Raphaèle Cornille are responsible for the
Mundaneum archives in Mons. We speak with them about the relationship
between the universe of Otlet and the concrete practice of scanning, meta-data
and on-line publishing, and the possibilities and limitations of their work with
Google. How to imagine a digital archive that could include the multiple
relationships between all documents in the collection? How to make visible
the continuous work of describing, maintaining and indexing?

The interview is part of a series of interviews with Belgian knowledge
institutions and their visions of digital information sharing. The voices of Sylvia
Van Peteghem and Dries Moreels (Ghent University), Églantine Lebacq and
Marc d'Hoore (Royal Library of Belgium) resonate on the following pages.
We hear from them about the differences and similarities in how the three
institutions deal with the unruly practice of digital heritage.

The full interviews with the Royal Library of Belgium and Ghent University
Library can be found in the on-line publication.

• RC = Raphaèle Cornille (Mundaneum archive center, head of the iconographic collections)
• SM = Stéphanie Manfroid (Mundaneum archive center, head of the archives)
• ADV = Alexia de Visscher
• FS = Femke Snelting

Mons, 21 April 2016
QUITE A FEW THINGS TO DO

ADV: In your digitization policy, what access infrastructure do you envisage, and for what
type of data and metadata?
RC: We have been digitizing at the Mundaneum for a long time, since 1995. At the time,
there was already digitization equipment. Obviously not with the same tools we have today;
we did not imagine having access to databases over the net. There have been important
technical and technological evolutions. As a result, for several years we worked with the
equipment that was still present in-house, but without a real long-term digitization plan. Just
to respond to requests, either for ourselves, because we had publications or exhibitions, or
because we had external requests for reproductions.
The objective, of course, is to be able to make available to the public everything that has
been digitized. You should know that we have a database called Pallas[1], which has been
supported by the Communauté Française since 2003. Unfortunately, the software causes us
quite a few problems. We have already tried importing images and they do not always
display correctly. Sometimes we have descriptive records but not the corresponding image.
SM: The archives supported by the Communauté française, but also other centers, opted
for Pallas. In this way the system made possible an overview of the archives in Belgium and
in the Communauté française in particular.

The idea is that the archive centers all use one and the same system. It is a fine initiative,
and within that framework, the idea was to have a general platform where all the sources
linked to public archives (that is, the archives supported by the Communauté Française,
which are not actually public, by the way) could be accessible in a single place.
RC: In any case there was this idea, later on, of having a common platform, called
numeriques.be[2]. Unfortunately, what you find on numeriques.be does not correspond to the
content in Pallas; they are two different structures. Basically, if you want to publish on both,
it is twice the work.
What is more, numeriques.be has not been configured so that it can be harvested by
Europeana[3]. There are standards that do not match yet.
SM: Those are political choices, and we depend on them; we depend on general choices. It
is important to properly understand the situation of an archive center like ours, and its place
in the Belgian and francophone heritage landscape as well. Our intention is to situate
ourselves in that framework, but also at a European and even international level. These are
not such easy combinations to put in place, for these different publics or users for example.
RC: Either there is a technical problem, or there is an authorization problem. You should
know that it is quite complex at the metadata level; there is quite a lot to do. For a long time
we digitized but generated the metadata as we went along, so there is also a big job to be
done in that respect. Normally, by early 2017, we will be planning the move to Europeana,
with correct metadata and the ability to deliver correct files.
It is quite heavy work because we have to generate the metadata every time. If you take
Dublin Core[4], that is 23 fields to fill in for each document. We try to fill in as many as
possible. At times it can be quite heavy going.
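
To make the weight of that work concrete, here is a minimal sketch, not the Mundaneum's actual workflow, of what filling in a few Dublin Core elements for a single document looks like once it has to be repeated for every piece; all field values are invented examples.

```python
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
fields = {
    "identifier": "MUND-AFF-0001",             # inventory number (invented)
    "title": "Affiche du Musée International",  # title of the piece
    "contributor": "Imprimeur inconnu",         # publishers, illustrators, printers
    "description": "Poster from the iconographic collections.",
    "subject": "museography",                   # indexing by keywords
    "date": "1913",                             # dates follow the ISO norms
    "coverage": "Bruxelles",                    # geographic location, if any
}
for name, value in fields.items():
    ET.SubElement(record, f"{{{DC}}}{name}").text = value

print(ET.tostring(record, encoding="unicode"))
```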
THE LIFE OF THE PIECE

FS: Can you tell us in detail about the reading of Otlet's documents and the writing of their
descriptions, the passage from an 'Otletian' document to a digitized version?

RC: You already need at least an inventory. The pieces have to be numbered, otherwise it is
a bit difficult to trace all the work. Sometimes there is a small restoration phase, because we
have dusty documents, and when you scan, it shows. Sometimes we have to flatten
documents, newspapers for example, because they are folded in the boxes. It already takes a
little while before we can digitize them. Then we scan the document; that is the easiest part.
We put it on the scanner and press a button, almost.
If it is a manuscript, we will not be able to OCR it. On the other hand, if it is a printed
document, we will OCR it, knowing that it will have to be re-checked afterwards, because
there is always a percentage of error. For example, in newspapers, depending on the
typography, if you have words that have faded with time, all of that has to be verified. And
then we generate the Dublin Core metadata: the identifier, a title, everything concerning the
contributors (publishers, illustrators, printers, etc.), a description, an indexing by keywords, a
date, a geographic location if there is one. It also means making links with either internal or
external resources. So, for example, if I think of a poster: if it has been in an exhibition, if it
has been published, all the references have to be added.
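
The OCR step RC describes could look like the following sketch, assuming the pytesseract wrapper around the Tesseract engine; the function and the manuscript flag are illustrative, not the archive's own code.

```python
from typing import Optional

from PIL import Image
import pytesseract

def ocr_page(path: str, is_manuscript: bool) -> Optional[str]:
    """OCR a scanned page; manuscripts are left for human transcription."""
    if is_manuscript:
        return None  # handwriting cannot be OCRed reliably
    text = pytesseract.image_to_string(Image.open(path), lang="fra")
    # There is always a percentage of error (faded words, unusual
    # typography), so the output still has to be re-checked by a person.
    return text
```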

From Voor elk boek is een gebruiker:
SVP: We scan in a totally different
way. At Google it is about mass
production. We ourselves choose
smaller projects. We have a permanent
team, two people who scan and
process images full-time, but with that
you do not start on a project of
250,000 books. We do offer scan-on-demand, or select complete
collections. When, a few years ago, we
had all of our 2,750,000 index cards
scanned by an external firm, I felt
sorry for the girls who operated the
sheet-feed scanner all day long.
Hopelessly boring.
From X = Y:
According to the ideal image
described in "Traité", all the tasks of
collecting, translating, distributing,
should be completely automatic,
seemingly without the necessity of
human intervention. However, the
Mundaneum hired dozens of women
to perform these tasks. This human-run version of the system was not
considered worth mentioning, as if it
was a temporary in-between phase
that should be overcome as soon as
possible, something that was staining
the project with its vulgarity.

SM: The life of the piece.
RC: And making the link, for example, to other fonds, to another letter… So you really have
all the links there. And then you have the description of the digital file itself. We have four
digital files each time: a RAW file, a TIFF at 300 DPI, a JPEG at 300 DPI and a final
JPEG at 72 DPI, which are in fact the three formats we use most. And then, again, you add
a title, a date; you also have everything concerning authorizations, rights… For each
document all these fields have to be filled in.
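
A minimal sketch, assuming the Pillow library, of how the derivative files RC lists (a TIFF at 300 DPI, a JPEG at 300 DPI and a JPEG at 72 DPI) could be generated from a master scan; the file names and sizes are invented.

```python
from PIL import Image

def make_derivatives(master_path: str, stem: str) -> None:
    """Derive the three delivery formats from a master scan."""
    image = Image.open(master_path).convert("RGB")
    image.save(f"{stem}_300.tif", dpi=(300, 300))              # preservation TIFF
    image.save(f"{stem}_300.jpg", dpi=(300, 300), quality=95)  # print-quality JPEG
    web = image.copy()
    web.thumbnail((1200, 1200))                                # shrink for the web
    web.save(f"{stem}_72.jpg", dpi=(72, 72), quality=85)       # 72 DPI web JPEG

make_derivatives("scan_raw.tif", "MUND-AFF-0001")
```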
SM: Faced with one of Otlet's schemas, we sometimes wondered what all these scribbles
were. At first you do not understand much at all.
FS: Who writes the description? Several people, or someone working alone?

RC: It does require a certain discipline, concentration and time to be able to do it well.
RC: Generally it is one person alone who describes. There it is free text, so it is still fairly
easy. But when you have to index, you have to use existing thesauri, which is not always
easy, because sometimes they are constraints, and it is not quite the vocabulary you are used
to using.
SM: We once met a firm, someone who actually thought we would be able to automate the
whole chain of archival description, digitization included. He did not understand that it was
an impossible task. It is a human task. And frankly, all the experience one can have with this
helps enormously. I do not think, right now, that a human brain can be replaced by a
machine in this context. I do not believe in it.
A STANDARDIZED INDEXING METHOD

FS: Your work touches very intimately on Otlet's own practice. In fact, in the documents we
consulted, we saw several attempts at indexing, several levels of classification systems. How
does this intersect with your digitization work? Do you keep a trace of these systems already
projected onto the documents themselves?
SM: I think there are two elements. Here, if the question is about the stages of digitization,
we start from the document itself to arrive at a file name, and there is a description with
several fields. The piece that is digitized ends up having its own life, its own history, and that
is what we come to understand. On the other hand, at the start we assume that the fonds is
described, that there is an inventory. Let us act as if that were always the case; it is not true,
by the way, it is not always the case.

And another thing: today we are an archive center. Otlet worked from a conception of
openness to documentation, of openness to the Encyclopedia, really something very, very
broad. Our working standard is to use the general standard for archival description[5], and
that is another constraint. That, too, is a big job.
We have to be able to make relations with other elements located elsewhere, with other
documents, other collections. It is, I would almost say, a networked reading of the
documents. Obviously that is interesting. But on the other hand we are archivists, and it is
not that we do not like Otlet's logic, but we have to submit to a discipline that also requires
us to protect the heritage here, which belongs to the Communauté Française and must
therefore be described in a standardized way, as in the other archive centers.

It is a difference of dialogues. For me this is not a detail at all. The fact that, for example,
some will say 'you do not put the UDC index in these fields' … in fact, you have not asked
that question yet …?
ADV: It was coming!
SM: Today we do not search by UDC index, that is all. We are an archive center, and I
think it has been the Mundaneum's good fortune to be able to put forward the protection of
this heritage as such and to establish it as real heritage, important for the community.
RC: In fact, since the decimal classification is not a standardized indexing method, it is not
required in these fields. For each field to be filled in Dublin Core, there are standards to use.
For example, for dates, countries and language you have the ISO standards, and the UDC
is not recognized as a standard.
When I describe in Pallas, I do add the UDC index, because the iconographic collections
are classified by theme. The geographic postcards are classified by place. So there I have the
UDC index each time, because there it makes sense to add it.
FS: It is very beautiful to hear that, but it is also tragic in a sense. So much effort went, at
that time, into finding a standard ...
AN AXIS OF COMMUNICATION

SM: The question of the legitimacy of Otlet's work lands, broadly, on a contemporary
debate about the management of databases. That is an axis of communication; it is not the
only axis of substantive work in our archives. These elements and the digitization policy have
to be kept apart; I am not trying to say 'look, we are in the business of managing mega-data
here.'
We do not manage large quantities of data. Big Data does not really concern us, in terms of
the data preserved here. The debate interests us in the same way that it existed, in another
form, at the end of the 19th century, with the advent of the periodical press, the
multiplication of newspaper titles and the rapid circulation of information.
RC: Having Paul Otlet recognized as the father of the internet, etcetera, having been able to
attach him to current concerns: those were promising subjects for communication. That does
not mean we work only on that. He did much more than that. It was a promising angle,
because we are in the era of digitization, because we are asked to digitize, to valorize. We
are still working on the archives, sifting through them, making inventories, and so we are
very, very far from those Big Data reflections and all that.
FS: Is it conceivable that Otlet invented the World Wide Web?
SM: Frankly, to put it bluntly: it is impossible, when you take a historical view, to imagine
that Otlet imagined… well, he did imagine things, yes, but is it because something exists
today that we can say 'he imagined that'? That is what in History we call anachronism.
Deontologically, a historian cannot do that kind of thing. Someone else may allow themselves
to. In communication, for example, it is possible. Reducing things to simple ideas is possible
too. It is even an advantage to be able to: an idea will come across better that way.
RC: There are concepts he had already understood. Now, given his era, he could not put
everything in place, but there are things he understood from the very start. For example,
standardizing things so that they can be exchanged. He understood that from the start; that
is why the writing of the index cards is standardized, why you cannot write them just any
way. That is why he developed the UDC: there had to be a language usable by everyone.
With the means of communication of his time, he already imagined at some point being able
to combine them, no doubt because he had seen the evolution of techniques and thought it
possible to go further. He thought of dematerialization when he used microfilm: he said to
himself, 'careful, paper preservation is a problem. The content must be preserved, so it must
be transferred to another medium.' First he tried photographic plates, calculating the number
of pages he could fit on a plate, and there you go: he transferred it to another medium.

From Voor elk boek is een gebruiker:
So in the 19th century Vander Haeghen wanted a catalogue, and Otlet a bibliography. And
today Google has it all in one place, with the full text included, searchable on every word.
That does no more than carry forward the dream of both Vander Haeghen and Otlet. From
that idea we naturally went along with it. We asked the Google negotiators: why is Google
doing this? The answer was: "Because it's in the heart of the founders". Had we not had the
ideals of Vander Haeghen and Otlet as a precedent, there might have been some doubt, but
not now.
I think he imagined things because he had this desire to communicate knowledge; he was not
someone who at some point wanted to collect without disseminating, no. It was always with
this idea of disseminating, of communicating, whoever the people, whatever the country.
That is in fact why he adapted the Musée International so that everyone could go there;
even those who could not read had access to the rooms and could understand, because he
had organized things in such a way. Each time, he imagined communication tools that would
serve him to spread his ideas, his thought.

Did he imagine, at some point, that we could read things on the other side of the world? He
must have thought about it; but technically and technologically, he could not conceive of it.
Still, I am sure he had envisaged the concept.
THE ONE WHO DOES A BIT OF EVERYTHING DOES IT A BIT LESS WELL

SM: Otlet, in his own day, at times managed to get himself detested by quite a few people,
because there was a kind of confusion about the domains in which he was active. At once
this fascination with creating a political city, the Cité Mondiale, and the will to mix genres,
not to pursue standardization with specialists alone but also to work with the world of
industry, because that is what he succeeded in doing. That was a real handicap at the time,
because every domain of knowledge was specializing, and in the end, the one who does a bit
of everything does it a bit less well. In certain circles, or after a very superficial reading of
Otlet's work, one understands that the figure suffers from a negative prejudice because he
mixed genres and domains. For example, Otlet attacked various institutions for their lack of
originality in bibliographic matters. The Bibliothèque Royale bore the brunt of it. That can
leave some unexpected traces in history. Otlet's bibliographic legacy is not necessarily
highlighted in a place like the national library. It is, understandably, difficult to imagine an
institution explaining certain commitments in such a personalized or individualized way. One
would rather speak of a department and its history over a longer period, and thus avoid
going into details like these.
Indeed, there is at once the man in his own time, the view of him that scientists and
academics will retain today, and then there is everyone's fascination. Our work is to do all of
it: to make sure the archives are available to everyone, but also that the scholar who wants to
study them, from a positive or a negative perspective, can do so.
WE ARE NOT IN THE OTLETANEUM HERE!

FS: Otlet's work connects the organization of knowledge with communication. How can
your work, in an archive center that is also a meeting place and a museum, be inspired, or
not, by this mission Otlet gave himself?
SM: There is nevertheless one thing that is essential: we are not in the Otletaneum here, we
are not in an Otlet foundation.

We are a specialized archive center that has preserved all the archives linked to one
institution. That institution was animated by men and women, and what animated them were
different things, among them the desire for transmission. As for Otlet, we identified his desire
to transmit, and he imagined every possible means for it. He was not an engineer either, let
us not be silly. So he is a bit like Jules Verne: he dreamed the world, he imagined different
things, instruments. He began dreaming of certain things, of applications. He was an
enthusiast, an innovator, and I think he inspired the people around him. But around him
there were other people, notably Henri La Fontaine, who is no less interesting. There was
also Baron Descamps, and other people who gravitated around this institution. There was
also a whole particular context linked notably to sociology and the social sciences, notably
Solvay, and so on: all those we come across, who traversed some forty years.
Today we are an archive center with different kinds of media, with this encyclopedic
ambition they had, which spanned many media, and so the flagship work was not only the
Traité de Documentation. It was interesting to understand its genesis through the visits you
made, but there are other fonds, notably fonds linked to pacifism, to anarchism and to
feminism. And also this whole iconographic department, with its rather particular
experiments that are not very well known.
So we are not in the Otletaneum, and we are not in Otlet's sanctuary.
ADV: The question is rather: how do you take up his vision in your work?
SM: I had understood the question perfectly well.
By making his archives and his heritage accessible, and by contributing to a better
understanding of them through our valorization efforts: publications, guided tours, but also
the program of activities that make it possible to understand his work better. This work is
carried out notably through the European Heritage Label, but also within the framework of
the Memory of the World programme[6].
RC: It is not because Otlet wrote that La Fontaine did not work on the project. They were
not at all the same personalities.
SM: We are dealing in stereotypes.
ADV: Still, Otlet did write an enormous amount?
SM: Otlet synthesized, disseminated and read a great deal. He was a formidable catalyst of
his time.
RC: Going in a single direction is rather a way of losing Otlet's thought, because he precisely
wanted to mix bodies of knowledge together, to disseminate the whole of knowledge. For us
the objective is really to be able to exploit everything: all the subjects, all the media, all the
themes… When people say he prefigured the internet, it rests on just two of Otlet's schemas,
and we have been circling around those two schemas since 2012, even before that, actually.
Two A4 schemas. They are not large.
SM: What is not fair either is the reductiveness you fall into when you reduce the
Mundaneum to Otlet and then reduce Otlet to just that. And on the other hand, what I also
find interesting are the other personalities who likewise decided to remake the world through
the index card, and our idea was obviously to highlight all these people and the multiform
compositions of this highly original institution, and not to stick to a vision of 'La Fontaine is
the Nobel Peace Prize, Otlet is Mister Internet, Léonie La Fontaine is Madame Feminism,
Mister Hem Day[7] is the anarchist …' You do not write History like that, by creating
categories.
RC: I remember when I arrived here in 2002: Paul Otlet was this sort of mad scientist who
had wanted to create a world city and had proposed it to Hitler. People had forgotten
everything he had done before.
You have many libraries that still classify according to the UDC today, but they do not
know where it comes from. We did all that work, and it did put things back in their place,
and we opened it up to the public. From that moment on we had openings towards different
publics.
SM: It is also about having a global view of what each of them did, and of what the
institution was, which is in fact one of the biggest difficulties there is: being called
Mundaneum, full stop.
We have been the 'Mundaneum Archive Center' since 1993. But the Mundaneum is an
institution born after the First World War, whose name postdates the IIB. In its genes it is
bibliographic, and perhaps these are the different notions that have to be explained to
people.
But it is still wonderful to say that Paul Otlet invented the internet, why not. It is a formula,
and I think that, in absolute terms, a formula makes an impression on people. Mind you, he
did not invent Google. I did say the Internet.
FOR THE CARICATURE IT'S NICE. FOR THE REALITY, LESS SO.

FS: What has your collaboration with Google brought you? Did they help you digitize
documents?

RC: We did the digitizing ourselves. I am the one who puts the images online at Google.
Google has digitized nothing.
ADV: So you transmit images and metadata to Google, but the public does not have access
to these images …?
RC: They have access, but they cannot download.
FS: The images you have put on the Google Cultural Institute are today in the public
domain, and yet, as a member of the public, I cannot see that the images are free of rights,
because they are all under Google's standard license.
RC: They put 'Collection de la Fédération Wallonie-Bruxelles' on each one, since that is
part of the metadata transmitted with the image.
ADV: The problem at the moment, since there is no online catalogue, is that there are not
many other points of access. Apart from a few images on numeriques.be, when you type
'Otlet' into a search engine you get the impression that the only access is through the Google
Cultural Institute, and in reality that access is limited.
SM: So it is an impression.
RC: You also have images on Wikimedia Commons. The same material is there as on the
Google Cultural Institute. I am the one who puts them on both sides; I know what I put
there. And right now I am again uploading more, so go and look. For the moment it is again
Otlet's schemas, or in any case plates, that are being put online.
On Wikimedia Commons I cannot import the metadata automatically. I import a file and
then I have to enter the data myself. I cannot import an Excel file. In Google that is what I
do: I import the images and it all happens by itself.
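
The batch import RC misses could, in principle, look like this sketch, which reads rows from a CSV export and pushes each file with its metadata to Wikimedia Commons through the pywikibot library; the column names and the wikitext template are assumptions, not an existing Mundaneum script.

```python
import csv

import pywikibot

site = pywikibot.Site("commons", "commons")

with open("metadata.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        page = pywikibot.FilePage(site, f"File:{row['filename']}")
        wikitext = (
            "=={{int:filedesc}}==\n"
            "{{Information\n"
            f"|description={row['description']}\n"
            f"|date={row['date']}\n"
            "|source=Mundaneum archive center\n"
            "}}\n"
        )
        site.upload(page, source_filename=row["path"],
                    comment="Batch upload from CSV export", text=wikitext)
```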
ADV: And could you not set up a collaboration with the Wikimedia Commons people?
RC: In fact, they offer import systems, but they do not work, or they do not work with
Windows. And I am not going to start setting up a PC running Linux or Ubuntu just to be
able to upload to Wikimedia.
ADV: But they could do it?
RC: We had the collaboration on the Traité de Documentation, since they are the ones who
did the work. They transcribed everything.
Also, the volunteers have to be compensated. That I can guarantee you. They are volunteers
up to a certain point. But if you entrust them with work like that … They are volunteers
because when they rework pages on Wikipedia it is their own thing; they want to do it, it is
their own will.
I do not put more on the Google Cultural Institute than on Wikipedia. I do not favour
Google. What the Cultural Institute offers is the possibility of making virtual exhibitions, and
when I upload there, it is because an exhibition is about to be made. We try to make virtual
exhibitions. It is true that this works well for us in terms of communication for the archives;
there is no point hiding it. I get a lot of requests, requests for images, through that channel.
It allows us to valorize fonds and themes that we could not present in our physical space.
We made an exhibition on Léonie La Fontaine, which made it possible to put online about a
hundred documents related to feminism; that had never been done before. It was very
interesting, and it also brought good returns for the other exhibitions. That is rather how I
use the Google Cultural Institute. I am not pro-Google, but there I have a tool that lets me
valorize the archives.
ADV: Would Google be the only solution for valorizing your archives?
SM: Our solution is to have software of our own. Why this urge to feed other people's sites?
Because we do not have it on our own. As a reminder, we work for the Communauté
Française, which owns the collections and with which we have an agreement. It does not ask
us to have external software; it also asks us to have our own product. And that is what we
have been working on since 2014, for the replacement of Pallas, because for years they have
been telling us they will no longer support it. It is rather that which puts us in a completely
incomprehensible situation. How do you expect us to show what we hold if we do not have a
tool that allows researchers, whoever they are, scientists or not, to be autonomous in their
research? And that allows the work we have done in terms of inventory and digitization to
be used freely?
Frankly, I wonder whether this question, and this vision you have, would even arise if we
were already on something other than Pallas. We are working from a position of basic
discomfort.
I also think that the information we have to give is to say: 'all of this exists, come and see it.'
We also manage to raise awareness of the collections held in the archive center, and that is
good, it is entirely interesting. Now it would also be good to take a further step and to
educate people about opening up heritage. That too is our mission.

So Google has its own policy. We made a few exhibitions available, and that is where the
interest lies. But we have worked with so many different partners. We do not privilege a
single partner. Today some firms come to us precisely because they have heard more about
Google than about the Mundaneum, and at the same time about the Mundaneum through
Google.
These are elements that perhaps allow us to open up the field of dialogue with other
partners, but they do not allow anyone to go directly into the depths of the archives, into the
real heritage that we hold.
I mean, we can say all we like that we do other things; people will only see that one, because
Google is a mastodon and because it speaks to everyone. We are in a particular era of
communication.
RC: Now, the Google collaboration and the image you have of it: we suffer from it
enormously at the level of the archives. And on top of that, people often tell us 'but you have
a big patron'.
SM: They reduce us to that. For the caricature it's nice. For the reality, less so.
FS: When you talk to the people at Ghent University, it is clear that their collaboration with
Google Books had a different function. It concerns only books, objects scanned in a fairly
raw way. There are no complex metadata; it is rather a question of volume.
SM: Ghent University's digitization policy, I think, is more in line with what Google has in
mind. That is: what added value does it bring them to be able to work both with a university
library such as Ghent's and, at the same time, to associate it with the Mundaneum?
FS: It is also a matter of other needs, another type of access? In a library the books are
there to be read; I have the impression that the vision is not the same for an archive center.
SM: It is much more complex in other places.

From Voor elk boek is een gebruiker:
SVP: But ... you cannot go knocking
on Google's door; Google chooses
you. We did draw their attention to
the Mundaneum through the link
between Vander Haeghen and Otlet.
Whenever Google Belgium organizes
something, they always try to involve
us, simply because we are a university.
You have seen the Mundaneum; it is a
very beautiful archive, but that is also
all it is. For us it would only be one
piece of a collection. They are also
supported by Google in a totally
different way than we are.

Our intention in terms of digitization is not that one, and we do not see our own action solely
through that prism. In Ghent they digitized books; that is their choice, supported by the
Flemish Region. On our side we pursue the same desire for access for the public and for
researchers, but with material, a heritage, that is quite different from published books alone!
The work with Google made it possible to collaborate several times with the University, but
we had already done so before finding ourselves alongside Google, on certain activities and
in hosting speakers. So there is a partnership with the Ghent university, which is interested in
the history of Otlet, the history of ideas, but also in internationalism, in architecture, in
schematics. It is, moreover, a very enriching line of reflection.
DIGITIZING EVERYTHING

FS: I heard someone ask: 'why not digitize all the bibliographic cards that are in the
drawers?'
RC: It serves no purpose. All the cards, that would make no sense. Now, it would be
interesting to study some of them.
There was also a network around the repertory. That is to say: if we have so many cards, it
is not only because cards were written in Brussels; we have cards from all over the world. In
every country there were institutions responsible for producing bibliographies and sending
them back to Brussels.
It would be interesting to have a sample from all these institutions, or from all these cards
that exist. It would also make it possible to recover the trace of certain institutions that no
longer exist today. We have been through two wars, after all; there have been revolutions,
etcetera. They worked with Russian institutions that no longer exist today. Through the
repertory we could find their trace again. The same goes for books: there are books that no
longer exist and whose trace we could recover. You should know that after the Second
World War, in 1946-47, the president of the Mundaneum was Léon Losseau. A lawyer, he
lived in Mons; his house, by the way, is at 37 rue de Nimy, not far from here. He had
collaborated with the Mundaneum from its beginnings, and since the two founders had died
during the war, it was he who, at that moment, brought UNESCO to Brussels. Because this
was a phase of rebuilding libraries, many books had been destroyed and people were trying
to trace them, he told them: 'come to Brussels, we have the repertory of all these books,
come and use it; we have the repertory with which to reconstitute all the libraries.'
So, digitize everything: no. But digitize certain things, to show the mechanism of this
repertory, its constitution, the different repertories that existed within it, and to be able to
recover the trace of certain elements: yes.
If we digitized everything, it would give an overview of the sources of information that
existed at a given time on a given subject.
SM: The pathway of thought.

There are very interesting avenues that will allow us to reach the protean aspects of the
institution, but it is vast.
THE LIVING MEMORY OF THE INSTITUTION

FS: We were very touched by the annotated UDC cards you showed us the last time we
came.
RC: The work on the system itself.
SM: It is fantastic indeed, with Otlet's handwriting.
SM: As much as one can say Otlet was a master of marketing, he used several terms to
describe one and the same reality. That is why it is difficult to hold on to his vision alone.
Just as classifying his documents is difficult.
ADV: Did Otlet not leave enough documentation? Documentation that makes his
classification systems explicit?
RC: When we opened Otlet's boxes in 2002, they were banana crates, not classified,
nothing at all. Based on what we then knew of the history of the Mundaneum, we could
draw rough boundaries, and so we had the Institut international de bibliographie, the UDC,
the Cité Mondiale too, the Musée International.
SM: Pacifism ...
RC: We called that one 'Mundapaix', because we did not really know how to place it in the
history of the Mundaneum; it was a bit strange. The rest we set aside, because at that
moment we were not able to classify it within what we knew. Then, as we began to read the
archives, we began to understand things; we discovered institutions that had been created in
addition, and that allowed us to go back and retrieve the things we had set aside.
There were so many institutions that were created, or that may have changed names, that we
do not know whether they existed or not. He would write a note, or make a publication
announcing 'the central office of such-and-such', and then it is not even certain that it ever
existed anywhere.

Sometimes he takes up the same note again but changes certain things, and so on… his
numbering alone is not always easy. You have the UDC index, but then you have the whole
'M' system, the reference to the manuals of the RBU, the Universal Bibliographic
Repertory. So you first have to go and understand how the RBU manual is organized, that
is, find the corresponding archives in order to understand this classification within the 'M'.
RC: At some point we did not find, and we would have liked to find, a file explaining his
classification scheme. Except that he did not leave us one.
SM: Perhaps it did exist, and I wonder how this information was explained to those who
came after. I even wonder whether Georges Lorphèvre knew, because he was not able to
explain it to Boyd Rayward. In any case, the explanations were not passed on.

From The Indexalist:
“At every reference stood another
reference, each more interesting than
the last. Each the apex of a pyramid
of further reading, pregnant with the
threat of digression, each a thin high
wire which, if not observed might lead
the author into the fall of error, a
finding already found against and
written up.”

The Mundaneum team has developed years of experience and an understanding of the
archives and their organization. We discovered, for example, the existence of particular card
files such as the 'K' files. They are linked to the internal administrative organization. We had
to show the elements and archives on which we based ourselves, in order to substantiate our
approach. Certain documents explained this clearly. But if you have never seen them, it is
difficult to believe a new, unknown element!
RC: We do not have much information on the origin of the collections, that is, on the origin
of the pieces that are in the collections. By chance I will find a drawer labelled 'donations',
and inside I will find only handwritten cards such as 'donation by Madame so-and-so of two
flags for the Musée International', and so on.
He did not leave us a manual at the end of his archives, and it is only as we read the
archives, bit by bit, that we manage to make links and to understand certain elements.
Today, making an ideal database is not yet possible, because there are still many things that
we ourselves do not understand, that we still have to discover.
ADV: Would it be imaginable to produce documentation out of your own progress in the
gradual understanding of this classification? For example, enriched texts giving a finer
perception, a trace of the research. Is that something that could exist?
RC: Yes, that would be interesting.

For example, take the bibliographic repertory. For a start, it does not contain only
bibliographic references. You have two entry points, entry by subject and entry by author, so
you have repertory A and repertory B. If you look at the labels, you will sometimes find
something else. Sometimes we have labels marked 'ON'. Do you know what that is? It is the
'collective catalogue of the libraries of Belgium', a job they did at a certain moment. You
have the 'LDC', the 'collective libraries of learned societies'; each society having a number,
everything is there. The 'K' is everything administrative: for every letter sent or received,
they wrote a card. We have personnel cards; we know day by day who was working and
who did what… And that he did not leave behind in the archives.
SM: It is almost the living memory of the institution.
We really wanted to verify, in the repertory, this way of working, the fact that there are
different kinds of information. Indeed, it was a little before 2008 that we learned this, and
this information has been refined through verification. Work could be done identifying
particular series, the numbered files that Raphaèle identified. There was correspondence,
and a whole structuring that we identified too. These are precise sections that made it
possible to improve the UDC: at the start, to create the UDC and the repertory, and then to
create other sections, such as the feminist section, the hunting and fishing section, the
iconographic section. And so, in relation to that, I think there is a whole body of work that
must be brought into relation, starting from a clear observation, a clear reflection on what is
in the repertory and in the archives. And that is work that proceeds step by step. I hope we
will nevertheless make good progress on it and give indications that will allow people to go a
bit further; I am not sure we will see the end of it.
It is at least about transmitting information, making sure it is usable, and making certain
documents and the inventories that exist today available. And making sure it does not get
lost over time.
FS: One day, do you think you will be able to say: 'there, now it is finished, we have
understood'?
SM: I am not sure it is as impossible as all that.
It depends on our will, and on the dialogue around these documents. A dialogue between
researchers of every kind and the Mundaneum team enriches understanding. The more of us
there are around certain points, the broader the understanding becomes. That naturally also
implies the involvement of external partners.
Today we have moved on to a digitization policy with proper equipment and specialized
staff. And I think this specialization has allowed us, over the years, to go somewhat deeper
into the archives and therefore to understand them better.

There is also a history that we now understand really well; it is only waiting to be deployed.
We have to work out how to valorize it: through study days, publications, the tools at our
disposal, and thus through online catalogues, notably our own online catalogue.
THAT IS WHAT NEEDS TO BE IMAGINED

FS: Documentation methods and standards change, institutional history and the times
change, researchers pass through… you have lived with all of that for a long time. I wonder
how to make it show through, how to make it felt?
SM: It is true that we would like to be able to focus the institution's communication on these
different aspects as well. That is our dream, in fact, or our aspiration. For the moment we
are rather asking ourselves how to communicate better about what we ourselves do.
RC: Would it be only by putting documents online? Or by imagining an application that
would put them to work? For example, take the correspondence: I have read roughly 3,000
letters. Reading them, you really become aware of the network. You realize that there is
correspondence with more or less the whole world, whether with individuals, with libraries,
with universities or with companies, and so this sample alone already yields a mass of
information. Now, if we start describing it in a database, letter by letter, I am not sure that
adds anything. On the other hand, if we imagine an application that could bring out on a
map, each time, the names of the correspondents, that already gives an idea and can really
put this whole correspondence to work. But taken on its own, just like that, is it really
interesting?
In a so-called 'classical' database, and that is the problem with our archives too, the
Mundaneum not being an archive center like the others because of its collections, it is
sometimes difficult for us to adapt to existing standards.
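
The application RC imagines could be prototyped along these lines, assuming the folium mapping library; the correspondents and their coordinates are invented placeholders standing in for data extracted from the letters.

```python
import folium

# Invented sample; the real list would be extracted from the 3,000 letters.
correspondents = [
    ("A library, Bruxelles", 50.85, 4.35),
    ("A university, Chicago", 41.88, -87.63),
    ("A private correspondent, Moscow", 55.75, 37.62),
]

world = folium.Map(location=(50.85, 4.35), zoom_start=3)
for name, lat, lon in correspondents:
    folium.Marker((lat, lon), popup=name).add_to(world)
world.save("correspondence_map.html")
```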
ADV: So there would not be a single catalogue, nor a single way of showing the data. Is
that right?
RC: If you go into Pallas, you have the hierarchy of the Otlet fonds. Does that speak to
anyone, apart from someone who wants to do very specific research? It does not really let
you visualize the work that was done, or even the scale of the work. We cannot conform to a
database like that. It has to exist, but it does not let the work of Otlet and La Fontaine show
through. A vision like that is not the Mundaneum.

SM: In the end, there is no database that comes anywhere close to what they imagined on
paper. That is what needs to be imagined.
FS: Can you tell us about this vision of a possible catalogue? If you had all the money and
all the time in the world?
SM: So we stop sleeping, is that it?
There is already a good structure in place, and the idea is really to be able to link the
documents and the descriptions. We can go further with the inventories and digitize the
documents that are perhaps the most interesting and the most unique. Now, the dream would
be to digitize everything, but would digitizing everything be reasonable?
FS: What if all the documents were available online?
RC: I think it would be difficult to transpose the thought and the work of Otlet and La
Fontaine into a database. That is to say, a database is often a very square conception: you
describe the fonds, the series, the file, the item. Here everything is linked. For example, the
poster collection depends on the Institut International de Photographie, which was a section
of the Mundaneum, the section that preserved the image. That means I first have to
understand all the developments that took place around the concept of documentation in
order to then link everything else. And it is like that for every collection, because these are
not collections that were assembled by chance; each one depended on a specialized section.
So how we could transpose that into a database, I do not know.
I also think that today we are not yet far enough along with the inventories and with the
overall understanding, because each time we plunge into the archives, we understand a little
better, we see a few more elements, a little more complexity, before we can really link it all
together.
SM: Indeed, we have not yet understood everything; there are still all the small offices: the
hunting office, the fishing and information office…
RC: At the end of his life, he turned towards everything to do with standardization and
normalization. He became a member of associations working on norms and so on. That
aspect is interesting because it is quite an evolution compared to the beginning.
With the Musée International, it is museography and museology that were a really big
innovation at the time. Some people have already taken an interest in this, but perhaps not
enough.


I dream of being able to virtually reconstruct the exhibition rooms of the Musée
International, because it must have been incredible to travel through them. We have plans
and photographs. Even if we no longer have the objects, we have enough information to do
it. And it would be interesting to be able to study that kind of room even for today, for
today's museography, to take what he did as an example.
FS: If we really imagine the virtual Mundaneum, if we try to reconstruct it from the
documents, that is exciting!
SM: We have been talking about that since 2010.
FS: It is nothing like the high-tech Google Art scanner passing in front of the Mona Lisa…
SM: No. That is a different kind of work.
FS: That is not what the virtual museum is.
RC: It is a different job.
Last Revision: 2·08·2016

1. Software provided by the Communauté française to private archive centres. “Pallas makes it possible to describe, manage
and consult documents of different types (archives, manuscripts, photographs, images, library documents), taking into account
the description requirements specific to each type of document.” http://www.brudisc.be/fr/content/logiciel-pallas
2. “Images and histories of digitized heritage” [1]
3. “Our mission: we transform the world with culture! We want to build on Europe's rich cultural heritage and make it easy for
people to re-use it, for their work, for their personal learning or simply for fun.” http://www.europeana.eu
4. “The Dublin Core Metadata Initiative (DCMI) supports shared innovation in metadata design and best practices across a
broad range of purposes and business models.” http://dublincore.org/about-us/
5. The general and international standard for archival description, ISAD(G) http://www.ica.org/sites/default/files/CBPS_2000_Guidelines_ISAD%28G%29_Second-edition_FR.pdf
6. “UNESCO established the Memory of the World Programme in 1992. Its creation arose first of all from a growing
awareness of the alarming state of preservation of documentary heritage and of the precariousness of access to it in different
regions of the world.” http://www.unesco.org/new/fr/communication-and-information/memory-of-the-world/about-the-programme
7. Marcel Dieu, known as Hem Day

Amateur Librarian - A Course in Critical Pedagogy
Tomislav Medak & Marcell Mars (Public Library project)

A proposal for a curriculum in amateur librarianship, developed through the
activities and exigencies of the Public Library project. Drawing from a historic
genealogy of public library as the institution of access to knowledge, the
proletarian tradition of really useful knowledge and the amateur agency driven
by technological development, the curriculum covers a range of segments from
immediately applicable workflows for scanning, sharing and using e-books,
over politics and tactics around custodianship of online libraries, to applied
media theory implicit in the practices of amateur librarianship. The proposal is
made with further development, complexification and testing in mind during the
future activities of the Public Library and affiliated organizations.
PUBLIC LIBRARY, A POLITICAL GENEALOGY

Public libraries have historically taken shape as an institutional space of exemption from the
commodification and privatization of knowledge. A space where works of literature and
science are housed and made accessible for the education of every member of society
regardless of their social or economic status. If, as a liberal narrative has it, education is a
prerequisite for full participation in a body politic, it is in this narrow institutional space that
citizenship finds an important material base for its universal realization.


The library as an institution of public access and popular literacy, however, did not develop
before a series of transformations and social upheavals unfolded in the course of the 18th
and 19th centuries. These developments brought about a flood of books and political demands
pushing the library to become embedded in an egalitarian and democratizing political
horizon. The historic backdrop for these developments was the rapid ascendancy of the book
as a mass commodity and the growing importance of the reading culture in the aftermath of
the invention of the movable type print. Having emerged almost in parallel with capitalism, by
the early 18th century the trade in books was rapidly expanding. While in the 15th century
the libraries around the monasteries, courts and universities of Western Europe contained no
more than 5 million manuscripts, the output of printing presses in the 18th century alone
exploded to a formidable 700 million volumes.[1] And while this provided a vector for the
emergence of a bourgeois reading public and an unprecedented expansion of modern
science, the culture of reading and Enlightenment remained largely a privilege of the few.
Two social upheavals would start to change that. On 2 November 1789 the French
revolutionary National Assembly passed a decision to seize all library holdings from the
Church and aristocracy. Millions of volumes were transferred to the Bibliothèque Nationale
and local libraries across France. At the same time capitalism was on the rise, particularly in
England. It massively displaced the impoverished rural population into growing urban
centres, propelled the development of industrial production and, by the mid-19th century,
introduced the steam-powered rotary press into the commercial production of books. As
books became more easily mass-produced, the
commercial subscription libraries catering to the better-off
parts of society blossomed. This brought the class aspect
of the nascent demand for public access to books to the
fore.
After the failed attempt to introduce universal suffrage
and end the system of political representation based on
property entitlements through the Reform Act of 1832,
the English Chartist movement started to open reading
rooms and cooperative lending libraries that would
quickly become a popular hotbed of social exchange
between the lower classes. In the aftermath of the
revolutionary upheavals of 1848, the fearful ruling
classes finally consented to the demand for tax-financed
public libraries, hoping that the access to literature and
edification would after all help educate skilled workers
that were increasingly in demand and ultimately
hegemonize the working class for the benefits of
capitalism's culture of self-interest and competition.[2]



REALLY USEFUL KNOWLEDGE[3]

It's no surprise that the Chartists, reeling from a political defeat, had started to open reading
rooms and cooperative lending libraries. The education provided to the proletariat and the
poor by the ruling classes of that time consisted, indeed, either of a pious moral edification
serving political pacification or of an inculcation of skills and knowledge useful to the factory
owner. Even the seemingly noble efforts of the Society for the Diffusion of Useful
Knowledge, a Whig organization aimed at bringing high-brow learning to the middle and
working classes in the form of simplified and inexpensive publications, were aimed at dulling
the edge of radicalism of popular movements.[4]
These efforts to pacify the downtrodden masses pushed them to seek ways of self-organized
education that would provide them with literacy and really useful knowledge – not applied,
but critical knowledge that would allow them to see through their own political and economic
subjection, develop radical politics and innovate shadow social institutions of their own. The
radical education, reliant on meagre resources and time of the working class, developed in the
informal setting of household, neighbourhood and workplace, but also through radical press
and communal reading and discussion groups.[5]
The demand for really useful knowledge encompassed a critique of “all forms of ‘provided’
education” and of the liberal conception “that ‘national education’ was a necessary condition
for the granting of universal suffrage.” Development of radical “curricula and pedagogies”
formed a part of the arsenal of “political strategy as a means of changing the world.”[6]
CRITICAL PEDAGOGY

This is the context of the emergence of the public library. A historical compromise between a
push for radical pedagogy and a response to dull its edge. And yet with the age of
digitization, where one would think that the opportunities for access to knowledge have
expanded immensely, public libraries find themselves increasingly limited in their ability to
acquire and lend both digital and paper editions. It is a sign of our radically unequal times
that political emancipation finds itself on the defensive, fighting again for this material base
of pedagogy against the rising forces of privatization. Not only has mass education become
accessible only under the condition of high fees, student debt and adjunct peonage, but the
useful knowledge that the labour market and reproduction of the neoliberal capitalism
demands has become the one and only rationale for education.

P.50

P.51

No wonder that over the last 6-7 years we have seen self-education, shadow libraries and
amateur librarians emerge again to counteract the contraction of spaces of exemption that
have been shrunk by austerity and commodification.
The project Public Library was initiated with this counteraction in mind. To help everyone
learn to use simple tools to be able to act as an Amateur Librarian – to digitize, to collect, to
share, to preserve books and articles that were unaffordable, unavailable, undesirable in the
troubled corners of the Earth we hail from.
Amateur Librarian played an important role in the narrative of Public Library. And it seems
it was successful. People easily join the project by 'becoming' a librarian using Calibre[7] and
[let’s share books].[8] Other aspects of the Public Library narrative add a political articulation
to that simple yet disobedient act. Public Library detects an institutional crisis in education,
an economic deadlock of austerity and a domination of commodity logic in the form of
copyright. It conjures up the amateur librarians’ practice of sharing books/catalogues as a
relevant challenge against the convergence of that crisis, deadlock and copyright regime.
To understand the political and technological assumptions and further develop the strategies
that lie behind the counteractions of amateur librarians, we propose a curriculum that is
indebted to a tradition of critical pedagogy. Critical pedagogy is a productive and theoretical
practice rejecting an understanding of educational process that reduces it to a technique of
imparting knowledge and a neutral mode of knowledge acquisition. Rather, it sees the
pedagogy as a broader “struggle over knowledge, desire, values, social relations, and, most
important, modes of political agency”, “drawing attention to questions regarding who has
control over the conditions for the production of knowledge.”[9]

No industry in the present demonstrates the
asymmetries of control over the conditions of production
of knowledge more than academic publishing. The denial
of access to outrageously expensive academic
publications for many universities, particularly in the
Global South, stands in stark contrast to the super-profits
that a small number of commercial publishers draws from
the free labour of scientists who write, review and edit
contributions and the extortive prices their institutional
libraries have to pay for subscriptions. It is thus here that
amateur librarianship attains its poignancy for a
critical pedagogy, inviting us to formulate and unfold
its practices more fully in a shared process of discovery.
A CURRICULUM

Public library is:
• free access to books for every member of society,
• library catalogue,
• librarian.

The curriculum in amateur librarianship develops aspects
and implications of this definition. Parts of this curriculum
have evolved over a number of workshops and talks
previously held within the Public Library project, parts of
it are yet to evolve from a process of future research,
exchange and knowledge production in the education
process. While schematic, scaling from the immediately
practical, over strategic and tactical, to reflexive registers
of knowledge, there are actual – here unnamed – people
and practices we imagine we could be learning from.
The first iteration of this curriculum could be either a
summer academy rostered with our all-star team of
librarians, designers, researchers and teachers, or a small
workshop with a small group of students delving deeper
into one particular aspect of the curriculum. In short it is
an open curriculum: open both to the educational process
and to contributions by others. We welcome comments,
derivations and additions.

From Voor elk boek is een gebruiker:
FS: How do you deal with books and
publications that are digital from the
start? DM: We buy e-books and
e-journals and make them available to
researchers. But those are entirely
different environments, because that
content never physically enters our
walls. We buy access to the servers of
the publishers or the aggregator. That
content never comes to us; it stays on
their machines. So there is really not
much we can do with it, except refer
to it and make sure it is as findable as
the print.


MODULE 1: WORKFLOWS
• from book to e-book
◦ digitizing a book on a book scanner
◦ removing DRM and converting e-book formats (see the sketch after this list)
• from clutter to catalogue
◦ managing an e-book library with Calibre
◦ finding e-books and articles in online libraries
• from reference to bibliography
◦ annotating in an e-book reader device or application
◦ creating a scholarly bibliography in Zotero
• from block device to network device
◦ sharing your e-book library on a local network to a reading device
◦ sharing your e-book library on the internet with [let’s share books]
• from private to public IP space
◦ using [let’s share books] & library.memoryoftheworld.org
◦ using logan & jessica
◦ using Science Hub
◦ using Tor
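Much of this workflow can be driven from the command line. As a minimal sketch of the
format-conversion step, the following Python fragment shells out to Calibre's ebook-convert
tool; it assumes Calibre is installed and on the PATH, and the file names are hypothetical.

```python
import subprocess

def convert(source, target):
    """Convert between e-book formats; Calibre infers them from the
    file extensions, so the same call covers EPUB, MOBI, PDF, etc."""
    subprocess.run(["ebook-convert", source, target], check=True)

# Hypothetical files: turn a scanned and OCR'd EPUB into a MOBI
# readable on an e-ink device.
convert("scanned-book.epub", "scanned-book.mobi")
```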

MODULE 2: POLITICS/TACTICS
• from developmental subordination to subaltern disobedience
◦ uneven development & political strategies
◦ strategies of the developed v strategies of the underdeveloped: open access v piracy
• from property to commons
◦ from property to commons
◦ copyright, scientific publishing, open access
◦ shadow libraries, piracy, custodians.online
• from collection to collective action
◦ critical pedagogy & education
◦ archive, activation & collective action

MODULE 3: ABSTRACTIONS IN ACTION
• from linear to computational
◦ library & epistemology: catalogue, search, discovery, reference
◦ print book v e-book: page, margin, spine
• from central to distributed
◦ deep librarianship & amateur librarians
◦ network infrastructure(s)/topologies (ruling class studies)
• from factual to fantastic
◦ universe as library as universe

READING LIST
• Mars, Marcell; Vladimir, Klemo. Download & How to: Calibre & [let’s share books]. Memory of the World (2014) https://www.memoryoftheworld.org/blog/2014/10/28/calibre-lets-share-books/
• Buringh, Eltjo; Van Zanden, Jan Luiten. Charting the “Rise of the West”: Manuscripts and Printed Books in Europe, A Long-Term Perspective from the Sixth through Eighteenth Centuries. The Journal of Economic History (2009) http://journals.cambridge.org/article_S0022050709000837
• Mattern, Shannon. Library as Infrastructure. Places Journal (2014) https://placesjournal.org/article/library-as-infrastructure/
• Antonić, Voja. Our beloved bookscanner. Memory of the World (2012) https://www.memoryoftheworld.org/blog/2012/10/28/our-beloved-bookscanner-2/
• Medak, Tomislav; Sekulić, Dubravka; Mertens, An. How to: Bookscanning. Memory of the World (2014) https://www.memoryoftheworld.org/blog/2014/12/08/how-to-bookscanning/
• Barok, Dusan. Talks/Public Library. Monoskop (2015) http://monoskop.org/Talks/Public_Library
• Custodians.online. In Solidarity with Library Genesis and Science Hub (2015) http://custodians.online
• Battles, Matthew. Library: An Unquiet History. Random House (2014)
• Harris, Michael H. History of Libraries of the Western World. Scarecrow Press (1999)
• MayDay Rooms. Activation (2015) http://maydayrooms.org/activation/
• Krajewski, Markus. Paper Machines: About Cards & Catalogs, 1548-1929. MIT Press (2011) https://library.memoryoftheworld.org/b/PaRC3gldHrZ3MuNPXyrh1hM1meyyaqvhaWlHTvr53NRjJ2k
For updates: https://www.zotero.org/groups/amateur_librarian__a_course_in_critical_pedagogy_reading_list
Last Revision: 1·08·2016

1. For an economic history of the book in the Western Europe see Eltjo Buringh and Jan Luiten Van Zanden, “Charting the ‘Rise
of the West’: Manuscripts and Printed Books in Europe, A Long-Term Perspective from the Sixth through Eighteenth
Centuries,” The Journal of Economic History 69, No. 02 (June 2009): 409–45, doi:10.1017/S0022050709000837,
particularly Tables 1-5.
2. For the social history of public library see Matthew Battles, Library: An Unquiet History (Random House, 2014) chapter 5:
“Books for all”.
3. For this concept we remain indebted to the curatorial collective What, How and for Whom/WHW, who have presented the
work of Public Library within the exhibition Really Useful Knowledge they organized at Museo Reina Sofía in Madrid,
October 29, 2014 – February 9, 2015.
4. “Society for the Diffusion of Useful Knowledge,” Wikipedia, the Free Encyclopedia, June 25, 2015, https://en.wikipedia.org/w/index.php?title=Society_for_the_Diffusion_of_Useful_Knowledge&oldid=668644340.

5. Richard Johnson, “Really Useful Knowledge,” in CCCS Selected Working Papers: Volume 1, 1 edition, vol. 1 (London u.a.:
Routledge, 2014), 755.
6. Ibid., 752.
7. http://calibre-ebook.com/
8. https://www.memoryoftheworld.org/blog/2014/10/28/calibre-lets-share-books/
9. Henry A. Giroux, On Critical Pedagogy (Bloomsbury Academic, 2011), 5.


A bag but is language nothing of words
(language is nothing but a bag of words)
MICHAEL MURTAUGH

In text indexing and other machine reading applications the term "bag of
words" is frequently used to underscore how processing algorithms often
represent text using a data structure (word histograms or weighted vectors)
where the original order of the words in sentence form is stripped away. While
"bag of words" might well serve as a cautionary reminder to programmers of
the essential violence perpetrated to a text and a call to critically question the
efficacy of methods based on subsequent transformations, the expression's use
seems in practice more like a badge of pride or a schoolyard taunt that would
go: Hey language: you're nothin' but a big BAG-OF-WORDS.
BAG OF WORDS

In information retrieval and other so-called machine-reading applications (such as text
indexing for web search engines) the term "bag of words" is used to underscore how in the
course of processing a text the original order of the words in sentence form is stripped away.
The resulting representation is then a collection of each unique word used in the text,
typically weighted by the number of times the word occurs.
Bags of words, also known as word histograms or weighted term vectors, are a standard part
of the data engineer's toolkit. But why such a drastic transformation? The utility of "bag of
words" is in how it makes text amenable to code, first in that it's very straightforward to
implement the translation from a text document to a bag of words representation. More


significantly, this transformation then opens up a wide collection of tools and techniques for
further transformation and analysis purposes. For instance, a number of libraries available in
the booming field of "data sciences" work with "high dimension" vectors; bag of words is a
way to transform a written document into a mathematical vector where each "dimension"
corresponds to the (relative) quantity of each unique word. While physically unimaginable
and abstract (imagine each of Shakespeare's works as points in a 14 million dimensional
space), from a formal mathematical perspective, it's quite a comfortable idea, and many
complementary techniques (such as principal component analysis) exist to reduce the
resulting complexity.
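As a concrete illustration, here is a minimal Python sketch of the transformation, using only
the standard library (the sample sentence is invented): a text becomes a histogram of its
words, and the histogram becomes a vector over the document's vocabulary.

```python
import re
from collections import Counter

def bag_of_words(text):
    # Tokenize crudely and count: the order of the words is discarded.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words)

doc = "The dog chases the cat. The cat sleeps."
bag = bag_of_words(doc)
print(bag)   # Counter({'the': 3, 'cat': 2, 'dog': 1, 'chases': 1, 'sleeps': 1})

# The same bag as a vector: one dimension per unique word.
vocabulary = sorted(bag)            # ['cat', 'chases', 'dog', 'sleeps', 'the']
vector = [bag[w] for w in vocabulary]
print(vector)                       # [2, 1, 1, 1, 3]
```

Note that the vector can no longer say whether the dog chased the cat or the other way
around, which is exactly the irreversibility discussed next.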
What's striking about a bag of words representation, given its centrality in so many text
retrieval applications, is its irreversibility. Given a bag of words representation of a text,
reproducing the original would require in essence the "brain" of a writer to recompose
sentences, working with the patience of a devoted cryptogram puzzler to draw from the
precise stock of available words. While "bag of words" might well serve as a cautionary
reminder to programmers of the essential violence perpetrated to a text and a call to
critically question the efficacy of methods based on subsequent transformations, the
expression's use seems in practice more like a badge of pride or a schoolyard taunt that
would go: Hey language: you're nothing but a big BAG-OF-WORDS. Following this spirit
of the term, "bag of words" celebrates a perfunctory step of "breaking" a text into a purer
form amenable to computation, of stripping language of its silly redundant repetitions and
foolishly contrived stylistic phrasings to reveal a purer inner essence.
BOOK OF WORDS

Lieber's Standard Telegraphic Code, first published in 1896 and republished in various
updated editions through the early 1900s, is an example of one of several competing systems
of telegraph code books. The idea was for both senders and receivers of telegraph messages
to use the books to translate their messages into a sequence of code words which can then be
sent for less money, as telegraph messages were paid for by the word. In the front of the
book, a list of examples gives a sampling of how a message like "Have bought for your
account 400 bales of cotton, March delivery, at 8.34" can be conveyed by the telegram
"Ciotola, Delaboravi". In each case the reduction in the number of transmitted words is
highlighted to underscore the efficacy of the method. Like a dictionary or thesaurus, the book
is primarily organized around key words, such as act, advice, affairs, bags, bail, and bales,
under which exhaustive lists of useful phrases involving the corresponding word are provided
in the main pages of the volume. [1]
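In computational terms a code book is simply a lookup table, and its "compression" is
substitution. A toy Python sketch follows; the phrase/codeword pairing is adapted from
Lieber's example telegram above, but the exact split between the two entries is an
assumption made for illustration.

```python
# A toy model of a telegraph code book as a phrase-to-codeword mapping.
CODE_BOOK = {
    "Have bought for your account 400 bales of cotton": "Ciotola",
    "March delivery, at 8.34": "Delaboravi",
}
DECODE_BOOK = {code: phrase for phrase, code in CODE_BOOK.items()}

def encode(phrases):
    """Compress a message to code words (telegrams were billed per word)."""
    return " ".join(CODE_BOOK[p] for p in phrases)

def decode(telegram):
    return "; ".join(DECODE_BOOK[w] for w in telegram.split())

message = ["Have bought for your account 400 bales of cotton",
           "March delivery, at 8.34"]
telegram = encode(message)
print(telegram)          # Ciotola Delaboravi  -- two billable words
print(decode(telegram))  # the full phrases, recovered from the shared book
```

Billed by the word, the two code words stand in for a message of more than a dozen.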


[...] my focus in this chapter is on the inscription technology that grew parasitically
alongside the monopolistic pricing strategies of telegraph companies: telegraph code
books. Constructed under the bywords “economy,” “secrecy,” and “simplicity,”
telegraph code books matched phrases and words with code letters or numbers. The
idea was to use a single code word instead of an entire phrase, thus saving money by
serving as an information compression technology. Generally economy won out over
secrecy, but in specialized cases, secrecy was also important.[2]

In Katherine Hayles' chapter devoted to telegraph code books she observes how:
The interaction between code and language shows a steady movement away from a
human-centric view of code toward a machine-centric view, thus anticipating the
development of full-fledged machine codes with the digital computer.[3]

Aspects of this transitional moment are apparent in a notice prominently inserted in
Lieber's code book:
After July, 1904, all combinations of letters that do not exceed ten will pass as one
cipher word, provided that it is pronounceable, or that it is taken from the following
languages: English, French, German, Dutch, Spanish, Portuguese or Latin.
- International Telegraphic Conference, July 1903[4]

Conforming to international conventions regulating telegraph communication at that time, the
stipulation that code words be actual words drawn from a variety of European languages
(many of Lieber's code words are indeed arbitrary Dutch, German, and Spanish words)
underscores this particular moment of transition as reference to the human body in the form
of "pronounceable" speech from representative languages begins to yield to the inherent
potential for arbitrariness in digital representation.
What telegraph code books remind us of is the relation of language in general to
economy. Whether in economies of memory, attention, costs paid to a
telecommunications company, or in terms of computer processing time or storage space,
encoding language or knowledge in any form of writing is a form of shorthand and always
involves an interplay with what one expects to perform or "get out" of the resulting encoding.
Along with the invention of telegraphic codes comes a paradox that John Guillory has
noted: code can be used both to clarify and occlude. Among the sedimented structures
in the technological unconscious is the dream of a universal language. Uniting the
world in networks of communication that flashed faster than ever before, telegraphy
was particularly suited to the idea that intercultural communication could become
almost effortless. In this utopian vision, the effects of continuous reciprocal causality
expand to global proportions capable of radically transforming the conditions of human
life. That these dreams were never realized seems, in retrospect, inevitable.[5]


Far from providing a universal system of encoding messages in the English language,
Lieber's code is quite clearly designed for the particular needs and conditions of its use. In
addition to the phrases ordered by keywords, the book includes a number of tables of terms
for specialized use. One table lists a set of words used to describe all possible permutations of
numeric grades of coffee (Choliam = 3,4, Choliambos = 3,4,5, Choliba = 4,5, etc.); another
table lists pairs of code words to express the respective daily rise or fall of the price of coffee
at the port of Le Havre in increments of a quarter of a Franc per 50 kilos ("Chirriado =
prices have advanced 1 1/4 francs"). From an archaeological perspective, the Lieber's code
book reveals a cross section of the needs and desires of early 20th century business
communication between the United States and its trading partners.
The advertisements lining the Liebers Code book further situate its use and that of
commercial telegraphy. Among the many advertisements for banking and law services, office
equipment, and alcohol are several ads for gun powder and explosives, drilling equipment
and metallurgic services all with specific applications to mining. Extending telegraphy's
formative role for ship-to-shore and ship-to-ship communication for reasons of safety,
commercial telegraphy extended this network of communication to include those parties
coordinating the "raw materials" being mined, grown, or otherwise extracted from overseas
sources and shipped back for sale.

"RAW DATA NOW!"
Tim Berners-Lee: [...] Make a beautiful website, but
first give us the unadulterated data, we want the data.
We want unadulterated data. OK, we have to ask for
raw data now. And I'm going to ask you to practice
that, OK? Can you say "raw"?
Audience: Raw.
Tim Berners-Lee: Can you say "data"?
Audience: Data.
TBL: Can you say "now"?
Audience: Now!
TBL: Alright, "raw data now"!
[...]

From The Smart City - City of
Knowledge:
As new modernist forms and use of
materials propagated the abundance
of decorative elements, Otlet believed
in the possibility of language as a
model of 'raw data', reducing it to
essential information and
unambiguous facts, while removing all
inefficient assets of ambiguity or
subjectivity.

So, we're at the stage now where we have to do this -- the people who think it's a great
idea. And all the people -- and I think there's a lot of people at TED who do things
because -- even though there's not an immediate return on the investment because it will
only really pay off when everybody else has done it -- they'll do it because they're the
sort of person who just does things which would be good if everybody else did them.
OK, so it's called linked data. I want you to make it. I want you to demand it.[6]
UN/STRUCTURED

As graduate students at Stanford, Sergey Brin and Lawrence (Larry) Page had an early
interest in producing "structured data" from the "unstructured" web. [7]
The World Wide Web provides a vast source of information of almost all types,
ranging from DNA databases to resumes to lists of favorite restaurants. However, this
information is often scattered among many web servers and hosts, using many different
formats. If these chunks of information could be extracted from the World Wide Web
and integrated into a structured form, they would form an unprecedented source of
information. It would include the largest international directory of people, the largest
and most diverse databases of products, the greatest bibliography of academic works,
and many other useful resources. [...]


2.1 The Problem
Here we define our problem more formally:
Let D be a large database of unstructured information such as the World Wide Web [...][8]

In a paper titled Dynamic Data Mining, Brin and Page situate their research as looking for
rules (statistical correlations) between words used in web pages. The "baskets" they mention
stem from the origins of "market basket" techniques developed to find correlations between
the items recorded in the purchase receipts of supermarket customers. In their case, they deal
with web pages rather than shopping baskets, and words instead of purchases. In transitioning
to the much larger scale of the web, they describe the usefulness of their research in terms of
its computational economy: the ability to tackle the scale of the web and still perform, using
contemporary computing power, completing the task in a reasonably short amount of time.
A traditional algorithm could not compute the large itemsets in the lifetime of the
universe. [...] Yet many data sets are difficult to mine because they have many
frequently occurring items, complex relationships between the items, and a large
number of items per basket. In this paper we experiment with word usage in documents
on the World Wide Web (see Section 4.2 for details about this data set). This data set
is fundamentally different from a supermarket data set. Each document has roughly
150 distinct words on average, as compared to roughly 10 items for cash register
transactions. We restrict ourselves to a subset of about 24 million documents from the
web. This set of documents contains over 14 million distinct words, with tens of
thousands of them occurring above a reasonable support threshold. Very many sets of
these words are highly correlated and occur often.[9]
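The "market basket" idea itself is straightforward to sketch, even though at web scale it
requires the sampling strategies the paper is actually about. Below is a toy Python version
with invented "documents" standing in for web pages: each document becomes a basket of
its distinct words, and word pairs are counted across baskets to find those whose "support"
passes a threshold. The brute-force pair enumeration is exactly what stops working at 14
million distinct words.

```python
from collections import Counter
from itertools import combinations

# Invented stand-ins for web pages.
documents = [
    "coffee prices advanced one franc at le havre",
    "coffee prices declined at le havre",
    "cotton prices advanced in new york",
]

# Each document becomes a "basket": its set of distinct words.
baskets = [set(doc.split()) for doc in documents]

# Support of a word pair = number of baskets containing both words.
pair_counts = Counter()
for basket in baskets:
    pair_counts.update(combinations(sorted(basket), 2))

# Keep only the pairs above a minimal support threshold.
MIN_SUPPORT = 2
frequent = {pair: n for pair, n in pair_counts.items() if n >= MIN_SUPPORT}
print(frequent)   # e.g. {('coffee', 'prices'): 2, ('havre', 'le'): 2, ...}
```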
UN/ORDERED

In programming, I've encountered a recurring "problem" that's quite symptomatic. It goes
something like this: you (the programmer) have managed to cobble out a lovely "content
management system" (either from scratch, or using any number of helpful frameworks)
where your user can enter some "items" into a database, for instance to store bookmarks.
The items are then automatically presented in list form (say on a web page). The author:
It's great, except... could this bookmark come before that one? The problem stems from
the fact that the database ordering (a core functionality provided by any database) somehow
applies a sorting logic that's almost but not quite right. A typical example is the sorting of
names, where the details (where to place a name that starts with a Norwegian "Ø" for
instance) are language-specific, and when a mixture of languages occurs, no single ordering
is necessarily "correct". The (often) exasperated programmer might hastily add an
additional database field so that each item can also have an "order" (perhaps in the form of
a date or some other kind of (alpha)numerical "sorting" value) to be used to correctly order
the resulting list. Now the author has a means, awkward and indirect but workable, to
control the order of the presented data on the start page. But one might well ask, why not
just edit the resulting listing as a document? Not possible! Contemporary content
management systems are based on a data flow from a "pure" source of a database, through
controlling code and templates, to produce a document as a result. The document isn't the
data, it's the end result of an irreversible process. This problem, in this and many variants, is
widespread and reveals an essential backwardness in a particular "computer scientist"
mindset about what constitutes "data", and in particular about its relationship to order, that
turns what might be a straightforward question of editing a document into an over-engineered
database.
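The described workaround is easy to make concrete. A minimal Python sketch, with
hypothetical field names: code-point sorting gives one "almost right" order, and an explicit,
author-controlled sort key gives another.

```python
# Code-point sorting is "almost but not quite right": whether "Øst" belongs
# after "Paris" depends on language-specific collation rules, not on Unicode.
print(sorted(["Øst", "Oslo", "Paris"]))   # ['Oslo', 'Paris', 'Øst']

# The workaround described above: an extra field, added only so that the
# author can dictate presentation order. (Field names are hypothetical.)
bookmarks = [
    {"title": "Monoskop",            "sort_key": 2},
    {"title": "Memory of the World", "sort_key": 1},
    {"title": "UbuWeb",              "sort_key": 3},
]

# The database-style default: alphabetical by title.
print([b["title"] for b in sorted(bookmarks, key=lambda b: b["title"])])

# The author's order, routed awkwardly through the explicit key.
print([b["title"] for b in sorted(bookmarks, key=lambda b: b["sort_key"])])
```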
Nikolaos Vogiatzis, whose research explores playful and radically subjective alternatives to
the list, was recently struck by how the earliest specifications of HTML (still valid today)
have separate elements (OL and UL) for "ordered" and "unordered" lists.
The representation of the list is not defined here, but a bulleted list for unordered lists,
and a sequence of numbered paragraphs for an ordered list would be quite appropriate.
Other possibilities for interactive display include embedded scrollable browse panels.[10]

Vogiatzis' surprise lay in the idea of a list ever being considered "unordered" (or in
opposition to the language used in the specification, for order to ever be considered
"insignificant"). Indeed in its suggested representation, still followed by modern web
browsers, the only difference between the two visually is that UL items are preceded by a
bullet symbol, while OL items are numbered.
The idea of ordering runs deep in programming practice where essentially different data
structures are employed depending on whether order is to be maintained. The indexes of a
"hash" table, for instance (also known as an associative array), are ordered in an
unpredictable way governed by a representation's particular implementation. This data
structure, extremely prevalent in contemporary programming practice, sacrifices order to
offer other kinds of efficiency (fast key-based retrieval, for instance).
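A short demonstration of that trade-off in Python, assuming nothing beyond the standard
library (Python's modern dict now preserves insertion order, so the set, a plain hash table,
makes the point more directly):

```python
items = ["delta", "alpha", "charlie", "bravo"]

ordered = list(items)   # a list preserves the sequence as entered
hashed = set(items)     # a set stores items by hash, promising no sequence

print(ordered)            # always ['delta', 'alpha', 'charlie', 'bravo']
print(hashed)             # a hashing-determined order that can vary per run
print("alpha" in hashed)  # True -- fast lookup, bought by giving up order
```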
DATA MINING

In announcing Google's impending data center in Mons, Belgian prime minister Di Rupo
invoked the link between the history of the mining industry in the region and the present and
future interest in "data mining" as practiced by IT companies such as Google.
Whether speaking of bales of cotton, barrels of oil, or bags of words, what links these materials
is the way in which the notion of "raw material" obscures the labor and power structures
employed to secure them. "Raw" is always relative: "purity" depends on processes of
"refinement" that typically carry social/ecological impact.


Stripping language of order is an act of "disembodiment", detaching it from the acts of writing
and reading. The shift from (human) reading to machine reading involves a shift of
responsibility from the individual human body to the obscured responsibilities and seemingly
inevitable forces of the "machine", be it the machine of a market or the machine of an
algorithm.

The computer scientist's view of textual content as "unstructured", be it in a webpage or the
OCR-scanned pages of a book, reflects a neglect of the processes and labor of writing,
editing, design, layout, typesetting, and eventually publishing, collecting and cataloging [11].

From X = Y: Still, it is reassuring to know that the products hold traces of the work, that
even with the progressive removal of human signs in automated processes, the workers'
presence never disappears completely. This presence is proof of the materiality of
information production, and becomes a sign of the economies and paradigms of efficiency
and profitability that are involved.

"Unstructured" to the computer scientist, means nonconformant to particular forms of machine reading.
"Structuring" then is a social process by which particular
(additional) conventions are agreed upon and employed.
Computer scientists often view text through the eyes of
their particular reading algorithm, and in the process
(voluntarily) blind themselves to the work practices which have produced and maintain these
"resources".
Berners-Lee, in chastising his audience of web publishers to not only publish online, but to
release "unadulterated" data belies a lack of imagination in considering how language is itself
structured and a blindness to the need for more than additional technical standards to connect
to existing publishing practices.
Last Revision: 2·08·2016

1. Benjamin Franklin Lieber, Lieber's Standard Telegraphic Code, 1896, New York; https://archive.org/details/standardtelegrap00liebuoft
2. Katherine Hayles, "Technogenesis in Action: Telegraph Code Books and the Place of the Human", How We Think: Digital Media and Contemporary Technogenesis, 2006
3. Hayles
4. Lieber's
5. Hayles
6. Tim Berners-Lee: The next web, TED Talk, February 2009, http://www.ted.com/talks/tim_berners_lee_on_the_next_web/transcript?language=en
7. "Research on the Web seems to be fashionable these days and I guess I'm no exception." From Brin's Stanford webpage.
8. Sergey Brin, Extracting Patterns and Relations from the World Wide Web, Proceedings of the WebDB Workshop at EDBT 1998, http://www-db.stanford.edu/~sergey/extract.ps
9. Sergey Brin and Lawrence Page, Dynamic Data Mining: Exploring Large Rule Spaces by Sampling, 1998, p. 2, http://ilpubs.stanford.edu:8090/424/
10. Tim Berners-Lee and Daniel Connolly, Hypertext Markup Language (HTML): "Internet Draft", June 1993, http://www.w3.org/MarkUp/draft-ietf-iiir-html-01.txt
11. http://informationobservatory.info/2015/10/27/google-books-fair-use-or-anti-democratic-preemption/#more-279

A Book of the Web
DUSAN BAROK

Is there a vital difference between publishing in print versus online, other than
reaching different groups of readers and having a different lifespan? Both types of texts
are worth considering for preservation in libraries. The online environment has
created its own hybrid form between text and library, which is key to
understanding how digital text produces difference.
Historically, we have been treating texts as discrete units that are distinguished by their
material properties such as cover, binding, script. These characteristics establish them as
either a book, a magazine, a diary, sheet music and so on. One book differs from another,
books differ from magazines, printed matter differs from handwritten manuscripts. Each
volume is a self-contained whole, further distinguished by descriptors such as title, author,
date, publisher, and classification codes that allow it to be located and referred to. The
demarcation of a publication as a container of text works as a frame or boundary which
organises the way it can be located and read. Researching a particular subject matter, the
reader is carried along by classification schemes under which volumes are organised, by
references inside texts, pointing to yet other volumes, and by tables of contents and indexes of
subjects that are appended to texts, pointing to places within that volume.
So while their material properties separate texts into distinct objects, bibliographic information
provides each object with a unique identifier, a unique address in the world of print culture.
Such identifiable objects are further replicated and distributed across containers that we call
libraries, where they can be accessed.
The online environment, however, intervenes in this condition. It establishes shortcuts.
Through search engines, digital texts can be searched for any text sequence, regardless of
their distinct materiality and bibliographic specificity. This changes the way texts function as
a library, and the way the library's main object, the book, should be rethought.
(1) Rather than operating as distinct entities, multiple texts are simultaneously accessible
through full-text search, as if they were one long text with its portions spread across the
web, including texts that had not been considered as candidates for library collections.
(2) The unique identifier at hand for these text portions is not the bibliographic
information, but the URL.
(3) The text is as long as the web-crawlers of a given search engine are set to reach,
refashioning the library into a storage of indexed data.

These are some of the lines along which online texts appear to produce difference. The first
contrasts the distinct printed publication to the machine-readable text, the second the
bibliographic information to the URL, and the third the library to the search engine.
The introduction of full-text search has created an environment in which all machine-readable
online documents in reach are effectively treated as one single document. For any
text-sequence to be locatable, it doesn't matter in which file format it appears, nor whether
its interface is a database-powered website or a mere directory listing. As long as text can
be extracted from a document, that document is a container of text sequences which is itself
a sequence in a 'book' of the web.
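
A minimal sketch of this flattening (the documents and URLs below are invented): an inverted index treats whatever text it can extract, from whatever source or format, as one searchable corpus addressed by URL.

```python
from collections import defaultdict

# Invented sample 'documents': any extractable text, addressed by URL.
documents = {
    "http://example.org/essay": "the library of the web is one long text",
    "http://example.org/forum/post42": "has anyone scanned this library book",
    "gopher://old.example.net/readme": "plain text notes about the web",
}

# Map each token to the set of URLs containing it.
index = defaultdict(set)
for url, text in documents.items():
    for token in text.lower().split():
        index[token].add(url)

# Any text sequence is now locatable by URL rather than by bibliographic record.
print(sorted(index["library"]))
# ['http://example.org/essay', 'http://example.org/forum/post42']
print(sorted(index["web"]))
# ['gopher://old.example.net/readme', 'http://example.org/essay']
```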
Even though this is hardly news after almost two decades of Google Search's rule, little
seems to have changed with respect to the forms and genres of writing. Loyal to standard
forms of publishing, most writing still adheres to the principle of coherence, based on units
such as book chapters, journal papers, newspaper articles, etc., that are designed to be read
from beginning to end.

From Voor elk boek is een gebruiker: FS: But it is also about the way you offer access,
the library as an interface? Online, you now leave that to Google. SVP: Access is no longer
a matter of "this institution has this, that institution has something else"; all those institutions
can be reached through the same interface. You can search across all those collections, and
that again is a piece of the original dream of Otlet and Vander Haeghen, the idea of a world
library. For every book there is a user; the library just has to go and find that user. What I
find intriguing is that all books have become one book because they are searchable at the
same level; that is incredibly exciting. It is a different way of reading that even Otlet could
not have imagined. They would go mad if they knew about this.

Still, the scope of textual forms appearing in search results, and thus the corpus of texts
into which they are brought, is radically diversified: it may include discussion board
comments, product reviews, private emails, weather information, spam, etc., the type of
content that used to be omitted from library collections. Rather than being published in a
traditional sense, all these texts are produced onto digital networks by mere typing, copying
or OCR-ing, or are generated by machines, by sensors tracking movement, temperature, etc.
Even though portions of these texts may come with human or non-human authors attached,
authors have relatively little control over the discourses their writing gets embedded in. This
is also where the ambiguity of copyright manifests itself. Crawling bots pre-read the internet
with all its attached devices according to the agendas of their maintainers, and the decisions
about which indexed texts are served in search results, how, and to whom, reside in the
code of a library.
Libraries in this sense are not restricted to digitised versions of physical public or private
libraries as we know them from history. Commercial search engines, intelligence agencies,
and virtually all forms of online text collections can be thought of as libraries.
Acquisition policies figure here on the same level as crawling bots, dragnet/surveillance
algorithms, and the arbitrary motivations of users, all of which actuate the selection and
embedding of texts into structures that regulate their retrievability and, through access
control, produce certain kinds of communities or groups of readers. The author's intention
of partaking in this or that discourse is confronted by the discourse-conditioning operations
of retrieval algorithms. Hence, Google structures discourse through its Google Search
differently from how the Internet Archive does with its Wayback Machine, and from how
GCHQ does with its dragnet programme.
They are all libraries, each containing a single 'book' whose pages are URLs with
timestamps and geostamps in the form of IP addresses. Google, GCHQ, JSTOR, Elsevier –
each maintains its own searchable corpus of texts. The decisions about who is to be
admitted, to which sections, and under which conditions, are informed by a mix of copyright
laws, corporate agendas, management hierarchies, and national security issues. The various
sets of these conditions at work in a particular library also redefine the notion of publishing
and of the publication, and in turn the notion of the public. Corporate journal repositories
exploit publicly funded research by renting it only to libraries which can afford it;
intelligence agencies are set to extract texts from any moving target, basically any
networked device, apparently in the public interest and away from the public eye; publicly
funded libraries are prevented by outdated copyright laws and bureaucracy from providing
digitised content online; search engines create a sense of giving access to all public record
online, while only a few know what is excluded and how search results are ordered.

From Amateur Librarian - A Course in Critical Pedagogy: As books became more easily
mass-produced, the commercial subscription libraries catering to the better-off parts of
society blossomed. This brought the class aspect of the nascent demand for public access
to books to the fore.

From Bibliothécaire amateur - un cours de pédagogie critique: Puisqu'il était de plus en
plus facile de produire des livres en masse, les bibliothèques privées payantes, au service
des catégories privilégiées de la société, ont commencé à se répandre. Ce phénomène a mis
en relief la question de la classe dans la demande naissante pour un accès public aux livres.

It is within and against this milieu that libraries such as the Internet Archive, Wikileaks,
Aaaaarg, UbuWeb, Monoskop, Memory of the World, Nettime, TheNextLayer and others
gain their political agency. Their counter-techniques for negotiating the publicness of
publishing include self-archiving, open access, book liberation, leaking, whistleblowing,
open source search algorithms and so on.

Digitization and posting texts online are interventions in the procedures that make search
possible. Operating online collections of texts is as much about organising texts within
libraries as it is about placing them within books of the web.

Originally written 15-16 June 2015 in Prague, Brno and Vienna for a talk given at the
Technopolitics seminar in Vienna on 16 June 2015. Revised 29 December 2015 in Bergen.
Last Revision: 1·08·2016

The Indexalist
MATTHEW FULLER

I first spoke to the patient in the last week of that August. That evening the sun was tender in
drawing its shadows across the lines of his face. The eyes gazed softly into a close middle
distance, as if composing a line upon a translucent page hung in the middle of the air, the
hands tapping out a stanza or two of music on legs covered by the brown folds of a towelling
dressing gown. He had the air of someone who had seen something of great amazement but
yet lacked the means to put it into language. As I got to know the patient over the next few
weeks I learned that this was not for the want of effort.
In his youth he had dabbled with the world-speak
language Volapük, one designed to do away with the
incompatibility of tongues, to establish a standard in
which scientific intercourse might be conducted with
maximum efficiency and with minimal friction in
movement between minds, laboratories and publications.
Latin biological names, the magnificent table of elements,
metric units of measurement, the nomenclature of celestial
objects from clouds to planets, anatomical parts and
medical conditions all had their own systems of naming
beyond any specific tongue. This was an attempt to bring
reason into speech and record, but there were other
means to do so when reality resisted these early
measures.

The dabbling, he reflected, had become a little more than
that. He had subscribed to journals in the language, he
wrote letters to colleagues and received them in return. A
few words of world-speak remained readily on his tongue, words that he spat out regularly
into the yellow-wallpapered lounge of the sanatorium with a disgust that was lugubriously
palpable.
According to my records, and in piecing together the notes of previous doctors, there was
something else however, something more profound that the language only hinted at. Just as
the postal system did not require the adoption of any language in particular but had its
formats that integrated them into addressee, address line, postal town and country, something
that organised the span of the earth, so there was a sense of the patient as having sustained
an encounter with a fundamental form of organisation that mapped out his soul. More thrilling
than the question of language indeed was that of the system of organisation upon which
linguistic symbols are inscribed. I present for the reader’s contemplation some statements
typical of those he seemed to mull over.
“The index card system spoke to my soul. Suffice it to say that in its use I enjoyed the
highest form of spiritual pleasure, and organisational efficiency, a profound flowering of
intellect in which every thought moved between its enunciation, evidence, reference and
articulation in a mellifluous flow of ideation and the gratification of curiosity.” This sense of
the soul as a roving enquiry moving across eras, across forms of knowledge and through the
serried landscapes of the vast planet and cosmos was returned to over and over, a sense that
an inexplicable force was within him yet always escaping his touch.
“At every reference stood another reference, each more
interesting than the last. Each the apex of a pyramid of
further reading, pregnant with the threat of digression,
each a thin high wire which, if not observed might lead
the author into the fall of error, a finding already found
against and written up.” He mentions too, a number of
times, the way the furniture seemed to assist his thoughts
- the ease of reference implied by the way in which the
desk aligned with the text resting upon the pages of the
off-print, journal, newspaper, blueprint or book above
which further drawers of cards stood ready in their
cabinet. All were integrated into the system. And yet,
amidst these frenetic recollections there was a note of
mourning in his contemplative moods, “The superposition
of all planes of enquiry and of thought in one system
repels those for whom such harmonious speed is
suspicious.” This thought was delivered with a stare that
was not exactly one of accusation, but that lingered with
the impression that there was a further statement to follow
it, and another, queued up ready to follow.

As I gained the trust of the patient, there was a sense in
which he estimated me as something of a junior
collaborator, a clerk to his natural role as manager. A
lucky, if slightly doubtful, young man whom he might
mentor into efficiency and a state of full access to
information. For his world, there was not the corruption and tiredness of the old methods.
Ideas moved faster in his mind than they might now across the world. To possess a register of

thoughts covering a period of some years is to have an asset, the value of which is almost
incalculable. That it can answer any question respecting any thought about which one has
had an enquiry is but the smallest of its merits. More important is the fact that it continually
calls attention to matters requiring such attention.
Much of his discourse was about the optimum means of arrangement of the system; there
was an art to laying out the cards. As the patient further explained, to meet the objection that
loose cards may easily be mislaid, cards may be tabbed with numbers from one to ten. When
arranged in the drawer, these tabs proceed from left to right across the drawer and the
absence of a single card can thus easily be detected. The cards are further arranged between
coloured guide cards. As an alternative to tabbed cards, signal flags may be used: here,
metal clips may be attached to the top end of the card so that they stand out like guides. For use
of the system in relation to dates of the month, the card is printed with the numbers 1 to 31
at the top. The metal clip is placed as a signal to indicate the card is to receive attention on
the specified day. Within a large organisation a further card can be drawn up to assign
responsibility for processing that date’s cards. There were numerous means of working the
cards, special techniques for integrating them into any type of research or organisation, means
by which indexes operating on indexes could open mines of information and expand the
knowledge and capabilities of mankind.
As he pressed me further, I began to experiment with such methods myself by withdrawing
data from the sanatorium’s records and transferring it to cards in the night. The advantages of
the system are overwhelming. Cards, cut to the right mathematical degree of accuracy,
arrayed readily in drawers, set in cabinets of standard sizes that may be added to at ease,
may be apportioned out amongst any number of enquirers, all of whom may work on them
independently and simultaneously. The bound book, by contrast, may only be used by one
person at a time and that must stay upon a shelf itself referred to by an index card system. I
began to set up a structure of rows of mirrors on chains and pulleys and a set of levered and
hinged mechanical arms to allow me to open the drawers and to privately consult my files
from any location within the sanatorium. The clarity of the image is however so far too much
effaced by the diffusion of light across the system.
It must further be borne in mind that a system thus capable of indefinite expansion obviates
the necessity for hampering a researcher with furniture or appliances of a larger size than are
immediately required. The continuous and orderly sequence of the cards may be extended
further into the domain of furniture and to the conduct of business and daily life. Reasoning,
reference and the order of ideas emerging as they embrace and articulate a chaotic world and
then communicate amongst themselves turning the world in turn into something resembling
the process of thought in an endless process of consulting, rephrasing, adding and sorting.
For the patient, ideas flowed like a force of life, oblivious to any unnatural limitation. Thought
became, with the proper use of the system, part of the stream of life itself. Thought moved
through the cards not simply at the superficial level of the movement of fingers and the
mechanical sliding and bunching of cards, but at the most profound depths of the movement
between reality and our ideas of it. The organisational grace to be found in arrangement,
classification and indexing still stirred the remnants of his nervous system until the last day.
Last Revision: 2·08·2016


An experimental transcript
SÎNZIANA PĂLTINEANU

Note: The editor has had the good fortune of finding a whole box of
handwritten index cards and various folded papers (from printed screenshots to
boarding passes) in the storage space of an institute. Upon closer investigation,
it has become evident that the mixed contents of the box make up one single
document. Difficult to decipher due to messy handwriting, the manuscript
poses further challenges to the reader because its fragments lack a pre-established
order. Simply uploading high-quality facsimile images of the box
contents here would not solve the problems of legibility and coherence. As an
intermediary solution, the editor has opted to introduce below a selection of
scanned images and transcribed text from the found box. The transcript is
intended to be read as a document sample, as well as an attempt at manuscript
reconstruction, following the original in the author's hand as closely as possible:
pencilled in words in the otherwise black ink text are transcribed in brackets,
whereas curly braces signal erasures, peculiar marks or illegible parts on the
index cards. Despite shifts in handwriting styles, whereby letters sometimes
appear extremely rushed and distorted in multiple idiosyncratic ways, the
experts consulted unanimously declared that the manuscript was most likely
authored by one and the same person. To date, the author remains unknown.
Q

I've been running with a word in my mouth, running with this burning untitled shape, and I
just can't spit it out. Spit it with phlegm from a balcony, kiss it in a mirror, brush it away one
morning. I've been running with a word in my mouth, running...

… it must have been only last month that I began half-chanting-half-mumbling this looped
sequence of sentences on the staircase I regularly take down to work and back up to dream,
yet it feels as if it were half a century ago. Tunneling through my memory, my tongue begins
burning again and so I recollect that the subject matter was an agonizing, unutterable
obsession I needed to sort out most urgently. Back then I knew no better way than to keep
bringing it up obliquely until it would chemically dissolve itself into my blood or evaporate
through the pores of my skin. To whisper the obsession away, I thought not entirely so
naïvely, following a peculiar kind of vengeful logic, by emptying words of their pocket
contents on a spiraling staircase. An anti-incantation, a verbal overdose, a semantic dilution or
reduction – for the first time, I was ready to inflict harm on words! [And I am sure, the
thought has crossed other lucid minds, too.]
N

During the first several days, as I was rushing up and down the stairs like a Tasmanian devil,
swirling those same sentences in my expunction ritual, I hardly noticed that the brown
marbled staircase had a ravenous appetite for all my sound making and fuss: it cushioned the
clump of my footsteps, it absorbed the vibrations of my vocal cords and of my fingers
drumming on the handrail. All this unusual business must have carried on untroubled for
some time until that Wed. [?] morning when I tried approaching the employee at the
reception desk in the hideously large building where I live with a question about elevator
safety. I may take the elevator once in a blue moon, but I could not ignore the new
disquieting note I had been reading on all elevator doors that week:
m a k e / s u r e / t h e / e l e v a t o r / c a r / i s / s t a t i o n e d / o n / y o u r / f l
o o r / b e f o r e / s t e p p i n g / i n

T


Walking with a swagger, I entered the incandescent light field around the fancy semicircular,
brown reception desk, pressed down my palms on it, bent forward and from what I found to
be a comfortable inquiry angle, launched question mark after question mark: “Is everything
alright with the elevators? Do you know how worrisome I find the new warning on the
elevator doors? Has there been an accident? Or is this simply an insurance disclaimer-trick?”
Too many floors, too many times reading the same message against my will, must have
inflated my concern, so I breathed out the justification of my anxiety and waited for a
reassuring head shake to erase the imprint of the elevator shaft from my mind. Oddly, not the
faintest or most bored acknowledgment of my inquiry or presence came from across the desk.
From where I was standing, I performed a quick check to see if any cables came out of the
receptionist's ears. Nothing. Channels unobstructed, no ear mufflers, no micro-devices.
Suspicion eliminated, I waved at him, emitted a few other sounds – all to no avail. My
tunnel-visioned receptionist rolled his chair even closer to one of the many monitors under his
hooked gaze, his visual field now narrowed to a very acute angle, sheltered by his high desk.
How well I can still remember that at that exact moment I wished my face would turn into
the widest, most expensive screen, with an imperative, hairy ticker at the bottom –
h e y t o u c h m y s c r e e n m y m u s t a c h e s c r e e n e l e v a t o r t o u c h d o w n s
c r e a m

J

That's one of the first red flags I remember in this situation (here, really starting to come
across more or less as a story): a feeling of being silenced by the building I inhabited. [Or to
think about it the other way around: it's also plausible and less paranoid that upon hearing
my flash sentences the building manifested a sense of phonophobia and consequently
activated a strange defense mechanism. In any case, t]hat day, I had been forewarned, but I
failed to understand. As soon as I pushed the revolving door and left the building with a wry
smile [on my face], the traffic outside wolfed down the warning.
E

The day I resigned myself to those forces – and I assume, I had unleashed them upon myself
through my vengeful desire to hxxx {here, a 3-cm erasure} words until I could see carcass
after carcass roll down the stairs [truth be said, a practice that differed from other people's
doings only in my heightened degree of awareness, which entailed a partially malevolent but
perhaps understandable defensive strategy on my part] – that gloomy day, the burning
untitled shape I had been carrying in my mouth morphed into a permanent official of my
cavity – a word implant in my jaw! No longer do I feel pain on my tongue, only a tinge of
volcanic ash as an aftermath of this defeat.

U

I've been running with a word in my mouth, running with this burning untitled shape, and I
just can't spit it out. Spit it with phlegm from a balcony, kiss it in a mirror, brush it away one
morning. It has become my tooth, rooted in my nervous system. My word of mouth.
P

Since then, my present has turned into an obscure hole, and I can't climb out of it. Most of
the time, I'm sitting at the bottom of this narrow oubliette, teeth in knees, scribbling notes with
my body in a terribly twisted position. And when I'm not sitting, I'm forced to jump.
Agonizing thoughts numb my limbs so much so that I feel my legs turning to stone. On some
days I look up, terrified. I can't even make out whether the diffuse opening is egg- or square-shaped, but there's definitely a peculiar tic-tac sequence interspersed with neighs that my
pricked ears are picking up on. A sound umbrella, hovering somewhere up there, high above
my imploded horizon.
{illegible vertical lines resembling a bar code}
Hypotheses scanned and merged, I temporarily conclude that a horse-like creature with
metal intestines must be galloping round and round the hole I'm in. When I first noticed the
sound, its circular cadence was soft and unobtrusive, almost protective, but now the more laps
the clock-horse is running, the deeper the ticking and the neighing sounds are drilling into the
hole. I picture this as an ever rotating metal worm inside a mincing machine. If I point my
chin up, it bores through my throat!
B


What if, in returning to that red flag in my reconstructive undertaking [instead of “red flag”,
whose imperialist connotations strike me today, we cross it out and use “pyramid” to refer to
such potentially revealing frames, when intuitions {two words crossed out, but still legible:
seem to} give the alarm and converge before thoughts do], we posit that an elevator accident
occurred not long after my unanswered query at the High Reception Desk, and that I –
exceptionally – found myself in the elevator car that plummeted. Following this not entirely
bleak hypothesis, the oubliette I'm trapped in translates to an explainable state of blackout
and all the ticking and the drilling could easily find their counterparts in the host of medical
devices (and their noise-making) that support a comatose person. What if what I am
experiencing now is another kind of awareness, inside a coma, which will be gone once I
wake up in a few hours or days on a hospital bed, flowers by my side, someone crying / loud
as a horse / in the other corner of the room, next to a child's bed?
[Plausible as this scenario might be, it's still strange how the situation calls for reality-like
insertions to occur through “what if”s...]
H

Have I fallen into a lucid coma or am I a hallucination, made in 1941 out of gouache and
black pencil, paper, cardboard and purchased in 1966?
[To visualize the equation of my despair, the following elements are given: the above-whispered question escalates into a desperate shout and multiplies itself over a considerable
stretch of time at the expense of my vocal cords. After all, I am not made of black pencil or
cardboard or paper. Despite this conclusion, the effort has left me sulking for hours without
being able to scribble anything, overwhelmed by a sensation of being pinched and pulled
sideways by dark particles inside the mineral dampness of this open tomb. What's the use of
a vertical territory if you can't sniff it all the way up?]
{several overlapping thumbmarks in black ink, lower right corner}
W

/ one gorgeous whale \
my memory's biomorphic shadow
can anyone write in woodworm language?

how to teach the Cyrillic alphabet to woodworms?
how many hypotheses to /re-stabilize\ one's situation?
how many pyramids one on top of the other to the \coma/ surface?
the denser the pyramid net, the more confusing the situation. true/false\fiction

O

Hasty recordings of several escape attempts. A slew of tentacle-thoughts are rising towards
the ethereal opening and here I am / hopeful and unwashed \ just beneath a submundane
landscape of groping, shimmering arms, hungry to sense and to collect every memory detail in
an effort of sense making, to draw skin over hypotheses and hypotheses over bones. It might
be morning, it might be yesterday's morning out there or any other time in the past, when as I
cracked the door to my workplace, I entered my co-workers' question game and paraverbal
exchange:
Puckered lips open: “Listen, whose childhood dream was it to have one of their eye-bulbs
replaced with a micro fish-eye lens implant?” Knitted eyebrows: “Someone whose neural
pathways zigzagged phrenologist categories?” Microexpressionist: “How many semiotician-dentists and woodworm-writers have visited the Chaos Institute to date?” A ragged mane:
“The same number as the number of neurological tools for brain mapping that the Institute
owns?” {one lengthy word crossed out, probably a name}: “Would your brain topography get
upset and wrinkle if you imagined all the bureaucrats' desks from the largest country on earth
[by pop.] piled up in a pyramid?” Microexpressionist again: “Who wants to draft the call for
asemic writers?” Puckered lips closes {sic} the door.
I

It's a humongous workplace, with a blue entrance door, cluttered with papers on both sides.
See? Left hand on the entrance door handle, the woman presses it and the three of them
[guiding co-worker, faceless cameraman, scarlet-haired interviewer] squeeze themselves
inside all that paper. [Door shuts by itself.] Doesn't it feel like entering a paper sculpture? [,
she herself appearing for a split second to have undergone a material transformation, to have
turned into paper, the left side of her face glowing in a retro light. It's still her.] This is where
we work, a hybrid site officially called The Institute for Chaos and Neuroplasticity – packed
with folders, jammed with newspapers, stacks of private correspondence left and right,
recording devices, boxes with photographs, xeroxed documents on shelves, {several pea-sized
inkblots} printed screenshots and boarding passes – we keep it all, everything that museums
or archives have no interest in, all orphaned papers, photographic plates and imperiled books
or hard disks relatives might want to discard or even burn after someone's death. Exploring
leftovers around here can go up and down to horrifying and overwhelming sensorial levels...
Z

{a two-centimeter line of rust from a pin in the upper left corner of the index card}
Sociological-intelligence rumors have it that ours is the bureau for studying psychological
attachment to “garbage” (we very much welcome researchers), while others refer to the
Institute as the chaos-brewing place in the neighborhood because we employ absolutely no
classification method for storing papers or other media. The chances of finding us? [Raised
eyebrows and puckered lips as first responses to the scarlet-haired question.] Well, the
incidence is just as low as finding a document or device you're looking for in our storage.
Things are not lost; there are just different ways of finding them. A random stroll, a lucky find
– be that on-line or off-line –, or a seductive word of mouth may be the entrance points into
this experiential space, a manifesto for haphazardness, emotional intuitions, subversion of
neural pathways, and non-productive attitudes. A dadaist archive? queried Scarlet Hair.
Ours is definitely not an archive, there's no trace of pyramidal bureaucracy or taxonomy
here, no nation state at its birth. Hence you won't find a reservoir for national or racial
histories in here. Just imagine we changed perception scales, imagine a collective cut-up
project that we, chaos workers, are bringing together without scissors or screwdrivers because
all that gets through that blue door [and that is the only condition and standard] has already
been shaped and fits in here. [Guiding co-worker speaks in a monotonous and plain GPS
voice. Interview continues, but she forgets to mention that behind the blue door, in this very
big box 1. everyone is an authorized user and 2. time rests unemployed.]
K

Lately, several trucks loaded with gray matter have been adding extra hours of induced
chaos to everyone's content. Although it is the Institute's policy to accept paper donations
only from private individuals, it occasionally makes exceptions and takes on leftovers from
nonprofit organizations.

Each time this happens, an extended rite of passage follows so as to slightly delay and
thereby ease the arrival of chaos bits: the most reliable chaos worker, Microexpressionist by
metonymically selected feature, supervises the transfer of boxes at the very beginning of a
long hallway [eyeballs moving left to right, head planted in an incredibly stiff neck]. Then,
some fifty meters away, standing in front of the opened blue door, Puckered Lips welcomes
newcomers into the chaos, his gestures those of a marshaller guiding a plane into a parking
position. But once the gray [?] matter has passed over the threshold, once the last full
suitcase or shoe box with USB sticks has landed, directions are no longer provided.
Everyone's free to grow limbs and choose temporal neighbors.
L

… seated cross-legged at the longest desk ever, Ragged Mane is randomly extracting
photodocuments from the freshest chaos segment with a metallic extension of two of her
fingers [instead of a pince-nez, she's the one to carry a pair of tweezers in a small pocket at
all times]. “Look what I've just grabbed,” and she pushes a sepia photograph in front of
Knitted Eyebrows, whose otherwise deadpan face instantaneously gets stamped this time
with a question mark: “What is it?” “Another capture, of course! Two mustaches, one hat,
three pairs of glasses, some blurred figures in the background, and one most fascinating
detail!” – [… takes out a magnifying glass and points with one of her flashy pink fingers to
the handheld object under the gaze of four eyes on the left side of the photo. Then, Ragged
Mane continues:] “That raised right index finger above a rectangular-shaped object... you see
it?” “You mean [00:00 = insertion of a lengthy time frame = 00:47] could this mustachioed
fellow be holding a touchscreen mobile phone in his left hand?” For several unrecorded
skeptical moments, they interlock their eyes and knit their eyebrows closer together.
Afterward, eyes split again and roll on the surface of the photograph like black-eyed peas on
a kitchen table. “It's all specks and epoch details,” a resigned voice breaks from the chaos
silence, when, the same thought crosses their minds, and Ragged Mane and Knitted
Eyebrows turn the photo over, almost certain to find an answer. [A simultaneous hunch.] In
block letters it most clearly reads: “DOCUMENTING THE FILMING OF
PEACEMAKERS / ANALOGUE PHOTOGRAPHY ON FILM SET / BERN,
SWITZERLAND / 17.05.2008”
X


/ meanwhile, the clock-horse has grown really nervous out there – it's drawing smaller and
smaller circles / a spasmodic and repetitive activity causing dislocation / a fine powder
begins to float inside the oubliette in the slowest motion possible / my breathing has already
been hampered, but now my lungs and brain get filled with an asphyxiating smell of old
paper / hanging on my last tentacle-thought, on my tiptoes, refusing to choke and disintegrate
/ NOT READY TO BE RECYCLED / {messiest handwriting}
A Cyrillic cityscape is imagining how one day all the bureaucrats' desks from the largest
country on earth get piled up in a pyramid. “This new shape is deflating the coherence of my
horizon. [the cityscape worries] No matter!” Once the last desk is placed at the very top, the
ground cracks a half-open mouth, a fissure the length of Rxssxx. On the outside it's spotted
with straddled city topographies, inside, it's filled with a vernacular accumulation of anational
dust without a trace of usable pasts.
{violent horizontal strokes over the last two lines, left and right from the hole at the bottom of
the index card; indecipherable}
M

“What's on TV this afternoon?” This plain but beautifully metamorphosed question has just
landed with a bleep on the chaos couch, next to Ragged Mane, who usually loses no chance
to retort [that is, here, to admonish too hard a fall]: “Doucement!” Under the weight of a
short-lived feeling of guilt, {name crossed out} echoes back in a whisper – d – o – u – c – e
– m – e – n – t –, and then, as if after a palatable word tasting, she clicks her tongue and
with it, she searches for a point of clarification: “Doucement is an anagram for documenté –
which one do you actually mean?” [All conversations with {name crossed out} would suffer
unsettling Meaning U-turns because she specialized in letter permutation.]

Y

Gurgling sounds from a not-so-distant corner of the chaos dump make heads simultaneously
rotate in the direction of the TV screen, where a documentary has just started with a drone's-eye view over a city of lined-up skyscrapers. Early on, the commentator breaks into unwitty
superlatives and platitudes, while the soundtrack unnecessarily dramatizes a 3D layering of
the city structure. Despite all this, the mood on the couch is patient, and viewers seem to
absorb the vignetted film. “A city like no other, as atypical as Cappadocia,” explains the low
trepid voice from the box, “a city whose peculiarity owes first to the alignment of all its
elements, where street follows street in a parallel fashion like in linear writing. Hence, reading
the city acquires a literal dimension, skyscrapers echo clustered block letters on a line, and
the pedestrian reader gets reduced to the size of a far-sighted microbe.”
[Woodworm laughs]
V

Minutes into the documentary, the micro-drone camera zooms into the silver district/chapter
of the city to show another set of its features: instead of steel and glass, what from afar
appeared to be ordinary skyscrapers turn out to be “300-meter-tall lofty towers of mailbox-like constructs of dried skin, sprayed on top with silver paint for rims, and decorated with
huge love padlocks. A foreboding district for newlyweds?” [nauseating atmosphere] Unable
to answer or to smell, the mosquito-sized drone blinks in the direction of the right page, and it
speedily approaches another windowless urban variation: the vastest area of city towers – the
Wood Drawers District. “Despite its vintage (here and there rundown) aura, the area is an
exquisite, segregated space for library aficionados, designed out of genetically-engineered
trees that grow naturally drawer-shaped with a remarkable capacity for self-(re)generation. In
terms of real proportions, the size of a mailbox- or a drawer-apartment is comparable to that
of a shipping container, from the alternative but old housing projects…” bla bla the furniture
bla... [that chaos corner, so remote and so coal black / that whole atmosphere with blurred
echoes beclouds my reasoning / and right now, I'm feeling nauseous and cursed with all the
words in an unabridged dictionary / new deluxe edition, with black covers and golden
characters]
D

In front of the place where, above a modest skyline, every single morning [scholars'] desks
conjoin in the shape of a multi-storied pyramid, there's a sign that reads: right here you can
bend forward, place your hands on your back, press down your spine with your thumbs and
throw up an index card, throw out a reality version, take out a tooth. In fact, take out all that
you need and once you feel relieved, exchange personas as if in an emergency situation.
Then, behind vermillion curtains, replace pronouns at will.
[Might this have been a pipe dream? An intubated wish for character replacement? {Name
crossed out} would whisper C E E H I N N O R T as place name]
R

[“gray – …
Other Color Terms –
argentine, cerise, cerulean, cyan, ocher, perse, puce, taupe, vermillion”]
To be able to name everything and everyone, especially all the shades in a gray zone, and
then to re-name, re-narrate/re-count, and re-photograph all of it. To treat the ensuing
multilayered landscape with/as an infinitive verb and to scoop a place for yourself in the
accordion of surfaces. For instance, take the first shot – you're being stared at, you're under
the distant gaze of three {words crossed out; illegible}. Pale, you might think, how pallid and
lifeless they appear to be, but try to hold their gaze and notice how the interaction grows
uncomfortable through persistence. Blink, if you must. Move your weight from one leg to the
other, and become aware of how unflinching their concentration remains, as if their eyes are
lured into a screen. And as you're trying to draw attention to yourself by making ampler,
pantomimic gestures, your hands touch the dark inner edges of the monitor you're [boxed] in.
Look out and around again and again...

G

Some {Same?} damned creature made only of arms and legs has been leaving a slew of
black dots all over my corridors and staircases, ashes on my handrails, and larger spots of
black liquid in front of my elevator doors on the southern track – my oldest and dearest
vertically mobile installation, the one that has grown only ten floors high. If I were in shape,
attuned and wired to my perception angles and sensors, I could identify beyond precision that
it is a 403 cabal plotting I begin fearing. Lately, it's all been going really awry. Having failed
at the character recognition of this trickster creature, the following facts can be enumerated in
view of overall [damage] re-evaluation, quantification, and intruder excision: emaciating
architectural structure, increasingly deformed spiraling of brown marbled staircases, smudged
finger- and footprints on all floors, soddened and blackened ceilings, alongside thousands of
harrowing fingers and a detection of an insidious and undesirable multiplication of {word
crossed out: white} hands [tbc].
C

Out of the blue, the clock-horse dislocated particles expand in size, circle in all directions like
giant flies around a street lamp, and then in the most predictable fashion, they collide with my
escapist reminiscences multiple times until I lose connection and the landscape above comes
to a [menacing] stillness. [How does it look now? a scarlet-haired question.] I'm blinking, I'm
moving my weight from one leg to the other, before I can attempt a description of the earth
balls that stagnate in the air among translucent tentacles [they're almost gone] and floating
dioramas of miniatures. Proportions have inverted, scraped surfaces have commingled and
my U-shaped. reality. and. vision. are. stammering... I can't find my hands!
...


-- Ospal (talk) 09:27, 19 November 2015 (CET) Here is where the transcript ENDS,
where the black text lines dribble back into the box. For information on document location or
transcription method, kindly contact the editor.

Last Revision: 28·06·2016

LES UTOPISTES and their common logos/et leurs logos communs
DENNIS POHL
EN

In itself this list is just a bag of words that orders the common terms used in the works of
Le Corbusier and Paul Otlet with the help of text comparison. The quantity of similar words
is related to the word-count of each text, which means that each appearance has a different
weight. Taking this into account, the appearance of the word esprit, for instance, is more
significant in Vers une Architecture (127 times) than in Traité de documentation (240
times), although the total number of appearances in the latter is almost two times higher.
Beyond the mere quantified use of a common language, this list follows the intuition that
there is something more to elaborate in the discourse between these two utopians. One
possible reading can be found in The Smart City, an essay that traces their encounter.
FR

Cette liste n'est en elle-même qu'un sac de mots qui organise les termes les plus
communs utilisés dans les travaux de Le Corbusier et Paul Otlet à l'aide d'un comparateur
de texte. Le nombre de mots similaires est rapporté au comptage des mots de chaque texte,
ce qui signifie que chaque occurrence a un poids différent. Les apparitions du mot esprit,
par exemple, sont ainsi plus significatives dans Vers une Architecture (127 fois) que dans le
Traité de documentation (240 fois), bien que le nombre total d'occurrences y soit
pratiquement deux fois plus élevé. Au-delà de la simple comptabilisation d'un langage
commun, cette liste suit l'intuition qu'il y a quelque chose de plus à élaborer dans le
discours de ces deux utopistes. Une proposition pour une telle recherche peut être trouvée
dans La Ville Intelligente, un essai qui retrace leur rencontre.
Books taken into consideration/Livres pris en compte:
• Le Corbusier, Vers une Architecture, Paris: les éditions G. Crès, 1923. Word-count: 32733.
• Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique, Bruxelles:
Mundaneum, Palais Mondial, 1934. Word-count: 356854.
• Le Corbusier, Urbanisme, Paris: les éditions G. Crès, 1925. Word-count: 37699.
• Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde, Sentiment du
Monde, Action organisée et Plan du Monde, Bruxelles: Editiones Mundaneum, 1935.
Word-count: 140209.
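
As a sketch of how such weighting can be read (a reconstruction of the idea, not necessarily the script used to produce the list), the esprit figures quoted above can be normalised by each book's word-count:

```python
# Word-counts are those listed above; occurrence figures are from the list below.
corpus = {
    "Vers une Architecture":   {"total": 32733,  "esprit": 127},
    "Traité de documentation": {"total": 356854, "esprit": 240},
}

# Relative frequency: occurrences divided by the book's total word-count.
for book, stats in corpus.items():
    share = stats["esprit"] / stats["total"]
    print(f"{book}: {stats['esprit']} of {stats['total']} words = {share:.4%}")

# Vers une Architecture: 127 of 32733 words = 0.3880%
# Traité de documentation: 240 of 356854 words = 0.0673%
```

Relative to its length, Vers une Architecture is thus almost six times more invested in esprit than the Traité, despite the lower raw count.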
| Term | Vers une Architecture | Traité de documentation | Urbanisme | Monde |
| --- | --- | --- | --- | --- |
| acquis | 5 | 21 | 6 | 11 |
| activité | 10 | 43 | 10 | 78 |
| actuel | 9 | 27 | 6 | 22 |
| actuelle | 7 | 19 | 8 | 26 |
| actuelles | 5 | 6 | 6 | 6 |
| affaires | 6 | 42 | 30 | 19 |
| air | 12 | 12 | 14 | 16 |
| aise | 7 | 71 | 6 | 12 |
| alors | 32 | 165 | 38 | 52 |
| angle | 5 | 18 | 16 | 7 |
| années | 7 | 89 | 10 | 42 |
| ans | 17 | 91 | 16 | 109 |
| architecture | 199 | 51 | 26 | 11 |
| art | 44 | 370 | 6 | 60 |
| aspect | 5 | 45 | 8 | 29 |
| auto | 10 | 13 | 12 | 5 |
| autrement | 6 | 15 | 6 | 10 |
| avant | 8 | 131 | 6 | 45 |
| avoir | 13 | 208 | 6 | 72 |
| base | 8 | 119 | 6 | 66 |
| beauté | 14 | 34 | 14 | 21 |
| beaucoup | 9 | 114 | 8 | 23 |
| besoin | 16 | 82 | 8 | 40 |
| calcul | 19 | 15 | 24 | 21 |
| cause | 6 | 47 | 6 | 26 |
| cela | 16 | 99 | 16 | 31 |
| cellule | 7 | 9 | 10 | 7 |
| centre | 7 | 55 | 50 | 44 |
| chapitre | 7 | 35 | 12 | 5 |
| chacun | 6 | 151 | 6 | 60 |
| chemins | 9 | 18 | 12 | 5 |
| chemin | 7 | 19 | 18 | 9 |
| choses | 43 | 215 | 20 | 157 |
| chose | 34 | 110 | 12 | 52 |
| ciel | 8 | 13 | 48 | 18 |
| cinquante | 5 | 6 | 8 | 5 |
| circulation | 6 | 27 | 44 | 8 |
| cité | 10 | 29 | 34 | 35 |
| claire | 6 | 18 | 6 | 6 |
| compte | 11 | 96 | 8 | 37 |
| construction | 50 | 24 | 14 | 8 |
| conception | 23 | 62 | 8 | 64 |
| construire | 17 | 10 | 6 | 9 |
| contre | 13 | 91 | 6 | 79 |
| conà | 9 | 49 | 10 | 20 |
| constructions | 7 | 8 | 10 | 9 |
| connaissance | 5 | 76 | 8 | 56 |
| conditions | 5 | 111 | 8 | 57 |
| cours | 8 | 150 | 8 | 65 |
| coup | 7 | 34 | 6 | 14 |
| crise | 11 | 8 | 6 | 45 |
| création | 22 | 82 | 10 | 48 |
| créer | 10 | 57 | 10 | 25 |
| crée | 10 | 26 | 6 | 18 |
| culture | 7 | 33 | 6 | 68 |
| demain | 7 | 17 | 6 | 11 |
| dessus | 6 | 28 | 16 | 21 |
| devant | 18 | 75 | 12 | 43 |
| dire | 17 | 185 | 16 | 72 |
| disposition | 5 | 83 | 6 | 8 |
| doit | 13 | 408 | 14 | 134 |
| domaines | 5 | 42 | 6 | 38 |
| donne | 8 | 148 | 12 | 44 |
| droite | 11 | 40 | 36 | 8 |
| droits | 8 | 22 | 16 | 37 |
| droit | 6 | 106 | 36 | 125 |
| désordre | 7 | 9 | 12 | 12 |
| effet | 7 | 78 | 6 | 32 |
| encore | 25 | 197 | 22 | 106 |
| enfin | 5 | 46 | 8 | 29 |
| ensemble | 16 | 329 | 14 | 123 |
| entre | 29 | 342 | 18 | 246 |
| esprit | 127 | 240 | 36 | 150 |
| espace | 20 | 69 | 16 | 122 |
| esprits | 6 | 44 | 6 | 35 |
| exemple | 5 | 143 | 12 | 30 |
| existence | 5 | 73 | 10 | 75 |
| face | 15 | 11 | 12 | 18 |
| faire | 51 | 410 | 24 | 137 |
| faites | 7 | 45 | 6 | 12 |
| faut | 46 | 285 | 54 | 126 |
| fer | 12 | 30 | 14 | 14 |
| fin | 5 | 122 | 6 | 66 |
| fois | 11 | 208 | 8 | 77 |
| font | 24 | 93 | 10 | 25 |
| fond | 5 | 67 | 6 | 29 |

forme

appears 14 times in Vers
une Architecture,

france

appears 6 times in Vers une 190 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

57 times in
Monde.

grande

appears 40 times in Vers
une Architecture,

202 times in Traité de 82 times in
documentation,
Urbanisme and

69 times in
Monde.

grand

appears 34 times in Vers
une Architecture,

276 times in Traité de 34 times in
documentation,
Urbanisme and

89 times in
Monde.

grands

appears 24 times in Vers
une Architecture,

187 times in Traité de 24 times in
documentation,
Urbanisme and

88 times in
Monde.

grandes

appears 21 times in Vers
une Architecture,

182 times in Traité de 36 times in
documentation,
Urbanisme and

93 times in
Monde.

grandeur

appears 11 times in Vers
une Architecture,

34 times in Traité de
documentation,

6 times in
Urbanisme and

19 times in
Monde.

gros

appears 5 times in Vers une 25 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

8 times in
Monde.

guerre

appears 5 times in Vers une 115 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

137 times in
Monde.

géométrie

appears 17 times in Vers
une Architecture,

14 times in Traité de
documentation,

24 times in
Urbanisme and

12 times in
Monde.

hauteur

appears 14 times in Vers
une Architecture,

21 times in Traité de
documentation,

10 times in
Urbanisme and

8 times in
Monde.

haute

appears 9 times in Vers une 34 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

13 times in
Monde.

haut

appears 9 times in Vers une 71 times in Traité de
Architecture,
documentation,

18 times in
Urbanisme and

24 times in
Monde.

heures

appears 15 times in Vers
une Architecture,

20 times in
Urbanisme and

16 times in
Monde.

P.158

442 times in Traité de 18 times in
documentation,
Urbanisme and

45 times in Traité de
documentation,

106 times in
Monde.

P.159

heure

appears 15 times in Vers
une Architecture,

histoire

appears 6 times in Vers une 338 times in Traité de 10 times in
Architecture,
documentation,
Urbanisme and

183 times in
Monde.

homme

appears 74 times in Vers
une Architecture,

189 times in Traité de 66 times in
documentation,
Urbanisme and

315 times in
Monde.

hommes

appears 11 times in Vers
une Architecture,

122 times in Traité de 30 times in
documentation,
Urbanisme and

144 times in
Monde.

hors

appears 9 times in Vers une 36 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

12 times in
Monde.

humaine

appears 19 times in Vers
une Architecture,

72 times in Traité de
documentation,

14 times in
Urbanisme and

96 times in
Monde.

humain

appears 10 times in Vers
une Architecture,

45 times in Traité de
documentation,

16 times in
Urbanisme and

61 times in
Monde.

idées

appears 14 times in Vers
une Architecture,

283 times in Traité de 6 times in
documentation,
Urbanisme and

80 times in
Monde.

idée

appears 13 times in Vers
une Architecture,

168 times in Traité de 6 times in
documentation,
Urbanisme and

75 times in
Monde.

immenses

appears 11 times in Vers
une Architecture,

22 times in Traité de
documentation,

8 times in
Urbanisme and

12 times in
Monde.

immense

appears 8 times in Vers une 62 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

25 times in
Monde.

industrielle

appears 12 times in Vers
une Architecture,

6 times in
Urbanisme and

14 times in
Monde.

industriels

appears 5 times in Vers une 18 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

9 times in
Monde.

jeu

appears 14 times in Vers
une Architecture,

39 times in Traité de
documentation,

6 times in
Urbanisme and

29 times in
Monde.

jour

appears 13 times in Vers
une Architecture,

216 times in Traité de 22 times in
documentation,
Urbanisme and

69 times in
Monde.

lequel

appears 5 times in Vers une 67 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

19 times in
Monde.

libre

appears 7 times in Vers une 48 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

45 times in
Monde.

lieu

appears 10 times in Vers
une Architecture,

384 times in Traité de 6 times in
documentation,
Urbanisme and

89 times in
Monde.

58 times in Traité de
documentation,

7 times in Traité de
documentation,

32 times in
Urbanisme and

28 times in
Monde.

logique

appears 14 times in Vers
une Architecture,

117 times in Traité de 8 times in
documentation,
Urbanisme and

39 times in
Monde.

loin

appears 11 times in Vers
une Architecture,

46 times in Traité de
documentation,

34 times in
Urbanisme and

17 times in
Monde.

louis

appears 11 times in Vers
une Architecture,

33 times in Traité de
documentation,

6 times in
Urbanisme and

10 times in
Monde.

lumière

appears 45 times in Vers
une Architecture,

77 times in Traité de
documentation,

10 times in
Urbanisme and

38 times in
Monde.

machine

appears 17 times in Vers
une Architecture,

119 times in Traité de 20 times in
documentation,
Urbanisme and

29 times in
Monde.

machines

appears 12 times in Vers
une Architecture,

83 times in Traité de
documentation,

10 times in
Urbanisme and

29 times in
Monde.

main

appears 8 times in Vers une 96 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

15 times in
Monde.

mal

appears 15 times in Vers
une Architecture,

33 times in Traité de
documentation,

8 times in
Urbanisme and

26 times in
Monde.

masse

appears 6 times in Vers une 35 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

52 times in
Monde.

masses

appears 5 times in Vers une 21 times in Traité de
Architecture,
documentation,

12 times in
Urbanisme and

19 times in
Monde.

mesure

appears 20 times in Vers
une Architecture,

110 times in Traité de 16 times in
documentation,
Urbanisme and

46 times in
Monde.

milieu

appears 7 times in Vers une 58 times in Traité de
Architecture,
documentation,

20 times in
Urbanisme and

56 times in
Monde.

moderne

appears 31 times in Vers
une Architecture,

79 times in Traité de
documentation,

20 times in
Urbanisme and

35 times in
Monde.

moins

appears 16 times in Vers
une Architecture,

243 times in Traité de 10 times in
documentation,
Urbanisme and

93 times in
Monde.

moment

appears 11 times in Vers
une Architecture,

105 times in Traité de 18 times in
documentation,
Urbanisme and

36 times in
Monde.

monde

appears 18 times in Vers
une Architecture,

177 times in Traité de 26 times in
documentation,
Urbanisme and

331 times in
Monde.

montre

appears 10 times in Vers
une Architecture,

27 times in Traité de
documentation,

6 times in
Urbanisme and

11 times in
Monde.

morale

appears 6 times in Vers une 32 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

35 times in
Monde.

P.160

P.161

moyens

appears 16 times in Vers
une Architecture,

125 times in Traité de 20 times in
documentation,
Urbanisme and

59 times in
Monde.

moyen

appears 5 times in Vers une 268 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

97 times in
Monde.

mécanique

appears 12 times in Vers
une Architecture,

50 times in Traité de
documentation,

31 times in
Monde.

nature

appears 18 times in Vers
une Architecture,

120 times in Traité de 20 times in
documentation,
Urbanisme and

166 times in
Monde.

nouveau

appears 39 times in Vers
une Architecture,

98 times in Traité de
documentation,

16 times in
Urbanisme and

43 times in
Monde.

nouvelle

appears 13 times in Vers
une Architecture,

129 times in Traité de 6 times in
documentation,
Urbanisme and

60 times in
Monde.

nouvelles

appears 6 times in Vers une 180 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

65 times in
Monde.

nécessaire

appears 11 times in Vers
une Architecture,

80 times in Traité de
documentation,

12 times in
Urbanisme and

43 times in
Monde.

or

appears 10 times in Vers
une Architecture,

63 times in Traité de
documentation,

14 times in
Urbanisme and

45 times in
Monde.

ordre

appears 59 times in Vers
une Architecture,

421 times in Traité de 30 times in
documentation,
Urbanisme and

organes

appears 5 times in Vers une 74 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

21 times in
Monde.

outil

appears 19 times in Vers
une Architecture,

12 times in Traité de
documentation,

6 times in
Urbanisme and

5 times in
Monde.

outillage

appears 11 times in Vers
une Architecture,

28 times in Traité de
documentation,

14 times in
Urbanisme and

6 times in
Monde.

paris

appears 20 times in Vers
une Architecture,

192 times in Traité de 60 times in
documentation,
Urbanisme and

16 times in
Monde.

part

appears 13 times in Vers
une Architecture,

214 times in Traité de 14 times in
documentation,
Urbanisme and

77 times in
Monde.

partie

appears 11 times in Vers
une Architecture,

222 times in Traité de 10 times in
documentation,
Urbanisme and

58 times in
Monde.

partout

appears 8 times in Vers une 48 times in Traité de
Architecture,
documentation,

12 times in
Urbanisme and

28 times in
Monde.

passé

appears 17 times in Vers
une Architecture,

12 times in
Urbanisme and

49 times in
Monde.

55 times in Traité de
documentation,

16 times in
Urbanisme and

128 times in
Monde.

passion

appears 8 times in Vers une 6 times in Traité de
Architecture,
documentation,

pensée

appears 10 times in Vers
une Architecture,

291 times in Traité de 12 times in
documentation,
Urbanisme and

127 times in
Monde.

perfection

appears 12 times in Vers
une Architecture,

14 times in Traité de
documentation,

10 times in
Urbanisme and

7 times in
Monde.

petit

appears 11 times in Vers
une Architecture,

88 times in Traité de
documentation,

14 times in
Urbanisme and

23 times in
Monde.

petite

appears 7 times in Vers une 28 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

18 times in
Monde.

petites

appears 5 times in Vers une 25 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

12 times in
Monde.

peuvent

appears 13 times in Vers
une Architecture,

198 times in Traité de 12 times in
documentation,
Urbanisme and

45 times in
Monde.

pied

appears 13 times in Vers
une Architecture,

12 times in Traité de
documentation,

8 times in
Monde.

plan

appears 86 times in Vers
une Architecture,

151 times in Traité de 32 times in
documentation,
Urbanisme and

174 times in
Monde.

place

appears 32 times in Vers
une Architecture,

208 times in Traité de 14 times in
documentation,
Urbanisme and

62 times in
Monde.

plans

appears 15 times in Vers
une Architecture,

60 times in Traité de
documentation,

12 times in
Urbanisme and

27 times in
Monde.

pleine

appears 6 times in Vers une 12 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

6 times in
Monde.

point

appears 18 times in Vers
une Architecture,

278 times in Traité de 16 times in
documentation,
Urbanisme and

133 times in
Monde.

pourrait

appears 10 times in Vers
une Architecture,

93 times in Traité de
documentation,

12 times in
Urbanisme and

32 times in
Monde.

poésie

appears 5 times in Vers une 83 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

7 times in
Monde.

pratique

appears 15 times in Vers
une Architecture,

98 times in Traité de
documentation,

6 times in
Urbanisme and

28 times in
Monde.

pratiques

appears 5 times in Vers une 44 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

11 times in
Monde.

première

appears 11 times in Vers
une Architecture,

133 times in Traité de 8 times in
documentation,
Urbanisme and

38 times in
Monde.

P.162

58 times in
Urbanisme and

22 times in
Urbanisme and

14 times in
Monde.

P.163

prix

appears 7 times in Vers une 133 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

35 times in
Monde.

principes

appears 5 times in Vers une 132 times in Traité de 12 times in
Architecture,
documentation,
Urbanisme and

53 times in
Monde.

problème

appears 53 times in Vers
une Architecture,

92 times in Traité de
documentation,

28 times in
Urbanisme and

88 times in
Monde.

programme

appears 14 times in Vers
une Architecture,

24 times in Traité de
documentation,

6 times in
Urbanisme and

12 times in
Monde.

produit

appears 13 times in Vers
une Architecture,

81 times in Traité de
documentation,

24 times in
Urbanisme and

38 times in
Monde.

progrès

appears 9 times in Vers une 133 times in Traité de 14 times in
Architecture,
documentation,
Urbanisme and

73 times in
Monde.

puis

appears 10 times in Vers
une Architecture,

115 times in Traité de 6 times in
documentation,
Urbanisme and

48 times in
Monde.

quatre

appears 11 times in Vers
une Architecture,

114 times in Traité de 12 times in
documentation,
Urbanisme and

40 times in
Monde.

qualité

appears 6 times in Vers une 39 times in Traité de
Architecture,
documentation,

quelque

appears 14 times in Vers
une Architecture,

132 times in Traité de 6 times in
documentation,
Urbanisme and

64 times in
Monde.

quelques

appears 12 times in Vers
une Architecture,

167 times in Traité de 10 times in
documentation,
Urbanisme and

33 times in
Monde.

raison

appears 6 times in Vers une 112 times in Traité de 38 times in
Architecture,
documentation,
Urbanisme and

77 times in
Monde.

rapport

appears 6 times in Vers une 106 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

33 times in
Monde.

rapide

appears 5 times in Vers une 53 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

16 times in
Monde.

règle

appears 5 times in Vers une 22 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

5 times in
Monde.

résoudre

appears 5 times in Vers une 18 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

8 times in
Monde.

sens

appears 31 times in Vers
une Architecture,

176 times in Traité de 14 times in
documentation,
Urbanisme and

64 times in
Monde.

sentiment

appears 10 times in Vers
une Architecture,

33 times in Traité de
documentation,

69 times in
Monde.

6 times in
Urbanisme and

14 times in
Urbanisme and

8 times in
Monde.

services

appears 5 times in Vers une 107 times in Traité de 20 times in
Architecture,
documentation,
Urbanisme and

24 times in
Monde.

seule

appears 7 times in Vers une 93 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

43 times in
Monde.

siècle

appears 6 times in Vers une 283 times in Traité de 20 times in
Architecture,
documentation,
Urbanisme and

93 times in
Monde.

sol

appears 28 times in Vers
une Architecture,

10 times in Traité de
documentation,

20 times in
Urbanisme and

24 times in
Monde.

solution

appears 8 times in Vers une 26 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

25 times in
Monde.

solutions

appears 6 times in Vers une 10 times in Traité de
Architecture,
documentation,

16 times in
Urbanisme and

10 times in
Monde.

souvent

appears 7 times in Vers une 207 times in Traité de 10 times in
Architecture,
documentation,
Urbanisme and

30 times in
Monde.

suivant

appears 12 times in Vers
une Architecture,

102 times in Traité de 16 times in
documentation,
Urbanisme and

30 times in
Monde.

surface

appears 25 times in Vers
une Architecture,

51 times in Traité de
documentation,

19 times in
Monde.

système

appears 10 times in Vers
une Architecture,

256 times in Traité de 32 times in
documentation,
Urbanisme and

129 times in
Monde.

série

appears 56 times in Vers
une Architecture,

98 times in Traité de
documentation,

8 times in
Urbanisme and

24 times in
Monde.

sécurité

appears 5 times in Vers une 5 times in Traité de
Architecture,
documentation,

6 times in
Urbanisme and

9 times in
Monde.

table

appears 7 times in Vers une 113 times in Traité de 6 times in
Architecture,
documentation,
Urbanisme and

9 times in
Monde.

tableau

appears 5 times in Vers une 106 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

24 times in
Monde.

technique

appears 6 times in Vers une 153 times in Traité de 8 times in
Architecture,
documentation,
Urbanisme and

60 times in
Monde.

tel

appears 11 times in Vers
une Architecture,

114 times in Traité de 10 times in
documentation,
Urbanisme and

32 times in
Monde.

telle

appears 10 times in Vers
une Architecture,

105 times in Traité de 8 times in
documentation,
Urbanisme and

28 times in
Monde.

tels

appears 6 times in Vers une 47 times in Traité de
Architecture,
documentation,

P.164

16 times in
Urbanisme and

8 times in
Urbanisme and

16 times in
Monde.

P.165

temps

appears 24 times in Vers
une Architecture,

terrain

appears 7 times in Vers une 11 times in Traité de
Architecture,
documentation,

toutes

appears 32 times in Vers
une Architecture,

591 times in Traité de 14 times in
documentation,
Urbanisme and

259 times in
Monde.

toujours

appears 22 times in Vers
une Architecture,

147 times in Traité de 20 times in
documentation,
Urbanisme and

65 times in
Monde.

tour

appears 5 times in Vers une 71 times in Traité de
Architecture,
documentation,

travail

appears 27 times in Vers
une Architecture,

travers

appears 7 times in Vers une 58 times in Traité de
Architecture,
documentation,

18 times in
Urbanisme and

40 times in
Monde.

trop

appears 15 times in Vers
une Architecture,

93 times in Traité de
documentation,

16 times in
Urbanisme and

28 times in
Monde.

trouve

appears 9 times in Vers une 93 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

32 times in
Monde.

très

appears 18 times in Vers
une Architecture,

209 times in Traité de 16 times in
documentation,
Urbanisme and

47 times in
Monde.

univers

appears 15 times in Vers
une Architecture,

27 times in Traité de
documentation,

8 times in
Urbanisme and

68 times in
Monde.

unique

appears 8 times in Vers une 60 times in Traité de
Architecture,
documentation,

10 times in
Urbanisme and

23 times in
Monde.

usines

appears 13 times in Vers
une Architecture,

6 times in
Urbanisme and

6 times in
Monde.

vastes

appears 6 times in Vers une 14 times in Traité de
Architecture,
documentation,

12 times in
Urbanisme and

14 times in
Monde.

vers

appears 15 times in Vers
une Architecture,

156 times in Traité de 28 times in
documentation,
Urbanisme and

100 times in
Monde.

vie

appears 21 times in Vers
une Architecture,

249 times in Traité de 26 times in
documentation,
Urbanisme and

329 times in
Monde.

ville

appears 38 times in Vers
une Architecture,

30 times in Traité de
documentation,

122 times in
Urbanisme and

11 times in
Monde.

villes

appears 33 times in Vers
une Architecture,

34 times in Traité de
documentation,

52 times in
Urbanisme and

38 times in
Monde.

436 times in Traité de 22 times in
documentation,
Urbanisme and
16 times in
Urbanisme and

6 times in
Urbanisme and

403 times in Traité de 50 times in
documentation,
Urbanisme and

9 times in Traité de
documentation,

239 times in
Monde.
6 times in
Monde.

25 times in
Monde.
177 times in
Monde.

voir

appears 19 times in Vers
une Architecture,

252 times in Traité de 14 times in
documentation,
Urbanisme and

48 times in
Monde.

voit

appears 14 times in Vers
une Architecture,

50 times in Traité de
documentation,

28 times in
Urbanisme and

27 times in
Monde.

voilà

appears 13 times in Vers
une Architecture,

13 times in Traité de
documentation,

20 times in
Urbanisme and

23 times in
Monde.

volonté

appears 7 times in Vers une 39 times in Traité de
Architecture,
documentation,

8 times in
Urbanisme and

46 times in
Monde.

vue

appears 18 times in Vers
une Architecture,

272 times in Traité de 6 times in
documentation,
Urbanisme and

105 times in
Monde.

yeux

appears 41 times in Vers
une Architecture,

76 times in Traité de
documentation,

8 times in
Monde.

6 times in
Urbanisme and
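
Read as a table, the list above compares the frequency of a shared vocabulary across four books. As a minimal sketch of how such a comparison could be reproduced, the following Python snippet counts words in plain-text copies of the four titles; the filenames are hypothetical placeholders, and the threshold of five occurrences per book is an assumption inferred from the smallest counts that appear in the list.

```python
# A sketch for rebuilding a shared-vocabulary frequency list like the one above.
# The file paths are hypothetical; the >= 5 threshold is an assumption.
import re
from collections import Counter

SOURCES = {
    "Vers une Architecture": "vers_une_architecture.txt",
    "Traité de documentation": "traite_de_documentation.txt",
    "Urbanisme": "urbanisme.txt",
    "Monde": "monde.txt",
}

def word_counts(path):
    """Count every word (letters only) in a lowercased plain-text file."""
    with open(path, encoding="utf-8") as f:
        return Counter(re.findall(r"[^\W\d_]+", f.read().lower()))

counts = {title: word_counts(path) for title, path in SOURCES.items()}

# Keep only the words that occur at least 5 times in each of the four books.
shared = [w for w in set.union(*map(set, counts.values()))
          if all(counts[t][w] >= 5 for t in SOURCES)]

for word in sorted(shared):
    cells = ", ".join(f"{counts[t][word]} times in {t}" for t in SOURCES)
    print(f"{word} appears {cells}.")
```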

Last Revision: 3·08·2016


X=Y
DICK RECKARD

0. INNOVATION OF THE SAME

Last Revision: 2·08·2016

The PR imagery produced by and around the Mundaneum (disambiguation: the institution in
Mons) often suggests, through a series of 'samenesses', an essential continuity between
Otlet's endeavour and Internet-related products and services, in particular Google's. A good
example is a scene from the video "From industrial heartland to the Internet age",
published by the Mundaneum in 2014, where the drawers of the Mundaneum
(disambiguation: Otlet's Utopia) morph into the servers of one of Google's
data centres.
This approach is not limited to images: a recurring discourse that shapes some of the
exhibitions taking place in the Mundaneum maintains that the dream of the Belgian utopian
has been kept alive in the development of internetworked communications, and currently
finds its spiritual successor in the products and services of Google. Even though there are
many connections and similarities between the two endeavours, one has to acknowledge that
Otlet was an internationalist, a socialist, a utopian, that his projects were not profit oriented,
and most importantly, that he was living in the temporal and cultural context of modernism at
the beginning of the 20th century. The constructed identities and continuities that detach
Otlet and the Mundaneum from a specific historical frame ignore the different scientific,
social and political milieus involved. This means that these narratives exclude the discordant or
disturbing elements that are inevitable when considering such a complex figure in his entirety.
This is not surprising, seeing the parties that are involved in the discourse: these types of
instrumental identities and differences suit the rhetorical tone of Silicon Valley. Newly
launched IT products, for example, are often described as groundbreaking, innovative and
different from anything seen before. In other situations, the same products could be advertised
as exactly the same as something else that already exists[1]. While novelty and difference
surprise and amaze, sameness reassures and comforts. For example, Google Glass was
marketed as revolutionary and innovative, but when it was attacked for its blatant privacy
issues, some defended it as just a camera and a phone joined together. The sameness-difference
duo fulfils a clear function: on the one hand, it suggests that technological
advancements might alter the way we live dramatically, and we should be ready to give up
our old-fashioned ideas about life and culture for the sake of innovation. On the other hand, it
proposes we should not be worried about change, and that society has always evolved
through disruptions, undoubtedly for the better. For each questionable groundbreaking new
invention, there is a previous one with the same ideal, potentially with just as many critics...
Great minds think alike, after all. This sort of a-historical attitude pervades techno-capitalist
milieus, creating a cartoonesque view of the past, punctuated by great men and great
inventions, a sort of technological variant of Carlyle's Great Man Theory. In this view, the
Internet becomes the invention of a few father/genius figures, rather than the result of a long
and complex interaction of diverging efforts and interests of academics, entrepreneurs and
national governments. This instrumental reading of the past is largely consistent with the
theoretical ground on which the Californian Ideology[2] is based, in which the conception of
history is pervaded by various strains of technological determinism (from Marshall McLuhan
to Alvin Toffler[3]) and capitalist individualism (in generic neoliberal terms, up to the fervent
objectivism of Ayn Rand).
The appropriation of Paul Otlet's figure as Google's grandfather is a historical simplification,
and yet the samenesses in this tale are not without foundation. Many concepts and ideals of
documentation theories have reappeared in cybernetics and information theory, and are
therefore present in the narrative of many IT corporations, as in Mountain View's case. With
the intention of restoring a historical complexity, it might be more interesting to play
exactly the same game ourselves, rather than try to dispel the advertised continuum of the
"Google on paper". Choosing to focus on other types of analogies in the story, we can perhaps
contribute a narrative that is more respectful of the complexity of the past, and more telling
about the problems of the present.
What follows are three such comparisons, which focus on three aspects of continuity
between the documentation theories and archival experiments Otlet was involved in, and the
cybernetic theories and practices that Google's capitalist enterprise is an exponent of. The
first takes a look at the conditions of workers in information infrastructures, who are
fundamental for these systems to work but often forgotten or displaced. Next comes an account of
the elements of distribution and control that appear both in the idea of a Reseau Mundaneum
and in the contemporary functioning of data centres, and the resulting interaction with other
types of infrastructures. Finally, there is a brief analysis of the two approaches to the
'organization of the world's knowledge', which examines their regimes of truth and the issues that
come with them. Hopefully these three short pieces can provide some additional ingredients
for adulterating the sterile recipe of the Google-Otlet sameness.
A. DO ANDROIDS DREAM OF MECHANICAL TURKS?

In a drawing titled Laboratorium Mundaneum, Paul
Otlet depicted his project as a massive factory, processing
books and other documents into end products, rolled out
by a UDC locomotive. In fact, just like a factory,
Mundaneum was dependent on the bureaucratic and
logistic modes of organization of labour developed for
industrial production. Looking at it and at other written
and drawn sketches, one might ask: who made up the
workforce of these factories?
In his Traité de Documentation, Otlet describes
extensively the thinking machines and tasks of intellectual
work into which the Fordist chain of documentation is
broken down. In the subsection dedicated to the people
who would undertake the work though, the only role
described at length is the Bibliothécaire. In a long chapter
that explains what education the librarian should follow, which characteristics are required,
and so on, he briefly mentions the existence of “Bibliothécaire-adjoints, rédacteurs, copistes,
gens de service”[4]. There seems to be no further description nor depiction of the staff that
would write, distribute and search the millions of index cards in order to keep the archive
running, an impossible task for the Bibliothécaire alone.
A photograph from around 1930, taken in the Palais
Mondial, where we see Paul Otlet together with the rest
of the équipe, gives us a better answer. In this beautiful
group picture, we notice that the workforce that kept the
archival machine running was made up of women, but we
do not know much about them. As in telephone switching
systems or early software development[5], gender
stereotypes and discrimination led to the appointment of
female workers for repetitive tasks that required specific
knowledge and precision. According to the ideal image described in the Traité, all the tasks of
collecting, translating and distributing should be completely automatic, seemingly without the
necessity of human intervention. However, the Mundaneum hired dozens of women to perform
these tasks. This human-run version of the system was not considered worth mentioning, as if it
were a temporary in-between phase that should be overcome as soon as possible, something
that was staining the project with its vulgarity.
Notwithstanding the incredible advancement of information technologies and the automation
of innumerable tasks in collecting, processing and distributing information, we can observe
the same pattern today. All the automatic repetitive tasks that technology should be able to do for
us still rely, one way or another, on human labour. And unlike the industrial worker
who obtained recognition through political movements and struggles, the role of many
cognitive workers is still hidden or under-represented. Computational linguistics, neural
networks, optical character recognition: all these amazing machinic operations are still based on
humans performing huge amounts of repetitive intellectual tasks from which software can
learn, or which software can't do with the same efficiency. Automation didn't really free us
from labour; it just shifted the where, when and who of labour[6]. Mechanical Turks, content
verifiers, annotators of all kinds... The software we use requires a multitude of tasks which
are invisible to us, but are still accomplished by humans. Who are they? When possible,
work is outsourced to foreign English-speaking countries with lower wages, like India. In the
western world it follows the usual pattern: female, lower income, ethnic minorities.
An interesting case of heteromated labour is the so-called ScanOps[7], a set of Google workers who have a
different type of badge and are isolated in a section of the
Mountain View complex secluded from the rest of the
workers through strict access permissions and fixed time
schedules. Their work consists of scanning the pages of
printed books for the Google Books database, a task that
is still more convenient to do by hand (especially in the
case of rare or fragile books). The workers are mostly
women and ethnic minorities, and there is no mention of
them on the Google Books website or elsewhere; in fact
the whole scanning process is kept secret. Even though
the secrecy that surrounds this type of labour can be
justified by the need to protect trade secrets, it again
conceals the human element in machine work. This is
even more obvious when compared to other types of
human workers in the project, such as designers and
programmers, who are celebrated for their creativity and
ingenuity.
However, here and there, evidence of the workforce shows up in the result of their labour.
Photos of Google Books employees' hands sometimes mistakenly end up in the digital
version of the book online[8].
Whether the tendency to hide the human presence is due to the unfulfilled wish for total
automation, to avoid the bad publicity of low wages and precarious work, or to keep an aura
of mystery around machines, remains unclear, both in the case of Google Books and the


Palais Mondial. Still, it is reassuring to know that the products hold traces of the work, and
that even with the progressive removal of human signs in automated processes, the workers'
presence never disappears completely. This presence is proof of the materiality of information
production, and becomes a sign of it: imperfections in a webpage or in the OCR-scanned pages
of a book reflect a negligence of the processes and labour of writing, editing, design, layout,
typesetting, and eventually publishing, collecting and cataloging[9].

In 2013, while Prime Minister Di Rupo was celebrating the beginning of the second phase
of constructing the Saint Ghislain data centre, a few hundred kilometres away a very similar
situation started to unfold. In the municipality of Eemsmond, in the Dutch province of
Groningen, the local Groningen Sea Ports and the NOM development agency were rumoured to have
plans with yet another code-named company, Saturn, to build a data centre in the small port of
Eemshaven.
A few months later, when it was revealed that Google
was behind Saturn, Harm Post, director of Groningen
Sea Ports, commented: "Ten years ago Eemshaven
became the laughing stock of ports and industrial
development in the Netherlands, a planning failure of the
previous century. And now Google is building a very
large data centre here, which is 'pure advertisement' for
Eemshaven and the data port."[10] Further details on tax
cuts were not disclosed and once finished, the data centre will provide at most 150 jobs in
the region.
Yet another territory fortunate enough to be chosen by Google, just like Mons; but what are the selection
criteria? For one thing, data centres need to interact with existing infrastructures and flows of
various types. Technically speaking, there are three prerequisites: being near a substantial
source of electrical power (the finished installation will consume twice as much as the whole
city of Groningen); being near a source of clean water, for the massive cooling demands;
being near Internet infrastructure that can assure adequate connectivity. There is also a
whole set of non-technical elements that we can sum up as the social, economic and
political climate, which proved favourable both in Mons and Eemshaven.
The push behind constructing new sites in new locations, rather than expanding existing ones, is
partly due to the rapid growth of the importance of Software as a Service, so-called cloud
computing, which is the rental of computational power from a central provider. With the rise
of the SaaS paradigm the geographical and topological placement of data centres becomes of
strategic importance to achieve lower latencies and more stable service. For this reason,

Google has in the last 10 years been pursuing a policy of end-to-end connection between its
facilities and user interfaces. This includes buying leftover fibre networks[11], entering the
business of undersea cables[12] and building new data centres, including the ones in
Mons and Eemshaven.
The spread of data centres around the world, along the main network cables across
continents, represents a new phase in the diagram of the Internet. This should not be
confused with the idea of decentralization that was a cornerstone value in the early stages of
interconnected networks.[13] During the rapid development of the Internet and the Web, the
new tenets of immediacy, unlimited storage and exponential growth led to the centralization
of content in increasingly large server farms. Paradoxically, it is now the growing
centralization of all kinds of operations in specific buildings that is fostering their distribution.
The tension between centralization and distribution and the dependence on neighbouring
infrastructures such as the electrical grid is not an exclusive feature of contemporary data storage
and networking models. Again, similarities emerge from the history of the Mundaneum,
illustrating how these issues relate closely to the logistic organization of production first
implemented during the industrial revolution, and theorized within modernism.
Centralization was seen by Otlet as the most efficient way to organize content, especially in
view of international exchange[14], which already caused problems related to space back then:
the Mundaneum archive counted 16 million entries at its peak, occupying around 150
rooms. The cumbersome footprint, and the growing difficulty of finding stable locations for it,
contributed to the conviction that the project should be included in the plans of new modernist
cities. In the beginning of the 1930s, when the Mundaneum started to lose the support of the
Belgian government, Otlet thought of a new site for it as part of a proposed Cité Mondiale,
which he tried to realize in different locations with different approaches.
Among various attempts, he participated in the competition for the development of the Left
Bank in Antwerp. The most famous modernist urbanists of the time were invited to plan the
development from scratch. At the time, the left bank was completely vacant. Otlet lobbied for
the insertion of a Mundaneum in the plans, stressing how it would create hundreds of jobs for
the region. He also flattered the Flemish pride by insisting on how people from Antwerp
were more hard working than the ones from Brussels, and how they would finally obtain their
deserved recognition, when their city would be elevated to World City status.[15] He partly
succeeded in his propaganda; aside from his own proposal, developed in collaboration with
Le Corbusier, many other participants included Otlet's Mundaneum as a key facility in their
plans. In these proposals, Otlet's archival infrastructure was shown in interaction with the
existing city flows such as industrial docks, factories, the
railway and the newly constructed stock market.[16] The
modernist utopia of a planned living environment implied
that methods similar to those employed for managing the
flows of coal and electricity could be used for the
organization of culture and knowledge.

From "From Paper Mill to Google Data Center": In a sense, data centers are similar to the
capitalist factory system; but ...

The Traité de Documentation, published in 1934, includes an extended reflection on a
Universal Network of Documentation, which would coordinate the transfer of knowledge
between different documentation centres such as libraries or the Mundaneum[17]. In fact the
existing Mundaneum would simply be the first node of a wide network bound to expand to
the rest of the world, the Reseau Mundaneum. The nodes of this network are explicitly
described in relation to "post, railways and the press, those three essential organs of modern
life which function unremittingly in order to unite men, cities and nations."[18] In the same
period, in letter exchanges with Patrick Geddes and Otto Neurath, commenting on the
potential of heliographies as a way to distribute knowledge, the three imagine the White Link
, a network to distribute copies throughout a series of Mundaneum nodes[19]. As a result, the
same piece of information would be serially produced and logistically distributed, described
as a sort of moving Mundaneum idea, facilitated by the railway system[20]. No wonder that
future Mundaneums were foreseen to be built next to a train station.
In Otlet's plans for a Reseau Mundaneum we can already detect some of the key
transformations that reappear in today's data centre scenario. First of all, a drive for
centralization, with the accumulation of materials that led to the monumental plans of World
Cities. In parallel, the push for international exchange, resulting in a vision of a distribution
network. Thirdly, the placement of the hypothetical network nodes along strategic intersections
of industrial and logistic infrastructure.
While the plan for Antwerp was in the end rejected in favour of more traditional housing
development, 80 years later the legacy of the relation between existing infrastructural flows
and logistics of documentation storage is highlighted by the data ports plan in Eemshaven.
Since private companies are the privileged actors in these types of projects, the circulation of
information increasingly responds to the same tenets that regulate the trade of coal or
electricity. The very different welcome that traditional politics reserves for Google data centres
is a symptom of a new dimension of power in which information infrastructure plays a vital
role. The celebrations and tax cuts that politicians lavish on these projects cannot be
explained by 150 jobs or economic incentives for a depressed region alone. They also
indicate how party politics is increasingly confined to the periphery of other forms of power,
and therefore struggles to secure itself a strategic position.
C. 025.45UDC; 161.225.22; 004.659GOO:004.021PAG.

The Universal Decimal Classification[21] system, developed by Paul Otlet and Henri
La Fontaine on the basis of the Dewey Decimal Classification system, is still considered one of
their most important realizations, as well as a cornerstone of Otlet's overall vision. Its
adoption, revision and use until today demonstrate a thoughtful and successful approach to
the classification of knowledge.

The UDC differs from Dewey and other bibliographic systems as it has the potential to
exceed the function of ordering alone. The complex notation system could classify phrases
and thoughts in the same way as it would classify a book, going well beyond the sole function
of classification, becoming a real language. One could in fact express whole sentences and
statements in UDC format[22]. The fundamental idea behind it[23] was that books and
documentation could be broken down into their constitutive sentences and boiled down to a
set of universal concepts, regulated by the decimal system. This would make it possible to express
objective truths in a numerical language, fostering international exchange beyond translation
and making the work of science easier by regulating knowledge with numbers. We have to understand
the idea within the time it was originally conceived, a time shaped by positivism and the belief in
the unhindered potential of science to obtain objective universal knowledge. Today,
especially when we take into account the arbitrariness of the decimal structure, it sounds
doubtful, if not preposterous.
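
The decimal mechanics can be made concrete with a small sketch: every UDC number carries its entire classification path in its own digits, so the hierarchy can be unfolded by reading the digits prefix by prefix. The Python snippet below is only an illustration of the notation, using the class labels quoted in note 21; it is not part of any official UDC tooling.

```python
# Unfold the hierarchy encoded in a UDC number, digit by digit.
# The labels are those quoted in note 21; any other class is unlisted here.
LABELS = {
    "1": "Philosophy",
    "16": "Logic",
    "161": "Fundamentals of Logic",
    "161.2": "Statements",
    "161.22": "Type of Statements",
    "161.225": "Real and ideal judgements",
    "161.225.2": "Ideal Judgements",
    "161.225.22": "Statements on equality, similarity and dissimilarity",
}

def ancestors(udc):
    """Yield every ancestor class of a UDC number, from broadest to narrowest."""
    digits = udc.replace(".", "")
    for i in range(1, len(digits) + 1):
        prefix = digits[:i]
        # UDC notation groups digits in threes, separated by dots.
        dotted = ".".join(prefix[j:j + 3] for j in range(0, len(prefix), 3))
        yield dotted, LABELS.get(dotted, "(unlisted)")

for code, label in ancestors("161.225.22"):
    print(f"{code:12} {label}")
```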
However, the linguistic-numeric element of UDC, which makes it possible to express fundamental
meanings through numbers, plays a key role in the oeuvre of Paul Otlet. In his work we learn
that numerical knowledge would be the first step towards a science of combining basic
sentences to produce new meaning in a systematic way. When we look at Monde, Otlet's
second publication from 1935, the continuous reference to multiple algebraic formulas that
describe how the world is composed suggests that we could at one point “solve” these
equations and modify the world accordingly.[24] Complementary to the Traité de
Documentation, which described the systematic classification of knowledge, Monde set the
basis for the transformation of this knowledge into new meaning.
Otlet wasn't the first to envision an algebra of thought. It has been a recurring topos in
modern philosophy, under the influence of scientific positivism and in concurrence with the
development of mathematics and physics. Even though one could trace it back to Ramon
Llull and even earlier forms of combinatorics, the first to consistently undertake this scientific
and philosophical challenge was Gottfried Leibniz. The German philosopher and
mathematician, a precursor of the field of symbolic logic, which developed later in the 20th
century, researched a method that reduced statements to minimum terms of meaning. He
investigated a language which “... will be the greatest instrument of reason,” for “when there
are disputes among persons, we can simply say: Let us calculate, without further ado, and
see who is right”.[25] His inquiry was divided into two phases. The first one, analytic, the
characteristica universalis, was a universal conceptual language to express meanings, of which
we only know that it worked with prime numbers. The second one, synthetic, the calculus
ratiocinator, was the algebra that would allow operations between meanings, of which there is
even less evidence. The idea of calculus was clearly related to the infinitesimal calculus, a
fundamental development that Leibniz conceived in the field of mathematics, and which
Newton concurrently developed and popularized. Even though not much remains of
Leibniz's work on his algebra of thought, it was continued by mathematicians and logicians in
the 20th century. Most famously, and curiously enough around the same time that Otlet
published Traité and Monde, logician Kurt Gödel used the same idea of a translation into
prime numbers to demonstrate his incompleteness theorem.[26] The fact that the characteristica
universalis only made sense in the fields of logic and mathematics is due to the fundamental
problem presented by a mathematical approach to truth beyond logical truth. While this
problem was not yet evident at the time, it would emerge in the duality of language and
categorization, as it did later with Otlet's UDC.
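
The prime-number device that Leibniz hinted at and Gödel made rigorous fits in a few lines. The sketch below illustrates only the encoding, not the incompleteness proof built on top of it: a sequence of symbol codes is packed into a single integer as 2^c1 * 3^c2 * 5^c3 * ... and recovered again by factoring.

```python
# Gödel-style encoding: pack a sequence of positive integers into one number
# via prime factorization, then recover it. Illustration only.

def primes():
    """Yield 2, 3, 5, 7, ... by trial division (fine for short sequences)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_encode(seq):
    """Encode [c1, c2, ...] as 2**c1 * 3**c2 * 5**c3 * ..."""
    g = 1
    for p, c in zip(primes(), seq):
        g *= p ** c
    return g

def godel_decode(g):
    """Decode by dividing out each prime in turn (codes must be >= 1)."""
    seq = []
    for p in primes():
        if g == 1:
            return seq
        exponent = 0
        while g % p == 0:
            g //= p
            exponent += 1
        seq.append(exponent)

message = [3, 1, 4]               # e.g. the codes of three symbols
number = godel_encode(message)    # 2**3 * 3**1 * 5**4 = 15000
assert godel_decode(number) == message
```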
The relation between organizational and linguistic aspects of knowledge is also one of the
open issues at the core of web search, which is, at first sight, less interested in objective
truths. At the beginning of the Web, around the mid '90s, two main approaches to online
search for information emerged: the web directory and web crawling. Some of the first search
engines, like Lycos or Yahoo!, started with a combination of the two. The web directory
consisted of the human classification of websites into categories, done by an “editor”; crawling
consisted in the automatic accumulation of material by following links, with different rudimentary
techniques to assess the content of a website. With the exponential growth of web content on
the Internet, web directories were soon dropped in favour of the more efficient automatic
crawling, which in turn generated so many results that quality became of key importance:
quality in the sense of assessing webpage content in relation to keywords, as well as
sorting results according to their relevance.
Google's hegemony in the field has mainly been obtained by translating the relevance of a
webpage into a numeric quantity according to a formula, the infamous PageRank algorithm.
This value is calculated depending on the relational importance of the webpage where the
word is placed, based on how many other websites link to that page. The classification part is
long gone, and linguistic meaning is also structured along automated functions. What is left is
reading the network formation in numerical form, capturing human opinions represented by
hyperlinks, i.e. which word links to which webpage, and which webpage is generally more
important. In the same way that UDC systematized documents via a notation format, the
systematization of relational importance in numerical format brings functionality and
efficiency. In this case the translation is not linguistic but value-based, quantifying network
attention independently of meaning. The interaction with the other infamous Google
algorithm, AdSense, adds an economic value to the PageRank position. The influence and
profit deriving from how high a search result is placed mean that the relevance of a
word-website relation in Google search results translates to an actual relevance in reality.
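
As an illustration of the mechanism described above, here is a toy version of the PageRank iteration. The four-page link graph is invented; the damping factor of 0.85 is the value reported in Brin and Page's original paper; and actual Google search layers many more signals on top of this (see note 27).

```python
# Toy PageRank: each page repeatedly redistributes its score along its
# outgoing links. The graph is hypothetical; every page must have at least
# one outgoing link for this simple version to conserve the total score.
DAMPING = 0.85

LINKS = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(links, iterations=50):
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - DAMPING) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = DAMPING * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

print(pagerank(LINKS))  # "C" comes out as the most 'relevant' page:
                        # three of the four pages link to it.
```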
Even though both Otlet and Google say they are tackling the task of organizing knowledge,
we could posit that from an epistemological point of view the approaches that underlie their
respective projects are opposite. UDC is an example of an analytic approach, which
acquires new knowledge by breaking down existing knowledge into its components, based on
objective truths. Its propositions could be exemplified with the sentences “Logic is a
subdivision of Philosophy” or “PageRank is an algorithm, part of the Google search engine”.
PageRank, on the contrary, is a purely synthetic one, which starts from the form of the
network, in principle devoid of intrinsic meaning or truth, and creates a model of the
network's relational truths. Its propositions could be exemplified with “Wikipedia is of the
utmost relevance” or “The University of District Columbia is the most relevant meaning of
the word 'UDC'”.
We (and Google) can read the model of reality created by the PageRank algorithm (and all
the other algorithms that were added over the years[27]) in two different ways. It can be
considered a device that 'just works' and does not pretend to be true but can give results
which are useful in reality, a view we can call pragmatic, or instead, we can see this model as
a growing and improving construction that aims to coincide with reality, a view we can call
utopian. It's no coincidence that these two views fit the two stereotypical faces of Google, the
idealistic Silicon Valley visionary one, and the cynical corporate capitalist one.
From our perspective, it is of relative importance which of the two sides we believe in. The
key issue remains that such a structure has become so influential that it produces its own
effects on reality, that its algorithmic truths are more and more considered as objective truths.
While the utility and importance of a search engine like Google are beyond question, it is
necessary to stay alert about such concentrations of power, especially when they are
controlled by a single corporation which, beyond mottoes and utopias, has by definition the
sole duty to make profits and obey its shareholders.
1. A good account of this phenomenon is given by David Golumbia. http://www.uncomputing.org/?p=221
2. As described in the classic text looking at the ideological ground of Silicon Valley culture. http://www.hrc.wmin.ac.uk/theorycalifornianideology-main.html
3. For an account of Toffler's determinism, see http://www.ukm.my/ijit/IJIT%20Vol%201%202012/7wan%20fariza.pdf .
4. Otlet, Paul. Traité de documentation: le livre sur le livre, théorie et pratique. Editiones Mundaneum, 1934: 393-394.
http://gender.stanford.edu/news/2011/researcher-reveals-how-%E2%80%9Ccomputer-geeks%E2%80%9D-replaced-%E2%80%9Ccomputergirls%E2%80%9D
6. This process has been named “heteromation”, for a more thorough analysis see: Ekbia, Hamid, and Bonnie Nardi.
“Heteromation and Its (dis)contents: The Invisible Division of Labor between Humans and Machines.” First Monday 19, no.
6 (May 23, 2014). http://firstmonday.org/ojs/index.php/fm/article/view/5331
7. The name ScanOps was first introduced by artist Andrew Norman Wilson when he found out about this category of workers
during his artistic residency at Google in Mountain View. See http://www.andrewnormanwilson.com/WorkersGoogleplex.html.
8. As collected by Krissy Wilson on her blog http://theartofgooglebooks.tumblr.com.
9. http://informationobservatory.info/2015/10/27/google-books-fair-use-or-anti-democratic-preemption/#more-279
10. http://www.rtvnoord.nl/nieuws/139016/Keerpunt-in-de-geschiedenis-van-de-Eemshaven .
11. http://www.cnet.com/news/google-wants-dark-fiber/ .
12. http://spectrum.ieee.org/tech-talk/telecom/internet/google-new-brazil-us-internet-cable .
13. See Baran, Paul. “On Distributed Communications.” Product Page, 1964. http://www.rand.org/pubs/research_memoranda/
RM3420.html .
14. Pierce, Thomas. Mettre des pierres autour des idées. Paul Otlet, de Cité Mondiale en de modernistische stedenbouw in de jaren
1930. PhD dissertation, KULeuven, 2007: 34.
15. Ibid: 94-95.
16. Ibid: 113-117.
17. Otlet, Paul. Traité de documentation: le livre sur le livre, théorie et pratique. Editiones Mundaneum, 1934.
18. Otlet, Paul. Les Communications MUNDANEUM, Documentatio Universalis, doc nr. 8438
19. Van Acker, Wouter. “Internationalist Utopias of Visual Education: The Graphic and Scenographic Transformation of the
Universal Encyclopaedia in the Work of Paul Otlet, Patrick Geddes, and Otto Neurath.” Perspectives on Science 19, no. 1
(January 19, 2011): 68-69.
20. Ibid: 66.


21. The Decimal part in the name means that any records can be further subdivided by tenths, virtually infinitely, according to an
evolving scheme of depth and specialization. For example, 1 is “Philosophy”, 16 is “Logic”, 161 is “Fundamentals of Logic”,
161.2 is “Statements”, 161.22 is “Type of Statements”, 161.225 is “Real and ideal judgements”, 161.225.2 is “Ideal
Judgements” and 161.225.22 is “Statements on equality, similarity and dissimilarity”.
22. “The UDC and FID: A Historical Perspective.” The Library Quarterly 37, no. 3 (July 1, 1967): 268-270.
23. Described in French by the word dépouillement.
24. Otlet, Paul. Monde, essai d’universalisme: connaissance du monde, sentiment du monde, action organisée et plan du monde.
Editiones Mundaneum, 1935: XXI-XXII.
25. Leibniz, Gottfried Wilhelm, The Art of Discovery 1685, Wiener: 51.
26. https://en.wikipedia.org/wiki/G%C3%B6del_numbering
27. A fascinating list of all the algorithmic components of Google search is at https://moz.com/google-algorithm-change .

Madame C / Mevrouw C
FEMKE SNELTING

MADAME C.01
EN

When I arrived in Brussels that autumn, I was still very young. I thought that as an au-pair
I would be helping out in the house, but instead I ended up working with the professor on
finishing his book. At the time I arrived, the writing was done but his handwriting was so
hard to decipher that the printer had a difficult time working with the manuscript. It became
my job to correct the typeset proofs but often there were words that neither the printer nor I
could decipher, so we had to ask. And the professor often had no time for us. So I did my
best to make the text as comprehensible as possible.
On the title page of the final proofs from the printer, the professor wrote me:

After five months of work behind the same table, here it is. Now it is your turn to sow the
good seed of documentation, of institution, and of Mundaneum, through the pre-book and the
spoken word[1]
NL

Toen ik die herfst in Brussel arriveerde was ik nog heel jong. Ik dacht dat ik als au-pair in
de huishouding zou helpen, maar in plaats daarvan moest ik de professor helpen met het
afmaken van zijn boek. Toen ik aankwam was het schrijven al afgerond, maar de drukker
worstelde nog met het manuscript omdat het handschrift moeilijk te ontcijferen was. Het
werd mijn taak om de drukproeven te corrigeren. Er waren veel woorden die de drukker en
ik niet konden ontcijferen, dus dan moesten we het navragen. Maar vaak had de professor
geen tijd voor ons. Ik deed dan mijn best om de tekst zo leesbaar mogelijk te maken.
Op de titelpagina van de definitieve drukproef schreef de professor me:


Na vijf maanden gewerkt te hebben aan dezelfde tafel is hier het resultaat. Nu is het jouw
beurt om via het boek, het voor-boek, het woord, het goede zaad te zaaien van documentatie,
instituut en Mundaneum.[2]
MADAME C.02

She serves us coffee from a ceramic coffee pot and also a cake bought at the bakery next door. It's all written in the files, she reminds us repeatedly, and tells us about one day in the sixties when her husband returned home, telling her excitedly that he had discovered the Mundaneum at the Chaussée de Louvain in Brussels. Ever since, he would return to the same building, making friends with the friends of the Palais Mondial, those dedicated caretakers of the immense paper heritage.
I haven't been there so often myself, she says. But I do remember there were cats, to keep the mice away from the paper. And my husband loved cats. So in the eighties, when he was finally in a position to save the archives, the cats had to be taken care of too. He wanted the cats to be written into the inventory.
We finish our coffee and she takes us behind a curtain that separates the salon from a small office. She shows us four green binders that contain the meticulously filed papers of her late husband pertaining to the Mundaneum. In the third is the donation act that describes the transfer of the archives from the Friends of the Palais Mondial to the Centre de Lecture Public of the French Community (CLPCF).
In the inventory, the cats are nowhere to be found.[2]
MADAME C.03

In a margarine box, between thousands of notes, tickets, postcards, letters, all folded to the
size of an index card, we find this:

Paul, leave me the key to mythe house, I forgot mine. Put it on your desk, in the small index card box.[3]


Last Revision: 2·08·2016

1. Wilhelmina Coops came from The Netherlands to Brussels in 1932 to learn French. She was instrumental in transforming Le Traité de Documentation into a printed book.
2. The act is dated April 4, 1985. Madame Canonne is a librarian, widow of André Canonne († 1990). She is custodian of the documents relating to the wanderings of the Mundaneum in Brussels.
3. Cato van Nederhasselt, second wife of Paul Otlet, collaborated with her husband on many projects. Her family fortune kept the Mundaneum running after other sources had dried up.

A Preemptive History of the Google Cultural Institute
GERALDINE JUÁREZ

I. ORGANIZING INFORMATION IS NEVER INNOCENT

Six years ago, Google, an Alphabet company, launched a new project: The Google Art
Project. The official history, the one written by Google and distributed mainly through
tailored press releases and corporate news bits, tells us that it all started as “a 20% project
within Google in 2010 and had its first public showing in 2011. It was 17 museums,
coming together in a very interesting online platform, to allow users to essentially explore art
in a very new and different way."[1] While Google Books faced legal challenges and the
European Commission launched its antitrust case against Google in 2010, the Google Art
Project, not coincidentally, scaled up gradually, resulting in the Google Cultural Institute with
headquarters in Paris, “whose mission is to make the world's culture accessible online.”[2]
The Google Cultural Institute is strictly divided into Art Project, Historical Moments and World Wonders, roughly corresponding to fine art, world history and material culture. Technically, the Google Cultural Institute can be described as a database that powers a repository of high-resolution images of fine art, objects, documents and ephemera, as well as information about and from its ‘partners’ – the public museums, galleries and cultural institutions that provide this cultural material – including 3D tour views and Street View maps. So far, and counting, the Google Cultural Institute hosts 177 digital reproductions of selected paintings in gigapixel resolution and 320 3D versions of different objects, together with multiple thematic slide shows curated in collaboration with its partners or by its users.


According to its website, in its ‘Lab’ it develops the “new technology to help partners publish their collections online and reach new audiences, as seen in the Google Art Project, Historic Moments and World Wonders initiatives.” These services are offered – not by chance – as a philanthropic service to public institutions that increasingly need to justify their existence in the face of cuts and other managerial demands of austerity policies in Europe and elsewhere.
The Google Cultural Institute “would be unlikely, even unthinkable, absent the chronic and politically induced starvation of publicly funded cultural institutions even throughout the wealthy countries”[3]. It is important to understand that what Google is really doing is bankrolling the technical infrastructure and labour needed to turn culture into data. In this way culture can be easily managed and can feed all kinds of products needed in the neoliberal city to promote and exploit these cultural ‘assets’, in order to compete with other urban centres on the global stage, but also to feed Google’s unstoppable accumulation of information.
The head of the Google Cultural Institute knows there are a lot of questions about its activities, but Alphabet has chosen to label legitimate critiques as misunderstandings: “This is our biggest battle, this constant misunderstanding of why the Cultural Institute actually exists.”[4]
The Google Cultural Institute, much like many other cultural endeavours of Google like
Google Books and their Digital Revolution art exhibition, has been subject to a few but
much needed critiques, such as Powered by Google: Widening Access and Tightening
Corporate Control (Schiller & Yeo 2014), an in-depth account of the origins of this cultural
intervention and its role in the resurgence of welfare capitalism, “where people are referred to
corporations rather than states for such services as they receive; where corporate capital
routinely arrogates to itself the right to broker public discourse; and where history and art
remain saturated with the preferences and priorities of elite social classes.”[5]
Known as one of the first essays, if not the first, to dissect Google's use of information and the rhetoric of democratization behind it to reorganize cultural public institutions as a “site of profit-making”, Schiller & Yeo’s text is fundamental to understanding the evolution of the Google Cultural Institute within the historical context of digital capitalism, where the global dependency on communication and information technologies is directly linked to the current crisis of accumulation and where Google's archive fever “evinces a breath-taking cultural and ideological range.”[6]
II. WHO COLONIZES THE COLONIZERS?

The Google Cultural Institute is a complex subject of interest since it reflects the colonial
impulses embedded in the scientific and economic desires that formed the very collections
which the Google Cultural Institute now mediates and accumulates in its database.

Who colonizes the colonizers? It is a very difficult question, which I have raised before in an essay dedicated to the Google Cultural Institute, Alfred Russel Wallace and the colonial impulse behind the archive fevers of the 19th as well as the 21st century. I have no answer yet. But a critique of the Google Cultural Institute in which its motivations are interpreted as merely colonialist would be misleading and counterproductive. Its goal is not to enslave and exploit whole populations and their resources in order to impose a new ideology and civilise ‘barbarians’ in the same sense and way that European countries did during colonization. Moreover, such a reading would be unfair and disrespectful to all those who still have to deal with the endless effects of colonization, which have been exacerbated by the expansion of economic globalisation.
The conflation of technology and science that has produced the knowledge behind an entity such as Google and its derivatives, such as the Cultural Institute, together with the scale of its impact on a society where information technology is the dominant form of technology, makes technocolonialism, from my perspective, a more accurate term for Google's cultural interventions.
Although technocolonization shares many traits and elements with the colonial project, starting with the exploitation of the materials needed to produce information and media technologies – and the related conflicts this produces – information technologies still differ from ships and cannons. The commercial function of maritime technologies is, however, the same as that of the free – as in free trade – services deployed by Google, or of Facebook’s drones beaming internet to Africa, even though the networked character of information technologies is significantly different at the infrastructure level.
There is no official definition of technocolonialism, but it is important to understand it as a continuation of the Enlightenment idea that gave birth to the impulse to collect, organise and manage information in the 19th century. My use of this term aims to emphasize and situate the contemporary accumulation and management of information and data within a technoscientific landscape driven by “profit above else” as a “logical extension of the surplus value accumulated through colonialism and slavery.”[7]
Unlike in colonial times, in contemporary technocolonialism the important narrative is not the supremacy of a specific human culture: technological culture is the saviour. It doesn’t matter whether the culture is Muslim, French or Mayan; the goal is to have the best technologies to turn it into data, rank it, produce content from it and create experiences that can be monetized.
It only makes sense that Google, a company with a mission to organise the world’s information for profit, found ideal partners in the very institutions that were previously in charge of organising the world’s knowledge. But as I pointed out before, it is paradoxical that the Google Cultural Institute is dedicated to collecting information from museums that were created under colonialism in order to elevate a certain culture and way of seeing the world above others.
Today we know, and are able to challenge, the dominant narratives around cultural heritage, because these institutions have an actual record in history and not only a story produced for the ‘about’ section of a website, as in the case of the Google Cultural Institute.
“What museums should perhaps do is make visitors aware that this is not the only way of seeing things. That the museum – the installation, the arrangement, the collection – has a history, and that it also has an ideological baggage”[8]. But the Google Cultural Institute is not a museum; it is a database with an interface that enables users to browse cultural content. Unlike the prestigious museums it collaborates with, it lacks a history situated in a specific cultural discourse. It is about fine art, world wonders and historical moments in a general sense. The Google Cultural Institute has a clear corporate and philanthropic mission, but it lacks a point of view and a defined position towards the cultural material it handles. This is not surprising, since Google has always avoided taking a stand; it is all techno-determinism and the noble mission of organising the world’s information to make the world better. But “brokering and hoarding information are a dangerous form of techno-colonialism.”[8]
Looking for a cultural narrative beyond the Californian ideology, Alphabet's search engine found in Paul Otlet and the Mundaneum the perfect cover to insert its philanthropic services into a history of information science beyond Silicon Valley. After all, they understand that “ownership over the historical narratives and their material correlates becomes a tool for demonstrating and realizing economic claims”.[9]
After establishing a data centre in the Belgian city of Mons, home of the Mundaneum
archive center, Google lent its support to "the Mons 2015 adventure, in particular by
working with our longtime partners, the Mundaneum archive. More than a century ago, two
visionary Belgians envisioned the World Wide Web’s architecture of hyperlinks and
indexation of information, not on computers, but on paper cards. Their creation was called
the Mundaneum.”[10]

On the occasion of the 147th birthday of Paul Otlet, a Doodle on Google's homepage spelled the name of the company using the ‘drawers of the Mundaneum’ to form the word G O O G L E: “Today’s Doodle pays tribute to Paul’s pioneering work on the Mundaneum. The collection of knowledge stored in the Mundaneum’s drawers are the foundational work for everything that happens at Google. In early drafts, you can watch the concept come to life.”[11]
III. GOOGLE CULTURAL HISTORY

The dematerialisation of public collections using infrastructure and services bankrolled by private actors like the GCI needs to be questioned and analyzed further in the context of heterotopic institutions, in order to understand the new forms taken by the endless tension between knowledge and power at the core of contemporary archivalism, where the architecture of the interface replaces and acts on behalf of the museum, and the body of the visitor is reduced to the fingers of a user browsing endless cultural assets.
At a time when cultural institutions should be decolonised instead of googlified, it is vital to discuss a project such as the Google Cultural Institute and its continuous expansion – which is inversely proportional to the failure of governments and the passivity of institutions seduced by gadgets[12].
However, the dialogue is fragmented between limited academic accounts, corporate press releases, isolated artistic interventions, specialised conferences and news reports. Femke Snelting suggests that we must “find the patience to build a relation to these histories in ways that make sense.” To do so, we need to excavate and assemble a better account of the history of the Google Cultural Institute. Building upon Schiller & Yeo’s seminal text, the following timeline is my contribution to this task and an attempt to put the pieces together by situating them in a broader economic and political context, beyond the official history told by the Google Cultural Institute. A closer inspection of the events reveals that escalations of Alphabet's cultural interventions often follow the initiation of a legal challenge to its economic hegemony in Europe.
2009
ERIC SCHMIDT VISITS IRAQ

A news report from the Wall Street Journal[13] as well as an AP report on YouTube[14] confirm
the new Google venture in the field of historical collections. The executive chairman of
Alphabet declared: “I can think of no better use of our time and our resources to make the
images and ideas from your civilization, from the very beginning of time, available to a billion
people worldwide.”
A detailed account and reflection of this visit, its background and agenda can be found in
Powered by Google: Widening Access and Tightening Corporate Control. (Schiller & Yeo
2014)
FRANCE REACTS AGAINST GOOGLE BOOKS

In relation to the Google Books dispute in Europe, Reuters reported in 2009 that France's then-president Nicolas Sarkozy “pledged hundreds of millions of euros toward a separate digitization program, saying he would not permit France to be ‘stripped of our heritage to the benefit of a big company, no matter how friendly, big or American it is.’”[15]

Although the reactionary and nationalistic agenda of Sarkozy should not be celebrated, it is
important to note that the first open attack on Google’s cultural agenda came from the French
government. Four years later, the Google Cultural Institute establishes its headquarters in
Paris.
2010
EUROPEAN COMMISSION LAUNCHES AN ANTITRUST INVESTIGATION AGAINST
GOOGLE.

The European Commission has decided to open an antitrust investigation into allegations that Google Inc. has abused a dominant position in online search, in violation of European Union rules (Article 102 TFEU). The opening of formal proceedings follows complaints by search service providers about unfavourable treatment of their services in Google's unpaid and sponsored search results coupled with an alleged preferential placement of Google's own services. This initiation of proceedings does not imply that the Commission has proof of any infringements. It only signifies that the Commission will conduct an in-depth investigation of the case as a matter of priority.[16]
THE GOOGLE ART PROJECT STARTS AS A 20% PROJECT UNDER THE DIRECTION
OF AMIT SOOD.

According to the Guardian[17] and other news reports, Google's cultural project is started by passionate art “googlers”.
GOOGLE ANNOUNCES ITS PLANS TO BUILD A EUROPEAN CULTURAL INSTITUTE IN
FRANCE

Referring to France as one of the most important centres for culture and technology, Google
CEO Eric Schmidt formally announces the creation of a centre "dedicated to technology,
especially noting the promotion of past, present and future European cultures."[18]
2011
GOOGLE ART PROJECT LAUNCHES AT TATE BRITAIN, LONDON.

In February the new ‘product’ is officially presented. The introduction[19] emphasises that it started as a 20% project, meaning a project without a corporate mandate.
According to the “Our Story”[20] section of the Google Cultural Institute, the history of the
Google Art Project starts with the integration of 140,000 assets from the Yad Vashem
World Holocaust Centre, followed by the inclusion of the Nelson Mandela Archives in the
Historical Moments section of the Google Cultural Institute.


Later in August, Eric Schmidt declares that education should bring art and science together
just like in “the glory days of the Victorian Era”.[21]
2012
EU DATA AUTHORITIES INITIATE A NEW INVESTIGATION INTO GOOGLE AND
THEIR NEW TERMS OF USE.

At the request of the French authorities, the European Union initiates an investigation against
Google, related to the breach of data privacy due to the new terms of use published by
Google on 1 March 2012.[22]
THE GOOGLE CULTURAL INSTITUTE CONTINUES TO DIGITIZE CULTURAL ‘ASSETS’.

According to the Google Cultural Institute website, 151 partners join the Google Art Project, including France's Musée d'Orsay. The World Wonders section is launched, including partnerships with the likes of UNESCO. By October, the platform is rebranded and relaunched with over 400 partners.
2013
GOOGLE CULTURAL INSTITUTE HEADQUARTERS OPENS IN PARIS.

On 10 December, the new French headquarters open at 8 rue de Londres. The French Culture Minister Aurélie Filippetti cancels her attendance as she doesn’t “wish to appear as a guarantee for an operation that still raises a certain number of questions.”[23]
BRITISH TAX AUTHORITIES INITIATE INVESTIGATION INTO GOOGLE'S TAX
SCHEME

A House of Commons Public Accounts Committee inquiry brands Google's tax operations in the UK via Ireland as “devious, calculated and, in my view, unethical”.[24]
2014
EUROPEAN COURT OF JUSTICE RULES ON THE “RIGHT TO BE FORGOTTEN”
AGAINST GOOGLE.

The controversial ruling holds search engines responsible for the personal data they handle. Under European law, the court ruled “that the operator is, in certain circumstances, obliged to remove links to web pages that are published by third parties and contain information relating to a person from the list of results displayed following a search made on the basis of that person’s name. The Court makes it clear that such an obligation may also exist in a case where that name or information is not erased beforehand or simultaneously from those web pages, and even, as the case may be, when its publication in itself on those pages is lawful.”[25]
DIGITAL REVOLUTION AT BARBICAN UK

Google sponsors the exhibition Digital Revolution[26] and commissions artworks under the brand “Dev-art: art made with code”[27]. The exhibition later tours to the Tekniska Museet in Stockholm.[28]
GOOGLE CULTURAL INSTITUTE'S “THE LAB” OPENS

“Here creative experts and technology come together to share ideas and build new ways to experience art and culture.”[29]
GOOGLE EXPRESSES ITS PLANS TO SUPPORT THE CITY OF MONS, EUROPEAN CAPITAL OF CULTURE IN 2015.

A press release from Google[30] describes the new partnership with the Belgian city of Mons as a result of its position as a local employer and investor in the city, since one of its two major data centres in Europe is located there.
2015
EU COMMISSION SENDS STATEMENT OF OBJECTIONS TO GOOGLE.

The European Commission has sent a Statement of Objections to Google alleging the company has abused its dominant position in the markets for general internet search services in the European Economic Area (EEA) by systematically favouring its own comparison shopping product in its general search results pages.[31]

Google rejects the accusations as “wrong as a matter of fact, law and economics”.[32]
EUROPEAN COMMISSION STARTS INVESTIGATION INTO ANDROID.

The Commission will assess if, by entering into anticompetitive agreements and/or by abusing a possible dominant position, Google has illegally hindered the development and market access of rival mobile operating systems, mobile communication applications and services in the European Economic Area (EEA). This investigation is distinct and separate from the Commission investigation into Google's search business.[33]
GOOGLE CULTURAL INSTITUTE CONTINUES TO EXPAND.

According to the ‘Our Story’ section of the Google Cultural Institute, the Street Art project now has 10,000 assets. A new extension displays art from the Google Art Project in the Chrome browser, and “art lovers can wear art on their wrists via Android art”. By August, the project has more than 850 partners using its tools, 4.7 million assets in its collection and more than 1,500 curated exhibitions.
TRANSPARENCY INTERNATIONAL REVEALS GOOGLE AS THE SECOND BIGGEST CORPORATE LOBBYIST OPERATING IN BRUSSELS.[34]

ALPHABET INC. IS ESTABLISHED ON OCTOBER 2ND.

“Alphabet Inc. (commonly known as Alphabet) is an American multinational conglomerate
created in 2015 as the parent company of Google and several other companies previously
owned by or tied to Google.”[35]
PAUL OTLET DOODLE AND MUNDANEUM-GOOGLE EXHIBITIONS.

Google creates a doodle for its homepage on the occasion of the 147th birthday of Paul Otlet[36] and produces the slide shows Towards the Information Age, Mapping Knowledge and The 100th Anniversary of a Nobel Peace Prize, all hosted by the Google Cultural Institute.
“The Mundaneum and Google have worked closely together to curate 9 exclusive online exhibitions for the Google Cultural Institute. The team behind the reopening of the Mundaneum this year also worked with the Cultural Institute engineers to launch a dedicated mobile app.”[37]
GOOGLE CULTURAL INSTITUTE PARTNERS WITH THE BRITISH MUSEUM.

The British Museum announces a “unique partnership” through which over 4,500 assets can be “seen online in just a few clicks”. In the official press release, the director of the museum, Neil MacGregor, said: “The world today has changed, the way we access information has been revolutionised by digital technology. This enables us to give the Enlightenment ideal on which the Museum was founded a new reality. It is now possible to make our collection accessible, explorable and enjoyable not just for those who physically visit, but to everybody with a computer or a mobile device.”[38]
GOOGLE CULTURAL INSTITUTE ADDS A PERFORMING ARTS SECTION.

Over 60 performing arts organizations and performers (dance, drama, music, opera) join the assets collection of the Google Cultural Institute.[39]
2016
CODA

The Google Cultural Institute has quietly changed the name of its platform to “Google Arts & Culture”. The website has also been restructured, and categories have been simplified into “Arts”, “History” and “Wonders”. Its partners and projects are placed at the top of the “Menu”. It is now possible to browse artists and mediums through time and by color. The site offers a daily digest of art and history, and cityscapes, galleries and street art views are only one click away.

An important aspect of this make-over is the way in which it reveals its own instability as a cultural archive. Before the upgrade, the link http://www.google.com/culturalinstitute/assetviewer/text-as-set-cell-0?exhibitId=QQ-RRh0A would take you to “The origins of the Internet in Europe”, the page dedicated to the Mundaneum and Paul Otlet. Now it takes you to a 404 error page. No timestamp, no redirect. No archived copy recorded in the Wayback Machine. The structure of the new link for this “exhibition” still hints at some sort of beta state: https://www.google.com/culturalinstitute/beta/exhibit/QQ-RRh0A . How long can we rely on this culturalinstitute/beta link?
Should the “curator of the world”[40], as Amit Sood is described in the media, take some responsibility for the reliability of the structure in which Google Arts & Culture displays the cultural material extracted from public institutions, which, unlike Google, are mandated to preserve it? Or should we all just take his word and look away: “I fell into this by mistake.”[41]?

Last Revision: 2·08·2016

1. Caines, Matthew. “Arts head: Amit Sood, director, Google Cultural Institute”. The Guardian. Dec 3, 2013. http://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
2. Google Paris. Accessed Dec 22, 2016 http://www.google.se/about/careers/locations/paris/

3. Schiller, Dan & Yeo, Shinjoung. “Powered By Google: Widening Access And Tightening Corporate Control.” (In Aceti, D.
L. (Ed.). Red Art: New Utopias in Data Capitalism: Leonardo Electronic Almanac, Vol. 20, No. 1. London: Goldsmiths
University Press. 2014):48
4. Dowd, Maureen. “The Google Art Heist”. The New York Times. Sept 12, 2015. http://www.nytimes.com/2015/09/13/opinion/sunday/the-google-art-heist.html
5. Schiller, Dan & Shinjoung Yeo. “Powered By Google: Widening Access And Tightening Corporate Control.”, 48
6. Schiller, Dan & Yeo, Shinjoung. “Powered By Google: Widening Access And Tightening Corporate Control.”, 48
7. Davis, Heather & Turpin, Etienne, eds. Art in the Anthropocene (London: Open Humanities Press. 2015), 7
8. Bush, Randy. Psg.com On techno-colonialism. (blog) June 13, 2015. Accessed Dec 22, 2015 https://psg.com/ontechnocolonialism.html
9. Starzmann, Maria Theresia. “Cultural Imperialism and Heritage Politics in the Event of Armed Conflict: Prospects for an
‘Activist Archaeology’”. Archeologies. Vol. 4 No. 3 (2008):376
10. Echikson, William. Partnering in Belgium to create a capital of culture (blog) March 20, 2014. Accessed Dec 22, 2015
http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
11. Google. Mundaneum co-founder Paul Otlet's 147th Birthday (blog) August 23, 2015. Accessed Dec 22, 2015 http://
www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
12. e.g. https://www.google.com/culturalinstitute/thelab/#experiments
13. Lavallee, Andrew. “Google CEO: A New Iraq Means Business Opportunities.” Wall Street Journal. Nov 24, 2009 http://
blogs.wsj.com/digits/2009/11/24/google-ceo-a-new-iraq-means-business-opportunities/
14. Associated Press. Google Documents Iraqi Museum Treasures (on-line video November 24, 2009) https://
www.youtube.com/watch?v=vqtgtdBvA9k
15. Jarry, Emmanuel. “France's Sarkozy takes on Google in books dispute.” Reuters. December 8, 2009. http://
www.reuters.com/article/us-france-google-sarkozy-idUSTRE5B73E320091208
16. European Commission. Antitrust: Commission probes allegations of antitrust violations by Google (Brussels 2010) http://
europa.eu/rapid/press-release_IP-10-1624_en.htm
17. Caines, Matthew. “Arts head: Amit Sood, director, Google Cultural Institute”. The Guardian. December 3, 2013. http://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
18. Farivar, Cyrus. “Google to build R&D facility and 'European cultural center' in France.” Deutsche Welle. September 9, 2010. http://www.dw.com/en/google-to-build-rd-facility-and-european-cultural-center-in-france/a-5993560
19. Google Art Project. Art Project V1 - Launch Event at Tate Britain. (on-line video February 1, 2011) https://
www.youtube.com/watch?v=NsynsSWVvnM
20. Google Cultural Institute. Accessed Dec 18, 2015. https://www.google.com/culturalinstitute/about/partners/
21. Robinson, James. “Eric Schmidt, chairman of Google, condemns British education system” The Guardian. August 26, 2011
http://www.theguardian.com/technology/2011/aug/26/eric-schmidt-chairman-google-education
22. European Commission. Letter addressed to Google by the Article 29 Group (Brussels 2012) http://ec.europa.eu/justice/
data-protection/article-29/documentation/other-document/files/2012/20121016_letter_to_google_en.pdf
23. Willsher, Kim. “Google Cultural Institute's Paris opening snubbed by French minister.” The Guardian. December 10, 2013
http://www.theguardian.com/world/2013/dec/10/google-cultural-institute-france-minister-snub
24. Bowers, Simon & Syal, Rajeev. “MP on Google tax avoidance scheme: 'I think that you do evil'”. The Guardian. May 16, 2013. http://www.theguardian.com/technology/2013/may/16/google-told-by-mp-you-do-do-evil
25. Court of Justice of the European Union. Press-release No 70/14 (Luxembourg, 2014) http://curia.europa.eu/jcms/upload/
docs/application/pdf/2014-05/cp140070en.pdf
26. Barbican. “Digital Revolution.” Accessed December 15, 2015 https://www.barbican.org.uk/bie/upcoming-digital-revolution
27. Google. “Dev Art”. Accessed December 15, 2015 https://devart.withgoogle.com/
28. Tekniska Museet. “Digital Revolution.” Accessed December 15, 2015 http://www.tekniskamuseet.se/1/5554.html
29. Google Cultural Institute. Accessed December 15, 2015. https://www.google.com/culturalinstitute/thelab/
30. Echikson,William. Partnering in Belgium to create a capital of culture (blog) March 20, 2014. Accessed Dec 22, 2015
http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
31. European Commission. Antitrust: Commission sends Statement of Objections to Google on comparison shopping service;
opens separate formal investigation on Android. (Brussels 2015) http://europa.eu/rapid/press-release_IP-15-4780_en.htm
32. Yun Chee, Foo. “Google rejects 'unfounded' EU antitrust charges of market abuse” Reuters. (August 27, 2015) http://
www.reuters.com/article/us-google-eu-antitrust-idUSKCN0QW20F20150827
33. European Commission. Antitrust: Commission sends Statement of Objections to Google on comparison shopping service; opens separate formal investigation on Android. (Brussels 2015) http://europa.eu/rapid/press-release_IP-15-4780_en.htm
34. Transparency International. Lobby meetings with EU policy-makers dominated by corporate interests (blog) June 24, 2015.
Accessed December 22, 2015. http://www.transparency.org/news/pressrelease/
lobby_meetings_with_eu_policy_makers_dominated_by_corporate_interests
35. Wikipedia, The Free Encyclopedia. s.v. “Alphabet Inc.” Accessed Jan 25, 2016. https://en.wikipedia.org/wiki/Alphabet_Inc
36. Google. Mundaneum co-founder Paul Otlet's 147th Birthday (blog) August 23, 2015. Accessed Dec 22, 2015 http://
www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
37. Google. Mundaneum co-founder Paul Otlet's 147th Birthday
38. The British Museum. The British Museum’s unparalleled world collection at your fingertips. (blog) November 12, 2015.
Accessed December 22, 2015. https://www.britishmuseum.org/about_us/news_and_press/press_releases/2015/
with_google.aspx
39. Sood, Amit. Step on stage with the Google Cultural Institute (blog) December 1, 2015. Accessed December 22, 2015.
https://googleblog.blogspot.se/2015/12/step-on-stage-with-google-cultural.html
40. Sundberg, Sam. “Världsarvet enligt Google”. Svenska Dagbladet. March 27, 2016. http://www.svd.se/varldsarvet-enligt-google
41. TED. Amit Sood: Every piece of art you've ever wanted to see up close and searchable. (on-line video February 2016)
https://www.ted.com/talks/amit_sood_every_piece_of_art_you_ve_ever_wanted_to_see_up_close_and_searchable/

Une
histoire
préventive
du
Google
Cultural
Institute
GERALDINE JUÁREZ

I. L'ORGANISATION DE L'INFORMATION N'EST JAMAIS
INNOCENTE

Il y a six ans, Google, une entreprise d'Alphabet a lancé un nouveau projet : le Google Art
Project. L'histoire officielle, celle écrite par Google et distribuée principalement à travers des
communiqués de presse sur mesure et de brèves informations commerciales, nous dit que
tout a commencé « en 2010, avec un projet ou Google intervenait à 20%, qui fut présenté
au public pour la première fois en 2011. Il s'agissait de 17 musées réunis dans une
plateforme en ligne très intéressante afin de permettre aux utilisateurs de découvrir l'art d'une
manière tout à fait nouvelle et différente. »[1] Tandis que Google Books faisait face à des
problèmes d'ordre légal et que la Commission européenne lançait son enquête antitrust
contre Google en 2010, le Google Art Project prenait, non pas par hasard, de l'ampleur.
Cela conduisit à la création du Google Art Institute dont le siège se trouve à Paris et « dont
la mission est de rendre la culture mondiale accessible en ligne ».[2]
Le Google Cultural Institute est clairement divisé en sections : Art Project, Historical
Moments et World Wonders. Cela correspond dans les grandes lignes à beaux-arts, histoire
du monde et matériel culturel. Techniquement, le Google Cultural Institute peut être décrit
comme une base de données qui alimente un dépositaire d'images haute résolution
représentant des objets d'art, des objets, des documents, des éphémères ainsi que
d'informations à propos, et provenant, de leurs « partenaires » - les musées publics, les

P.196

P.197

galeries et les institutions culturelles qui offrent ce matériel culturel -des visites en 3D et des
cartes faites à partir de "street view". Pour le moment, le Google Cultural Institute compte
177 reproductions numériques d'une sélection de peintures dans une résolution de l'ordre
des giga pixels et 320 différents objets en 3D ainsi que de multiples diapositives thématiques
choisies en collaboration avec leurs partenaires ou par leurs utilisateurs.
Selon leur site, dans leur « Lab », ils développent une « nouvelle technologie afin d'aider
leurs partenaires à publier leurs collections en ligne et à toucher de nouveaux publics, comme
l'ont fait les initiatives du Google Art Project, Historic Moments et Words Wonders. » Ce
n'est pas un hasard que ces services soient proposés comme une oeuvre philanthropique aux
institutions publiques qui sont de plus en plus amenées à justifier leur existence face aux
réductions budgétaires et aux autres exigences en matière de gestion des politiques
d'austérité en Europe et ailleurs. « Il est peu probable et même impensable que [le Google
Cultural Institute] fasse disparaitre la famine chronique des institutions culturelles de service
public causée par la politique et présente même dans les pays riches »[3]. Il est important de
comprendre que Google est réellement en train de financer l'infrastructure technique et le
travail nécessaire à la transformation de la culture en données. De cette manière, Google
s'assure que la culture peut être facilement gérée et nourrir toute sortes de produits
nécessaires à la ville néolibérale, afin de promouvoir et d'exploiter ces « biens » culturels, et
de soutenir la compétition avec d'autres centres urbains au niveau mondial, mais également
l'instatiable apétit d'informations de Google.
Le dirigeant du Google Cultural Institute est conscient qu'il existe un grand nombre
d'interrogations autour de leurs activités, cependant, Alphabet a choisi d'appeler les critiques
légitimes: des malentendus ; « Notre plus grand combat est ce malentendu permanent sur les
raisons de l'existence du Cultural Institute »[4] Le Google Cultural Institute, comme beaucoup
d'autres efforts culturels de Google, tels que Google Books et leur exposition artistique
Digital Revolution, a été le sujet de quelques critiques bien nécessaires, comme Powered by
Google: Widening Access and Tightening Corporate Control (Schiller & Yeo 2014); un
compte rendu détaillé des origines de cette intervention culturelle et de son rôle dans la
résurgence du capitalisme social: « là où les gens sont renvoyés aux corporations plutôt
qu'aux États pour des services qu'ils reçoivent ; là où le capital des entreprises a l'habitude
de se donner le droit de négocier le discours public ; et où l'histoire et l'art restent saturés par
les préférences et les priorités des classes de l'élite sociale. »[5]
Connu comme l'un, peut-être le seul essai d'analyse de l'utilisation des informations par
Google et de la rhétorique de démocratisation se trouvant en amont pour réorganiser les
institutions publiques culturelles en un « espace de profit », le texte de Schiller & Yeo est
fondamental pour la compréhension de l'évolution du Google Cultural Institute dans le
contexte historique du capitalisme numérique, où la dépendance mondiale aux technologies
de l'information est directement liée à la crise actuelle d'accumulation et, où la fièvre
d'archivage de Google « évince sa portée culturelle et idéologique à couper le souffle ».[6]

II. QUI COLONISE LES COLONS ?

Le Google Cultural Institute est un sujet de débat intéressant puisqu'il reflète les pulsions
colonialistes ancrées dans les désirs scientifiques et économiques qui ont formé ces mêmes
collections que le Google Cultural Institute négocie et accumule dans sa base de données.
Qui colonise les colons ? C'est une problématique très difficile que j'ai soulevée
précédemment dans un essai dédié au Google Cultural Institute, Alfred Russel Wallace et
les pulsions colonialistes derrière les fièvres d'archivage du 19e et du 20e siècles. Je n'ai pas
encore de réponse. Pourtant, une critique du Google Cultural Institute dans laquelle ses
motivations sont interprétées comme simplement colonialistes serait trompeuse et contreproductive. Leur but n'est pas d'asservir et d'exploiter la population tout entière et ses
ressources afin d'imposer une nouvelle idéologie et de civiliser les barbares dans la même
optique que celle des pays européens durant la colonisation. De plus, cela serait injuste et
irrespectueux vis-à-vis de tous ceux qui subissent encore les effets permanents de la
colonisation, exacerbés par l'expansion de la mondialisation économique.
Selon moi, l'assemblage de la technologie et de la science qui a produit le savoir à l'origine
de la création d'entités telles que Google et de ses dérivés, comme le Cultural Institute; ainsi
que la portée de son impact sur une société où la technologie de l'information est la forme de
technologie dominante, font de "technocolonialisme" un terme plus précis pour décrire les
interventions culturelles de Google. Même si la technocolonilisation partage de nombreux
traits et éléments avec le projet colonial, comme l'exploitation des matériaux nécessaires à la
production d'informations et de technologies médiatiques - ainsi que les conflits qui en
découlent - les technologies de l'information sont tout de même différentes des navires et des
canons. Cependant, la fonction commerciale des technologies maritimes est identique aux
services libres - comme dans libre échange - déployés par les drones de Google ou Facebook
qui fournissent internet à l'Afrique, même si la mise en réseau des technologies de
l'information est largement différent en matière d'infrastructure.
Il n'existe pas de définition officielle du technocolonialisme, mais il est important de le
comprendre comme une continuité des idées des Lumières qui a été à l'origine du désir de
rassembler, d'organiser et de gérer les informations au 19e siècle. Mon utilisation de ce
terme a pour objectif de souligner et de situer l'accumulation contemporaine, ainsi que la
gestion de l'information et des données au sein d'un paysage scientifique dirigé par l'idée « du
profit avant tout » comme une « extension logique de la valeur du surplus accumulée à
travers le colonialisme et l'esclavage ».[7]
Contrairement à l'époque coloniale, dans le technocolonialisme contemporain, la narration
n'est pas la suprématie d'une culture humaine spécifique. La culture technologique est le
sauveur. Peu importe que vous soyez musulman, français ou maya, l'objectif est d'obtenir les
meilleures technologies pour transformer la vie en données, les classifier, produire un contenu
à partir de celles-ci et créer des expériences pouvant être monétisées.

P.198

P.199

En toute logique, pour Google, une entreprise dont la mission est d'organiser les informations
du monde en vue de générer un profit, les institutions qui étaient auparavant chargées de
l'organisation de la connaissance du monde constituent des partenaires idéaux. Cependant,
comme indiqué plus tôt, l'engagement du Google Cultural Institute à rassembler les
informations des musées créés durant la période coloniale afin d'élever une certaine culture et
une manière supérieure de voir le monde est paradoxal. Aujourd'hui, nous sommes au
courant et nous sommes capables de défier les narrations dominantes autour du patrimoine
culturel, car ces institutions ont un véritable récit de l'histoire qui ne se limite pas à la
production de la section « à propos » d'un site internet, comme celui du Google Cultural
Institute. « Ce que les musées devraient peut-être faire, c'est amener les visiteurs à prendre
conscience que ce n'est pas la seule manière de voir les choses. Que le musée, à savoir
l'installation, la disposition et la collection, possède une histoire et qu'il dispose également
d'un bagage idéologique »[8]. Cependant, le Google Cultural Institute n'est pas un musée,
c'est une base de données disposant d'une interface qui permet de parcourir le contenu
culturel. Contrairement aux prestigieux musées avec lesquels il collabore, il manque d'une
histoire située dans un discours culturel spécifique. Il s'agit d'objets d'art, de merveilles du
monde et de moments historiques au sens large. La mission du Google Cultural Institute est
clairement commerciale et philanthropique, mais celui-ci manque d'un point de vue et d'une
position définie vis-à-vis du matériel culturel qu'il traite. Ce n'est pas surprenant puisque
Google a toujours évité de prendre position, tout est question de technodéterminisme et de la
noble mission d'organiser les informations du monde afin de le rendre meilleur. Cependant,
« la négociation et le rassemblement d'informations sont une forme dangereuse de
technocolonialisme ».[8]
En cherchant une narration culturelle dépassant l'idéologie californienne, le moteur de
recherche d'Alphabet a trouvé dans Paul Otlet et le Mundaneum la couverture parfaite pour
intégrer ses services philanthropiques dans l'histoire de la science de l'information, au-delà de
la Silicon Valley. Après tout, ils comprennent que « la possession des narrations historiques
et de leurs corrélats matériels devient un outil de manifestation et de réalisation des
revendications économiques ».[9]
Après avoir établi un centre de données dans la ville belge de Mons, ville du Mundaneum,
Google a offert son soutien à « l'aventure Mons 2015, en particulier en travaillant avec nos
partenaires de longue date, les archives du Mundaneum. Plus d'un siècle auparavant, deux
visionnaires belges ont imaginé l'architecture du World Wide Web d'hyperliens et

d'indexation de l'information, non pas sur des ordinateurs, mais sur des cartes de papier.
Leur création était appelée Mundaneum. »[10]

À l'occasion du 147e anniversaire de Paul Otlet, un Doodle sur la page d'Alphabet épelait
le nom de son entreprise en utilisant « les tiroirs du Mundaneum » pour former le mot G O
O G L E : « Aujourd'hui, Doodle rend hommage au travail pionnier de Paul sur le
Mundaneum. La collection de connaissances emmagasinées dans les tiroirs du Mundaneum
constituent un travail fondamental pour tout ce qui se fait chez Google. Dès les premiers
essais, vous pouvez voir ce concept prendre vie. »[11]
III. GOOGLE CULTURAL INSTITUTE

La dématérialisation des collections publiques à l'aide d'une infrastructure et de services
financés par des acteurs privés, tels que le GCI, doit être questionnée et analysée plus en
profondeur par des institutions hétérotopes pour comprendre les nouvelles formes prises par
une tension infinie entre connaissance/pouvoir au cœur d'un archivage contemporain, où
l'architecture de l'interface remplace et agit au nom du musée et où le visiteur est réduit aux
doigts d'un utilisateur capable de parcourir un nombre infini de biens culturels. À l'époque
où les institutions culturelles devraient être décolonisées plutôt que googlifiées, il est capital
d'aborder la question d'un projet tel que le Google Cultural Institute et son expansion
continue et inversement proportionnelle à l'échec des gouvernements et à la passivité des
institutions séduites par les gadgets[12].
Cependant, le dialogue est fragmenté entre les comptes rendus académiques, les
communiqués de presse, les interventions artistiques isolées, les conférences spécialisées et
les bulletins d'informations. Selon Femke Snelting, nous devons « trouver la patience de
construire une relation à ces théories de manière cohérente ». Pour ce faire, nous devons
approfondir et assembler un meilleur compte rendu de l'histoire du Google Cultural Institute.
Construite à partir du texte phare de Schiller & Yeo, la ligne du temps suivante est ma
contribution à cette tâche et à une tentative d'assembler des morceaux en les situant dans un
contexte politique et économique plus large allant au-delà de l'histoire officielle racontée par

P.200

P.201

le Google Cultural Institute. Une inspection plus minutieuse des événements révèle que
l'escalade des interventions culturelles d'Alphabet se produit généralement après l'apparition
d'un défi juridique pour l'hégémonie économique en Europe.
2009
ERIC SCHMIDT VISITE L'IRAK

Un bulletin d'informations du Wall Street Journal[13] ainsi qu'un rapport de l'AP Youtube[14]
confirment le nouveau projet de Google dans le domaine de collections historiques. Le
président exécutif d'Alphabet déclare : « je ne peux pas imaginer une meilleure manière
d'utiliser notre temps et nos ressources qu'en rendant disponibles les images et les idées de
notre civilisation, depuis son origine, pour un milliard de personnes à travers le monde. »
Un compte rendu détaillé de la réflexion de cette visite, son contexte et son programme se
trouvent dans Powered by Google: Widening Access and Tightening Corporate Control.
(Schiller & Yeo 2014)
LA FRANCE RÉAGIT À L'ENCONTRE DE GOOGLE BOOKS

Concernant le conflit impliquant Google Books en Europe, Reuters a déclaré qu'en 2009,
l'ancien président français, Nicolas Sarkozy « avait promis des centaines de millions d'euros à
un programme de numérisation distinct, disant qu'il ne permettrait pas à la France “d'être

dépouillée de son patrimoine au profit d'une grande entreprise, peu importe si celle-ci était
sympathique, grande ou américaine.” »[15]
Cependant, même si le programme réactionnaire et nationaliste de Nicolas Sarkozy ne doit
pas être félicité, il est important de noter que la première attaque ouverte à l'encontre du
programme culturel de Google est venue du gouvernement français. Quatre ans plus tard, le
Google Cultural Institute établissait son siège à Paris.
2010
LA COMMISSION EUROPÉENNE LANCE UNE ENQUÊTE ANTITRUST À L'ENCONTRE DE
GOOGLE.

La Commission européenne a décidé d'ouvrir une enquête antitrust à partir des
allégations selon lesquelles Google Inc. aurait abusé de sa position dominante de
moteur de recherche, en violation avec le règlement de l'Union européenne (Article
102 TFUE). L'ouverture de procédures formelles fait suite aux plaintes déposées par
des fournisseurs de service de recherche relatives à un traitement défavorable de leurs
services dans les résultats de recherche gratuits et payants de Google, ainsi qu'au
placement préférentiel des propres services de Google. Le lancement des procédures ne
signifie pas que la Commission dispose d'une quelconque preuve d'infraction. Cela
signifie seulement que la Commission va mener une enquête poussée et prioritaire sur
[16]
l'affaire.
LE GOOGLE ART PROJECT A COMMENCÉ COMME PROJET 20 % SOUS LA DIRECTION
D'AMIT SOOD.

D'après The Guardian[17], ainsi que d'autres bulletins d'informations, le projet culturel de
Google a été lancé par des « googleurs » passionnés d'art.
GOOGLE ANNONCE SON PROJET DE CONSTRUCTION D'UN EUROPEAN CULTURAL
CENTER EN FRANCE.

Faisant référence à la France comme à l'un des plus importants centres pour la culture et la
technologie, le PDG de Google, Eric Schmidt, a annoncé officiellement la création d'un
centre « dédié à la technologie, particulièrement en faveur de la promotion des cultures
européennes passées, présentes et futures ».[18]
2011
LE GOOGLE ART PROJECT EST LANCÉ À LA TATE LONDON.

En février, le nouveau « produit » a été officiellement présenté. La présentation[19] souligne
que l'idée a commencé avec un projet 20 %, un projet qui n'émanait donc pas d'un mandat
d'entreprise.

P.202

P.203

D'après la section « Our Story »[20] du Google Cultural Institute, l'histoire du Google Art
Project commence avec l'intégration de 140 000 pièces du Yad Vashem World Holocaust
Centre, suivie de l'intégration des archives de Nelson Mandela dans la section "Historical
Moments" du Google Cultural Institute.
Plus tard au mois d'août, Eric Schmidt déclara que l'éducation devrait rassembler l'art et la
science comme lors des « jours glorieux de l'époque victorienne ».[21]
2012
LES AUTORITÉS DES DONNÉES DE L'UE LANCENT UNE NOUVELLE ENQUÊTE SUR
GOOGLE ET SES NOUVEAUX TERMES D'UTILISATION.

À la demande des autorités françaises, l'Union européenne lance une enquête à l'encontre
de Google concernant une violation des données privées causée par les nouveaux termes
d'utilisation publiés par Google le 1er mars 2012.[22]
LE GOOGLE CULTURAL INSTITUTE CONTINUE À NUMÉRISER LES « BIENS »
CULTURELS.

D'après le site du Google Cultural Institute, 151 partenaires ont rejoint le Google Art
Project, y compris le Musée d'Orsay en France. La section World of Wonders est lancée
avec des partenariats comme celui de l'UNESCO. Au mois d'octobre, la plateforme avait
changé d'image et était relancée avec plus de 400 partenaires.
2013
LE SIÈGE DU GOOGLE CULTURAL INSTITUTE OUVRE À PARIS.

Le 10 décembre, le nouveau siège français ouvre au numéro 8 rue de Londres. La ministre
française, Aurélie Filippetti, annule sa participation à l'événement, car elle « ne souhaite pas
apparaitre comme une garantie à une opération qui soulève encore un certain nombre de
questions ».[23]
LES AUTORITÉS FISCALES BRITANNIQUES LANCENT UNE ENQUÊTE SUR LE PLAN
FISCAL DE GOOGLE.

L'enquêteur du HM Customs and Revenue Committee estime que les opérations fiscales de
Google au Royaume-Uni réalisées via l'Irlande sont « fourbes, calculées et, selon moi,
contraires à l'éthique ».[24]

2014
CONCERNANT LE « DROIT À L'OUBLI », LA COUR DE JUSTICE DE L'UE STATUE
CONTRE GOOGLE.

La décision controversée tient les moteurs de recherche responsables des données
personnelles qu'ils gèrent. Conformément à la loi européenne, la Cour a statué « que
l'opérateur est, dans certaines circonstances, obligé de retirer des liens vers des sites internet
publiés par un parti tiers et contenant des informations liées à une personne et apparaissant
dans la liste des résultats suite à une recherche basée sur le nom de cette personne. La Cour
établit clairement qu'une telle obligation peut également exister dans un cas où le nom, ou
l'information, n'est pas effacé préalablement de ces pages internet, et même, comme cela peut
être le cas, lorsque leur publication elle-même est légale. »[25]
RÉVOLUTION NUMÉRIQUE AU BARBICAN, ROYAUME-UNI

Google sponsorise l'exposition Digital Revolution[26] et les œuvres commissionnées sous le
nom « Dev-art: art made with code.[27] ». Le Tekniska Museet à Stockholm a ensuite
accueilli l'exposition.[28] « The Lab » du Google Cultural Institute ouvre « Ici, les experts
créatifs et la technologie se rassemblent pour partager des idées et construire de nouvelles
manières de profiter de l'art et de la culture. »[29]
GOOGLE FAIT CONNAITRE SON INTENTION DE SOUTENIR LA VILLE DE MONS,
CAPITALE EUROPÉENNE DE LA CULTURE EN 2015.

Un communiqué de presse de Google[30] décrit le nouveau partenariat avec la ville belge de
Mons comme le résultat de leur position d'employeur local et d'investisseur dans la ville où
se situe l'un de leurs deux principaux centres de données en Europe.
2015
LA COMMISSION DE L'UE ENVOIE UNE COMMUNICATION DES GRIEFS À GOOGLE.

La Commission européenne a envoyé une communication des griefs à Google, déclarant
que :
« l'entreprise avait abusé de sa position dominante sur les marchés des services
généraux de recherches internet dans l'espace économique européen en favorisant
systématiquement son propre produit de comparateur d'achats dans les pages de
[31]
résultats généraux de recherche. »

Google rejette les accusations, les jugeant « erronées d'un point de vue factuel, légal et
économique ».[32]

P.204

P.205

LA COMMISSION EUROPÉENNE COMMENCE À ENQUÊTER SUR ANDROID.

La Commission déterminera si, en concluant des accords anti-compétitifs et/ou en abusant
d'une possible position dominante, Google a :
illégalement entravé le développement et l'accès au marché de systèmes mobiles
d'exploitation, d'applications mobiles de communication et des services de ses rivaux
dans l'espace économique européen. Cette enquête est distincte et séparée du travail
[33]
d'investigation sur le commerce de la recherche de Google.
LE GOOGLE CULTURAL INSTITUTE POURSUIT SON EXPANSION.

D'après la section « Our Story » du Google Cultural Institute, le projet Street Art contient à
présent 10 000 pièces. Une nouvelle extension affiche les oeuvres d'art du Google Art
Project dans le navigateur Chrome et « les amateurs d'art peuvent porter une œuvre au
poignet grâce à l'art Android ». Au mois d'août, le projet disposait de 850 partenaires
utilisant ses outils, de 4,7 millions de pièces dans sa collection et de plus de 1 500
expositions organisées.
TRANSPARENCY INTERNATIONAL RÉVÈLE QUE GOOGLE EST LE DEUXIÈME PLUS
[34]
GRAND LOBBYISTE À BRUXELLES.

ALPHABET INC. EST CRÉÉ LE 2 OCTOBRE.

« Alphabet Inc. (connu sous le nom d'Alphabet) est un conglomérat multinational américain
créé en 2015 pour être la société mère de Google et de plusieurs entreprises appartenant
auparavant à Google ou y étant liées. »[35]
THE PAUL OTLET DOODLE AND THE MUNDANEUM-GOOGLE EXHIBITIONS.

Google creates a doodle for its homepage on the occasion of Paul Otlet’s 147th
birthday,[36] alongside the slideshows Towards the Information Age, Mapping
Knowledge and The 100th Anniversary of a Nobel Peace Prize, all organized by the
Google Cultural Institute.
“The Mundaneum and Google have worked closely together to curate nine exclusive
online exhibitions for the Google Cultural Institute. This year, the team behind
the scenes of the Mundaneum’s reopening worked with Cultural Institute
engineers to launch a dedicated mobile app.”[37]
THE GOOGLE CULTURAL INSTITUTE PARTNERS WITH THE BRITISH MUSEUM.

The British Museum announces a “unique partnership” through which more than 4,500 pieces
can be “viewed online in just a few clicks”. In the official press release, museum
director Neil MacGregor declared: “the world today has changed, the way we access
information has been revolutionised by digital technology. This gives a new reality
to the Enlightenment ideal on which the Museum was founded. It is now possible to
make our collection accessible, explorable and enjoyable not just for those who
physically visit, but for everybody with a computer or a mobile device.”[38]
THE GOOGLE CULTURAL INSTITUTE ADDS A PERFORMING ARTS SECTION.

More than 60 performing arts organizations and performers (dance, theatre, music, opera)
join the Google Cultural Institute collection.[39]
2016

...
Last Revision: 28·06·2016

1. Caines, Matthew. “Arts head: Amit Sood, director, Google Cultural Institute.” The Guardian, 3 December 2013. http://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
2. Google Paris. Accessed 22 December 2016. http://www.google.se/about/careers/locations/paris/


3. Schiller, Dan & Yeo, Shinjoung. “Powered By Google: Widening Access And Tightening Corporate Control.” In Aceti, D. L. (ed.), Red Art: New Utopias in Data Capitalism: Leonardo Electronic Almanac, Vol. 20, No. 1. London: Goldsmiths University Press, 2014: 48
4. Dowd, Maureen. “The Google Art Heist.” The New York Times, 12 September 2015. http://www.nytimes.com/2015/09/13/opinion/sunday/the-google-art-heist.html
5. Schiller, Dan & Yeo, Shinjoung. “Powered By Google: Widening Access And Tightening Corporate Control,” 48
6. Schiller, Dan & Yeo, Shinjoung. “Powered By Google: Widening Access And Tightening Corporate Control,” 48
7. Davis, Heather & Turpin, Etienne, eds. Art in the Anthropocene (London: Open Humanities Press, 2015), 7
8. Bush, Randy. “On techno-colonialism.” Psg.com (blog), 13 June 2015. Accessed 22 December 2015. https://psg.com/ontechnocolonialism.html
9. Starzmann, Maria Theresia. “Cultural Imperialism and Heritage Politics in the Event of Armed Conflict: Prospects for an ‘Activist Archaeology’.” Archaeologies, Vol. 4, No. 3 (2008): 376
10. Echikson, William. “Partnering in Belgium to create a capital of culture” (blog), 10 March 2014. Accessed 22 December 2015. http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
11. Google. “Mundaneum co-founder Paul Otlet’s 147th Birthday” (blog), 23 August 2015. Accessed 22 December 2015. http://www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
12. E.g. https://www.google.com/culturalinstitute/thelab/#experiments
13. Lavallee, Andrew. “Google CEO: A New Iraq Means Business Opportunities.” Wall Street Journal, 24 November 2009. http://blogs.wsj.com/digits/2009/11/24/google-ceo-a-new-iraq-means-business-opportunities/
14. Associated Press. “Google Documents Iraqi Museum Treasures” (online video, 24 November 2009). https://www.youtube.com/watch?v=vqtgtdBvA9k
15. Jarry, Emmanuel. “France’s Sarkozy takes on Google in books dispute.” Reuters, 8 December 2009. http://www.reuters.com/article/us-france-google-sarkozy-idUSTRE5B73E320091208
16. European Commission. “Antitrust: Commission probes allegations of antitrust violations by Google” (Brussels, 2010). http://europa.eu/rapid/press-release_IP-10-1624_en.htm
17. Caines, Matthew. “Arts head: Amit Sood, director, Google Cultural Institute.” The Guardian, 3 December 2013. http://www.theguardian.com/culture-professionals-network/culture-professionals-blog/2013/dec/03/amit-sood-google-cultural-institute-art-project
18. Farivar, Cyrus. “Google to build R&D facility and ‘European cultural center’ in France.” Deutsche Welle, 9 September 2010. http://www.dw.com/en/google-to-build-rd-facility-and-european-cultural-center-in-france/a-5993560
19. Google Art Project. “Art Project V1 - Launch Event at Tate Britain” (online video, 1 February 2011). https://www.youtube.com/watch?v=NsynsSWVvnM
20. Google Cultural Institute. Accessed 18 December 2015. https://www.google.com/culturalinstitute/about/partners/
21. Robinson, James. “Eric Schmidt, chairman of Google, condemns British education system.” The Guardian, 26 August 2011. http://www.theguardian.com/technology/2011/aug/26/eric-schmidt-chairman-google-education
22. European Commission. “Letter addressed to Google by the Article 29 Group” (Brussels, 2012). http://ec.europa.eu/justice/data-protection/article-29/documentation/other-document/files/2012/20121016_letter_to_google_en.pdf
23. Willsher, Kim. “Google Cultural Institute’s Paris opening snubbed by French minister.” The Guardian, 10 December 2013. http://www.theguardian.com/world/2013/dec/10/google-cultural-institute-france-minister-snub
24. Bowers, Simon & Syal, Rajeev. “MP on Google tax avoidance scheme: ‘I think that you do evil’.” The Guardian, 16 May 2013. http://www.theguardian.com/technology/2013/may/16/google-told-by-mp-you-do-do-evil
25. Court of Justice of the European Union. Press release No 70/14 (Luxembourg, 2014). http://curia.europa.eu/jcms/upload/docs/application/pdf/2014-05/cp140070en.pdf
26. Barbican. “Digital Revolution.” Accessed 15 December 2015. https://www.barbican.org.uk/bie/upcoming-digital-revolution
27. Google. “Dev Art.” Accessed 15 December 2015. https://devart.withgoogle.com/
28. Tekniska Museet. “Digital Revolution.” Accessed 15 December 2015. http://www.tekniskamuseet.se/1/5554.html
29. Google Cultural Institute. Accessed 15 December 2015. https://www.google.com/culturalinstitute/thelab/
30. Echikson, William. “Partnering in Belgium to create a capital of culture” (blog), 10 March 2014. Accessed 22 December 2015. http://googlepolicyeurope.blogspot.se/2014/03/partnering-in-belgium-to-create-capital.html
31. European Commission. “Antitrust: Commission sends Statement of Objections to Google on comparison shopping service; opens separate formal investigation on Android” (Brussels, 2015). http://europa.eu/rapid/press-release_IP-15-4780_en.htm
32. Yun Chee, Foo. “Google rejects ‘unfounded’ EU antitrust charges of market abuse.” Reuters, 27 August 2015. http://www.reuters.com/article/us-google-eu-antitrust-idUSKCN0QW20F20150827
33. European Commission. “Antitrust: Commission sends Statement of Objections to Google on comparison shopping service; opens separate formal investigation on Android”
34. Transparency International. “Lobby meetings with EU policy-makers dominated by corporate interests” (blog), 24 June 2015. Accessed 22 December 2015. http://www.transparency.org/news/pressrelease/lobby_meetings_with_eu_policy_makers_dominated_by_corporate_interests
35. Wikipedia, The Free Encyclopedia, s.v. “Alphabet Inc.” Accessed 25 January 2016. https://en.wikipedia.org/wiki/Alphabet_Inc.
36. Google. “Mundaneum co-founder Paul Otlet’s 147th Birthday” (blog), 23 August 2015. Accessed 22 December 2015. http://www.google.com/doodles/mundaneum-co-founder-paul-otlets-147th-birthday
37. Google. “Mundaneum co-founder Paul Otlet’s 147th Birthday”
38. The British Museum. “The British Museum’s unparalleled world collection at your fingertips” (blog), 12 November 2015. Accessed 22 December 2015. https://www.britishmuseum.org/about_us/news_and_press/press_releases/2015/with_google.aspx
39. Sood, Amit. “Step on stage with the Google Cultural Institute” (blog), 1 December 2015. Accessed 22 December 2015. https://googleblog.blogspot.se/2015/12/step-on-stage-with-google-cultural.html


Special:Disambiguation
The following is a list of all disambiguation pages on Mondotheque.
A page is treated as a disambiguation page if it contains the tag __DISAMBIG__ (or an
equivalent alias).
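For instance, a minimal disambiguation page in MediaWiki wikitext might look like the sketch below (modelled on entry 14 of this list; the __DISAMBIG__ magic word comes from MediaWiki's Disambiguator extension):

```
'''Mundaneum''' may refer to:
* [[Mundaneum (Utopia)]], a project designed by Paul Otlet and Henri Lafontaine
* [[Mundaneum (Archive Centre)]], a cultural institution in Mons

__DISAMBIG__
```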
1. Biblion may refer to:
◦ Biblion (category), a subcategory of the category: Index Traité de
documentation
◦ Biblion (Traité de documentation), term used by Paul
Otlet to define all categories of books and documents in a section of Traité de
documentation
◦ Biblion (unity), the smallest document or intellectual unit
2. Cultural Institute may refer to:
◦ A Cultural Institute (organisation), such as The
Mundaneum Archive Center in Mons
◦ Cultural Institute (project), a critical interrogation of
cultural institutions in neo-liberal times, developed by amongst others
Geraldine Juárez
◦ The Google Cultural Institute, a project offering
"Technologies that make the world’s culture accessible to anyone, anywhere."
3. L'EVANGELISTE may refer to:
◦ Vint Cerf, so-called 'internet evangelist', or 'father of the internet',
working at LA MÉGA-ENTREPRISE
◦ Jiddu Krishnamurti, priest at the 'Order of the Star', a theosophist
splinter group that Paul Otlet related to
◦ Sir Tim Berners Lee, 'open data evangelist', heading the World Wide
Web consortium (W3C)

4. L'UTOPISTE may refer to:
◦ Paul Otlet, documentalist, universalist, internationalist, indexalist. At
times considered as the 'father of information science', or 'visionary inventor of
the internet on paper'
◦ Le Corbusier, architect, universalist, internationalist. Worked with Paul
Otlet on plans for a City of knowledge
◦ Otto Neurath, philosopher of science, sociologist, political economist.
Hosted a branch of Mundaneum in The Hague
◦ Ted Nelson, technologist, philosopher, sociologist. Coined the terms
hypertext, hypermedia, transclusion, virtuality and intertwingularity
5. LA CAPITALE may refer to:
◦ Brussels, capital of Flanders and Europe
◦ Genève, world civic center
6. LA MANAGER may refer to:
◦ Delphine Jenart, assistant director at the Mundaneum Archive Center
in Mons.
◦ Bill Echikson, former public relations officer at Google, coordinating
communications for the European Union, and for all of Southern, Eastern
Europe, Middle East and Africa. Handled the company’s high profile
antitrust and other policy-related issues in Europe.
7. LA MÉGA-ENTREPRISE may refer to:
◦ Google inc, or Alphabet, sometimes referred to as "Crystal
Computing", "Project02", "Saturn" or "Green Box Computing"
◦ Carnegie Steel Company, supporter of the Mundaneum in Brussels
and the Peace Palace in The Hague
8. LA RÉGION may refer to:
◦ Wallonia (Belgium), or La Wallonie. Former mining area, home base of
former prime minister Elio di Rupo, location of two Google
data centers and the Mundaneum Archive Center
◦ Groningen (The Netherlands), future location of a Google data
center in Eemshaven
◦ Hamina (Finland), location of a Google data center

P.210

P.211

9. LE BIOGRAPHE is used for persons who are instrumental in constructing the
narrative of Paul Otlet. It may refer to:
◦ André Canonne, librarian and director of the Centre de Lecture
publique de la Communauté française (CLPCF). Discovers the
Mundaneum in the 1960s. Publishes a facsimile edition of the Traité de
documentation (1989) and prepares the opening of Espace Mundaneum in
Brussels at Place Rogier (1990)
◦ Warden Boyd Rayward, library scientist, discovers the Mundaneum
in the 1970s. Writes the first biography of Paul Otlet in English: The
Universe of Information: the Work of Paul Otlet for Documentation and
international Organization (1975)
◦ Benoît Peeters and François Schuiten , comics-writers and
scenographers, discover the Mundaneum in the 1980s. The archivist in the
graphic novel Les Cités Obscures (1983) is modelled on Paul Otlet
◦ Françoise Levie, filmmaker, discovers the Mundaneum in the 1990s.
Author of the fictionalised biography The man who wanted to classify the
world (2002)
◦ Alex Wright, writer and journalist, discovers the Mundaneum in 2003.
Author of Cataloging the World: Paul Otlet and the Birth of the Information
Age (2014)
10. LE DIRECTEUR may refer to:
◦ Harm Post, director of Groningen Sea Ports, future location of a Google
data center
◦ Andrew Carnegie, director of Carnegie Steel Company, sponsor of the
Mundaneum
◦ André Canonne, director of the Centre de Lecture publique de la
Communauté française (CLPCF) and guardian of the Mundaneum. See
also: LE BIOGRAPHE
◦ Jean-Paul Deplus, president of the current Mundaneum association,
but often referred to as LE DIRECTEUR
◦ Amit Sood, director (later 'founder') of the Google Cultural Institute and
Google Art Project
◦ Steve Crossan, director (sometimes 'founder' or 'head') of the Google
Cultural Institute
11. LE POLITICIEN may refer to:
◦ Elio di Rupo, former prime minister of Belgium and mayor of Mons

◦ Henri Lafontaine, Belgian lawyer and statesman, working with Paul
Otlet to realise the Mundaneum
◦ Nicolas Sarkozy, former president of France, negotiating deals with
LA MÉGA-ENTREPRISE
12. LE ROI may refer to:
◦ Leopold II, reigned as King of the Belgians from 1865 until 1909.
Exploited Congo as a private colonial venture. Patron of the Mundaneum
project
◦ Albert II, reigned as King of the Belgians from 1993 until his
abdication in 2013. Visited LA MÉGA-ENTREPRISE in 2008
13. Monde may refer to:
◦ Monde (Univers) means world in French and is used in many
drawings and schemes by Paul Otlet. See for example: World + Brain and
Mundaneum
◦ Monde (Publication), Essai d'universalisme. Last book published
by Paul Otlet (1935)
◦ Mondialisation, term coined by Paul Otlet (1916)
14. Mundaneum may refer to:
◦ Mundaneum (Utopia), a project designed by Paul Otlet and Henri
Lafontaine
◦ Mundaneum (Archive Centre), a cultural institution in Mons,
housing the archives of Paul Otlet and Henri Lafontaine since 1993
15. Urbanisme may refer to:
◦ Urban planning, a technical and political process concerned with the
use of land, protection and use of the environment, public welfare, and the
design of the urban environment, including air, water, and the infrastructure
passing into and out of urban areas such as transportation, communications,
and distribution networks.
◦ Urbanisme (Publication), a book by Le Corbusier (1925).


Location, location, location: From Paper Mill to Google Data Center

SHINJOUNG YEO

Every second of every day, billions of people around the world are googling, mapping, liking,
tweeting, reading, writing, watching, communicating, and working over the Internet.
According to Cisco, global Internet traffic will surpass one zettabyte – nearly a trillion
gigabytes! – in 2016, which equates to roughly 667 billion feature-length films.[1] Internet traffic is
expected to double by 2019[2] as the internet weaves itself ever more deeply into the very
fabric of many people’s daily lives.
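The film equivalence can be checked with back-of-the-envelope arithmetic (a sketch assuming a standard-definition feature film of roughly 1.5 GB, a figure not stated in the source):

```latex
1\,\text{ZB} = 10^{21}\,\text{bytes} = 10^{12}\,\text{GB},
\qquad
\frac{10^{12}\,\text{GB}}{1.5\,\text{GB per film}} \approx 6.7 \times 10^{11} \approx 667 \text{ billion films}
```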
Internet search giant Google – since August 2015 a subsidiary of Alphabet Inc.[3] – is one
of the major conduits of our social activities on the Web. It processes over 3.3 billion
searches each and every day – 105 billion searches per month, or 1.3 trillion per year[4] – and is
responsible for over 88% of Internet search activity around the globe.[5] Predicating its business
on people’s everyday information activity – search – Google generated $74.54
billion in 2015,[6] equivalent to or exceeding the GDP of some countries. The vast majority of
Google’s revenue – $67.39 billion[7] – came from advertising on its various platforms
including Google search, YouTube, AdSense products, Chrome OS, Android, etc.; the
company is rapidly expanding its business into other sectors like cloud services, health,
education, self-driving cars, the internet of things, life sciences, and the like. Google’s lucrative
internet business does not only generate profits. As Google’s chief economist Hal Varian
states:
…it also generates torrents of data about users’ tastes and habits, data that Google
then sifts and processes in order to predict future consumer behavior, find ways to
improve its products, and sell more ads. This is the heart and soul of Googlenomics.
It’s a system of constant self-analysis: a data-fueled feedback loop that defines not only
Google’s future but the future of anyone who does business online.[8]
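As a quick sanity check, the three search-volume figures quoted above are mutually consistent to within roughly ten percent (the monthly and yearly numbers follow from the daily rate; the small residual gap reflects the growth-based extrapolation mentioned in note 4):

```latex
3.3 \times 10^{9}\ \text{searches/day} \times 30 \approx 1.0 \times 10^{11}\ \text{searches/month},
\qquad
3.3 \times 10^{9} \times 365 \approx 1.2 \times 10^{12}\ \text{searches/year}
```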


Google’s business model is emblematic of the “new economy”, which is primarily built around
data and information. The “new economy” – a term popularized in the 1990s during the
first dot-com boom – is often distinguished in mainstream discourse from the traditional
industrial economy: where the latter demands large-scale investment of physical capital and
produces material goods, the former emphasizes the unique nature of information and purports to be
less resource-intensive. Starting in the 1960s, post-industrial theorists asserted the
emergence of this “new” economy, claiming that the rise of highly skilled information
workers and the widespread application of information technologies, along with the decline of
manual labor, would bring a new mode of production and fundamental changes to
exploitative capitalist social relations.[9]
Has the “new” economy challenged capitalist social relations and transcended the material
world? Google and other Internet companies have been investing heavily in industrial-scale
real estate around the world and continue to build large-scale physical infrastructure in the
form of data centers, where the world’s bits and bytes are stored, processed and delivered.
Terms like “tubes”, “cloud” or “weightless” suggest that our newly marketed
social and cultural activities over the Internet transcend the physical realm and occur in the
vapors of the Internet; far from this perception, however, every bit of information in the “new
economy” is transmitted through and located in physical space, on very real and very large
infrastructure encompassing existing power structures, from phone lines and fiber optics to
data centers to transnational undersea telecommunication cables.
There is much boosterism and celebration that the “new economy” holds the keys to
individual freedom, liberty and democratic participation and will free labor from exploitation;
however, the material/physical base that supports the economy and our everyday lives tells a
very different story. My analysis presents an integral piece of the physical infrastructure
behind the “new economy” and the space embedded in that infrastructure in order to
elucidate that the “new economy” does not occur in an abstract place but rather is manifested
in the concrete material world, one deeply embedded in capitalist development which
reproduces structural inequality on a global scale. Specifically, the analysis will focus on
Google’s growing large-scale data center infrastructure that is restructuring and reconfiguring
previously declining industrial cities and towns as new production places within the US and
around the world.
Today, data centers are found in nearly every sector of the economy: financial services,
media, high-tech, education, retail, medical, government etc. The study of the development of
data centers in each of these sectors could be separate projects in and of themselves;
however, for this project, I will only look at Google as a window into the “new” economy, the

company which has led the way in the internet sector in building out and linking up data
centers as it expands its territory of profit.[10]
DATA CENTERS IN CONTEXT

The concepts of the “spatial fix”, from critical geographer David Harvey,[11] and “digital capitalism”,
from historian of communication and information Dan Schiller,[12] are useful for contextualizing and
placing the emergence of large-scale data centers within capitalist development. Harvey
uses the notion of the spatial fix to explicate the geographical dynamics of capitalism and its crisis
tendencies of over-accumulation and under-consumption. Harvey’s spatial fix
has dual meanings. One is that capital requires a fixed space –
physical infrastructure (transportation, communications, highways, power, etc.) as well as a
built environment – in order to facilitate its geographical expansion. The other is a fix,
or solution, to capitalist crises through geographical expansion and the reorganization of
space, as capital searches for new markets and temporarily relocates to more profitable spaces
– new accumulation sites and territories. This temporary spatial fix leads capital to leave
behind existing physical infrastructures and built environments as it shifts to new fixed
spaces in order to cultivate new markets.
Building on Harvey’s work, Schiller introduced the concept of digital capitalism in response
to the 1970s crisis of capitalism, in which information became the “spatial-temporal fix” or
“pole of growth.”[13] To pull capitalism out of the worst economic downturn since the
1970s began, massive amounts of information and communication technologies were introduced
across the length and breadth of economic sectors as capitalism shifted to a more information-intensive
economy – digital capitalism. Today digital capitalism grips every sector: it has
expanded beyond the information industries, reorganizing the entire economy
from manufacturing to finance to science, education, the arts and health, and it
impacts every iota of people’s social lives.[14] The current growth of large-scale data centers by
Internet companies, and their reoccupation of industrial towns, needs to be situated within this
development of digital capitalism.
FROM MANUFACTURING FACTORY TO DATA FACTORY

Large-scale data centers – sometimes called “server farms” in an oddly quaint allusion to the
pre-industrial agrarian society – are centralized facilities that primarily contain large numbers
of servers and computer equipment used for data processing, data storage, and high-speed
telecommunications. In a sense, data centers are similar to the capitalist factory system; but
instead of a linear process from the input of raw materials to the output of material goods for
mass consumption, they input mass data in order to facilitate and expand the endless cycle of
commodification – an Ouroboros-like machine. As the factory system enables the production of more
goods at a lower cost through automation and control of labor to maximize profit, data centers
have been developed to process large quantities of bits and bytes as fast as possible and at as
low a cost as possible through automation and centralization. The data center is a hyper-automated
digital factory system that enables the operation of hundreds of thousands of
servers through centralization in order to conduct business around the clock and around the
globe. Compared to traditional industrial factories that produce material goods and generally
employ entire towns if not cities, large-scale data centers each generally employ fewer than
100 full-time employees – most of these employees are either engineers or security guards.
In a way, data centers are the ultimate automated factory. Moreover, the owner of a
traditional factory needs to acquire/purchase/extract raw materials to produce commodities;
however, much of the raw data for a data center are freely drawn from the labor and
everyday activities of Internet users without a direct cost to the data center. The factory
system is to industrial capitalism what data centers are becoming to digital capitalism.
THE GROWTH OF GOOGLE’S DATA FACTORIES

Today, there is a growing arms race among leading Internet companies – Google, Microsoft,
Amazon, Facebook, IBM – in building out large-scale data centers around the globe.[16]
Among these companies, Google has so far led in both scale and capital
investment. In 2014, the company spent $11 billion on real estate purchases, production
equipment, and data center construction,[17] compared with Amazon’s $4.9 billion
and Facebook’s $1.8 billion in the same year.[18]
Until 2002, Google rented only one colocation facility in Santa Clara, California, to house
about 300 servers.[19] By 2003, however, the company had started to purchase entire
colocation buildings that were cheaply available due to overexpansion during the dot-com
era. Google soon began to design and build its own data centers containing thousands of
custom-built servers as it expanded its services and global market and responded to
competitive pressures. Initially, Google was highly secretive about its data center locations
and related technologies; a former Google employee called this Google’s “Manhattan
project.” In 2012, however, Google began to open up its data centers. While this might seem
as if Google had a change of heart and wanted to be more transparent about its data
centers, it is in reality a self-serving public relations onslaught, meant to show how its
cloud infrastructure is superior to its competitors’ and to secure future cloud clients.[20]
As of 2016, Google has data centers in 14 locations around the globe – eight in the Americas,
two in Asia and four in Europe – plus an unknown number of colocated centers – ones in
which space, servers, and infrastructure are shared with other companies – in undisclosed
locations. The sheer size of Google’s data center operation is reflected in its server chip consumption:
Google supposedly accounts for 5% of all server chips sold in the world,[21] and it
even affects the price of chips, as the company is one of the biggest chip buyers. Google’s
recent alliance with Qualcomm over its new chip has become a threat to Intel – Google has
been the largest customer of the world’s largest chip maker for quite some time.[22] According
to Steven Levy, Google admitted that “it is the largest computing manufacturer in the world
– making its own servers requires it to build more units every year than the industry giants
HP, Dell, and Lenovo.”[23] Moreover, Google has been amassing cheap “dark fibre” – fibre-optic
cables laid down during the 1990s dot-com boom by now-defunct telecom
firms betting on increased internet traffic[24] – constructing its own fibre-optic networks in US
cities,[25] and investing in massive undersea cables, all to maintain its dominance and
expand its markets by controlling Internet infrastructure.[26]
With its own customized servers and software, Google is building a massive data center
network infrastructure, delivering its services at unprecedented speeds around the clock and
around the world. According to one report, Google’s global network of data centers, with the
capacity to deliver 1-petabit-per-second bandwidth, is powerful enough to read all of the
scanned books in the Library of Congress in a fraction of a second.[27] New York Times
columnist Pascal Zachary once wrote:
columnist Pascal Zachary once reported:
…I believe that the physical network is Google’s “secret sauce,” its premier competitive
advantage. While a brilliant lone wolf can conceive of a dazzling algorithm, only a
super wealthy and well-managed organization can run what is arguably the most
valuable computer network on the planet. Without the computer network, Google is
nothing.[28]
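A rough calculation makes the “fraction of a second” claim plausible (a sketch assuming the commonly cited estimate of about 10 TB for the Library of Congress’s digitized print collection, an assumption not made in the report itself):

```latex
1\ \text{Pb/s} = 10^{15}\ \text{bits/s} = 1.25 \times 10^{14}\ \text{bytes/s} = 125\ \text{TB/s},
\qquad
\frac{10\ \text{TB}}{125\ \text{TB/s}} = 0.08\ \text{s}
```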

Where then is Google’s secret sauce physically located? Despite its massiveness, Google’s
data center infrastructure and locations have been invisible to the millions of everyday Google
users around the globe – users assume that Google is ubiquitous, the largest cloud in the
‘net.’ This infrastructure can no longer go unnoticed, however, as the infrastructure needed to
support the “new economy” is beginning to occupy and transform our landscapes,
building a new fixed network of global digital production space.
NEW NETWORK OF DIGITAL PRODUCTION SPACE:
RESTRUCTURING INDUSTRIAL CITIES

While Google’s data traffic and exchange extend well beyond geographic boundaries, its
physical plants are fixed in places where digital goods and services are processed and
produced. For the production of material goods, access to cheap labor has long been one of
the primary criteria by which companies select their places of production; but for data centers, a
large supply of cheap labor is not as important, since they require only a small number of
employees. The common requirements for data center sites have so far been:
good fiber-optic infrastructure; cheap and reliable power for cooling and running
servers; geographical diversity for redundancy and speed; cheap land; and proximity to
target markets.[29] Today, wherever in the world some combination of these factors is
found, data centers are likely already there or in the planning stages for the near
future.


Given these criteria, there has been an emerging trend of reconfiguring and converting
former industrial sites – paper mills, printing plants, steel plants, textile mills, auto plants,
aluminum plants and coal plants – into data centers. In the United States, rust-belt
regions of the upper Northeast, Great Lakes and Midwest – previously hubs of
manufacturing industry and heartlands of both industrial capitalism and the labor movement –
are turning (or attempting to turn) into hotspots for the large-scale data centers of Internet
companies.[30] These cities are the remains of past crises of industrial capitalism as well as of
long labor struggles.
The reason former industrial sites in the US and other parts of the world are attractive
for data center conversion is that, starting in the 1970s, many factories closed or moved
their operations overseas in search of ever-cheaper labor and concomitantly weak or
nonexistent labor laws, leaving behind solid physical plants and industrial infrastructures of
power, water and cooling once used to drive industrial machines and production lines –
now a perfect fit for data center development.[31] Finding cheap energy is especially
crucial for companies like Google, since energy costs are a major data center expenditure.
Moreover, many communities surrounding former industrial sites have become
distressed, with increasing poverty, high unemployment and little labor power. Thus, under
the guise of “economic development,” many state and local governments have been eager to
lure data centers by offering lavish subsidies to IT companies. For at least the last five years,
state after state has legislated tax breaks for data centers, and about a dozen states have
created customized incentive programs for data center operations.[32] State incentives range
from full or partial exemptions of sales/use taxes on equipment and construction materials to, in
some cases, purchases of electricity and backup fuel.[33] This kind of corporate-centric
economic development is far from the construction of democratic cities that prioritize social
needs, collective interests and the environmental and long-term sustainability of
communities; rather, the goal is to “create a good business climate and therefore to
optimize conditions for capital accumulation no matter what the consequences for
employment or social and environmental well-being.”[34]
Google’s first large-scale data center is located in one of these struggling former industrial
towns. In 2006, Google opened its first data center in The Dalles – now nicknamed
Googleville – a town of a little over 15,000 alongside the Columbia River,
about 80 miles east of Portland, Oregon. It is an ideal site in the sense that it is close to a
major metropolitan corridor (Seattle-Tacoma-Portland) serving business interests and large
urban population centers, yet offers cheap land, little organized labor, the promise of cheap
electrical power from the Bonneville Power Administration, a federal agency,
and a 15-year property tax exemption. In addition, The Dalles had already built a
fiber-optic loop as part of an economic development effort to attract the IT industry.[35]
Not long ago, the residents of The Dalles and communities up and down the Columbia
River gorge relied on the aluminum industry, an industry requiring massive amounts of
– in this case hydroelectric – power. Energy makes up 40 percent of the cost of aluminum
production,[36] and the industry was boosted by the war economies of World War II and the
Korean War, as aluminum was used in various war products, especially aircraft. Starting in
1980, however, aluminum smelting plants began to close and move out of the area, laying off
their workers and leaving their installed infrastructure behind.
Since then, The Dalles, like other industrial towns, has suffered from high unemployment,
poverty, an aging population and budget-strapped schools. Google’s decision to
build a data center the size of two football fields (68,680 square feet of storage buildings) –
taking advantage of the preinstalled fiber-optic infrastructure, relatively cheap
hydropower from The Dalles Dam, and tax benefits – was thus presented as new hope for the
distressed town and a large employment opportunity for its population.[37]
There was much community excitement that Google’s arrival would mean an economic
revival for the struggling city and a better life for the poor, but no one could discuss the
project publicly at the time of negotiations because the local officials involved had all
signed nondisclosure agreements (NDAs);[38] they were required not to mention Google in
any way and were instead instructed to refer to the project as “Project 02.”[39] Google insisted
that the information it shared with representatives of The Dalles not be subject to public
records disclosures.[40] While public subsidies were a necessary precondition for building the
data center,[41] there was no transparency and no open public debate on alternative visions of
development reflecting collective community interests.
Google’s highly anticipated data center in The Dalles opened in 2006, but it “opened” only
in the sense that it became operational. To this day, Google’s data center site is off-limits to
the community and well-guarded, with multiple CCTV cameras surveying the
grounds around the clock. Google might boast of its corporate culture as “open” and
“non-hierarchical”, but this does not extend to the data centers within the community from which Google
benefits as it extracts resources. Not only was the building process secretive; access to the
data center itself is highly restricted, secured with guards, gates and checkpoints.
Google’s data center has reshaped the landscape into a pseudo-militarized
zone, not far off from a top-secret military compound – access denied.
This kind of landscape is being reproduced in other parts of the US as well. New data center hubs
have begun to emerge in other rural communities; one of them is in southwestern North
Carolina, where the leading tech giants – Google, Facebook, Apple, Disney and American
Express – have built data centers in close proximity to one another. The cluster of data
centers is referred to as the “NC Data Center Corridor,”[42] a neologism used to market the
area.
At one time, the southwestern part of North Carolina had a heavy concentration of highly
labor-intensive textile and furniture industries that exploited the region’s cheap labor supply
and where workers fought long and hard for better working conditions and wages. However,
over the last 25 years, factories have closed and slowly moved out of the area and been
P.220

P.221

relocated to Asia and Latin America.[43] As a result – and mirroring the situation in The
Dalles – the area has suffered a series of layoffs and chronically high unemployment and
poverty, but it is now being rebranded as a center of the “new economy” geared toward
attracting high-tech industries. For many towns, abandoned manufacturing plants are no
longer an eyesore but rather a major selling point to the IT industry. As Rich
Miller, editor of Data Center Knowledge, put it: “one of the things that’s driving the
competitiveness of our area is the power capacity built for manufacturers in the past 50
years.”[44]
In 2008, Google opened a $600 million data center in Lenoir, NC, a town in Caldwell
County (population 18,228[45]). Lenoir was once known as the furniture capital of the South
but lost 1,120 jobs in 2006.[46] More than 300,000 furniture jobs have moved out of the
United States since 2000 as factories relocated to China for cheaper labor and operational
costs.[47] In order to lure Google, Caldwell County and the City of Lenoir gave Google a
100 percent waiver on business property taxes, an 80 percent waiver on real estate property
taxes over the next 30 years,[48] and various other incentives. Former NC Governor Mike
Easley announced that “this company will provide hundreds of good-paying, knowledge-based
jobs that North Carolina’s citizens want”;[49] yet he addressed neither the cost to
taxpayers of attracting Google – including those laid-off factory workers – nor the
environmental impact of the data center. In 2013, Google expanded its operation in Lenoir
with an additional $600 million investment, and as of 2015 it has 250 employees on its
220-plus-acre data center site.[50]
The company continues its crusade of giving “hope” to distressed communities, and now of
“saving” the environment from the old coal-fueled industrial economy. Google’s latest US
project is in Widows Creek, Alabama, where the company is converting a coal-burning
power plant commissioned in 1952 – which has been polluting the area for years – into its
14th data center, powered by renewable energy. Shifting from coal to renewables seems to
demonstrate that Google has gone “green” and is a different kind of corporation, one that
cares for the environment. However, this is a highly calculated business decision:
relying on renewable energy is more economical over the long term than coal, whose
commodity price fluctuates greatly.[51] Google is gobbling up renewable
energy deals around the world to procure cheap energy to power its data centers.[52]
At the same time, Google’s “green” public relations camouflage the environmental damage
brought by the data centers’ enormous power consumption, e-waste from hardware, rare
earth mining and the environmental harm across the entire supply chain.[53]
The trend of data centers reoccupying industrial sites is not confined to the US.
Google’s Internet business operates across territories, and more than 50% of its revenues
come from outside the US. As Google’s domestic search market share has stabilized at
around 60%, the company has moved aggressively to build data centers around the
world for its global expansion. One of Google’s most ambitious data center projects outside
the US was in Hamina, Finland, where Google converted a paper mill into a data center.

In 2008, Stora Enso, the Finnish paper maker in which the Finnish Government held 16%
of the shares and controlled 34% of the company, shut down its Summa paper
mill near the city of Hamina in southeastern Finland, despite workers’
resistance to the closure.[54] The company shed 985 jobs, including 485 at the
Summa plant.[55] Shortly after closing the plant, Stora Enso sold the 53-year-old paper mill
site to Google for roughly $52 million, a sale which included 410 acres of land as well as the
mill and its infrastructure.
Whitewashing the workers’ struggles, the Helsinki Times reported that “everyone was
excited about Google coming to Finland. The news that the Internet giant had bought the old
Stora Enso mill in Hamina for a data centre was great news for a community stunned by job
losses and a slowing economy.”[56] Local elites recognized, however, that the jobs created by
Google would not drastically affect the city’s unemployment rate or alleviate the economic
plight of many in the community, so they justified their decision by arguing that
connecting Google’s logo to the city’s image would attract further investment to the
area.[57] The facility had roughly 125 full-time employees when Google announced the
expansion of its Hamina operation in 2013.[58] The data center is monitored by Google’s
customary CCTV cameras and motion detectors; even Google staff have access to the
server halls only after passing biometric authentication using iris recognition scanners.[59]
As with Google’s other data centers, the decision to build in Hamina was not made
merely because of favorable existing infrastructure or natural resources. Hamina’s location
as Google’s first Nordic data center is vital and strategic for extending Google’s
reach into geographically dispersed markets and for the speed and management of data traffic. Hamina
lies close to the Russian border, and the area has long been known for good
Internet connectivity via the Scandinavian telecommunications giant TeliaSonera, whose services
and international connections run right through the Hamina area and reach into Russia as
well as Sweden and Western Europe.[60] Eastern Europe has a growing Internet market,
and Russia is one of the few countries where Google does not dominate the search market:
Yandex, Russia’s native-language search engine, controls the Russian search market with
over 60% share.[61] By locating its infrastructure in Hamina, Google is establishing a
strategic global digital production beachhead for both the Nordic and Russian markets.
As Google tries to maintain its global dominance and expand its business, the company
has continued to build out its data center operations on European soil. Besides Finland,
Google has built data centers in Dublin, Ireland, and in St. Ghislain and Mons in Belgium,
each of which has expanded its operations since initial construction. The stories of these
data centers are similar: the aluminum-smelting town of The Dalles, Oregon, and
furniture-making Lenoir, North Carolina, in the US; the paper mill town of Hamina, Finland;
the coal-mining area of St. Ghislain–Mons, Belgium; and a converted warehouse in Dublin,
Ireland. Each was once an industrial production site and/or a site for the extraction of
environmental resources, now turned into a data center creating temporary production spaces to
accelerate digital capitalism. Google’s latest venture in Europe is in the seaport town of
Eemshaven, Netherlands, which hosts several power stations as well as the transatlantic
fiber-optic cable linking the US and Europe.
To many struggling communities around the world, the building of Google’s large-scale data
centers has been presented by the company and by political elites as an opportunity to
participate in the “new economy” – and, implicitly, as a threat of being left behind by it –
as if this would magically lead to prosperity and equality. In
reality, these cities and towns are being reorganized and reoccupied for corporate interests,
re-integrated as sites of capital accumulation, and re-emerging as new networks of production
for capitalist development.
CONCLUSION

Is the current physical landscape that supports the “new economy” outside of capitalist social
relations? Does the redevelopment of struggling former industrial cities through the
building of Google data centers, under the slogan of participation in the “new economy”, really
meet social needs and express democratic values? The “new economy” is boasted about as if
it were radically different from past industrial capitalist development – a solution to myriad social
problems holding the potential for growth outside the capitalist realm. The “new
economy”, however, operates deeply within the logic of capitalist development: constant technological
innovation, relocation and construction of new physical production places to link
geographically dispersed markets, reduction of labor costs, removal of obstacles that hinder
growth, and continuous expansion. Google’s purely market-driven data centers illustrate that
the “new economy” built on data and information does not bypass physical infrastructures and
physical places for the production and distribution of digital commodities. Rather, it is firmly
anchored in the physical world, simply establishing new infrastructures on top of existing
industrial ones – a new network of production places serving the production of
digital commodities at the expense of environmental, labor and social well-being.
We celebrate the democratic possibilities of the “networked information economy” as providing
an alternative space free from capitalist practices; it is vital to recognize, however, that this
“new economy” in which we place our hopes is supported by, built on, and firmly planted in
our material world. The question we need to ask ourselves is: given that our communities
and physical infrastructures continue to be configured to assist the reproduction of the social
relations of capitalism, how far can our “new economy” deliver the democracy and social
justice for which we all strive?
Last Revision: 3·07·2016

1. James Titcomb, “World’s internet traffic to surpass one zettabyte in 2016,” Telegraph, February 4, 2016, http://
www.telegraph.co.uk/technology/2016/02/04/worlds-internet-traffic-to-surpass-one-zettabyte-in-2016/
2. Ibid.

3. Cade Metz, “A new company called Alphabet now owns Google,” Wired, August 10, 2015. http://wired.com/2015/08/
new-company-called-alphabet-owns-google/.
4. Google hasn’t released new data since 2012, but these figures are extrapolated from Google’s annual growth rate. See Danny
Sullivan, “Google Still Doing At Least 1 Trillion Searches Per Year,” Search Engine Land, January 16, 2015, http://
searchengineland.com/google-1-trillion-searches-per-year-212940
5. This is Google’s desktop search engine market share as of January 2016. See “Worldwide desktop market share of leading search
engines from January 2010 to January 2016,” Statista, http://www.statista.com/statistics/216573/worldwide-market-share-of-search-engines/.
6. “Annual revenue of Alphabet from 2011 to 2015 (in billions of US dollars),” Statista, http://www.statista.com/
statistics/507742/alphabet-annual-global-revenue/.
7. “Advertising revenue of Google from 2001 to 2015 (in billion U.S. dollars),” Statista, http://www.statista.com/
statistics/266249/advertising-revenue-of-google/.
8. Steven Levy, “Secret of Googlenomics: Data-Fueled Recipe Brews Profitability,” Wired, May 22, 2009, http://
www.wired.com/culture/culturereviews/magazine/17-06/nep_googlenomics?currentPage=all.
9. Daniel Bell, The Coming of Post-Industrial Society: A Venture In Social Forecasting (New York: Basic Books, 1974); Alvin
Toffler, The Third Wave (New York: Morrow, 1980).
10. The term “territory of profit” is borrowed from Gary Fields’ book titled Territories of Profit: Communications, Capitalist
Development, and the Innovative Enterprises of G. F. Swift and Dell Computer (Stanford University Press, 2003)
11. David Harvey, Spaces of capital: towards a critical geography (New York: Routledge, 2001)
12. Dan Schiller, Digital Capitalism: Networking the Global Market System (Cambridge, Mass: MIT Press, 1999).
13. Dan Schiller, “Power Under Pressure: Digital Capitalism In Crisis,” International Journal of Communication 5 (2011): 924–
941
14. Dan Schiller, “Digital capitalism: stagnation and contention?” Open Democracy, October 13, 2015, https://
www.opendemocracy.net/digitaliberties/dan-schiller/digital-capitalism-stagnation-and-contention.
15. Ibid: 113-117.
16. Jason Hiner, “Why Microsoft, Google, and Amazon are racing to run your data center.” ZDNet, June 4, 2009, http://
www.zdnet.com/blog/btl/why-microsoft-google-and-amazon-are-racing-to-run-your-data-center/19733.
17. Derrick Harris, “Google had its biggest quarter ever for data center spending. Again,” Gigaom, February 4, 2015, https://
gigaom.com/2015/02/04/google-had-its-biggest-quarter-ever-for-data-center-spending-again/.
18. Ibid.
19. Steven Levy, In the Plex: How Google Thinks, Works, and Shapes Our Lives (New York: Simon & Schuster, 2011), 182.
20. Steven Levy, “Google Throws Open Doors to Its Top-Secret Data Center,” Wired, October 17 2012, http://
www.wired.com/2012/10/ff-inside-google-data-center/.
21. Cade Metz, “Google’s Hardware Endgame? Making Its Very Own Chips,” Wired, February 12, 2016, http://
www.wired.com/2016/02/googles-hardware-endgame-making-its-very-own-chips/.
22. Ian King and Jack Clark, “Qualcomm's Fledgling Server-Chip Efforts,” Bloomberg Business, February 3, 2016, http://
www.bloomberg.com/news/articles/2016-02-03/google-said-to-endorse-qualcomm-s-fledgling-server-chip-efforts-ik6ud7qg.
23. Levy, In the Plex, 181.
24. In 2013, the Wall Street Journal reported that Google controls more than 100,000 miles of routes around the world, which was
considered bigger than US-based telecom company Sprint. See Drew FitzGerald and Spencer E. Ante, “Tech Firms Push to
Control Web’s Pipes,” Wall Street Journal, December 13, 2013, http://www.wsj.com/articles/
SB10001424052702304173704579262361885883936
25. Google is offering its gigabit-speed fiber optic Internet service in 10 US cities. Since Internet service is a precondition of
Google’s myriad Internet businesses, Google’s strategy is to control the pipes rather than relying on telecom firms. See Mike
Wehner, “Google Fiber is succeeding and cable companies are starting to feel the pressure,” Business Insider, April 15, 2015,
http://www.businessinsider.com/google-fiber-is-succeeding-and-cable-companies-are-starting-to-feel-the-pressure-2015-4;
Ethan Baron, “Google Fiber coming to San Francisco first,” San Jose Mercury News, February 26, 2016, http://
www.mercurynews.com/business/ci_29556617/sorry-san-jose-google-fiber-coming-san-francisco.
26. Tim Hornyak, “9 things you didn’t know about Google’s undersea cable,” Computerworld, July 14, 2015, http://
www.computerworld.com/article/2947841/network-hardware-solutions/9-things-you-didnt-know-about-googles-undersea-cable.html
27. Jaikumar Vijayan, “Google Gives Glimpse Inside Its Massive Data Center Network,” eWeek, June 18, 2015, http://
www.eweek.com/servers/google-gives-glimpse-inside-its-massive-data-center-network.html
28. Pascal Zachary, “Unsung Heroes Who Move Products Forward,” New York Times, September 30, 2007, http://
www.nytimes.com/2007/09/30/technology/30ping.html


29. Tomas Freeman, Jones Lang, and Jason Warner, “What’s Important in the Data Center Location Decision,” Spring 2011,
http://www.areadevelopment.com/siteSelection/may2011/data-center-location-decision-factors2011-62626727.shtml
30. “From rust belt to data center green?” Green Data Center News, February 10, 2011, http://www.greendatacenternews.org/
articles/204867/from-rust-belt-to-data-center-green-by-doug-mohney/
31. Rich Miller, “North Carolina Emerges as Data Center Hub,” Data Center Knowledge, November 7, 2010, http://
www.datacenterknowledge.com/archives/2010/11/17/north-carolina-emerges-as-data-center-hub/.
32. David Chernicoff, “US tax breaks, state by state,” Datacenter Dynamics, January 6, 2016, http://
www.datacenterdynamics.com/design-build/us-tax-breaks-state-by-state/95428.fullarticle; “Case Study: Server Farms,” Good
Jobs First, http://www.goodjobsfirst.org/corporate-subsidy-watch/server-farms.
33. John Leino, “The role of incentives in Data Center Location Decisions,” Critical Environment Practice, February 28, 2011,
http://www.cbrephoenix.com/wp_eig/?p=68.
34. David, Harvey, Spaces of global capitalism (London: Verso. 2006), 25.
35. Marsha Spellman, “Broadband, and Google, Come to Rural Oregon,” Broadband Properties, December 2005, http://
www.broadbandproperties.com/2005issues/dec05issues/spellman.pdf.
36. Ross Courtney, “The Goldendale aluminum plant -- The death of a way of life,” Yakima Herald-Republic, April 9, 2011,
http://www.yakima-herald.com/stories/2011/4/9/the-goldendale-aluminum-plant-the-death-of-a-way-of-life.
37. Ginger Strand, “Google’s addiction to cheap electricity,” Harper’s Magazine, March 2008, https://web.archive.org/web/20080410194348/http://harpers.org/media/
slideshow/annot/2008-03/index.html.

38. Linda Rosencrance, “Top-secret Google data center almost completed,” Computerworld, June 16, 2006, http://
www.computerworld.com/article/2546445/data-center/top-secret-google-data-center-almost-completed.html.
39. Bryon Beck, “Welcome to Googleville America’s newest information superhighway begins On Oregon’s Silicon Prairie,”
Willamette Week, June 4, 2008, http://wweek.com/portland/article-9089-welcome_to_googleville.html.
40. Rich Miller, “Google & Facebook: A Tale of Two Data Centers,” Data Center Knowledge, August 2, 2010, http://
www.datacenterknowledge.com/archives/2010/08/10/google-facebook-a-tale-of-two-data-centers/
41. Ibid.
42. Alex Barkinka, “From textiles to tech, the state’s newest crop,” Reese News Lab, April 13, 2011, http://
reesenews.org/2011/04/13/from-textiles-to-tech-the-states-newest-crop/14263/.
43. “Textile & Apparel Overview,” North Carolina in the Global Economy, http://www.ncglobaleconomy.com/textiles/
overview.shtml.
44. Rich Miller, “The Apple-Google Data Center Corridor,” Data Center knowledge, August 4, 2009, http://
www.datacenterknowledge.com/archives/2009/08/04/the-apple-google-data-center-corridor/.
45. “2010 Decennial Census from the US Census Bureau,” http://factfinder.census.gov/bkmk/cf/1.0/en/place/Lenoir city,
North Carolina/POPULATION/DECENNIAL_CNT.
46. North Carolina in the Global Economy. Retrieved from http://www.soc.duke.edu/NC_GlobalEconomy/furniture/
workers.shtml
47. Frank Langfitt, Furniture Work Shifts From N.C. To South China. National Public Radio, December 1, 2009, http://
www.npr.org/templates/story/story.php?storyId=121576791&ft=1&f=121637143; Dan Morse, In North Carolina,
Furniture Makers Try to Stay Alive,” Wall Street Journal, February 20, 2004, http://www.wsj.com/articles/
SB107724173388134838; Robert Lacy, “Whither North Carolina Furniture Manufacturing,” Federal Reserve Bank of
Richmond, Working Paper Series, September 2004, https://www.richmondfed.org/~/media/richmondfedorg/publications/
research/working_papers/2004/pdf/wp04-7.pdf
48. Stephen Shankland, “Google gives itself leeway for N.C. data center,” Cnet, December 5, 2008, http://
news.cnet.com/8301-1023_3-10114349-93.html; Bill Bradley, “Cities Keep Giving Out Money for Server Farms, See
Very Few Jobs in Return,” Next City, August 15, 2013, https://nextcity.org/daily/entry/cities-keep-giving-out-money-for-server-farms-see-few-jobs-in-return.
49. Katherine Noyes, “Google Taps North Carolina for New Datacenter,” E-Commerce Times, January 19, 2007, http://
www.ecommercetimes.com/story/55266.html?wlc=1255976822
50. Getahn Ward, “Google to invest in new Clarksville data center,” Tennessean, December 22, 2015, http://
www.tennessean.com/story/money/real-estate/2015/12/21/google-invest-500m-new-clarksville-data-center/77474046/.
51. Ingrid Burrington, “The Environmental Toll of a Netflix Binge,” Atlantic, December 16, 2015, http://www.theatlantic.com/
technology/archive/2015/12/there-are-no-clean-clouds/420744/.
52. Mark Bergen, “After Gates, Google Splurges on Green With Largest Renewable Energy Buy for Server Farms,” Re/code,
December 3, 2015, http://recode.net/2015/12/03/after-gates-google-splurges-on-green-with-largest-renewable-energy-buyfor-server-farms/.

53. Burrington, “The Environmental Toll of a Netflix Binge”; Richard Maxwell and Toby Miller, Greening the Media (New
York: Oxford University Press, 2012)
54. “Finnish Paper Industry Uses Court Order to Block Government Protest,” IndustriAll Global Union, http://www.industriall-union.org/archive/icem/finnish-paper-industry-uses-court-order-to-block-government-protest.
55. Terhi Kinnunen and Tarmo Virki, “Stora to cut 985 jobs, close mills despite protests,” Reuters, January 17, 2008, http://
www.reuters.com/article/storaenso-idUSL1732158220080117; “Workers react to threat of closure of paper pulp mills,”
European Foundation, March 3, 2008, http://www.eurofound.europa.eu/observatories/eurwork/articles/workers-react-to-threat-of-closure-of-paper-pulp-mills.
56. David Cord, “Welcome to Finland,” The Helsinki Times, April 9, 2009, http://www.helsinkitimes.fi/helsinkitimes/2009apr/
issue15-95/helsinki_times15-95.pdf.
57. Elina Kervinen, Google is busy turning the old Summa paper mill into a data centre. Helsingin Sanomat International Edition,
October 9, 2010, https://web.archive.org/web/20120610020753/http://www.hs.fi/english/article/Google+is+busy
+turning+the+old+Summa+paper+mill+into+a+data+centre/1135260141400.
58. “Google invests 450M in expansion of Hamina data centre,” Helsinki Times, November 4, 2013, http://
www.helsinkitimes.fi/business/8255-google-invests-450m-in-expansion-of-hamina-data-centre.html.
59. “Revealed: Google’s new mega data center in Finland,” Pingdon, September 15, 2010, http://
royal.pingdom.com/2010/09/15/googles-mega-data-center-in-finland/
60. Ibid.
61. Shiv Mehta, “What's Google Strategy for the Russian Market?” Investopedia, July 28, 2015, http://www.investopedia.com/
articles/investing/072815/whats-google-strategy-russian-market.asp.

P.226

P.227

# House, City, World, Nation, Globe

NATACHA ROUSSEL

This timeline starts in Brussels and is an attempt to situate some of the episodes in the life, death and revival of the Mundaneum in relation to both local and international events. By connecting several geographic locations at different scales, this small research provokes cqrrelations in time and space that could help formulate questions about the ways local events repeatedly mirror and recompose global situations. Hopefully, it can also help us see which contextual elements in the first iteration of the Mundaneum were different from the current situation of our information economy.
The ambitious project of the Mundaneum was imagined by Paul Otlet, with the support of Henri La Fontaine, at the end of the 19th century. At that time colonialism was at its height, bringing a steady stream of income to occidental countries and creating a sense of security in which everything seemed possible. To some of the most forward-thinking minds of the time, it felt as if the intellectual and material benefits of rational thinking could universally become the source of all goods. Far from any actual move towards independence, the first tensions between colonial and commercial powers were starting to manifest themselves, and some conflicts had already erupted, mainly in defence of commercial interests, as during the Fashoda crisis and the Boer War. The sense of strength that the large influx of money gave the colonial powers was, however, soon to be tempered by World War I, which was about to shake up modern European society.
In this context Henri La Fontaine, energised by Paul Otlet's encompassing view of classification systems and standards, strongly associated the Mundaneum project with an ideal of world peace. This was a conscious process of thought; they believed that this universal archive of all knowledge represented a resource for the promotion of education towards the development of better social relations. While Otlet and La Fontaine were not directly concerned with economic and colonial issues, their ideals were nevertheless fed by the wealth of the epoch. The Mundaneum archives were furthermore established with a clear intention, and a major effort was made to include documents that referred to often neglected topics or that could be considered alternative thinking, such as the well-known archives of the feminist movement in Belgium and information on anarchism and pacifism. In line with the general dynamism caused by growing wealth in Europe at the turn of the century, the Mundaneum project seemed to be ever growing in size and ambition. It also clearly appears that the project was embedded in the international and 'politico-economical' context of its time and in many aspects linked to a larger movement that engaged civil society towards a proto-structure of networked society. Through the development of infrastructures for communication and international regulations, Henri La Fontaine was part of several international initiatives: he presided over the 'Bureau International de la Paix' from 1907 and, a few years later, in 1910, co-founded the 'Union of International Associations'. Overall his interventions helped to root the process of archive collection in a larger network of associations and regulatory structures. Otlet's view of archives and organisation extended to all domains, and La Fontaine asserted that general peace could be achieved through social development by means of education and access to knowledge. Their common view was nurtured by an acute perception of their epoch: they observed, and often contributed to, most of the major experiments triggered by the ongoing reflection on the new organisational modalities of society.
From The Itinerant Archive (print):

Museology merged with the International Institute of Bibliography (IIB), which had its offices in the same building. The ever-expanding index card catalog had already been accessible to the public since 1914. The project would later be known as the World Palace or Mundaneum. Here, Paul Otlet and Henri La Fontaine started to work on their Encyclopaedia Universalis Mundaneum, an illustrated encyclopaedia in the form of a mobile exhibition.

The ever ambitious process of building the Mundaneum archives took place in the context of a growing internationalisation of society, while at the same time the social gap was widening due to the expansion of industrial society. Furthermore, the internationalisation of finances and relations did not only concern industrial society; it also acted as a motivation to structure social and political networks, among others via political negotiations and the institution of civil society organisations. Several broad structures dedicated to the regulation of international relations were created simultaneously with the worldwide spread of an industrial economy. They aimed to formulate a world view based on international agreements and communication systems regulated by governments and structured via civil society organisations, rather than leaving everything to individual and commercial initiatives. Otlet and La Fontaine spent a large part of their lives attempting to formulate a mondial society.
While La Fontaine clearly supported international networks of civil society organisations,
Otlet, according to Vincent Capdepuy[1], was the first person to use the French term
Mondialisation far ahead of his time, advocating what would become after World War II an
important movement that claimed to work for the development of an international regulatory system. Otlet also mentioned that this 'Mondial' process was directly related to the necessity of a new distribution and regulation of natural goods (think: diamonds and gold ...). He writes:

« Un droit nouveau doit remplacer alors le droit ancien pour préparer et organiser une nouvelle répartition. La “question sociale” a posé le problème à l’intérieur ; “la question internationale” pose le même problème à l’extérieur entre peuples. Notre époque a poursuivi une certaine socialisation de biens. […] Il s’agit, si l’on peut employer cette expression, de socialiser le droit international, comme on a socialisé le droit privé, et de prendre à l’égard des richesses naturelles des mesures de “mondialisation”. »[2]

The approaches of La Fontaine and Otlet already show certain differences: one (La Fontaine) emphasises an organisation based on local civil society structures, which implies direct participation, while the other (Otlet) focuses more on management and on a global organisation steered by a regulatory framework. It is interesting to look at these early concepts, which were part of a larger movement called 'the first mondialisation', and to understand how they differ from current forms of globalisation, which equally involve private and public actors and various infrastructures.
The project of Otlet and La Fontaine took place in an era of international agreements over communication networks. It is well known, and often a subject of fascination, that the global project of the Mundaneum also involved the conception of technical infrastructures and communication systems in between the two World Wars. Some of them, such as the Mondotheque, were imagined as prospective possibilities, but others were already implemented at the time and formed the basis of an international communication network consisting of postal services and telegraph networks. It is less acknowledged that the epoch was also a time of international agreements between countries, structuring and normalising international life; some of these structures still form the basis of our actual global economy, though they are all challenged by private capitalist structures. The existing postal and telegraph networks covered the entire planet, and the agreements that regulated the price of the stamp, allowing postal services to be used internationally, were recent. They were certainly among the first instances in which international agreements regulated commercial interests to the benefit of individual communication. Henri La Fontaine directly participated in these processes by asking for the postal franchise to be waived for the transport of documents between international libraries, to the benefit of, among others, the Mundaneum. La Fontaine was also an important promoter of larger international movements that involved civil society organisations; he was chiefly responsible for the 'Union internationale des associations', which acted as a network of information-sharing, setting up modalities for exchange to the general benefit of civil society. Furthermore, concerns were raised to rethink a social organisation harmed by the industrial economy, an issue addressed in Brussels by the brand new discipline of sociology. The 'Ecole de Bruxelles'[3], in which Otlet and La Fontaine both took part, tried very early on to formulate a legal discourse that could help address social inequalities, and eventually to come up with regulations that could help 're-engineer' social organisation.

The Mundaneum project differentiates itself from contemporary enterprises such as Google not only by its intentions, but also by its organisational context, as it clearly inscribed itself in an international regulatory framework dedicated to the promotion of local civil society. How can we understand the similarities and differences between the development of the Mundaneum project and the current knowledge economy? The timeline below attempts to re-situate the events related to the rise and fall of the Mundaneum, in order to help distinguish between past and contemporary processes.

| DATE | EVENT | TYPE | SCALE |
|------|-------|------|-------|
| 1865 | The International Telegraph Union is set up. An important element of the organisation of a mondial communication network, it will later become the International Telecommunication Union (ITU)[4], still active in regulating and standardizing radio-communication. | STANDARD | WORLD |
| 1870 | Franco-Prussian war. | EVENT | WORLD |
| 1874 | The Treaty of Bern creates the General Postal Union[5], which aims to federate international postal distribution. | STANDARD | WORLD |
| 1875 | General Conference on Weights and Measures in Sèvres, France. | STANDARD | WORLD |
| 1882 | Triple Alliance, renewed in 1902. | EVENT | WORLD |
| 1889 | Henri La Fontaine creates La Société Belge de l'arbitrage et de la paix. | EVENT | NATION |
| 1890s | First colonial wars (Fashoda crisis, Boer War ...). | EVENT | WORLD |
| 1890 | Henri La Fontaine meets Paul Otlet. | PERSON | CITY |
| 1891 | Franco-Russian entente, preliminary to the Triple Entente that will be signed in 1907. | EVENT | WORLD |
| 1891 | Henri La Fontaine publishes the essay Pour une bibliographie de la paix. | PUBLICATION | NATION |
| 1893 | Otlet and La Fontaine start together the Office International de Bibliologie Sociologique (OIBS). | ASSOCIATION | CITY |
| 1894 | Henri La Fontaine is elected senator of the province of Hainaut, and later senator of the province of Liège-Brabant. | EVENT | NATION |
| 1895, 2-4 September | First Conférence de Bibliographie, at which it is decided to create the Institut International de Bibliographie (IIB), founded by Henri La Fontaine. | ASSOCIATION | CITY |
| 1900 | Congrès bibliographique international in Paris. | EVENT | WORLD |
| 1903 | Creation of the International Women's Suffrage Alliance (IWSA), which will later become the International Alliance of Women. | ASSOCIATION | WORLD |
| 1904 | Entente cordiale between France and England, which defines their mutual zones of colonial influence in Africa. | EVENT | WORLD |
| 1905 | First Moroccan crisis. | EVENT | WORLD |
| 1907, June | Otlet and La Fontaine organise a Central Office for International Associations, which will become the Union of International Associations (UIA) at the first Congrès mondial des associations internationales in Brussels in May 1910. | ASSOCIATION | CITY |
| 1907 | Henri La Fontaine is elected president of the Bureau international de la paix, which he had helped to initiate. | PERSON | NATION |
| 1908, July | Congrès bibliographique international in Brussels. | EVENT | CITY |
| 1910, May | Official creation of the Union of International Associations (UIA). In 1914 it federates 230 organizations, a little more than half of which still exist. The UIA promotes internationalist aspirations and the desire for peace. | ASSOCIATION | WORLD |
| 1910, 25-27 August | Le Congrès International de Bibliographie et de Documentation. | ASSOCIATION | WORLD |
| 1911 | More than 600 people and institutions are listed as IIB members or refer to its methods, specifically the UDC. | ASSOCIATION | WORLD |
| 1913 | Henri La Fontaine is awarded the Nobel Peace Prize. | EVENT | WORLD |
| 1914 | Germany declares war on France and invades Belgium. | EVENT | WORLD |
| 1916 | La Fontaine publishes The Great Solution: Magnissima Charta while in exile in the United States. It deals with issues of international cooperation between non-governmental organizations and with the structure of universal documentation. | PUBLICATION | WORLD |
| 1919 | Opening of the Mundaneum or Palais Mondial at the Cinquantenaire park. | EVENT | CITY |
| 1919, 28 June | The Traité de Versailles marks the end of World War I and the creation of the Société des Nations (SDN), which will later become the United Nations (UN). | EVENT | WORLD |
| 1924 | Creation, within the IIB, of the Central Classification Commission focusing on the development of the Universal Decimal Classification (UDC). | ASSOCIATION | NATION |
| 1931 | The IIB becomes the International Institute of Documentation (IID); in 1938 it is renamed International Federation of Documentation (IDF). | ASSOCIATION | WORLD |
| 1934 | Publication of Otlet's book Traité de documentation. | PUBLICATION | WORLD |
| 1934 | The Mundaneum is closed after a governmental decision. A part of the archives is moved to the Rue Fétis 44, Brussels, home of Paul Otlet. | MOVE | HOUSE |
| 1939, September | Invasion of Poland by Germany, start of World War II. | EVENT | WORLD |
| 1941 | Some files from the Mundaneum collections concerning international associations are transferred to Germany, as they are assumed to have propaganda value. | MOVE | WORLD |
| 1944 | Death of Paul Otlet. He is buried in Etterbeek cemetery. | EVENT | CITY |
| 1947 | The International Telecommunication Union (ITU) is attached to the UN. | STANDARD | GLOBE |
| 1956 | Two separate ITU committees, the Consultative Committee for International Telephony (CCIF) and the Consultative Committee for International Telegraphy (CCIT), are joined to form the CCITT, an institute to create standards, recommendations and models for telecommunications. | STANDARD | GLOBE |
| 1963 | The American Standard Code for Information Interchange (ASCII) is developed. | STANDARD | GLOBE |
| 1966 | The ARPANET project is initiated. | ASSOCIATION | NATION |
| 1974 | Telenet, the first public version of the Internet, is founded. | STANDARD | WORLD |
| 1986 | First meeting of the Internet Engineering Task Force (IETF), the US-located informal organization that promotes open standards along the lines of "rough consensus and running code". | STANDARD | GLOBE |
| 1992 | Creation of The Internet Society, an American association with an international vocation. | STANDARD | WORLD |
| 1993 | Elio Di Rupo organises the transport of the Mundaneum archives from Brussels to 76 rue de Nimy in Mons. | MOVE | NATION |
| 2012 | Failure of the World Conference on International Telecommunications (WCIT) to reach an international agreement on Internet regulation. | STANDARD | GLOBE |
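Read as data rather than as a list, the timeline is a small relational structure: each row couples a date and an event to a TYPE and a SCALE (HOUSE, CITY, NATION, WORLD, GLOBE), and it is this double tagging that lets local and global events be cqrrelated. Below is a minimal sketch of that reading in Python; the Event record and the by_scale helper are illustrative conveniences, not part of any existing tool, and only a few rows are transcribed.

```python
from dataclasses import dataclass

@dataclass
class Event:
    date: str   # year as given in the table, optionally with month and day
    text: str   # short description of the event
    kind: str   # the TYPE column: EVENT, STANDARD, ASSOCIATION, PUBLICATION, PERSON or MOVE
    scale: str  # the SCALE column: HOUSE, CITY, NATION, WORLD or GLOBE

# A few rows transcribed from the timeline above.
TIMELINE = [
    Event("1890", "Henri La Fontaine meets Paul Otlet.", "PERSON", "CITY"),
    Event("1910", "Official creation of the Union of International Associations (UIA).", "ASSOCIATION", "WORLD"),
    Event("1934", "The Mundaneum is closed; part of the archives move to Rue Fétis 44.", "MOVE", "HOUSE"),
    Event("1993", "The Mundaneum archives are transported from Brussels to Mons.", "MOVE", "NATION"),
    Event("2012", "Failure of the WCIT to agree on Internet regulation.", "STANDARD", "GLOBE"),
]

def by_scale(scale: str) -> list[Event]:
    """Return the events tagged with the given scale, oldest first."""
    return sorted((e for e in TIMELINE if e.scale == scale), key=lambda e: e.date)

# e.g. everything that happened at the scale of the nation:
for event in by_scale("NATION"):
    print(event.date, event.text)
```

Sorting on the date string is enough here because the sampled rows carry plain years; a fuller transcription would want a proper date type.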

ADDITIONAL TIMELINES

• https://www.timetoast.com/timelines/la-premiere-guerre-mondiale
• http://www.telephonetribute.com/timeline.html
• https://www.reseau-canope.fr/savoirscdi/societe-de-linformation/le-monde-du-livre-et-de-la-presse/histoire-du-livre-et-de-la-documentation/biographies/paul-otlet.html
• http://monoskop.org/Otlet
• http://archives.mundaneum.org/fr/historique
REFERENCES

Last Revision: 28·06·2016

1. Vincent Capdepuy, In the prism of the words. Globalization and the philological argument, https://cybergeo.revues.org/24903
2. Paul Otlet, 1916, Les Problèmes internationaux et la Guerre, les conditions et les facteurs de la vie internationale, Genève/
Paris, Kundig/Rousseau, p. 76.
3. http://www.philodroit.be/IMG/pdf/bf_-_le_droit_global_selon_ecole_de_bruxelles_-2014-3.pdf?lang=fr
4. http://www.itu.int/en/Pages/default.aspx
5. https://en.wikipedia.org/wiki/Universal_Postal_Union


# The Smart City - City of Knowledge

DENNIS POHL

In Paul Otlet's words the Mundaneum is “an idea, an institution, a method, a material corpus of works and collections, a building, a network.”[1] It became a lifelong project that he tried to establish together with Henri La Fontaine at the beginning of the 20th century. The collaboration with Le Corbusier was limited to the architectural draft of a centre of information, science and education, leading to the idea of a “World Civic Center” in Geneva. Nevertheless, the dialectical discourse between the two Utopians did not restrict itself to commissioned design, but reveals the relation between a specific positivist conception of knowledge and architecture: the system of information and the spatial distribution according to efficiency principles. A notion that laid the foundation for what is now called the Smart City.

FORMULATING THE MUNDANEUM

“We’re on the verge of a historic moment for cities”[2]

“We are at the beginning of a historic transformation in cities. At a time when the concerns about urban equity, costs, health and the environment are intensifying, unprecedented technological change is going to enable cities to be more efficient, responsive, flexible and resilient.”[3]


In 1927 Le Corbusier participated in the design competition for the headquarters of the League of Nations, but his designs were rejected. It was then that he first met his later cher ami Paul Otlet. Both were already familiar with each other’s ideas and writings, as evidenced by their use of schemes, but also through the epistemic assumptions that underlay their world views.

OTLET, SCHEME AND REALITY

CORBUSIER, CURRENT AND IDEAL TRAFFIC CIRCULATION

Before meeting Le Corbusier, Otlet was fascinated by the idea of urbanism as a science, which systematically organizes all elements of life in infrastructures of flows. He was convinced to work with Van der Swaelmen, who had already planned a world city on the site of Tervuren near Brussels in 1919.[4]

VAN DER SWAELMEN - TERVUREN, 1916

For Otlet it was the first time that two notions from different practices came together, namely an environment ordered and structured according to principles of rationalization and taylorization. On the one hand, rationalization as an epistemic practice that reduces all relationships to those of definable means and ends. On the other hand, taylorization as the possibility to analyze and synthesize workflows according to economic efficiency and productivity. Nowadays, both principles are used synonymously: if all modes of production are reduced to labour, then their efficiency can be rationally determined through means and ends.
“By improving urban technology, it’s possible to significantly improve the lives of billions of people around the world. […] we want to supercharge existing efforts in areas such as housing, energy, transportation and government to solve real problems that city-dwellers face every day.”[5]

In the meantime, in 1922, Le Corbusier developed his theoretical model of the Plan Voisin,
which served as a blueprint for a vision of Paris with 3 million inhabitants. In the 1925
publication Urbanisme his main objective is to construct “a theoretically water-tight formula
to arrive at the fundamental principles of modern town planning.”[6] For Le Corbusier
“statistics are merciless things,” because they “show the past and foreshadow the future”[7],
therefore such a formula must be based on the objectivity of diagrams, data and maps.

CORBUSIER - SCHEME FOR THE TRAFFIC CIRCULATION

OTLET'S FORMULA

Moreover, they “give us an exact picture of
our present state and also of former states;
[...] (through statistics) we are enabled to
penetrate the future and make those truths
our own which otherwise we could only
have guessed at.”[8] Based on the analysis of
statistical proofs he concluded that the
ancient city of Paris had to be demolished in
order to be replaced by a new one.
Nevertheless, he didn’t arrive at a concrete
formula but rather at a rough scheme.
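Le Corbusier's trust that statistics “show the past and foreshadow the future” amounts, in its barest form, to fitting a trend to past counts and extrapolating it. A toy sketch of that gesture, in Python, with invented population figures (no claim is made about actual Parisian statistics):

```python
def fit_line(points):
    """Slope and intercept of the least-squares line through (x, y) points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    return slope, mean_y - slope * mean_x

# Invented (year, inhabitants in millions) pairs standing in for census tables.
history = [(1900, 2.7), (1910, 2.9), (1920, 2.9), (1925, 3.0)]
slope, intercept = fit_line(history)

# "Penetrate the future": extrapolate the fitted line past the data.
print(f"predicted 1935 population: {slope * 1935 + intercept:.2f} million")
```

The point of the sketch is how little the “foreshadowing” contains: the same four numbers that describe the past fully determine the prediction, which is precisely what makes it feel objective and cost-free.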

A formula that includes every atomic entity
was instead developed by his later friend Otlet as an answer to the question he posed in
Monde, on whether the world can be expressed by a determined unifying entity. This is
Otlet’s dream: a “permanent and complete representation of the entire world,”[9] located in
one place.
Early on Otlet understood the active potential of Architecture and Urbanism as a dispositif, a strategic apparatus that places an individual in a specific environment and shapes his understanding of the world.[10] A world that can be determined by ascertainable facts through knowledge. He thought of his Traité de documentation: le livre sur le livre, théorie et pratique as an “architecture of ideas”, a manual to collect and organize the world's knowledge, hand in hand with contemporary architectural developments. As new modernist forms and use of materials propagated the abundance of decorative elements, Otlet believed in the possibility of language as a model of 'raw data', reducing it to essential information and unambiguous facts, while removing all inefficient assets of ambiguity or subjectivity.

From A bag but is language nothing of words:

Tim Berners-Lee: [...] Make a beautiful website, but first give us the unadulterated data, we want the data. We want unadulterated data. OK, we have to ask for raw data now. And I'm going to ask you to practice that, OK? Can you say "raw"?
Audience: Raw.
Tim Berners-Lee: Can you say "data"?
Audience: Data.
TBL: Can you say "now"?
Audience: Now!
TBL: Alright, "raw data now"!
[...]

From La ville intelligente - Ville de la connaissance:

Étant donné que les nouvelles formes modernistes et l'utilisation de matériaux propageaient l'abondance d'éléments décoratifs, Paul Otlet croyait en la possibilité du langage comme modèle de « données brutes », le réduisant aux informations essentielles et aux faits sans ambiguïté, [...]

“Information, from which has been removed all dross and foreign elements, will be set out in a quite analytical way. It will be recorded on separate leaves or cards rather than being confined in volumes,” which will allow the standardized annotation of hypertext for the Universal Decimal Classification (UDC).[11] Furthermore, the “regulation through architecture and its tendency towards a total urbanism would help towards a better understanding of the book Traité de documentation and its proper functional and integral desiderata.”[12] An abstraction would enable Otlet to constitute the “equation of urbanism” as a type of sociology (S): U = u(S), because according to his definition, urbanism “is an art of distributing public space in order to raise general human happiness; urbanization is the result of all activities which a society employs in order to reach its proposed goal; [and] a material expression of its organization.”[13] The scientific position, which determines all characteristic values of a certain region by systematic classification and observation, was developed by the Scottish biologist and town planner Patrick Geddes, whom Paul Otlet invited to present his Town Planning Exhibition to an international audience at the 1913 world exhibition in Gent.[14] What Geddes inevitably took further was the positivist belief in a totality of science, which he unfolded from the ideas of Auguste Comte, Frederic Le Play and Elisée Reclus in order to reach a unified understanding of urban development in a specific context. This position would allow the complexity of an inhabited environment to be represented through data.[15]
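Otlet's insistence on “separate leaves or cards” carrying standardized UDC annotations has a precise mechanical pay-off: UDC notation is decimal, so every appended digit narrows a class, and both the hierarchy and the systematic shelf order can be recovered from the notation strings alone. A minimal sketch of that principle in Python; the top-level class meanings follow the usual UDC outline (3 for the social sciences, 5 for the natural sciences and mathematics), while the card titles and the narrower helper are invented for illustration:

```python
# Each card is a (UDC-style notation, title) pair. Because the notation is
# decimal, hierarchy is purely lexical: a prefix denotes a broader class.
CARDS = [
    ("3", "Social sciences in general"),
    ("31", "Statistics, Otlet's 'merciless' evidence"),
    ("5", "Natural sciences and mathematics"),
    ("51", "Mathematics"),
    ("711", "Town planning (an invented card)"),
]

def narrower(code: str) -> list[tuple[str, str]]:
    """All cards filed under proper subdivisions of `code`."""
    return [c for c in CARDS if c[0].startswith(code) and c[0] != code]

# Plain string sorting of the notations reproduces the systematic order
# in which the cards would be shelved in the catalog.
for code, title in sorted(CARDS):
    depth = len(code) - 1          # each extra digit is one level deeper
    print("  " * depth + f"{code}  {title}")

print(narrower("3"))  # -> [('31', "Statistics, Otlet's 'merciless' evidence")]
```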
THINKING THE MUNDANEUM

The only person that Otlet considered capable of the architectural realization of the
Mundaneum was Le Corbusier, whom he approached for the first time in spring 1928. In
one of the first letters he addressed the need to link “the idea and the building, in all its
symbolic representation. […] Mundaneum opus maximum.” Aside from being a centre of
documentation, information, science and education, the complex should link the Union of
International Associations (UIA), which was founded by La Fontaine and Otlet in 1907,
and the League of Nations. “A material and moral representation of The greatest Society of
the nations (humanity);” an international city located on an extraterritorial area in Geneva.[16]
Despite their different backgrounds, they easily understood each other, since they “did
frequently use similar terms such as plan, analysis, classification, abstraction, standardization
and synthesis, not only to bring conceptual order into their disciplines and knowledge
organization, but also in human action.”[17] Moreover, the appearance of common terms in
their most significant publications is striking: spirit, mankind, elements, work, system and history, to name just a few. These circumstances led both Utopians to think of the Mundaneum as a system, rather than a singular central type of building; it was meant to include as many resources in the development process as possible. Because the Mundaneum is “an idea, an institution, a method, a material corpus of works and collections, a building, a network,”[18] it had to be conceptualized as an “organic plan with the possibility to expand on different scales with the multiplication of each part.”[19] The possibility of expansion and an organic redistribution of elements, adapted to new necessities and needs, is what guarantees the system's efficiency, namely by constantly integrating more resources. By designing and
standardizing forms of life up to the smallest element, modernism propagated a new form of
living which would ensure the utmost efficiency. Otlet supported and encouraged Le
Corbusier with his words: “The twentieth century is called upon to build a whole new
civilization. From efficiency to efficiency, from rationalization to rationalization, it must so raise
itself that it reaches total efficiency and rationalization. […] Architecture is one of the best
bases not only of reconstruction (the deforming and skimpy name given to the whole of postwar activities) but of intellectual and social construction to which our era should dare to lay
claim.”[20] Like the Wohnmaschine, in Corbusier’s famous housing project Unité d'habitation, the distribution of elements is shaped according to man's needs. The premise which underlies this notion is that man's needs and desires can be determined, normalized and standardized following geometrical models of objectivity.

“making transportation more efficient and lowering the cost of living, reducing energy usage and helping government operate more efficiently”[21]
BUILDING THE MUNDANEUM

In the first working phase, from March to September 1928, the plans for the Mundaneum
seemed more a commissioned work than a collaboration. In the 3rd person singular, Otlet
submitted descriptions and organizational schemes which would represent the institutional
structures in a diagrammatic manner. In exchange, Le Corbusier drafted the architectural
plans and detailed descriptions, which led to the publication N° 128 Mundaneum, printed
by International Associations in Brussels.[22] Le Corbusier seemed a little less enthusiastic
about the Mundaneum project than Otlet, mainly because of his scepticism towards the
League of Nations, which he called a “misguided” and “pre-machinist creation.”[23] The
rejection of his proposal for the Palace of the League of Nations in 1927, expressed with anger in a public announcement, may also have played a role. However, the second phase, from
September 1928 to August 1929, was marked by a strong friendship evidenced by the rise
of the international debate after their first publications, letters starting with cher ami and their
agreement to advance the project to the next level by including more stakeholders and
developing the Cité mondiale. This led to the second publication by Paul Otlet, La Cité
mondiale in February 1929, which unexpectedly traumatized the diplomatic environment in
Geneva. Although both tried to organize personal meetings with key stakeholders, the project
didn't find support for its realization, especially after Switzerland had withdrawn its offer of
providing extraterritorial land for Cité mondiale. Instead, Le Corbusier focussed on his Ville
Radieuse concept, which was presented at the 3rd CIAM meeting in Brussels in 1930.[24]
He considered Cité mondiale as “a closed case”, and withdrew himself from the political
environment by considering himself without any political color, “since the groups that gather
around our ideas are, militaristic bourgeois, communists, monarchists, socialists, radicals,
League of Nations and fascists. When all colors are mixed, only white is the result. That
stands for prudence, neutrality, decantation and the human search for truth.”[25]
GOVERNING THE MUNDANEUM

Le Corbusier considered himself and his work “apolitical” or “above politics”.[26] Otlet,
however, was more aware of the political force of his project. “Yet it is important to predict.
To know in order to predict and to predict in order to control, was Comte's positive
philosophy. Prediction doesn't cost a thing, was added by a master of contemporary urbanism
(Le Corbusier).”[27] Lobbying for the Cité mondiale project, Le Corbusier wrote to Arthur Fontaine and Albert Thomas of the International Labour Organization that prediction is free and is “preparing the ways for the coming years”.[28] Free because statistical data is always
available, but he didn't seem to consider that prediction is a form of governing. A similar
premise underlies the present domination of the smart city ideologies, where large amounts of
data are used to predict for the sake of efficiency. Although most of the actors behind these
ideas consider themselves apolitical, the governmental aspect is more than obvious. A form of
control and government, which is not only biopolitical but rather epistemic. The data is not
only used to standardize units for architecture, but also to determine categories of knowledge
that restrict life to the normality of what can be classified. What becomes clear in this
juxtaposition of Le Corbusier's and Paul Otlet's work is that the standardization of
architecture goes hand in hand with an epistemic standardization because it limits what can
be thought, experienced and lived to what is already there. This architecture has to be
considered as an “epistemic object”, which exemplifies the cultural logic of its time.[29] By its
presence it brings the abstract cultural logic underlying its conception into the everyday
experience, and becomes with material, form and function an actor that performs an epistemic
practice on its inhabitants and users. In this case: the conception that everything can be
known, represented and (pre)determined through data.


1. Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisee et Plan du Monde
, (Bruxelles: Editiones Mundeum 1935): 448.
2. Steve Lohr, “Sidewalk Labs, a Start-Up Created by Google, Has Bold Aims to Improve City Living,” New York Times, 11.06.15, http://www.nytimes.com/2015/06/11/technology/sidewalk-labs-a-start-up-created-by-google-has-bold-aims-to-improve-city-living.html?_r=0; quoted here is Dan Doctoroff, founder of Google Sidewalk Labs.

3. Dan Doctoroff, 10.06.2015, http://www.sidewalkinc.com/relevant
4. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 128; See also: L. Van der Swaelmen, Préliminaires d'art civique (Leynde 1916): 164 – 299.
5. Larry Page, Press release, 10.06.2015, http://www.sidewalkinc.com/
6. Le Corbusier, “A Contemporary City” in The City of Tomorrow and its Planning, (New York: Dover Publications 1987):
164.
7. ibid.: 105 & 126.
8. ibid.: 108.
9. Rayward, W Boyd, “Visions of Xanadu: Paul Otlet (1868–1944) and Hypertext” in Journal of the American Society for
Information Science, (Volume 45, Issue 4, May 1994): 235.
10. The French term dispositif, translated as apparatus, refers to Michel Foucault's description of a merely strategic function, “a
thoroughly heterogeneous ensemble consisting of discourses, institutions, architectural forms regulatory decisions, laws,
administrative measures, scientific statements, philosophical, moral and philanthropic propositions – in short, the said as much as
the unsaid.” This distinction allows to go beyond the mere object, and rather deconstruct all elements involved in the production
conditions and relate them to the distribution of power. See: Michel Foucault, “Confessions of the Flesh (1977) interview”, in
Power/Knowledge Selected Interviews and Other Writings, Colin Gordon (Ed.), (New York: Pantheon Books 1980): 194 –
200.
11. Bernd Frohmann, “The role of facts in Paul Otlet’s modernist project of documentation”, in European Modernism and the
Information Society, Rayward, W.B. (Ed.), (London: Ashgate Publishers 2008): 79.
12. “La régularisation de l’architecture et sa tendance à l’urbanisme total aident à mieux comprendre le livre et ses propres
desiderata fonctionnels et intégraux.” See: Paul Otlet, Traité de documentation, (Bruxelles: Mundaneum, Palais Mondial,
1934): 329.
13. “L'urbanisme est l'art d'aménager l'espace collectif en vue d'accroîte le bonheur humain général; l'urbanisation est le résulat de
toute l'activité qu'une Société déploie pour arriver au but qu'elle se propose; l'expression matérielle (corporelle) de son
organisation.” ibid.: 205.
14. Thomas Pearce, Mettre des pierres autour des idées, Paul Otlet, de Cité Mondiale en de modernistische stedenbouw in de jaren
1930, (KU Leuven: PhD Thesis 2007): 39.
15. Volker Welter, Biopolis Patrick Geddes and the City of Life. (Cambridge, Mass: MIT 2003).
16. Letter from Paul Otlet to Le Corbusier and Pierre Jeanneret, Brussels 2nd April 1928. See: Giuliano Gresleri and Dario
Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio, 1982): 221-223.
17. W. Boyd Rayward (Ed.), European Modernism and the Information Society. (London: Ashgate Publishers 2008): 129.
18. “Le Mundaneum est une Idée, une Institution, une Méthode, un Corps matériel de traveaux et collections, un Edifice, un
Réseau.” See: Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisee et
Plan du Monde, (Bruxelles: Editiones Mundeum 1935): 448.
19. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 223.
20. Le Corbusier, Radiant City, (New York: The Orion Press 1964): 27.
21. http://www.sidewalkinc.com/
22. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 128
23. ibid.: 232.
24. ibid.: 129.
25. ibid.: 255.
26. Eric Paul Mumford, The CIAM Discourse on Urbanism, 1928-1960, (Cambridge: MIT Press, 2002): 20.
27. “Savoir, pour prévoir afin de pouvoir, a été la lumineuse formule de Comte. Prévoir ne coûte rien, a ajouté un maître de
l'urbanisme contemporain (Le Corbusier).” See: Paul Otlet, Monde: essai d'universalisme - Connaissance du Monde,
Sentiment du Monde, Action organisee et Plan du Monde, (Bruxelles: Editiones Mundeum 1935): 407.
28. Giuliano Gresleri and Dario Matteoni. La Città Mondiale: Andersen, Hébrard, Otlet, Le Corbusier. (Venezia: Marsilio,
1982): 241.
29. Considering architecture as an object of knowledge formation, the term “epistemic object” by the German philosopher Günter
Abel, helps bring forth the epistemic characteristic of architecture. Epistemic objects according to Abel are these, on which our
knowledge and empiric curiosity are focused. They are objects that perform an active contribution to what can be thought and
how it can be thought. Moreover because one cannot avoid architecture, it determines our boundaries (of thinking). See:
Günter Abel, Epistemische Objekte – was sind sie und was macht sie so wertvoll?, in: Hingst, Kai-Michael; Liatsi, Maria
(ed.), (Tübingen: Pragmata, 2008).


# La ville intelligente - Ville de la connaissance

DENNIS POHL

Selon les mots de Paul Otlet, le Mundaneum est « une idée, une institution, une méthode, un corpus matériel de travaux et de collections, une construction, un réseau. »[1] Il est devenu le projet d'une vie qu'il a tenté de mettre sur pied avec Henri La Fontaine au début du 20e siècle. La collaboration avec Le Corbusier se limitait au projet architectural d'un centre d'informations, de science et d'éducation qui conduira à l'idée d'un « World Civic Center », à Genève. Cependant, le discours dialectique entre les deux utopistes ne s'est pas limité à une réalisation commissionnée, il a révélé la relation entre une conception positiviste spécifique de la connaissance et l'architecture ; le système de l'information et la distribution spatiale d'après des principes d'efficacité. Une notion qui a apporté la base de ce qu'on appelle aujourd'hui la Ville intelligente.

FORMULER LE MUNDANEUM

« Nous sommes à l'aube d'un moment historique pour les villes »[2]

« Nous sommes à l'aube d'une transformation historique des villes. À une époque où les préoccupations pour l'égalité urbaine, les coûts, la santé et l'environnement augmentent, un changement technologique sans précédent va permettre aux villes d'être plus efficaces, réactives, flexibles et résistantes. »[3]


OTLET, SCHÉMA ET RÉALITÉ

CORBUSIER, CIRCULATION DU TRAFIC
ACTUELLE ET IDÉALE

En 1927, Le Corbusier a participé à une
compétition de design pour le siège de la
Ligue des nations. Cependant, ses
propositions furent rejetées. C'est à ce
moment qu'il a rencontré, pour la première
fois, son cher ami Paul Otlet. Tous deux
connaissaient déjà les idées et les écrits de
l'autre, comme le montre leur utilisation des
plans, mais également les suppositions
épistémiques à la base de leur vues sur le
monde. Avant de rencontrer Le Corbusier,
Paul Otlet était fasciné par l'idée d'un
urbanisme scientifique qui organise
systématiquement tous les éléments de la vie
par des infrastructures de flux. Il avait été
convaincu de travailler avec Van der Swaelmen, qui avait déjà prévu une ville
monde sur le site de Tervuren, près de
Bruxelles, en 1919.[4]

Pour Paul Otlet, c'était la première fois que
deux notions de pratiques différentes se
rassemblaient, à savoir un environnement
ordonné et structuré d'après des principes de
rationalisation et de taylorisme. D'un côté, la
rationalisation: une pratique épistémique qui
réduit toutes les relations à des moyens et
des fins définissables. D'un autre, le
taylorisme: une possibilité d'analyse et de
synthèse des flux de travail fonctionnant
selon les règles de l'efficacité économique et
VAN DER SWAELMEN - TERVUREN, 1916
productive. De nos jours, les deux principes
sont considérés comme des synonymes : si
tous les modes de production sont réduits au
travail, alors l'efficacité peut être rationnellement déterminée par les moyens et les fins.
« En améliorant la technologie urbaine, il est possible d'améliorer de manière
significative la vie de milliards de gens dans le monde. […] nous voulons encourager
les efforts existants dans des domaines comme l'hébergement, l'énergie, le transport et le
gouvernement afin de résoudre des problèmes réels auxquels les citadins font face au
[5]
quotidien. »

Pendant ce temps, en 1922, Le Corbusier avait développé son modèle théorique du Plan
Voisin qui a servi de projet pour une vision de Paris avec trois millions d'habitants. Dans la
publication de 1925 d'Urbanisme, son objectif principal est la construction « d'un édifice
théorique rigoureux, à formuler des principes fondamentaux d'urbanisme moderne. »[6] Pour
Le Corbusier, « la statistique est implacable », car « la statistique montre le passé et esquisse
l’avenir »[7], dès lors, une telle formule doit être basée sur l'objectivité des diagrammes, des
données et des cartes.


De plus, « la statistique donne la situation exacte de l’heure présente, mais aussi les états antérieurs ; [...] (à travers les statistiques) nous pouvons pénétrer dans l’avenir et acquérir des certitudes anticipées ».[8] À partir de l'analyse des preuves statistiques, il conclut que la vieille ville de Paris devait être démolie afin d'être remplacée par une nouvelle. Cependant, il n'est pas arrivé à une formule concrète, mais à un plan approximatif.
CORBUSIER - SCHÉMA POUR UNE
CIRCULATION DU TRAFIC

À la place, une formule comprenant chaque
entité atomique fut développée par son ami
Paul Otlet en réponse à la question qu'il
publia dans Monde pour savoir si le monde
pouvait être exprimé par une entité
unificatrice déterminée. Voici le rêve de
Paul Otlet : une « représentation
permanente et complète du monde entier »[9]
dans un même endroit.

LA FORMULE D'OTLET

Paul Otlet comprit rapidement le potentiel actif de l'architecture et de l'urbanisme en tant que dispositif stratégique qui place un individu dans un environnement spécifique et façonne sa compréhension du monde.[10] Un monde qui peut être déterminé par des faits vérifiables à travers la connaissance. Il a pensé son Traité de documentation : le livre sur le livre, théorie et pratique comme une « architecture des idées », un manuel pour collecter et organiser la connaissance du monde en association avec les développements architecturaux contemporains.

Étant donné que les nouvelles formes modernistes et l'utilisation de matériaux propageaient l'abondance d'éléments décoratifs, Paul Otlet croyait en la possibilité du langage comme modèle de « données brutes », le réduisant aux informations essentielles et aux faits sans ambiguïté, tout en se débarrassant de tous les éléments inefficaces et subjectifs.

From A bag but is language nothing of words:

Tim Berners-Lee: [...] Make a beautiful website, but first give us the unadulterated data, we want the data. We want unadulterated data. OK, we have to ask for raw data now. And I'm going to ask you to practice that, OK? Can you say "raw"?
Audience: Raw.
Tim Berners-Lee: Can you say "data"?
Audience: Data.
TBL: Can you say "now"?
Audience: Now!
TBL: Alright, "raw data now"!
[...]

From The Smart City - City of Knowledge:

As new modernist forms and use of materials propagated the abundance of decorative elements, Otlet believed in the possibility of language as a model of 'raw data', reducing it to essential information and unambiguous facts, while removing all inefficient assets of ambiguity or subjectivity.

« Des informations, dont tout déchet et élément étrangers ont été supprimés, seront présentées d'une manière assez analytique. Elles seront encodées sur différentes feuilles ou cartes plutôt que confinées dans des volumes, » ce qui permettra l'annotation standardisée de l'hypertexte pour la classification décimale universelle (CDU).[11] De plus, la « régulation à travers l'architecture et sa tendance à un urbanisme total favoriseront une meilleure compréhension du livre Traité de documentation ainsi que du désidérata fonctionnel et holistique adéquat. »[12] Une abstraction permettrait à Paul Otlet de constituer « l'équation de l'urbanisme » comme un type de sociologie : U = u(S), car selon sa définition, « L'urbanisme est l'art d'aménager l'espace collectif en vue d'accroître le bonheur humain général ; l'urbanisation est le résultat de toute l'activité qu'une Société déploie pour arriver au but qu'elle se propose ; l'expression matérielle (corporelle) de son organisation. »[13] La position scientifique qui détermine toutes les valeurs caractéristiques d'une certaine région par une classification et une observation systémiques a été avancée par le biologiste écossais et planificateur de villes, Patrick Geddes, qui fut invité par Paul Otlet pour l'exposition universelle de 1913 à Gand afin de présenter à un public international sa Town Planning Exhibition.[14] Patrick Geddes allait inévitablement plus loin dans sa croyance positiviste en une totalité de la science, une croyance qui découle des idées d'Auguste Comte, de Frederic Le Play et d'Elisée Reclus, pour atteindre une compréhension unifiée du développement urbain dans un contexte spécifique. Cette position permettrait de représenter à travers des données la complexité d'un environnement habité.[15]
PENSER LE MUNDANEUM

La seule personne que Paul Otlet estimait capable de réaliser l'architecture du Mundaneum était Le Corbusier, qu'il approcha pour la première fois au printemps 1928. Dans une de
ses premières lettres, il évoqua le besoin de lier « l'idée et la construction, dans toute sa
représentation symbolique. […] Mundaneum opus maximum. » En plus d'être un centre de
documentation, d'informations, de science et d'éducation, le complexe devrait lier l'Union des
associations internationales (UAI), fondée par La Fontaine et Otlet en 1907, et la Ligue
des nations. « Une représentation morale et matérielle de The greatest Society of the nations
(humanité) ; » une ville internationale située dans une zone extraterritoriale à Genève.[16]
Malgré les différents milieux dont ils étaient issus, ils pouvaient facilement se comprendre
puisqu'ils « utilisaient fréquemment des termes similaires comme plan, analyse, classification,
abstraction, standardisation et synthèse, non seulement pour un ordre conceptuel dans leurs
disciplines et l'organisation de leur connaissance, mais également dans l'action humaine. »[17]
De plus, l'apparence des termes dans leurs publications les plus importantes est frappante.
Pour n'en nommer que quelques-uns : esprit, humanité, travail, système et histoire. Ces
circonstances ont conduit les deux utopistes à penser le Mundaneum comme un système
plutôt que comme un type de construction central singulier ; le processus de développement
cherchait à inclure autant de ressources que possible. Puisque « Le Mundaneum est une
Idée, une Institution, une Méthode, un Corps matériel de travaux et collections, un Édifice,
un Réseau. »[18] il devait être conceptualisé comme un « plan organique avec possibilité
d'expansion à différentes échelles grâce à la multiplication de chaque partie. »[19] La
possibilité d'expansion et la redistribution organique des éléments adaptées à de nouvelles
nécessités et besoins garantit l'efficacité du système, à savoir en intégrant plus de ressources
en permanence. En concevant et normalisant des formes de vie, même pour le plus petit
élément, le modernisme a propagé une nouvelle forme de vie qui garantirait l'efficacité
optimale. Paul Otlet a soutenu et encouragé Le Corbusier avec ces mots : « Le vingtième
siècle est appelé à construire une toute nouvelle civilisation. De l'efficacité à l'efficacité, de la
rationalisation à la rationalisation, il doit s'élever et atteindre l'efficacité et la rationalisation
totales. […] L'architecture est l'une des meilleures bases, non seulement de la reconstruction
(le nom étriqué et déformant donné à toutes les activités d'après-guerre), mais à la
construction intellectuelle et sociale à laquelle notre ère devrait oser prétendre. »[20] Comme la
Wohnmaschine, dans le célèbre projet d'habitation du Corbusier, Unité d'habitation, la
distribution des éléments est établie en fonction des besoins de l'homme. Le principe qui sous
tend cette notion est l'idée que les besoins et les désirs de l'homme peuvent être déterminés,
normalisés et standardisés selon des modèles géométriques d'objectivité.
« rendre le transport plus efficace et diminuer le coût de la vie, la consommation
[21]
d'énergie et aider le gouvernement à fonctionner plus efficacement »
CONSTRUIRE LE MUNDANEUM

Dans la première phase de travail, de mars à septembre 1928, les plans du Mundaneum
ressemblaient plus à un travail commissionné qu'à une collaboration. À la troisième personne
du singulier, Paul Otlet a soumis des descriptions et des projets organisationnels qui
représenteraient les structures institutionnelles de manière schématique. En échange, Le
Corbusier a réalisé le brouillon des plans architecturaux et les descriptions détaillées, ce qui

conduisit à la publication du N° 128 Mundaneum, imprimée par Associations
Internationales à Bruxelles.[22] Le Corbusier semblait un peu moins enthousiaste que Paul
Otlet concernant le Mundaneum, principalement à cause de son scepticisme vis-à-vis de la
Ligue des nations dont il disait qu'elle était « fourvoyée » et « une création prémachiniste ».[23]
Le rejet de sa proposition pour le palais de la Ligue des nations en 1927, exprimé avec
colère dans une déclaration publique, jouait peut-être également un rôle. Cependant, la
seconde phase, de septembre 1928 à août 1929, fut marquée par une amitié solide dont
témoigne l'amplification du débat international après leurs premières publications, des lettres
commençant par « cher ami », leur accord concernant l'avancement du projet au prochain
niveau avec l'intégration d'actionnaires et le développement de la Cité mondiale. Cela
conduisit à la seconde publication de Paul Otlet, la Cité mondiale, en février 1929, qui
traumatisa de manière inattendue l'environnement diplomatique de Genève. Même si tous
deux tentèrent d'organiser des entretiens personnels avec des acteurs clés, le projet ne trouva
pas de soutien pour sa réalisation, d'autant moins après le retrait de la proposition de la
Suisse de fournir un territoire extraterritorial pour la Cité mondiale. À la place, Le Corbusier
s'est concentré sur son concept de la Ville Radieuse qui fut présenté lors du 3e CIAM à
Bruxelles, en 1930.[24] Il considérait la Cité mondiale comme « une affaire classée » et s'était
retiré de l'environnement politique en considérant qu'il n'avait aucune couleur politique
« puisque les groupes qui se rassemblent autour de nos idées sont des bourgeois militaristes,
des communistes, des monarchistes, des socialistes, des radicaux, la Ligue des nations et des
fascistes. Lorsque toutes les couleurs sont mélangées, seul le blanc ressort. Il représente la
prudence, la neutralité, la décantation et la recherche humaine de la vérité. »[25]
DIRIGER LE MUNDANEUM

Le Corbusier considérait son travail et lui-même comme étant « apolitiques » ou « au-dessus
de la politique ».[26] Cependant, Paul Otlet était plus conscient de la force politique de ce
projet. « Savoir, pour prévoir afin de pouvoir, a été la lumineuse formule de Comte. Prévoir
ne coûte rien, a ajouté un maître de l'urbanisme contemporain (Le Corbusier). »[27] En faisant pression pour le projet de la Cité mondiale, Le Corbusier écrivit à Arthur Fontaine et Albert Thomas de l'Organisation internationale du travail que la prévision était gratuite et « préparait les années à venir ».[28]
Gratuite, car les données statistiques sont toujours disponibles, cependant, il ne semblait pas
considérer la prévision comme une forme de pouvoir. Une prémisse similaire est à l'origine
de la domination actuelle des idéologies de la ville intelligente où de grandes quantités de
données sont utilisées pour prévoir au nom de l'efficacité. Même si la plupart des acteurs
derrière ces idées se considèrent apolitiques, l'aspect gouvernemental est plus qu'évident.
Une forme de contrôle et de gouvernement qui n'est pas seulement biopolitique, mais plutôt
épistémique. Les données sont non seulement utilisées pour standardiser les unités pour
l'architecture, mais également pour déterminer les catégories de connaissance qui restreignent
la vie à la normalité dans laquelle elle peut être classée. Dans cette juxtaposition du travail de
Le Corbusier et Paul Otlet, il devient clair que la standardisation de l'architecture va de pair avec une standardisation épistémique, car elle limite ce qui peut être pensé, ressenti et vécu à
ce qui existe déjà. Cette architecture doit être considérée comme un « objet épistémique »
qui illustre la logique culturelle de son époque.[29] Par sa présence, elle apporte la logique
culturelle abstraite sous-jacente à sa conception dans l'expérience quotidienne et devient, au
côté de la matière, de la forme et de la fonction, un acteur qui accomplit une pratique
épistémique sur ses habitants et ses usagers. Dans ce cas : la conception selon laquelle tout
peut être connu, représenté et (pré)déterminé à travers les données.

Last Revision: 2·08·2016

1. Paul Otlet, Monde : essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisée et Plan du
Monde, (Bruxelles : Editiones Mundeum 1935) : 448.


2. Steve Lohr, « Sidewalk Labs, a Start-Up Created by Google, Has Bold Aims to Improve City Living », New York Times, 11/06/15, http://www.nytimes.com/2015/06/11/technology/sidewalk-labs-a-start-up-created-by-google-has-bold-aims-to-improve-city-living.html?_r=0, citation de Dan Doctoroff, fondateur de Google Sidewalk Labs
3. Dan Doctoroff, 10/06/2015, http://www.sidewalkinc.com/relevant
4. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 128 ; Voir aussi : L. Van der Swaelmen, Préliminaires d'art civique (Leynde 1916) : 164 - 299.
5. Larry Page, Communiqué de presse, 10/06/2015, http://www.sidewalkinc.com/
6. Le Corbusier, « Une Ville Contemporaine » dans Urbanisme, (Paris : Les Éditions G. Crès & Cie 1924) : 158.
7. ibid. : 115 et 97.
8. ibid. : 100.
9. Rayward, W Boyd, « Visions of Xanadu: Paul Otlet (1868–1944) and Hypertext » dans le Journal of the American Society
for Information Science, (Volume 45, Numéro 4, mai 1994) : 235.
10. Le terme français « dispositif » fait référence à la description de Michel Foucault d'une fonction simplement stratégique, « un
ensemble réellement hétérogène constitué de discours, d'institutions, de formes architecturales, de décisions régulatrices, de lois,
de mesures administratives, de déclarations scientifiques, philosophiques, morales et de propositions philanthropiques. En
résumé, ce qui est dit comme ce qui ne l'est pas. » La distinction permet d'aller plus loin que le simple objet, et de déconstruire
tous les éléments impliqués dans les conditions de production et de les lier à la distribution des pouvoirs. Voir : Michel Foucault,
« Confessions of the Flesh (1977) interview », dans Power/Knowledge Selected Interviews and Other Writings, Colin
Gordon (Éd.), (New York : Pantheon Books 1980) : 194 - 200.
11. Bernd Frohmann, « The role of facts in Paul Otlet’s modernist project of documentation », dans European Modernism and the
Information Society, Rayward, W.B. (Éd.), (Londres : Ashgate Publishers 2008) : 79.
12. « La régularisation de l’architecture et sa tendance à l’urbanisme total aident à mieux comprendre le livre et ses propres
désiderata fonctionnels et intégraux. » Voir : Paul Otlet, Traité de documentation, (Bruxelles : Mundaneum, Palais Mondial,
1934) : 329.
13. ibid. : 205.
14. Thomas Pearce, Mettre des pierres autour des idées, Paul Otlet, de Cité Mondiale en de modernistische stedenbouw in de
jaren 1930, (KU Leuven : PhD Thesis 2007) : 39.
15. Volker Welter, Biopolis Patrick Geddes and the City of Life. (Cambridge, Mass : MIT 2003).
16. Lettre de Paul Otlet à Le Corbusier et Pierre Jeanneret, Bruxelles, 2 avril 1928. Voir : Giuliano Gresleri et Dario Matteoni.
La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio, 1982) : 221-223.
17. W. Boyd Rayward (Éd.), European Modernism and the Information Society. (Londres : Ashgate Publishers 2008) : 129.
18. Voir : Paul Otlet, Monde : essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisée et Plan du
Monde, (Bruxelles : Editiones Mundeum 1935) : 448.
19. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 223.
20. Le Corbusier, Radiant City, (New York : The Orion Press 1964) : 27.
21. http://www.sidewalkinc.com/
22. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 128
23. ibid. : 232.
24. ibid. : 129.
25. ibid. : 255.
26. Eric Paul Mumford, The CIAM Discourse on Urbanism, 1928-1960, (Cambridge : MIT Press, 2002) : 20.
27. Voir : Paul Otlet, Monde : essai d'universalisme - Connaissance du Monde, Sentiment du Monde, Action organisée et Plan du
Monde, (Bruxelles : Editiones Mundeum 1935) : 407.
28. Giuliano Gresleri et Dario Matteoni. La Città Mondiale : Andersen, Hébrard, Otlet, Le Corbusier. (Venise : Marsilio,
1982) : 241.
29. En considérant l'architecture comme un objet de formation du savoir, le terme « objet épistémique » du philosophe Günter Abel
aide à produire la caractéristique épistémique de l'architecture. D'après Günter Abel, les objets épistémiques sont ceux sur
lesquels notre connaissance et notre curiosité empirique sont concentrés. Ce sont des objets ont une contribution active en ce qui
concerne ce qui peut être pensé et la manière dont cela peut être pensé. De plus, puisque personne ne peut éviter l'architecture,
elle détermine nos limites (de pensée). Voir : Günter Abel, Epistemische Objekte – was sind sie und was macht sie so
wertvoll?, dans : Hingst, Kai-Michael; Liatsi, Maria (éd.), (Tübingen : Pragmata, 2008).

The Itinerant Archive
The project of the Mundaneum and its many protagonists is undoubtedly
linked to the context of late 19th century Brussels. King Leopold II, in an
attempt to awaken his country's desire for greatness, let a steady stream of
capital flow into the city from his private colony in Congo. Located on the
crossroads between France, Germany, The Netherlands and The United
Kingdom, the Belgian capital formed a fertile ground for institutional
projects with international ambitions, such as the Mundaneum. Its tragic
demise was unfortunately equally at home in Brussels. Already in Otlet's
lifetime, the project fell prey to the disinterest of its former patrons, not
surprising after World War I had shaken their confidence in the beneficial
outcomes of a global knowledge infrastructure. A complex entanglement of
disinterested management and provincial politics sent the numerous boxes and
folders on a long trajectory through Brussels, until they finally slipped out of
the city. It is telling that the Capital of Europe has been unable to hold on to its
pertinent past.


This tour is a kind of itinerant monument to the Mundaneum in Brussels. It
takes you along the many temporary locations of the archives, guided by the
words of caretakers, reporters and biographers that have crossed its path.
Following the increasingly dispersed and dwindling collection through the city
and the centuries, you won't come across any material trace of its passage. You
might discover many unknown corners of Brussels though.
1919: MUSÉE INTERNATIONAL

Besides the Répertoire bibliographique universel and a press museum that would
come to hold up to 200,000 specimens of newspapers from around the world, one
would find there some 50 rooms, a sort of museum of technical and scientific
humanity. This decade represents the golden age of the Mundaneum, even
though the bulk of its collections was assembled between 1895 and 1914, before
the Palais Mondial existed. The growth of the collections would never again
reach the same proportions.[1]
In 1920, the Musée international and the institutions created by Paul Otlet and
Henri La Fontaine occupy some hundred rooms. The ensemble will from then
on be called Palais Mondial or Mundaneum. In the 1920s, Paul Otlet and
Henri La Fontaine also set up the Encyclopedia Universalis Mundaneum, an
illustrated encyclopaedia composed of tableaux on movable panels.[2]

Start at Parc du Cinquantenaire 11, Brussels, in front of the entrance of what is now Autoworld.

In 1919, significantly delayed by World War I, the Musée international finally opened. The
project had been conceived by Paul Otlet and Henri La Fontaine ten years earlier and was
meant to be a mix between a documentation center, conference venue and educational display.
It occupied the left wing of the magnificent buildings erected in the Parc du Cinquantenaire
for the Grand Concours International des Sciences et de l'industrie. Museology merged with
the International Institute of Bibliography (IIB), which had its offices in the same building.
The ever-expanding index card catalog had already been accessible to the public since 1914.
The project would later be known as the World Palace or Mundaneum. Here, Paul Otlet
and Henri La Fontaine started to work on their Encyclopaedia Universalis Mundaneum, an
illustrated encyclopaedia in the form of a mobile exhibition.

From House, City, World, Nation, Globe: The ever ambitious process of building the
Mundaneum archives took place in the context of a growing internationalisation of society,
while at the same time the social gap was increasing due to the expansion of industrial society.
Furthermore, the internationalisation of finances and relations did not only concern industrial
society, it also acted as a motivation to structure social and political networks, among others
via political negotiations and the institution of civil society organisations.

Walk under the colonnade to your right, and you will recognise the former entrance of Le Palais Mondial.

Only a few years after its delayed opening, the ambitious project started to lose the support of
the Belgian government, which preferred to use the vast exhibition spaces for commercial
activities. In 1922 and 1924, Le Palais Mondial was temporarily closed to make space for
an international rubber fair.


1934: MUNDANEUM MOVED TO HOME OF PAUL OTLET

If under such conditions the Palais Mondial were to remain closed for good, it
would seem that there is no longer room in our Civilisation for an institution of
universal character, inspired by the ideal indicated in these words at its entrance:
Through worldwide Liberty, Equality and Fraternity − in human Faith, Hope
and Charity − towards the Work, Progress and Peace of all![3]
Cato, my wife, has been absolutely devoted to my work. Her savings and jewels
testify to it; her invaded house testifies to it; her collaboration testifies to it; her wish
to see it finished after me testifies to it; her modest little fortune has served for the
constitution of my work and of my thought.[4]

Walk under the Arc de Triomphe and exit the Jubelfeestpark on your left. On Avenue des Nerviens turn left into Sint Geertruidestraat. Turn left onto Kolonel Van Gelestraat and right onto Rue Louis Hap. Turn left onto Oudergemselaan and right onto Rue Fetis 44.

In 1934, the ministry of public works decided to close the Mundaneum in order to make
room for an extension of the Royal Museum of Art and History. An outraged Otlet posted
himself in front of the closed entrance with his colleagues, but to no avail. The official address
of the Mundaneum was 'temporarily' transferred to the house at Rue Fétis 44, where he lived
with his second wife, Cato Van Nederhasselt.


Part of the archives was moved to Rue Fétis, but many boxes and most of the card indexes
remained stored in the Cinquantenaire building. Paul Otlet continued a vigorous program of
lectures and meetings in other places, including at home.

1941: MUNDANEUM IN PARC LÉOPOLD

The upper galleries ... are one big pile of rubbish, one inspector noted in his report.
It is an impossible mess, and high time for this all to be cleared away. The Nazis
evidently struggled to make sense of the curious spectacle before them. The
institute and its goals cannot be clearly defined. It is some sort of ... 'museum for
the whole world,' displayed through the most embarrassing and cheap and
primitive methods.[5]
Distributed in two large workrooms, in corridors, under stairs, and in attic rooms
and a glass-roofed dissecting theatre at the top of the building, this residue
gradually fell prey to the dust and damp darkness of the building in its lower
regions, and to weather and pigeons admitted through broken panes of glass in the
roof in the upper rooms. On the ground floor of the building was a dimly lit, small,
steeply-raked lecture theatre. On either side of its dais loomed busts of the
founders.[6]
Behind the dirty windows I glimpsed a heap of books, bundles of papers held
together with string, files standing on makeshift shelves. Loose sheets escaped from
boxes piled up in the corners of the immense room; crumpled onionskin paper
mingled with rubble and dust. Makeshift containers had been placed between the
crates to collect rainwater. A pigeon had managed to get inside and beat itself
tirelessly against the immense glass wall that closed off the building.[7]
Annually in this room in the years after Otlet's death until the late 1960s, the
busts garlanded with floral wreaths for the occasion, Otlet and La Fontaine's
colleagues and disciples, Les Amis du Palais Mondial, met in a ceremony of
remembrance. And it was Otlet, theorist and visionary, who held their
imaginations most in beneficial thrall as they continued to work after his death, just
as they had in those last days of his life, among the mouldering, disordered
collections of the Mundaneum, themselves gradually overtaken by age, their
numbers dwindling.[8]

Exit the Fétisstraat onto Chaussee de Wavre, turn right and follow into the Vijverstraat. Turn right on Rue Gray, cross Jourdan plein into Parc Leopold. Right at the entrance is the building of l'Institut d'Anatomie Raoul Warocqué.

In 1941, the Nazi occupiers of Belgium wanted to use the spaces in the Palais du
Cinquantenaire, but these were still being used to store the collections of the Mundaneum.
They decided to move the archives to Parc Léopold, except for a mass of periodicals, which
were simply destroyed. A vast quantity of files related to international associations was
assumed to have propaganda value for the German war effort. This part of the archive was
transferred to Berlin and apparently re-appeared in the Stanford archives (?) many years
later. They must have been taken there by American soldiers after World War II.
Until the 1970s, the Mundaneum (or what was left of it) remained in the decaying building
in Parc Léopold. Georges Lorphèvre and André Colet continued to carry on the work of the
Mundaneum with the help of a few now elderly Amis du Palais Mondial, members of the
association of the same name that was founded in 1921. It is here that the Belgian
librarian André Canonne, the Australian scholar Warden Boyd Rayward and the Belgian
documentary maker Françoise Levie came across the Mundaneum archives for the very first
time.


2009: OFFICES GOOGLE BELGIUM

A natural affinity exists between Google's modern project of making the world's
information accessible and the Mundaneum project of two early 20th century
Belgians. Otlet and La Fontaine imagined organizing all the world's information
on paper cards. While their dream was discarded, the Internet brought it back to
reality and it's little wonder that many now describe the Mundaneum as the paper
Google. Together, we are showing the way to marry our paper past with our
digital future.[9]

Exit the park onto Steenweg op Etterbeek and walk left to number 176-180.

In 2009, Google Belgium opened its offices at the Chaussée d'Etterbeek 180, only a
short walk away from the last location where Paul Otlet was able to work on the
Mundaneum project.
Celebrating the discovery of its "European roots", the company has insisted on the
connection between the project of Paul Otlet and its own mission to organize the world's
information and make it universally accessible and useful. To celebrate the desired
connection to the forefather of documentation, the building is said to have a Mundaneum
meeting room. In the lobby, you can find a vitrine with one of the drawers filled with UDC
index cards, on loan from the Mundaneum archive center in Mons.

1944: GRAVE OF PAUL OTLET

When I am no more, my documentary instrument (my papers) should be kept
together, and, in order that their links should become more apparent, should be
sorted, fixed in successive order by a consecutive numbering of all the cards (like
the pages of a book).[10]
I repeat, my papers form a whole. Each part attaches to it so as to constitute a
single work. My archives are a "Mundus Mundaneum", a tool conceived for
knowledge of the world. Preserve them; do for them what I myself would have
done. Do not destroy them![11]

OPTIONAL: Continue on Chaussée d'Etterbeek toward Belliardstraat. Turn left until you reach Rue de Trèves. Turn right onto Luxemburgplein and take bus 95 direction Wiener.

Paul Otlet dies in 1944 when he is 76 years old. His grave at the cemetery of Ixelles is
decorated with a globe and the inscription "Il ne fut rien sinon Mundanéen" (He was nothing
if not Mundanéen).
Exit the cemetery and walk toward Avenue de la Couronne. At the roundabout, turn left onto Boondaalsesteenweg. Turn left onto Boulevard Général Jacques and take tram 25 direction Rogier.

Halfway through your tram journey you pass Square Vergote (stop: Georges Henri), where
Henri La Fontaine and Mathilde Lhoest used to live. Statesman and Nobel Prize winner
Henri La Fontaine worked closely with Otlet and supported his projects throughout his life.

Get off at the stop Coteaux and follow Rogierstraat until number 67.

1981: STORAGE AT AVENUE ROGIER 67

It was at that moment that the board of directors, in order to save the activities
(exhibitions, free loans, visits, congresses, lectures, etc.), sold a few pieces. So there
was no theft of documents, contrary to what some people claim, de Louvroy
guarantees.[12]
In fact, not one of the thousands of objects contained in the hundred galleries of the
Cinquantenaire has survived into the present, not a single maquette, not a single
telegraph machine, not a single flag, though there are many photographs of the
exhibition rooms.[13]
But I remember having seen, in Brussels, Otlet's furniture in flooded cellars. It is
also said that whole swathes of the collections made the fortune of bargain-hunters
at flea markets. Not to mention that paper keeps badly, and that poorly supervised
depots have contaminated documents that are now beyond recovery.[14]

This part of the walk takes about 45" and will take you from the Ixelles neighbourhood through Sint-Joost to Schaerbeek; from high to low Brussels.

Continue on Steenweg op Etterbeek, cross Rue Belliard and continue onto Jean Reyplein. Take a left onto Chaussée d'Etterbeek. If you prefer, you can take a train at Bruxelles Schumann Station to North Station, or continue following Etterbeekse steenweg onto Square Marie-Louise. Continue straight onto Gutenbergsquare, Rue Bonneels which becomes Braemtstraat at some point. Cross Chaussée de Louvain and turn left onto Oogststraat. Continue onto Place Houwaert and Dwarsstraat. Continue onto Chaussée de Haecht and follow onto Kruidtuinstraat. Take a slight right onto Rue Verte, turn left onto Kwatrechtstraat and under the North Station railroad tracks. Turn right onto Rue du Progrès. Rogierstraat is the first street on your left.

In 1972, we find Les Amis du Mundaneum back at Chaussée de Louvain 969.
Apparently, the City of Brussels has moved the Mundaneum out of Parc Léopold into a
parking garage, 'a building rented by the ministry of Finances', 'in the direction of the
Saint-Josse-ten-Noode station'.[15] Ten years later, the collection is moved to the back house
of a building at Avenue Rogier 67.
As a young librarian, André Canonne visits the collection at this address until he is in a
position to move the collection elsewhere.

1985: ESPACE MUNDANEUM UNDER PLACE ROGIER

One may thus believe the collections of the "Mundaneum" saved, and rightfully
hope for the end of their interminable wandering. At the moment of writing these
lines, the fitting-out of an "Espace Mundaneum" is nearing completion in the
heart of Brussels.[16]
The deed was signed by minister Philippe Monfils, president of the executive. His
predecessor, Philippe Moureaux, was not of the same opinion. He had even bought,
for 8 million, a building on Rue Saint-Josse to house the museum there. The
collections did indeed have to be saved, buried as they were in the backyard of a
rest home on Avenue Rogier! (...) Level minus two, property of the municipality of
Saint-Josse, was ceded on a 30-year long lease to the Communauté, at a rent of
800,000 F per month. (...) But the Mundaneum is also about to become a
mysterious pyramid-shaped affair. On level minus one, the municipality of
Saint-Josse and the French company «Les Pyramides» are negotiating the
construction of a sizeable congress centre (it replaces the plan for a luxurious
piano bar). The investment is estimated at 150 million (...) And then, this
phantom museum is not closed to everyone. It opens its doors! Not to welcome
visitors. Dance parties and banquets are organised in the great hall. Two partners
(one of them a caterer) have signed contracts with the ASBL Centre de lecture
publique de la communauté française. Contracts reconfirmed a fortnight ago and
running for another 3 years![17]
But curiously, the collections are still on Avenue Rogier, despite the purchase of
premises on Rue Saint-Josse by the Communauté française, and despite the official
transfer (never carried out) to the «museum» on level -2 of Place Rogier. The only
things it contains are the crates of books handed back by the Bibliothèque Royale,
which did not know what to do with them.[18]


Follow Avenue Rogier. Turn left onto Brabantstraat until you cross under the railroad tracks. Place Rogier is on your right hand, marked by a large overhead construction of a tilted white dish.

In 1985, André Canonne convinced Les Amis du Palais Mondial to transfer the
responsibility for the collection and mission of the association to the Centre de lecture
publique de la Communauté française, based in Liège, the organisation of which he had by
then become the director. It was agreed that the Mundaneum should stay in Brussels; the
documents mention a future location at the Rue Saint-Josse 49, a building apparently
acquired for that purpose by the Communauté française.
Five years later, plans have changed. In 1990, the archives are moved from their
temporary storage in Avenue Rogier and the Royal Library of Belgium to a new location in
Place Rogier -2. Under the guidance of André Canonne, a "Mundaneum space" will be
opened in the center of Brussels, right above the Metro station Rogier. Unfortunately,
Canonne dies just weeks after the move has begun, and the Brussels Espace Mundaneum
never opens its doors.
In the following three years, the collection remains in the same location but apparently
without much supervision. Journalists report that doors were left unlocked and that Metro
passengers could help themselves to handfuls of documents. The collection has in the
meantime attracted the attention of Elio Di Rupo, at that time minister of education at the
Communauté française. It marks the beginning of the end of the Mundaneum as an itinerant
archive in Brussels.

You can end the tour here, or add two optional destinations:

1934: IMPRIMERIE VAN KEERBERGHEN IN RUE PIERS

OPTIONAL: (from Place Rogier, 20") Follow Kruidtuinlaan onto Boulevard Baudouin and onto Antwerpselaan, down in the direction of the canal. At the Sainctelette bridge, cross the canal and take a slight left into Rue Adolphe Lavallée. Turn left onto Piersstraat. Alternatively, at Rogier you can take a Metro to Ribaucourt station, and walk from there.

At number 101 we find Imprimerie Van Keerberghen, the printer that produced and
distributed Le Traité de Documentation. In 1934, Otlet did not have enough money to pay
for the full print run of the book, and therefore the edition remained with Van Keerberghen,
who would distribute the copies himself through mail orders. The plaque on the door dates
from the period when the Traité was printed. So far we have not been able to confirm whether
this family business is still in operation.


RUE OTLET

OPTIONAL: (from Rue Piers, ca. 30") Follow Rue Piers and turn left into Merchtemsesteenweg and follow until Chaussée de Gand, turn left. Turn right onto Ransfortstraat and cross Chaussée de Ninove. Turn left to follow the canal onto Mariemontkaai and left at Rue de Manchester to cross the water. Continue onto Liverpoolstraat, cross Chaussee de Mons and continue onto Dokter De Meersmanstraat until you meet Rue Otlet.

(from Place Rogier, ca. 30") Follow Boulevard du Jardin Botanique and turn left onto Adolphe Maxlaan and Place De Brouckère. Continue onto Anspachlaan, turn right onto Rue du Marché aux Poulets. Turn left onto Visverkopersstraat and continue onto Rue Van Artevelde. Continue straight onto Anderlechtschesteenweg, continue onto Chaussée de Mons. Turn left onto Otletstraat. Alternatively you can take tram 51 or 81 to Porte D'Anderlecht.

Although it seems that this dreary street is named to honor Paul Otlet, it already
mysteriously appears on a map dated 1894, when Otlet was not even 26 years old,[19] and
again on a map from 1910, when the Mundaneum had not yet opened its doors.[20]


OUTSIDE BRUSSELS

1998: THE MUNDANEUM RESURRECTED

Bernard Anselme, the new minister-president of the Communauté française,
negotiated the transfer to Mons, to the great displeasure of Brussels politicians
furious to see this prestigious collection leave the capital. (...) Steered by Charles
Picqué and Elio Di Rupo, the transfer to Mons did not put an end to the
Mundaneum's troubles. A new ASBL was created in Hainaut to take over. That
was without counting on the ASBL Célès, an independent offshoot of the
CLPCF mentioned above, which the Communauté had eventually dissolved. This
association always considered itself the owner of the collections, to the point of
regularly opposing their public exploitation. The facts proved it right: at the
beginning of May, the Célès obtained from the ministry of Culture that fifty
million be paid to it in exchange for the property rights.[21]
The reestablishment of the Mundaneum in Mons as a museum and archive is in
my view a major event in the intellectual life of Belgium. Its opening attracted
considerable international interest at the time.[22]
Along the walls, 260 card-index cabinets testified to the excess of the project.
Some drawers, open, were lit from the inside, which gave them an impression of
relief, of 3D. An immense terrestrial globe, slowly rotating on itself, occupied the
centre of the space. Under a Milky Way painted directly onto the ceiling, the
voices of Paul Otlet and Henri La Fontaine, performed by actors, rose up as one
approached this or that document.[23]
The Otletaneum, that is to say the archives and personal papers that had belonged
to Paul Otlet, represented an important fonds, little known, badly inventoried,
which could nevertheless be quantified by the space it occupied on the shelves of
the storerooms at the back of the museum. There were 100 to 150 metres of
shelving there, of which only a tiny part had been classified. The rest, some sixty
banana boxes, was unexplored. Not counting the warehouse at Cuesmes, where the
inventory work could be estimated, he told me, at a hundred years or so...[24]
After multiple moves and a laborious work of preservation undertaken by the
successors, this unique heritage has not finished revealing its riches and surprises.
Beyond this original undertaking begun in a philanthropic spirit, the archive centre
offers documentary collections of historical value, as well as specialised
archives.[25]

In 1993, after some arm-wrestling between different local factions of the Parti Socialiste, the
collections of the Mundaneum are moved from Place Rogier to the former department store
L'Indépendance in Mons, 40 kilometres from Brussels and home to Elio Di Rupo. Benoît
Peeters and François Schuiten design a theatrical scenography that includes a gigantic globe
and walls decorated with what is left of the wooden card catalogs. The center opens in
1998 under the direction of librarian Jean-François Füeg.
In 2015, Mons is European Capital of Culture with the slogan "Mons, where culture meets
technology". The Mundaneum archive center plays a central role in the media campaigns
and activities leading up to the festive year. In that same period, the center undergoes a
large-scale renovation that finally brings the archive facilities up to date. A new reading room
is named after André Canonne, the conference room is called Utopia. The mise-en-scène of
Otlet's messy office is removed, but otherwise the scenography remains largely unchanged.


2007: CRYSTAL COMPUTING

Jean-Paul Deplus, the city's alderman for culture, states his ambitions. "This place
is a striking illustration of what visionary utopians have brought to civilisation.
They invented Google avant la lettre. Not only did they do it with the only tools at
their disposal, that is to say ink and paper, but their imagination was so fertile that
drawings and sketches have been found of what prefigures the Internet a century
later." And Jean-Pol Baras adds: "And who has just moved to Mons? A Google
'data center' ... Funny coincidence, no?"[26]
In an atmosphere in which all the partners of Google's "Saint-Ghislain project"
silently savoured the day's confirmation, there was no shortage of anecdotes about
the discretion imposed for 18 months. Besides the use of a code name, Crystal
Computing, which one day earned Elio Di Rupo questions about the possible
arrival of a crystal works in Wallonia ("I created a diversion as best I could!", he
recalls), a confidentiality agreement bound Google, Awex and Idea, among others.
"On several occasions it was a close call, because it was agreed that at the slightest
hitch on this point, Google would call everything off."[27]
A lot of show, few jobs: for its Belgian data center, the search engine giant secured
one of the finest industrial sites in Wallonia. Result: barely 40 direct jobs and not
one euro in taxes. Still, the Region does not see things from that angle. In January,
Le Vif/L'Express has learned, minister of the Economy Jean-Claude Marcourt
(PS) notified Google of the refusal of a 10 million euro economic expansion
subsidy. Reason: the subsidy was conditional on the creation of 110 direct jobs, far
from being reached. Is this why no Walloon minister was present on 10 April at
Elio Di Rupo's side? Marcourt's cabinet insists that relations with the American
company are excellent: "It is the minister who made this new Google investment
possible, by negotiating with its electricity supplier (editor's note: Electrabel) a
reduction of its enormous bill."[28]

In 2005, Elio Di Rupo succeeds in bringing a company called "Crystal Computing" to the
region: a code name for Google Inc., which plans to build a data center at Saint-Ghislain, a
prime industrial site close to Mons. Promising 'a thousand jobs', the presence of Google
becomes a way for Di Rupo to demonstrate that the Marshall Plan for Wallonia, an attempt
to "step up the efforts taken to put Wallonia back on the track to prosperity", is attaining its
goals. The first data center opens in 2007 and is followed by a second one opening in 2015.
The direct impact on employment in the region is estimated to be somewhere between 110[29]
and 120 jobs.[30]


Last Revision: 2·08·2016

1. Paul Otlet (1868-1944). Fondateur du mouvement bibliologique international. By Jacques Hellemans (Bibliothèque de l'Université libre de Bruxelles, Premier Attaché)
2. Jacques Hellemans. Paul Otlet (1868-1944). Fondateur du mouvement bibliologique international
3. Paul Otlet. Document II in: Traité de documentation (1934)
4. Paul Otlet. Diary (1938), quoted in: W. Boyd Rayward. The Universe of Information: The Work of Paul Otlet for Documentation and International Organisation (1975)
5. Alex Wright. Cataloging the World: Paul Otlet and the Birth of the Information Age (2014)
6. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
7. Françoise Levie. L'homme qui voulait classer le monde: Paul Otlet et le Mundaneum (2010)
8. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
9. William Echikson. A flower of computer history blooms in Belgium (2013) http://googlepolicyeurope.blogspot.be/2013/02/a-flower-of-computer-history-blooms-in.html
10. Testament Paul Otlet, 1942.01.18*, No. 67, Otletaneum. Quoted in: W. Boyd Rayward. The Universe of Information: The Work of Paul Otlet for Documentation and International Organisation (1975)
11. Paul Otlet cited in Françoise Levie, Filmer Paul Otlet, Cahiers de la documentation – Bladen voor documentatie – 2012/2
12. Le Soir, 27 July 1991
13. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
14. Le Soir, 17 June 1998
15. http://www.reflexcity.net/bruxelles/photo/72ca206b2bf2e1ea73dae1c7380f57e3
16. André Canonne. Introduction to the 1989 facsimile edition of Le Traité de documentation File:TDD ed1989 preface.pdf
17. Le Soir, 24 July 1991
18. Le Soir, 27 July 1991
19. http://www.reflexcity.net/bruxelles/plans/4-cram-fin-xixe.html
20. http://gallica.bnf.fr/ark:/12148/btv1b84598749/f1.item.zoom
21. Le Soir, 17 June 1998
22. Warden Boyd Rayward. Mundaneum: Archives of Knowledge (2010)
23. Françoise Levie, Filmer Paul Otlet, Cahiers de la documentation – Bladen voor documentatie – 2012/2
24. Françoise Levie, L'Homme qui voulait classer le monde: Paul Otlet et le Mundaneum, Impressions Nouvelles, Bruxelles, 2006
25. Stéphanie Manfroid, Les réalités d'une aventure documentaire, Cahiers de la documentation – Bladen voor documentatie – 2012/2
26. Jean-Michel Djian, Le Mundaneum, Google de papier, Le Monde Magazine, 19 December 2009
27. La Libre Belgique (27 April 2007)
28. Le Vif, April 2013
29. Le Vif, April 2013
30. http://www.rtbf.be/info/regions/detail_google-va-investir-300-millions-a-saint-ghislain?id=7968392


Cross-readings

Les Pyramides

"A pyramid is a structure whose outer surfaces are triangular and converge to a single point at the top"[1]

A slew of pyramids can be found in all of Paul Otlet's drawers. Knowledge
schemes and diagrams, drawings and drafts, designs, prototypes and
architectural plans (including works by Le Corbusier and Maurice Heymans)
employ the pyramid to provide structure, hierarchy, a precise path and finally
access to the world's synthesized knowledge. At specific temporal cross-sections,
these plans were criticized for their proximity to occultism or monumentalism.
Today their rich esoteric symbolism is still readily apparent and gives reason to
search for possible spiritual or mystical underpinnings of the Mundaneum.
Paul Otlet (1926):
“Une immense pyramide est à construire. Au sommet y travaillent Penseurs,
Sociologues et grands Artistes. Le sommet doit rejoindre la base où s’agitent les
masses, mais la base aussi doit être disposée de manière qu’elle puisse rejoindre le
sommet.”[2]

Image spread (recovered captions and quoted fragments):

• [3] Paul Otlet, Species Mundaneum. Mundaneum, Mons. Personal papers of Paul Otlet (MDN). Fonds Encyclopaedia Universalis Mundaneum (EUM), document No. 8506.
• [4] Tomb at the grave of Paul Otlet, with the inscription "Il ne fut rien sinon Mundanéen".
• [5] La Pyramide des Bibliographies. In: Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique (Bruxelles: Editiones Mundaneum, 1934), 290. "Qui scit ubi scientia habenti est proximus": who knows where science is, is about to have it. The librarian is helped by collaborators: bibliothécaire-adjoints, rédacteurs, copistes, gens de service.
• [6] Design for the Mundaneum, section and facades, by Le Corbusier.
• [7] Sketch for La Mondotheque. Paul Otlet, 1935?
• "An axonometric view of the Mundaneum gives the effect of an aerial photograph of an archeological site — Egyptian, Babylonian, Assyrian, ancient American (Mayan and Aztec) or Peruvian. These historical reminiscences are striking. Remember the important building works of the Mayas, who were the zenith of ancient American civilization. These well-known ruins (Uxmal, Chichen-Itza, Palenque on the Yucatan peninsula, and Copan in Guatemala) represent a 'metaphysical architecture' of special cities of religious cults and burial grounds, cities of rulers and priests; pyramids, cathedrals of the sun, moon and stars; holy places of individual gods; graduating pyramids and terraced palaces with architectural objects conceived in basic ..."
• Plan of the Mundaneum by M.C. Heymans.
• Perspective of the Mundaneum by M.C. Heymans.
• [8] Paul Otlet, Cellula Mundaneum (1936). Mundaneum, Mons. Personal papers of Paul Otlet (MDN). Fonds Affiches (AFF).
• [9] "As soon as all forms of life are categorized, classified and determined, individuals will become numeric 'dividuals' in sets, subsets or classes."
• [10] Sketch for Mundaneum World City. Le Corbusier, 1929.
• [12] Atlas Bruxelles – Urbaneum – Belganeum – Mundaneum. Title page of chapter 991 of the Atlas de Bruxelles.
• [13] "The universe (which others call the Library) is composed of an indefinite and perhaps infinite number of triangular galleries, with vast air shafts between, surrounded by very low railings. From any of the triangles one can see, interminably, the upper and lower floors. The distribution of the galleries is invariable."
• [11] "The ship wherein Theseus and the youth of Athens returned had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their place, insomuch that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same."
• [14] Universal Decimal Classification: hierarchy.
• [15] World City by Le Corbusier & Jeanneret.
• Paul Otlet personal papers. Picture taken during a Mondotheque visit of the Mundaneum archives, 11 September 2015.
• "The face of the earth would be much altered if brick architecture were ousted everywhere by glass architecture. It would be as if the earth were adorned with sparkling jewels and enamels. Such glory is unimaginable. We should then have a paradise on earth, and no need to watch in longing expectation for the paradise in heaven."
• "Nutrition. — The basis of our diet rests in principle on a tripod: 1° proteins (meats, nitrogenous matter); 2° glycides (legumes, carbohydrates); 3° lipids (fats). But to preside over the cycle of life and to ensure its regularity, vitamins are also needed: to them are due the growth of the young, the nutritional equilibrium of adults and a certain youthfulness in the old."
• [16] Traité de documentation – La pyramide des bibliographies.
• [17] Inverted pyramid and floor plan by Stanislas Jasinski.
• [18] Architectural vision of the Mundaneum by M.C. Heymans.
• [19] Section by Stanislas Jasinski.
• Le Corbusier, Musée Mondial (1929), FLC, doc nr. 24510.
• Le réseau Mundaneum. From Paul Otlet, Encyclopaedia Universalis Mundaneum.
• [20] Paul Otlet, Mundaneum. Documentatio Partes. MDN, EUM, doc nr. 8506, scan nr. Mundaneum_A400176.
• Metro Place Rogier in 2008.
• Paul Otlet, Atlas Monde (1936). MDN, AFF, scan nr. Mundaneum_032; Mundaneum_034; Mundaneum_036; Mundaneum_038; Mundaneum_040; Mundaneum_042; Mundaneum_044; Mundaneum_046; Mundaneum_049 (sic!)
• [21] "The 'Sacrarium' is something like a temple of ethics, philosophy, and religion. A great globe, modeled and colored, in a scale 1 = 1,000,000 with the planetarium inside, is situated in front of the museum building." See Cross-readings; Rayward, Warden Boyd (who translated and adapted), Mundaneum: Archives of Knowledge, Urbana-Champaign, Ill.: Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, 2010. Original: Charlotte Dubray et al., Mundaneum: Les Archives de la Connaissance, Bruxelles: Les Impressions Nouvelles, 2008. (p. 37)
• Place Rogier, Brussels around 2005.
• Paul Otlet, Le Monde en son ensemble (1936). Mundaneum, Mons. MDN, AFF, scan nr. MUND-00009061_2008_0001_MA.
• [22] Place Rogier, Brussels, with sign "Pyramides".
• [23] Toute la Documentation. A late sketch from 1937 showing all the complexity of the pyramid of documentation. An evolutionary element works its way up, and in the conclusive level one can read a synthesis: "Homo Loquens, Homo Scribens, Societas Documentalis".
• Logo of the Mundaneum.

SOURCES

Last Revision: 1·08·2016

1. https://en.wikipedia.org/wiki/Pyramid
2. Paul Otlet, L’Éducation et les Instituts du Palais Mondial (Mundaneum). Bruxelles: Union des Associations Internationales,
1926, p. 10. ("A great pyramid should be constructed. At the top are to be found Thinkers, Sociologists and great Artists. But
the top must be joined to the base where the masses are found, and the bases must have control of a path to the top.")
3. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396. http://muse.jhu.edu/
4. Photo: Roel de Groof http://www.zita.be/foto/roel-de-groof/allerlei/graf-paul-otlet/
5. Wouter Van Acker, 'Opening the Shrine of the Mundaneum: The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian “Idolators,”' in Proceedings of the Society of Architectural Historians, Australia and New Zealand: 30, Open, edited
by Alexandra Brown and Andrew Leach (Gold Coast, Qld: SAHANZ, 2013), vol. 2, p. 792.
6. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396.
7. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396.
8. Wouter Van Acker. "Architectural Metaphors of Knowledge: The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012): 371-396. http://muse.jhu.edu/
9. Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique (Bruxelles: Editiones Mundaneum, 1934), 420.
10. http://www.fondationlecorbusier.fr
11. http://www.numeriques.be
12. http://www.numeriques.be

13. Rayward, Warden Boyd, The Universe of Information: the Work of Paul Otlet for Documentation and international
Organization, FID Publication 520, Moscow, International Federation for Documentation by the All-Union Institute for
Scientific and Technical Information (Viniti), 1975. (p. 352)
14. The Man Who Wanted to Classify the World
15. Rayward, Warden Boyd (who translated and adapted), Mundaneum: Archives of Knowledge, Urbana-Champaign, Ill.:
Graduate School of Library and Information Science, University of Illinois at Urbana-Champaign, 2010, p. 35. Original:
Charlotte Dubray et al., Mundaneum: Les Archives de la Connaissance, Bruxelles: Les Impressions Nouvelles, 2008.
16. Paul Otlet, Traité de documentation: le livre sur le livre, théorie et pratique (Bruxelles: Editiones Mundaneum, 1934).
17. Wouter Van Acker, 'Opening the Shrine of the Mundaneum: The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian “Idolators,”' in Proceedings of the Society of Architectural Historians, Australia and New Zealand: 30, Open, edited
by Alexandra Brown and Andrew Leach (Gold Coast, Qld: SAHANZ, 2013), vol. 2, p. 804.
18. ibid., p. 803.
19. ibid., p. 804.
20. From Van Acker, Wouter, “Internationalist Utopias of Visual Education. The Graphic and Scenographic Transformation of
the Universal Encyclopaedia in the Work of Paul Otlet, Patrick Geddes, and Otto Neurath,” in Perspectives on Science,
Vol.19, nr.1, 2011, p. 72. http://staging01.muse.jhu.edu/journals/perspectives_on_science/v019/19.1.van-acker.html
21. https://ideals.illinois.edu/bitstream/handle/2142/15431/Rayward_215_WEB.pdf?sequence=2
22. http://www.sonuma.com/archive/la-conservation-des-archives-du-mundaneum
23. Mundaneum Archives, Mons


Transclusionism

This page documents some of the contraptions at work in the Mondotheque
wiki. The name "transclusionism" refers to the term "transclusion" coined by
utopian systems humanist Ted Nelson, and used in MediaWiki to refer to the
inclusion of the same piece of text in different pages.

HOW TO TRANSCLUDE LABELLED SECTIONS BETWEEN TEXTS:

To create transclusions between different texts, you need to select a section of text that will
form a connection between the pages, based on a common subject:
• Think of a category that is the common ground for the link. For example, if two texts
refer to a similar issue or specific concept (e.g. 'rawdata'), formulate it without
spaces or with underscores (e.g. 'raw_data', not 'raw data');
• Edit the two or more pages which you want to link, adding {{RT|rawdata}}
before the text section, and <section end=rawdata /> at the end (take care of the
closing '/>'); a minimal sketch follows this list;
• All text sections in other wiki pages that are marked up with the same common
ground will be transcluded in the margin of the text.
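
For instance, a minimal sketch of the recipe above (the page names below are only examples; 'Xanadu' stands for any article in the wiki, and 'rawdata' for any label you choose):

# On the page 'Xanadu':
{{RT|rawdata}}
Every document can contain links of any type including virtual
copies ("transclusions") to any other document in the system.
<section end=rawdata />

# On any other page dealing with the same concept:
{{RT|rawdata}}
A passage that also discusses raw data ...
<section end=rawdata />

Once both pages carry the same label, each of them shows the other's marked section in its margin.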
HOW IT SHOWS UP:

For example, this is how a transclusion from a labelled section of the Xanadu article appears:

From Xanadu:
Every document can contain links of any type including virtual copies
("transclusions") to any other document in the system accessible to its owner.

HOW IT WORKS:

The <section> tags are used by the 'Labeled Section Transclusion' extension, which
looks for the tagged sections in a text and transcludes them into another text based on the
assigned labels. {{RT|rawdata}}, instead, creates the side links by transcluding the
Template:RT page, substituting the word 'rawdata' for {{{1}}} in its internal code.
This is the commented content of Template:RT:
# Puts the transcluded sections in their own div:
<div>
# Searches semantically for all the pages in the
# requested category, puts them in an array:
{{#ask: [[Category:{{{1}}}]] | format=array | name=results }}
# Starts a loop, going from 0 to the amount of pages
# in the array:
{{#loop: looper
| 0
| {{#arraysize: results}}
# If the pagename of the current element of the array
# is the same as the page calling the loop, it will skip
# the page:
| {{#ifeq: {{FULLPAGENAME: {{#arrayindex: results | {{#var:looper}} }} }}
  | {{FULLPAGENAME}}
  |
# Otherwise it searches through the current page in the
# loop, for all the occurrences of labeled sections:
  | {{#lst: {{#arrayindex: results | {{#var:looper}} }} | {{{1}}} }}
# Adds a link to the current page in loop:
    ([[{{#arrayindex: results | {{#var:looper}} }}]])
# Adds some space after the page:

# End of pagename if statement:
  }}
# End of loop:
}}
# Closes div:
</div>
# Adds the page to the label category:
[[category:{{{1}}}]]
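
For comparison, a labelled section can also be transcluded by hand, without the side-link machinery of Template:RT, by calling the Labeled Section Transclusion parser function directly (page name and label taken from the example above):

{{#lst:Xanadu|rawdata}}

Placed anywhere in a page, this renders the section of the Xanadu article that is marked with the 'rawdata' label.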
REQUIREMENTS

Currently, on top of MediaWiki and SemanticMediaWiki, the following extensions need
to be installed for the contraption to work:
• Labeled Section Transclusion, to be able to select specific sections of the texts and make
connections between them;
• Parser Functions, to be able to use statements like {{#if:}} in the wiki pseudo-language;
• Arrays, to create lists of objects, for example as a result of semantic queries;
• Loops, to loop over the arrays above;
• Variables, as it is needed by some of the above.
Last Revision: 2·08·2016

Reading list

Cross-readings. Not a bibliography.
PAUL OTLET
• Paul Otlet, L’afrique aux noirs, Bruxelles: Ferdinand Larcier,
1888.
• Paul Otlet, L’Éducation et les Instituts du Palais Mondial
(Mundaneum). Bruxelles: Union des Associations
Internationales, 1926.
• Paul Otlet, Cité mondiale. Geneva: World civic center:
Mundaneum. Bruxelles: Union des Associations
Internationales, 1929.
• Paul Otlet, Traité de documentation, Bruxelles, Mundaneum,
Palais Mondial, 1934.
• Paul Otlet, Monde: essai d'universalisme - Connaissance du
Monde, Sentiment du Monde, Action organisee et Plan du
Monde, Bruxelles: Editiones Mundeum 1935. See also:
http://www.laetusinpraesens.org/uia/docs/otlet_contents.php
• Paul Otlet, Plan belgique; essai d'un plan général, économique,
social, culturel. Plan d'urbanisation national. Liaison avec le
plan mondial. Conditions. Problèmes. Solutions. Réformes,
Bruxelles: Éditiones Mundaneum, 1935.

RE-READING OTLET

Or, reading the readers that explored and contextualized the work of Otlet in recent times.
• Jacques Gillen, Stéphanie Manfroid, and Raphaèle Cornille
(eds.), Paul Otlet, fondateur du Mundaneum (1868-1944).
Architecte du savoir, Artisan de paix, Mons: Éditions Les
Impressions Nouvelles, 2010.
• Françoise Levie, L’homme qui voulait classer le monde. Paul
Otlet et le Mundaneum, Bruxelles: Les Impressions Nouvelles,
2006.
• Warden Boyd Rayward, The Universe of Information: the
Work of Paul Otlet for Documentation and international
Organization, FID Publication 520, Moscow: International
Federation for Documentation by the All-Union Institute for
Scientific and Technical Information (Viniti), 1975.
• Warden Boyd Rayward, Universum informatsii: Zhizn' i
deiatel'nost' Polia Otle, Trans. R.S. Giliarevesky, Moscow:
VINITI, 1976.
• Warden Boyd Rayward (ed.), International Organization and
Dissemination of Knowledge: Selected Essays of Paul Otlet,
Amsterdam: Elsevier, 1990.
• Warden Boyd Rayward, El Universo de la Documentacion: la
obra de Paul Otlet sobra documentacion y organizacion
internacional, Trans. Pilar Arnau Rived, Madrid: Mundanau,
2005.
• Warden Boyd Rayward, "Visions of Xanadu: Paul Otlet
(1868-1944) and Hypertext." Journal of the American
Society for Information Science (1986-1998) 45, no. 4 (05,
1994): 235-251.
• Warden Boyd Rayward (who translated and adapted),
Mundaneum: Archives of Knowledge, Urbana-Champaign, Ill.:
Graduate School of Library and Information Science,
University of Illinois at Urbana-Champaign, 2010. Original:
Charlotte Dubray et al., Mundaneum: Les Archives de la
Connaissance, Bruxelles: Les Impressions Nouvelles, 2008.
• Wouter Van Acker, “Internationalist Utopias of Visual
Education. The Graphic and Scenographic Transformation of
the Universal Encyclopaedia in the Work of Paul Otlet,
Patrick Geddes, and Otto Neurath”, in Perspectives on
Science, Vol.19, nr.1, 2011, p. 32-80. http://staging01.muse.jhu.edu/journals/perspectives_on_science/v019/19.1.van-acker.html
• Wouter Van Acker, “Universalism as Utopia. A Historical
Study of the Schemes and Schemas of Paul Otlet
(1868-1944)”, Unpublished PhD Dissertation, University
Press, Zelzate, 2011.
• Theater Adhoc, The humor and tragedy of completeness,
2005.

FATHERS OF THE INTERNET

Constructing a posthumous pre-history of contemporary networking technologies.
• Christophe Lejeune, Ce que l’annuaire fait à Internet. Sociologie
des épreuves documentaires, in Cahiers de la
documentation – Bladen voor documentatie – 2006/3.
• Paul Dourish and Genevieve Bell, Divining a Digital Future,
Cambridge, Mass.: MIT Press 2011.
• John Johnston, The Allure of Machinic Life: Cybernetics,
Artificial Life, and the New AI, Cambridge, Mass.: MIT Press 2008.
• Charles van den Heuvel, Building society, constructing
knowledge, weaving the web, in Boyd Rayward (ed.),
European Modernism and the Information Society, London:
Ashgate Publishers 2008, chapter 7, pp. 127-153.
• Tim Berners-Lee, James Hendler, Ora Lassila, The Semantic
Web, in Scientific American, vol. 284, no. 5,
pp. 34-43, 2001.
• Alex Wright, Cataloging the World: Paul Otlet and the Birth
of the Information Age, Oxford University Press, 2014.
• Popova, Maria, “The Birth of the Information Age: How Paul
Otlet’s Vision for Cataloging and Connecting Humanity
Shaped Our World”, Brain Pickings, 2014.

CLASSIFYING THE WORLD

The recurring tensions between the world and its systematic representation.
• ShinJoung Yeo, James R. Jacobs, Diversity matters?
Rethinking diversity in libraries, Radical Reference,
Counterpoise 9 (2) Spring, 2006, p. 5-8.
• Thomas Hapke, Wilhelm Ostwald's Combinatorics as a Link
between In-formation and Form, in Library Trends, Volume
61, Number 2, Fall 2012.
• Nancy Cartwright, Jordi Cat, Lola Fleck, Thomas E. Uebel,
Otto Neurath: Philosophy Between Science and Politics.
Cambridge University Press, 2008.
• Nathan Ensmenger, The Computer Boys Take Over:
Computers, Programmers, and the Politics of Technical
Expertise. MIT Press, 2010.
• Ronald E. Day, The Modern Invention of Information:
Discourse, History, and Power, Southern Illinois University
Press, 2001.
• Markus Krajewski, Paper Machines: About
Cards & Catalogs, 1548-1929, translated by Peter Krapp,
The MIT Press, 2011.
• Eric de Grolier, A Study of general categories applicable to
classification and coding in documentation; Documentation and
terminology of science; 1962.
• Marlene Manoff, "Theories of the archive from across the
disciplines," in portal: Libraries and the Academy, Vol. 4, No.
1 (2004), pp. 9–25.
• Charles van den Heuvel, W. Boyd Rayward, Facing
Interfaces: Paul Otlet's Visualizations of Data Integration, in
Journal of the American Society for Information Science and
Technology (2011).

DON'T BE EVIL

Standing on the hands of Internet giants.
• Rene Koenig, Miriam Rasch (eds), Society of the Query
Reader: Reflections on Web Search, Amsterdam: Institute of
Network Cultures, 2014.
• Matthew Fuller, Andrew Goffey, Evil Media. Cambridge,
Mass., United States: MIT Press, 2012.
• Steven Levy, In The Plex. Simon & Schuster, 2011.
• Dan Schiller, ShinJoung Yeo, Powered By Google: Widening
Access and Tightening Corporate Control in: Red Art: New
Utopias in Data Capitalism, Leonardo Electronic Almanac,
Volume 20 Issue 1 (2015).
• Invisible Committee, Fuck Off Google, 2014.
• Dave Eggers, The Circle. Knopf, 2014.
• Matteo Pasquinelli, Google’s PageRank Algorithm: A
Diagram of the Cognitive Capitalism and the Rentier of the
Common Intellect. In: Konrad Becker, Felix Stalder
(eds), Deep Search, London: Transaction Publishers: 2009.
• Joris van Hoboken, Search Engine Freedom: On the
Implications of the Right to Freedom of Expression for the
Legal Governance of Web Search Engines. Kluwer Law
International, 2012.
• Wendy Hui Kyong Chun, Control and Freedom: Power and
Paranoia in the Age of Fiber Optics. The MIT Press, 2008.
• Siva Vaidhyanathan, The Googlization of Everything (And
Why We Should Worry). University of California Press.
2011.
• William Miller, Living With Google. In: Journal of Library
Administration Volume 47, Issue 1-2, 2008.
• Lawrence Page, Sergey Brin, The Anatomy of a Large-Scale
Hypertextual Web Search Engine. Computer Networks, vol.
30 (1998), pp. 107-117.
• Ken Auletta, Googled: The end of the world as we know it.
Penguin Press, 2009.

EMBEDDED HIERARCHIES

How classification systems, and the dream of their universal application actually operate.
• Paul Otlet, Traité de documentation, Bruxelles, Mundaneum,
Palais Mondial, 1934. (for alphabet hierarchy, see page 71)
• Paul Otlet, L’afrique aux noirs, Bruxelles: Ferdinand Larcier,
1888.
• Judy Wajcman, Feminism Confronts Technology, University
Park, Pa: Pennsylvania State University Press, 1991.
• Judge, Anthony, “Union of International Associations – Virtual
Organization – Paul Otlet's 100-year Hypertext
Conundrum?”, 2001.
• Ducheyne, Steffen, “Paul Otlet's Theory of Knowledge and
Linguistic Objectivism”, in Knowledge Organization, no 32,
2005, pp. 110–116.

ARCHITECTURAL VISIONS

Writings on how Otlet's knowledge site was successively imagined and visualized on grand
architectural scales.
• Catherine Courtiau, "La cité internationale 1927-1931," in
Transnational Associations, 5/1987: 255-266.
• Giuliano Gresleri and Dario Matteoni. La Città Mondiale:
Andersen, Hébrard, Otlet, Le Corbusier. Venezia: Marsilio,
1982.
• Isabelle Rieusset-Lemarie, "P. Otlet's Mundaneum and the
International Perspective in the History of Documentation and
Information science," in Journal of the American Society for
Information Science (1986-1998)48.4 (Apr 1997):
301-309.
• Le Corbusier, Vers une Architecture, Paris: les éditions G.
Crès, 1923.
• Transnational Associations, "Otlet et Le Corbusier" 1927-31,
INGO Development Projects: Quantity or Quality, Issue No:
5, 1987.
• Wouter Van Acker. "Hubris or utopia? Megalomania and
imagination in the work of Paul Otlet," in Cahiers de la
documentation – Bladen voor documentatie – 2012/2,
58-66.
• Wouter Van Acker. "Architectural Metaphors of Knowledge:
The Mundaneum Designs of Maurice Heymans, Paul Otlet,
and Le Corbusier." Library Trends 61, no. 2 (2012):
371-396.
• Van Acker, Wouter, Somsen, Geert, “A Tale of Two World
Capitals – the Internationalisms of Pieter Eijkman and Paul
Otlet”, in Revue Belge de Philologie et d'Histoire/Belgisch
Tijdschrift voor Filologie en Geschiedenis, Vol. 90, nr.4,
2012.
• Wouter Van Acker, "Opening the Shrine of the Mundaneum
The Positivist Spirit in the Architecture of Le Corbusier and his
Belgian “Idolators”, in Proceedings of the Society of

P.296

P.297

Architectural Historians, Australia and New Zealand: 30,
Open, edited by Alexandra Brown and Andrew Leach (Gold
Coast,Qld: SAHANZ, 2013), vol. 2, 791-805.
• Anthony Vidler, “The Space of History: Modern Museums
from Patrick Geddes to Le Corbusier,” in The Architecture of
the Museum: Symbolic Structures, Urban Contexts, ed.
Michaela Giebelhausen (Manchester; New York: Manchester
University Press, 2003).
• Volker Welter, Biopolis: Patrick Geddes and the City of
Life. Cambridge, Mass: MIT Press, 2003.
• Alfred Willis, “The Exoteric and Esoteric Functions of Le
Corbusier’s Mundaneum,” Modulus/University of Virginia
School of Architecture Review 12, no. 21 (1980).

ZEITGEIST

This section includes both century-old sources and more recent ones on the parallel or
entangled movements around the time of the Mundaneum.
movements around the Mundaneum time.
• Hendrik Christian Andersen and Ernest M. Hébrard.
Création d'un Centre mondial de communication. Paris, 1913.
• Julie Carlier, "Moving beyond Boundaries: An Entangled
History of Feminism in Belgium, 1890–1914," Ph.D.
dissertation, Universiteit Gent, 2010. (esp. 439-458.)
• Bambi Ceuppens, Congo made in Flanders?: koloniale
Vlaamse visies op "blank" en "zwart" in Belgisch Congo.
[Gent]: Academia Press, 2004.
• Conseil International des Femmes (International Council of
Women), Office Central de Documentation pour les Questions
Concernant la Femme. Rapport. Bruxelles : Office Central de
Documentation Féminine, 1909.
• Sandi E. Cooper, Patriotic pacifism waging war on war in
Europe, 1815-1914. New York: Oxford University Press,
1991.
• Sylvie Fayet-Scribe, "Women Professionals in Documentation
in France during the 1930s," Libraries & the Cultural Record

Vol. 44, No. 2, Women Pioneers in the Information Sciences
Part I, 1900-1950 (2009), pp. 201-219. (translated by
Michael Buckland)
• François Garas, Mes temples. Paris: Michalon, 1907.
• Madeleine Herren, Hintertüren zur Macht: Internationalismus
und modernisierungsorientierte Aussenpolitik in Belgien, der
Schweiz und den USA 1865-1914. München: Oldenbourg,
2000.
• Robert Hoozee and Mary Anne Stevens, Impressionism to
Symbolism: The Belgian Avant-Garde 1880-1900, London:
Royal Academy of Arts, 1994.
• Markus Krajewski, Die Brücke: A German contemporary of
the Institut International de Bibliographie. In: Cahiers de la
documentation / Bladen voor documentatie 66.2 (Juin,
Numéro Spécial 2012), 25–31.
• Daniel Laqua, "Transnational intellectual cooperation, the
League of Nations, and the problem of order," in Journal of
Global History (2011) 6, pp. 223–247.
• Lewis Pyenson and Christophe Verbruggen, "Ego and the
International: The Modernist Circle of George Sarton," Isis,
Vol. 100, No. 1 (March 2009), pp. 60-78.
• Elisée Reclus, Nouvelle géographie universelle; la terre et les
hommes, Paris, Hachette et cie., 1876-94.
• Edouard Schuré, Les grands initiés: esquisse de l'histoire
secrète des religions, 1889.
• W. Boyd Rayward (ed.), European Modernism and the
Information Society: Informing the Present, Understanding the
Past. Aldershot, Hants, England: Ashgate, 2008.
• Wouter Van Acker, "Internationalist Utopias of Visual
Education. The Graphic and Scenographic Transformation of
the Universal Encyclopaedia in the Work of Paul Otlet,
Patrick Geddes, and Otto Neurath", in Perspectives on
Science, Vol. 19, nr. 1, 2011, p. 32-80.
• Nader Vossoughian, "The Language of the World Museum:
Otto Neurath, Paul Otlet, Le Corbusier", Transnational
Associations 1-2 (January-June 2003), Brussels, pp 82-93.
• Alfred Willis, “The Exoteric and Esoteric Functions of Le
Corbusier’s Mundaneum,” Modulus/University of Virginia
School of Architecture Review 12, no. 21 (1980).
Last Revision: 2·08·2016

Colophon/Colofon
• Mondotheque editorial team/redactie team/équipe éditoriale: André Castro,
Sînziana Păltineanu, Dennis Pohl, Dick Reckard, Natacha
Roussel, Femke Snelting, Alexia de Visscher
• Copy-editing/tekstredactie/édition EN: Sophie Burm (Amateur Librarian, The
Smart City - City of Knowledge, X=Y, A Book of the Web), Liz Soltan (An
experimental transcript)
• Translations EN-FR/vertalingen EN-FR/traductions EN-FR: Eva Lena
Vermeersch (Amateur Librarian, A Pre-emptive History of the Google Cultural
Institute, The Smart City - City of Knowledge), Natacha Roussel (LES
UTOPISTES and their common logos, Introduction), Donatella
Portoghese
• Translations EN-NL/vertalingen EN-NL/traductions EN-NL: Femke
Snelting, Peter Westenberg
• Transcriptions/transcripties/transcriptions: Lola Durt, Femke Snelting,
Tom van den Wijngaert
• Design and development/ontwerp en ontwikkeling/graphisme et développement:
Alexia de Visscher, André Castro
• Fonts/lettertypes/polices: NotCourierSans, Cheltenham, Traité facsimile
• Tools/gereedschappen/outils: Semantic Mediawiki, etherpad,
Weasyprint, html5lib, mwclient, phantomjs, gnu make ...
• Source-files/bronbestanden/code source: https://gitlab.com/Mondotheque/
RadiatedBook + http://www.mondotheque.be
• Published by/een publicatie van/publié par: Constant (2016)
• Printed at/druk/imprimé par: Online-Druck.biz
• License/licentie/licence: Texts and images developed by Mondotheque are available
under a Free Art License 1.3 (C) Copyleft Attitude, 2007. You may copy,
distribute and modify them according to the terms of the Free Art License: http://
artlibre.org Texts and images by Paul Otlet and Henri Lafontaine are in the Public
Domain. Other materials copyright by the authors/Teksten en afbeeldingen
ontwikkeld door Mondotheque zijn beschikbaar onder een Free Art License 1.3 (C)
Copyleft Attitude, 2007. U kunt ze dus kopiëren, verspreiden en wijzigen volgens de
voorwaarden van de Free Art License: http://artlibre.org Teksten en beelden van
Paul Otlet en Henri Lafontaine zijn in het publieke domein. Andere materialen:
auteursrecht bij de auteurs/Les textes et images développées par Mondotheque sont

P.300

P.301

disponibles sous licence Art Libre 1.3 (C) Copyleft Attitude 2007. Vous pouvez
les copier, distribuer et modifier selon les termes de la Licence Art Libre: http://
artlibre.org Les textes et les images de Paul Otlet et Henri Lafontaine sont dans le
domaine public. Les autres matériaux sont assujettis aux droits d'auteur choisis par
les auteurs.
• ISBN: 9789081145954
Thank you/bedankt/merci: the contributors/de auteurs/les contributeurs, Yves Bernard,
Michel Cleempoel, Raphaèle Cornille, Jan Gerber, Marc d'Hoore, Églantine Lebacq,
Nicolas Malevé, Stéphanie Manfroid, Robert M. Ochshorn, An Mertens, Dries Moreels,
Sylvia Van Peteghem, Jara Rocha, Roel Roscam Abbing.
Mondotheque is supported by/wordt ondersteund door/est soutenu par: De Vlaamse
GemeenschapsCommissie, Akademie Schloss Solitude.
Last Revision: 2·08·2016

> I am less interested in the critical practice of reflection, of
> showing once-again that the emperor has no clothes, than in finding a
> way to *diffract* critical inquiry in order to make difference
> patterns in a more worldly way.^[1](#ebceffee)^

The techno-galactic software survival guide that you are holding right
now was collectively produced as an outcome of the Techno-Galactic
Software Observatory. This guide proposes several ways to achieve
critical distance from the seemingly endless software systems that
surround us. It offers practical and fantastical tools for the tactical
(mis)use of software, empowering/enabling users to resist embedded
paradigms and assumptions. It is a collection of methods for approaching
software, experiencing its myths and realities, its risks and benefits.

With the rise of online services, the use of software has increasingly
been knitted into the production of software, even while the rhetoric,
rights, and procedures continue to suggest that use and production
constitute separate realms. This knitting together and its corresponding
disavowal have an effect on the way software is used and produced, and
radically alters its operative role in society. The shifts ripple across
galaxies, through social structures, working conditions and personal
relations, resulting in a profusion of apparatuses aspiring to be
seamless while optimizing and monetizing individual and collective flows
of information in line with the interests of a handful of actors. The
diffusion of software services affects the personal, in the form of
intensified identity shaping and self-management. It also affects the
public, as more and more libraries, universities and public
infrastructures as well as the management of public life rely on
\"solutions\" provided by private companies. Centralizing data flows in
the clouds, services blur the last traces of the thin line that
separates bio- from necro-politics.

Given how fast these changes resonate and reproduce, there is a growing
urgency to engage in a critique of software that goes beyond taking a
distance, and that deals with the fact that we are inevitably already
entangled. How can we interact, intervene, respond and think with
software? What approaches can allow us to recognize the agency of
different actors, their ways of functioning and their politics? What
methods of observation enable critical inquiry and affirmative discord?
What techniques can we apply to resurface software where it has melted
into the infrastructure and into the everyday? How can we remember that
software is always at work, especially where it is designed to disappear
into the background?

We adopted the term observation for a number of reasons. We regard
observation as a way to approach software, as one way to organize
engagement with its implications. Observation, and the enabling of
observation through intensive data-centric feedback mechanisms, is part
of the cybernetic principles that underpin present day software
production. Our aim was to scrutinize this methodology in its many
manifestations, including in "observatories" -- high-cost
infrastructures of observation troubled by colonial and imperial
traditions and their problematic divisions of nature and culture --
with the hope of opening up questions about who gets to observe
software (and how), and who is being observed by software (and with
what impact). It is a question of power, one that we answer, at least
in part, with critical play.

We adopted the term techno-galactic to match the advertised capability
of "scaling up to the universe" that comes with contemporary paradigms
of computation, and to address the different scales of software
communities and related political economies that involve and require
observation.

Drawing on theories of software and computation developed in academia
and elsewhere, we grounded our methods in hands-on exercises and
experiments that you now can try at home. This Guide to Techno-Galactic
Software Observation offers methods developed in and inspired by the
context of software production, hacker culture, software studies,
computer science research, Free Software communities, privacy activism,
and artistic practice. It invites you to experiment with ways to stay
with the trouble of software.

The Techno-Galactic Software Observatory
----------------------------------------

In the summer of 2017, around thirty people gathered in Brussels to
explore practices of proximate critique with and of software in the
context of a worksession entitled "Techno-Galactic Software
Observatory".^[2](#bcaacdcf)^ The worksession called for
software-curious people of all kinds to ask questions about software.
The intuition behind such a call was that different types of engagement
require a heterogeneous group of participants with different levels of
expertise, skill and background. During three sessions of two days,
participants collectively inspected the space-time of computation and
probed the universe of hardware-software separations through excursions,
exercises and conversations. They tried out various perspectives and
methods to look at the larger picture of software as a concept, as a
practice, and as a set of techniques.

The first two days of The Techno-Galactic Software Observatory included
visits to the Musée de l'Informatique Pionnière en
Belgique^[3](#aaceaeff)^ in Namur and the Computermuseum
KULeuven^[4](#afbebabd)^. In the surroundings of these collections of
historical 'numerical artefacts', we started viewing software in a
long-term context. It offered us the occasion to reflect on the
conditions of its appearance, and allowed us to take on current-day
questions from a genealogical perspective. What is software? How did it
appear as a concept, in what industrial and governmental circumstances?
What happens to the material conditions of its production (minerals,
factory labor, hardware) when it evaporates into a cloud?

During the second two days we focused on the space-time dimension of IT
development. The way computer programs and operating systems are
manufactured has changed tremendously over time, and so have their
production times and places. From military labs via mega-corporation
cubicles to the open-space freelancer utopia, what ruptures and
continuities can be traced in the production, deployment, maintenance
and destruction of software? From time-sharing to user-space partitions
and containerization, what separations were and are at work? Where and
when is software made today?

The Walk-in Clinic
------------------

The last two days at the Techno-Galactic Software Observatory were
dedicated to observation and its consequences. The development of
software encompasses a series of practices whose evocative names are
increasingly familiar: feedback, report, probe, audit, inspect, scan,
diagnose, explore, test ... What are the systems of knowledge and power
within which these activities take place, and what other types of
observation are possible? As a practical setting for our investigations,
we set up a walk-in clinic on the 25th floor of the World Trade Center,
where users and developers could arrive with software questions of all
kinds.

> Do you suffer from the disappearance of your software into the cloud,
> feel oppressed by unequal user privilege, or experience the torment of
> software-ransom of any sort? Bring your devices and interfaces to the
> World Trade Center! With the help of a clear and in-depth session, at
> the Techno-Galactic Walk-In Clinic we guarantee immediate results. The
> Walk-In Clinic provides free hands-on observations to software curious
> people of all kinds. A wide range of professional and amateur
> practitioners will provide you with
> Software-as-a-Critique-as-a-Service on the spot. Available services
> range from immediate interface critique, collaborative code
> inspection, data dowsing, various forms of network analyses,
> unusability testing, identification of unknown viruses, risk
> assessment, opening of black-boxes and more. Free software
> observations provided. Last intake at 16:45.\
> (invitation to the Walk-In Clinic, June 2017)

On the following pages: Software as a Critique as a Service (SaaCaaS)
Directory and intake forms for Software Curious People (SCP).

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/masterlist\_twosides\_NEU.pdf]{.tmp}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/scprecord\_FINAL.pdf]{.tmp}
[]{#owqzmtdk .anchor}

Techno-Galactic Software Observation Essentials
===============================================
**WARNING**

The survival techniques described in the following guide are to be used
at your own risk in case of emergency regarding software curiosity. The
publisher will not accept any responsibility in case of damage caused
by misuse, misunderstanding of instructions, or lack of curiosity. By
trying the actions described in this guide, you accept the
responsibility of losing data or altering hardware, including hard
disks, USB keys, cloud storage and screens, whether by throwing them on
the floor or by falling on the floor yourself, your feet tangled in an
entanglement of cables. No harm was done to humans, animals, computers
or plants while creating this guide. No firearms or weapons of any kind
are needed in order to survive software.\
Just a little bit of patience.

**Software observation survival stresses**

**Physical fitness plays a great part in software observation. Be fit or
CTRL-Quit.**

When trying to observe software you might experience stresses such as:

*Anxiety* · *Sleep deprivation* · *Forgetting about eating* · *Loss of
time tracking*

**Can you cope with software? You have to.**

> our methods for observation, like mapping, come with their luggage.

[Close encounters]{.grouping} []{#njm5zwm4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.visit)
Encounter several collections of historical hardware
back-to-back]{.method .descriptor} [How]{.how .empty .descriptor}

This can be done by identifying one or more computer museums and visiting
them with little time in-between. Visiting a friend with a large
basement and lots of left-over computer equipment can also help. Seeing
and possibly touching hardware from different contexts
(state administration, business, research, ...), periods of time,
cultural contexts (California, Germany, French-speaking Belgium) and
price ranges allows you to sense the interactions between hardware and
software development.

[Note: It's a perfect way to hear people speak about the objects and
their contexts, how they worked (or not), and how the objects are linked
to one another. It also shows the economic and cultural aspects of
software.]{.note .descriptor} [WARNING: **DO NOT FOLD, SPINDLE OR
MUTILATE**]{.warning .descriptor} [Example: Spaghetti Suitcase]{.example
.descriptor}

At one point during the demonstration of a Bull computer, the guide
revealed the system's "software" -- a suitcase-sized module with
dozens of patch cords. She commented that the term "spaghetti
code" (a derogatory expression for early code using many "GOTO"
statements) had its origin in this physical arrangement of code as
patchings.

Preserving old hardware makes it possible to observe the physical
manifestation of software. See software here: we experienced the
incredible possibility of actually touching software.

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/images/wednesday/IMG\_20170607\_113634\_585.jpg]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/resizes/Techno-Galactic-Software-Observatory/IMG\_1163.JPG?m=1496916927]{.tmp}
[Example: Playing with the binary. Bull cards. Happy operator! Punch
card plays.]{.example .descriptor}

\"The highlight of the collection is to revive a real punch card
workshop of the 1960s.\"

[Example: Collection de la Maison des Écritures d\'Informatique & Bible,
Maredsous]{.example .descriptor}

The particularity of the collection lies in its conservation of the
multiple stages in the life of a piece of software, from its initial
computerization until today. The idea of introducing informatics into
the work of working with/on the Bible (versions in Hebrew, Greek, Latin,
and French) dates back to 1971, via punch card recordings and their
memorization on magnetic tape. Then came the step of analyzing texts
using computers.

[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/resizes/Preparing-the-Techno-galactic-Software-Observatory/DSC05019.JPG?m=1490635726]{.tmp}
[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.jean.heuns]{.tmp}
[]{#mguzmza4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.jean.heuns)
Interview people about their histories with software]{.method
.descriptor} [What: Observe personal narratives around software
history. Retrace the path of a person's relation to software, how it
changed over the years, and the human memories that surround it. Look
at software through personal relations and emotions.]{.what
.descriptor} [How: Interviews are a good way to do it. Informal
conversations also work.]{.how .descriptor}

Jean Heuns has been collecting servers, calculators, software, magnetic
tapes and hard disks for xxx years. He reached an agreement for them to
be displayed in the hallways of the Department of Computer Science,
KU Leuven.

[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3350.JPG]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3361.JPG]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3356.JPG]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3343.JPG]{.tmp}
[TODO: RELATES TO]{.tmp} []{#odfkotky .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.samequestion)
Ask several people from different fields and age-groups the same
question: \"***What is software?***\"]{.method .descriptor} [Remember:
The answers to this question will vary depending on who is asking it to
who.]{.remember .descriptor} [What: By paying close attention to the
answers, and possibly logging them, observations on the ambiguous place
and nature of software can be made.]{.what .descriptor}
[Example]{.example .empty .descriptor}

Jean Huens (system administrator at the department of Computer Science,
KULeuven): "*It is difficult to answer the question 'what is
software', but I know what is good software*"

Thomas Cnudde (hardware designer at ESAT - COSIC, Computer Security and
Industrial Cryptography, KULeuven): "*Software is a list of sequential
instructions! Hardware for me is made of silicon, software a sequence of
bits in a file. But naturally I am biased: I'm a hardware designer so I
like to consider it as unique and special*".

Amal Mahious (Director of NAM-IP, Namur): "*This, you have to ask the
specialists.*"

` {.verbatim}
*what is software?
--the unix filesystem says: it's a file----what is a file?
----in the filesystem, if you ask xxd:
------ it's a set of hexadecimal bytes
-------what is hexadecimal bytes?
------ -b it's a set of binary 01s
----if you ask objdump
-------it's a set of instructions
--side channel researching also says:
----it's a set of instructions
--the computer glossary says:
----it's a computer's programs, plus the procedure for their use http://etherbox.local/home/pi/video/A_Computer_Glossary.webm#t=02:26
------ a computer's programs is a set of instructions for performing computer operations
`
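
The dialogue above can be replayed on any compiled program lying around.
A minimal sketch, assuming a GNU/Linux machine with xxd and objdump
(binutils) installed; /bin/ls is an arbitrary choice:

` {.bash}
f=/bin/ls                     # any binary file will do
xxd "$f" | head -n 3          # the file as a set of hexadecimal bytes
xxd -b "$f" | head -n 3       # the same bytes as binary 01s
objdump -d "$f" | head -n 12  # the same file as a set of instructions
`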

[Remember: To answer the question \"*what is software*\" depends on the
situation, goal, time, and other contextual influences.]{.remember
.descriptor} [TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.everyonescp]{.tmp}
[]{#mzcxodix .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.devmem) FMEM
and /DEV/MEM]{.method .descriptor} [What: Different ways of exploring
your memory (RAM). Because in unix everything is a file, you can access
your memory as if it were a file.]{.what .descriptor} [Urgency: To try
and observe the operational level of software, getting closer to the
workings, the instruction-being of an executable/executing file, the way
it is when it is loaded into memory rather than when it sits in the
harddisk]{.urgency .descriptor} [Remember: In Unix-like operating
systems, a device file or special file is an interface for a device
driver that appears in a file system as if it were an ordinary file. In
the early days you could fully access your memory via the memory device
(`/dev/mem`), but over time access was more and more restricted in
order to prevent malicious processes from directly accessing the kernel
memory. The kernel option CONFIG\_STRICT\_DEVMEM was introduced in
kernel version 2.6 and later (2.6.36--2.6.39, 3.0--3.8, 3.8+HEAD). So
you'll need to use the Linux kernel module fmem: this module creates a
`/dev/fmem` device that can be used for accessing physical memory
without the limits of /dev/mem (1MB/1GB, depending on
distribution).]{.remember .descriptor}
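
Whether this restriction is active on your machine can be checked
against the kernel's build configuration. A quick probe, assuming your
distribution ships the config under /boot:

` {.bash}
# was the running kernel built with strict /dev/mem access?
grep STRICT_DEVMEM "/boot/config-$(uname -r)"
`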

`/dev/mem` tools to explore processes stored in the memory

    ps ax | grep process           # find the number of the process
    cd /proc/numberoftheprocess    # enter its directory in /proc
    cat maps                       # list the memory regions it uses

--> check what it is using

The proc filesystem is a pseudo-filesystem which provides an interface
to kernel data structures. It is commonly mounted at `/proc`. Most of it
is read-only, but some files allow kernel variables to be changed.

dump to a file --> change something in the file --> dump anew to a
file --> diff oldfile newfile

"where am i?"

to find read/write memory addresses of a certain process\
`awk -F "-| " '$3 ~ /rw/ { print $1 " " $2}' /proc/PID/maps`{.bash}

take the range and drop it to hexdump

sudo dd if=/dev/mem bs=1 skip=$(( 16#b7526000 )) \
count=$(( 16#b7528000 - 16#b7526000 )) | hexdump -C

Besides opening the memory dump with a hex editor, you can also try to
explore it with other tools or devices. You can open it as a raw image,
you can play it as a sound, or perhaps send it directly to your
frame-buffer device (`/dev/fb0`).
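
A possible continuation, untested like much in this guide: combine the
steps above to dump the first read/write region of a running process
through `/proc` and listen to it. The PID is yours to pick, and aplay
(from alsa-utils) is an assumption:

` {.bash}
PID=1234                         # a process id of your choosing
# start and end address of its first rw region, from the memory map
read start end <<< $(awk -F'[- ]' '$3 ~ /rw/ {print $1" "$2; exit}' /proc/$PID/maps)
# dump that region byte by byte (needs root; /proc/PID/mem is guarded)
sudo dd if=/proc/$PID/mem bs=1 skip=$((16#$start)) \
     count=$((16#$end - 16#$start)) of=dump.raw
aplay -f U8 -r 8000 dump.raw     # the region as 8-bit, 8kHz sound
`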

[WARNING: Although your memory may look like/sound like/read like
gibberish, it may contain sensitive information about you and your
computer!]{.warning .descriptor} [Example]{.example .empty .descriptor}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/images/Screenshot\_from\_2017-06-07\_164407.png]{.tmp}
[TODO: BOX: Forensic and debugging tools can be used to explore and
problematize the layers of abstraction of computing.]{.tmp} [TODO:
RELATES TO
http://pad.constantvzw.org/p/observatory.guide.monopsychism]{.tmp}
[]{#m2mwogri .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.monopsychism)
Pan/Monopsychism]{.method .descriptor} [What: Reading and writing
sectors of memory from/to different computers]{.what .descriptor} [How:
Shell commands and fmem kernel module]{.how .descriptor} [Urgency:
Memory, even when it is volatile, is a trace of the processes happening
in your computer in the form of saved information, and is therefore more
similar to a file than to a process. Challenging the file/process
divide, sharing memory with others will allow a more intimate relation
with your own and others' computers.]{.urgency .descriptor} [About:
Monopsychism is the philosophical/theological doctrine according to
which there exists but one intellect/soul, shared by all beings.]{.about
.descriptor} [TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.devmem]{.tmp} [Note: The
parallel allocation and observation of the same memory sector in two
different computers is in a sense the opposite process of machine
virtualization, where the localization of multiple virtual machines in
one physical computer can only happen by rigidly separating the memory
sectors dedicated to the different virtual machines.]{.note .descriptor}
[WARNING: THIS METHOD HAS NOT BEEN TESTED, IT CAN PROBABLY DAMAGE YOUR
RAM MEMORY AND/OR COMPUTER]{.warning .descriptor}

First start the fmem kernel module in both computers:

`sudo sh fmem/run.sh`{.bash}

Then load part of your computer memory into the other computer via dd
and ssh:

`dd if=/dev/fmem bs=1 skip=1000000 count=1000 | ssh user@othercomputer dd of=/dev/fmem`{.bash}

Or vice versa, load part of another computer's memory into yours:

`ssh user@othercomputer dd if=/dev/fmem bs=1 skip=1000000 count=1000 | dd of=/dev/fmem`{.bash}

Or even, exchange memory between two other computers:

`ssh user@firstcomputer dd if=/dev/fmem bs=1 skip=1000000 count=1000 | ssh user@secondcomputer dd of=/dev/fmem`{.bash}
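
A read-only variation, equally untested: rather than overwriting memory,
read the same sector on both machines and count how far the two
"intellects" diverge. The hostname and offsets are assumptions:

` {.bash}
# read the same sector locally and remotely (fmem loaded on both sides)
sudo dd if=/dev/fmem bs=1 skip=1000000 count=1000 of=mine.raw
ssh user@othercomputer sudo dd if=/dev/fmem bs=1 skip=1000000 count=1000 > theirs.raw
cmp -l mine.raw theirs.raw | wc -l    # number of differing bytes
`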

` {.quaverbatim}
pan/monopsychism:
(aquinas famously opposed averroes.. whose philosophy can be interpreted as monopsychist)

shared memory

copying the same memory to different computers

https://en.wikipedia.org/wiki/Reflection_%28computer_programming%29

it could cut through the memory like a worm

or it could go through the memory of different computers one after the other and take and leave something there
`

[Temporality]{.grouping} []{#ndawnmy5 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.fountain)
Fountain refreshment]{.method .descriptor} [What: Augmenting a piece of
standardised office equipment designed to dispense water to perform a
decorative function.]{.what .descriptor} [How: Rearranging space as
conditioning observations (WTC vs. Museum vs. University vs. Startup
Office vs. Shifting Walls that became Water Fountains)]{.how
.descriptor} [Who: Gaining access to standardised water dispensing
equipment turned out to be more difficult than expected as such
equipment is typically licensed / rented rather than purchased outright.
Acquiring a unit that could be modified required access to secondary
markets of second-hand office equipment in order to purchase a disused
model.]{.who .descriptor} [Urgency: EU-OSHA (European Agency for Safety
and Health at Work) Directive 2003/10/EC describes the
minimum health and safety requirements regarding the exposure of workers
to the risks arising from physical agents (noise). However, no current
European guidelines exist on the potentially beneficial uses of
tactically designed additive noise systems.]{.urgency .descriptor}

The Techno-Galactic Software Observatory -- Comfortable silence, one way
mirrors

A drinking fountain and screens of one-way mirrors as part of the work
session "*The Techno-Galactic Software Observatory*" organised by
Constant.

For the past 100 years the western ideal of a corporate landscape has
been moving like a pendulum, oscillating between grids of cubicles and
organic, open landscapes, in a near-to-perfect 25-year rhythm. These
days the changes in office organisation are supplemented by sound
design, in corporate settings mostly to create comfortable silence.
Increase the sound and the space becomes more intimate: the person at
the table next to you cannot immediately hear what you are saying. It
seems that actual silence in public and corporate spaces has not been
sought after since the start of the 20th century. Actual silence is not
at the moment considered comfortable. One of the visible symptoms of our
desire to take the edge off the silence is the appearance of fountains
in public space. The fountains' purpose is to give off a neutral sound,
like white noise without the negative connotations. However, as a sound
engineer's definition of noise is unwanted sound, it all depends on
one's personal relation to the sound of dripping water.

This means that there needs to be a consistent inoffensiveness to create
comfortable silence.

In corporate architecture the arrival of glass buildings was originally
seen as a symbol of transparency, especially loved by governmental
buildings. Yet the reflectiveness of this shiny surface once combined
with strong light -- known as the treason of the glass -- was only
completely embraced with the invention of one-way-mirror foil. And it
was the corporate business world that would come to be known for its
reflective glass skyscrapers. As the foil reacts to light, it appears
transparent to someone standing in the dark, while leaving the side with
the most light with an opaque surface. Using this foil as room dividers
in a room with changing light, what is hidden or visible will vary
throughout the day. So will the need for comfortable silence.

Disclaimer:\
Similar to the last 100 years of western office organisation,\
this fountain only has two modes:\
on or off

If it is on it also offers two options:\
cold water and hot water

This fountain has been tampered with and has not in any way been
approved by a professional fountain cleaner. I do urge you to consider
this before you take the decision to drink from the fountain.

Should you choose to drink from the fountain, I urge you to write
your name on your cup, in the designated area, for a customised
experience of my care for you.

I do want you to be comfortable.

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/mia6.gif]{.tmp} [SHOW
IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/FullSizeRender%2811%29.jpg]{.tmp}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/IMG\_5695.JPG]{.tmp}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/IMG\_5698.JPG]{.tmp}
[TODO: RELATES TO]{.tmp} []{#mtk5yjbl .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.silvio) Create
\"nannyware\": Software that observes and addresses the user]{.method
.descriptor} [What]{.what .empty .descriptor}

Nannyware is software meant to protect users while limiting their space
of activity. It is software that passive-aggressively suggests or
enforces some kind of discipline. In other words, it is a form of
parental control extended to adults by means of user experience / user
interfaces.

Nannyware is a form of Content-control software: software designed to
restrict or control the content a reader is authorised to access,
especially when utilised to restrict material delivered over the
Internet via the Web, e-mail, or other means. Content-control software
determines what content will be available or be blocked.
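
A minimal sketch of home-grown nannyware, assuming a desktop session
with notify-send (from libnotify) available; the interval is arbitrary:

` {.bash}
#!/bin/sh
# passive-aggressive care: an unsolicited notification every 25 minutes
while true; do
    sleep 1500
    notify-send -u critical "Nannyware" "Time for a break. It is for your own good."
done
`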

[How]{.how .empty .descriptor}

> [...] Restrictions can be applied at various levels: a
> government can attempt to apply them nationwide (see Internet
> censorship), or they can, for example, be applied by an ISP to its
> clients, by an employer to its personnel, by a school to its students,
> by a library to its visitors, by a parent to a child's computer, or
> by an individual user to his or her own computer.^[5](#fcefedaf)^

[Who]{.who .empty .descriptor}

> Unlike filtering, accountability software simply reports on Internet
> usage. No blocking occurs. In setting it up, you decide who will
> receive the detailed report of the computer's usage. Web sites that
> are deemed inappropriate, based on the options you've chosen, will be
> red-flagged. Because monitoring software is of value only "after the
> fact", we do not recommend this as a solution for families with
> children. However, it can be an effective aid in personal
> accountability for adults. There are several available products out
> there.^[6](#bffbbeaf)^

[Urgency]{.urgency .empty .descriptor}

> As with all new lifestyle technologies that come along, in the
> beginning there is also some chaos until their impact can be assessed
> and rules put in place to bring order and respect to their
> implementation and use in society. When the automobile first came into
> being there was much confusion regarding who had the right of way, the
> horse or the car. There were no paved roads, speed limits, stop signs,
> or any other traffic rules. Many lives were lost and much property was
> destroyed as a result. Over time, government and society developed
> written and unwritten rules as to the proper use of the
> car.^[7](#bbfcbcfa)^

[WARNING]{.warning .empty .descriptor}

> Disadvantages of explicit proxy deployment include a user's ability
> to alter an individual client configuration and bypass the proxy. To
> counter this, you can configure the firewall to allow client traffic
> to proceed only through the proxy. Note that this type of firewall
> blocking may result in some applications not working
> properly.^[8](#ededebde)^

[Example]{.example .empty .descriptor}

> The main problem here is that the settings that are required are
> different from person to person. For example, I use workrave with a 25
> second micropause every two and a half minutes, and a 10 minute
> restbreak every 20 minutes. I need these frequent breaks, because I'm
> recovering from RSI. And as I recover, I change the settings to fewer
> breaks. If you have never had any problem at all (using the computer,
> that is), then you may want much fewer breaks, say 10 seconds
> micropause every 10 minutes, and a 5 minute restbreak every hour. It
> is very hard to give proper guidelines here. My best advice is to play
> around and see what works for you. Which settings "feel right".
> Basically, that's how Workrave's defaults evolve.^[9](#cfbbbfdd)^

[SHOW IMAGE HERE: !\[Content-control software\](
http://www.advicegoddess.com/archives/2008/05/03/nannyware.jpg )]{.tmp}
[SHOW IMAGE HERE: !\[A \"nudge\" from your music player
\](http://img.wonderhowto.com/img/10/25/63533437022064/0/disable-high-volume-warning-when-using-headphones-your-samsung-galaxy-s4.w654.jpg)]{.tmp}
[SHOW IMAGE HERE: !\[Emphasis on the body\]
(http://classicallytrained.net/wp-content/uploads/2014/10/take-a-break.jpg)]{.tmp}
[SHOW IMAGE HERE: !\[ \"Slack is trying to be my friend but it\'s more
like a slightly insensitive and slightly bossy acquaintance.\"
\@briecode \] (https://pbs.twimg.com/media/CuZLgV4XgAAYexX.jpg)]{.tmp}
[SHOW IMAGE
HERE:
!\[\](https://images.duckduckgo.com/iu/?u=http%3A%2F%2Fi0.wp.com%2Fatherbeg.com%2Fwp-content%2Fuploads%2F2015%2F06%2FWorkrave-Restbreak-Shoulder.png&f=1)]{.tmp}

Facebook is working on an app to stop you from drunk-posting: "Yann
LeCun, who oversees the lab, told Wired magazine that the program would
be like someone asking you, 'Uh, this is being posted publicly. Are you
sure you want your boss and your mother to see this?'"

[SHOW IMAGE HERE: !\[This Terminal Dashboard Reminds You to Take a Break
When You\'re Lost Deep Inside the Command
Line\](https://i.kinja-img.com/gawker-media/image/upload/s\--\_of0PoM2\--/c\_fit,fl\_progressive,q\_80,w\_636/eegvqork0qizokwrlemz.png)]{.tmp}
[SHOW IMAGE HERE: !\[\](http://waterlog.gd/images/homescreen.png)]{.tmp}
[SHOW IMAGE HERE:
!\[\](https://pbs.twimg.com/media/C6oKTduWcAEruIE.jpg:large)]{.tmp}
[TODO: RELATES TO]{.tmp} []{#yzuwmdq4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.scrollresistance)
Useless scroll against productivity]{.method .descriptor} []{#m2vjndu3
.anchor} [[Method:](http://pad.constantvzw.org/p/observatory.guide.time)
Investigating how humans and machines negotiate the experience of
time]{.method .descriptor} [What]{.what .empty .descriptor} [SHOW IMAGE
HERE:
http://observatory.constantvzw.org/images/Screenshot\_from\_2017-06-10\_172547.png]{.tmp}
[How: python script]{.how .descriptor} [Example]{.example .empty
.descriptor}

` {.verbatim}
# ends of time

https://en.wikipedia.org/wiki/Year_2038_problem

Exact moment of the epoch:
03:14:07 UTC on 19 January 2038

## commands

local UNIX time of this machine
%BASHCODE: date +%s

UNIX time + 1
%BASHCODE: echo $((`date +%s` +1 ))

## goodbye unix time

while :
do
sleep 1
figlet $((2147483647 - `date +%s`))
done

# Sundial Time Protocol Group tweaks

printf 'Current Time in Millennium Unix Time: '
printf $((2147483647 - `date +%s`))
echo
sleep 2
echo $((`cat ends-of-times/idletime` + 2)) > ends-of-times/idletime
idletime=`cat ends-of-times/idletime`
echo
figlet "Thank you for having donated 2 seconds to our ${idletime} seconds of collective SSH pause "
echo
echo

http://observatory.constantvzw.org/etherdump/ends-of-time.html
`

[TODO: RELATES TO]{.tmp} [Languaging]{.grouping} []{#nmi5mgjm .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.quine)
Quine]{.method .descriptor} [What: A program whose function consists of
displaying its own code. Also known as a "self-replicating
program".]{.what .descriptor} [Why: Quines show the tension between
"software as language" and "software as operation".]{.why
.descriptor} [How: By running a quine you will get your code back. You
may take a step further and wonder about functionality and aesthetics,
uselessness and performativity, data and code.]{.how .descriptor}
[Example: A quine (Python). When executed it outputs the same text as
the source:]{.example .descriptor}

` {.sourceCode .python}
s = 's = %r\nprint(s%%s)'
print(s%s)
`

[Example: A oneline unibash/etherpad quine, created during relearn
2017:]{.example .descriptor}

` {.quaverbatim}
wget -qO- http://192.168.73.188:9001/p/quine/export/txt | curl -F "file=@-;type=text/plain" http://192.168.73.188:9001/p/quine/import
`
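
For contrast, a sketch of a false quine in shell: it prints its own
code, but only by reading its source file back from disk, which is
exactly the move a true quine refuses to make:

` {.bash}
#!/bin/sh
# not a quine: it looks at itself instead of reproducing itself
cat "$0"
`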

[WARNING]{.warning .empty .descriptor}

The encounter with quines may deeply affect you. You may want to write
one and get lost in trying to make an ever shorter and more elegant one.
You may also take quines as a point of departure or as limit-ideas for
exploring software dualisms.

"A quine is without why. It prints because it prints. It pays no
attention to itself, nor does it ask whether anyone sees it." "Aquine
is aquine is aquine." Aquine is not a quine. This is not aquine.

[Remember: Although seemingly absolutely useless, quines can be used as
exploits.]{.remember .descriptor}

Exploring boundaries/tensions:

databases treat their content as data (database punctualization);
some exploits manage to include operations in a database

[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.monopsychism]{.tmp}
[]{#zwu0ogu0 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.glossary)
Glossaries as an exercise]{.method .descriptor} [What: Use the technique
of psychoanalytic listening to compile (gather, collect, bring together)
a list of key words for understanding software.]{.what .descriptor}
[How: Create a shared document that participants can add words to as
their importance emerges. To do psychoanalytic listening, let your
attention float freely, hovering evenly, over a conversation or a text
until something catches its ear. Write down what your ear/eye catches.
When working in a collective context, invite others to participate in
this project and describe the practice to them. Each individual may move
in and out of this mode of listening according to their interest and
desire and may add as many words to the list as they want. Use this list
to create an index of software observation.]{.how .descriptor} [When:
This is best done in a bounded context. In the case of the
Techno-Galactic Observatory, our bounded context included the six-day
work session and the pages and process of this publication.]{.when
.descriptor} [Who: The so-inclined within the group]{.who .descriptor}
[Urgency: Creating and troubling categories]{.urgency .descriptor}
[Note: Do not remove someone else's word from the glossary during the
accumulation phase. If an editing and cutting phase is desired, this
should be done after the collection through collective consensus.]{.note
.descriptor} [WARNING: This method is not exclusive to and was not
developed for software observation. It may lead to awareness of
unconscious processes and to shifts in structures of feeling and
relation.]{.warning .descriptor} [Example]{.example .empty .descriptor}

` {.verbatim}
Agile
Code
Colonial
Command Line
Communication
Connectivity
Emotional
Galaxies
Green
Guide
Kernel
Imperial
Issues
Machine
Mantra
Memory
Museum
Observation
Power
Production
Programmers
Progress
Relational
Red
Scripting
Scrum
Software
Survival
Technology
Test
Warning
WhiteBoard
Yoga
`
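
Once the accumulation phase is over, the individual lists can be merged
mechanically; a minimal sketch, assuming one plain-text word list per
participant:

` {.bash}
# merge all lists, sort case-insensitively, count how often each word recurs
cat words-*.txt | sort -f | uniq -ic | sort -rn
`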

[TODO: RELATES TO]{.tmp} []{#mja0m2i5 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.validation)
Adding qualifiers]{.method .descriptor} [Remember: "[V]alues are
properties of things and states of affairs that we care about and strive
to attain... values expressed in technical systems are a function of
their uses as well as their features and designs." Values at Play in
Digital Games, Mary Flanagan and Helen Nissenbaum]{.remember
.descriptor} [What: Bringing a moral, ethical, or otherwise
evaluative/adjectival/validating lens.]{.what .descriptor} [How:
Adjectives create subcategories. They narrow the focus by naming more
specifically the imagined object at hand and by implicitly excluding all
objects that do not meet the criteria of the qualifier. The more
adjectives that are added, the easier it becomes to answer the question
what is software. Or so it seems. Consider what happens if you add the
words good, bad, bourgeois, queer, stable, or expensive to software. Now
make a list of adjectives and try it for yourself. Level two of this
exercise consists of observing a software application and deducing from
this the values of the individuals, companies, and societies that
produced it.]{.how .descriptor} [Note: A qualifier may narrow down
definitions to undesirable degrees.]{.note .descriptor} [WARNING: This
exercise may be more effective at identifying normative and ideological
assumptions at play in the making, distributing, using, and maintaining
of software than at producing a concise definition.]{.warning
.descriptor} [Example: \"This morning, Jan had difficulties to answer
the question \"what is software\", but he said that he could answer the
question \"what is good software\". What is good software?]{.example
.descriptor} [TODO: RELATES TO]{.tmp} []{#mmmwmje2 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.softwarethrough)
Searching \"software\" through software]{.method .descriptor} [What: A
quick way to sense the ambiguity of the term \'software\', is to go
through the manual files on your hard drive and observe in which cases
is the term used.]{.what .descriptor} [How: command-line oneliner]{.how
.descriptor} [Why: Software is a polymorphic term that take different
meanings and comes with different assumptions for the different agents
involved in its production, usage and all other forms of encounter and
subjection. From the situated point of view of the software present on
your machine, when and why does software call itself as such?]{.why
.descriptor} [Example]{.example .empty .descriptor}

so software exists only outside your computer? only in general terms?
checking for the word software in all man pages:

grep -nr software /usr/local/man
!!!!

software appears only in terms of license:

This program is free software
This software is copyright (c)

we don't run software. we still run programs.\
nevertheless software is everywhere
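
The same oneliner can be pointed at the compressed manual pages most
distributions ship; the path /usr/share/man/man1 is an assumption that
varies per system:

` {.bash}
zgrep -li software /usr/share/man/man1/*.gz | head    # which pages mention it
zgrep -hio software /usr/share/man/man1/*.gz | wc -l  # how often it is said
`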

[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.samequestion]{.tmp}
[]{#ndhkmwey .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.everyonescp)
Persist in calling everyone a Software Curious Person]{.method
.descriptor} [What: Persistence in naming is a method for changing a
person's relationship to software by (sometimes forcibly) calling
everyone a Software Curious Person.]{.what .descriptor} [How: Insisting
on curiosity as a relation, rather than for example 'fear' or
'admiration', might help cut down the barriers between different types
of expertise and allow multiple stakeholders to feel entitled to ask
questions, to engage, to investigate and to observe.]{.how .descriptor}
[Urgency: Software is too important to not be curious about.
Observations could benefit from recognising different forms of
knowledge. It seems important to engage with software through multiple
interests, not only by means of technical expertise.]{.urgency
.descriptor} [Example: This method was used to address each of the
visitors at the Techno-Galactic Walk-in Clinic.]{.example .descriptor}
[TODO: RELATES TO]{.tmp} [Healing]{.grouping} []{#mmu1mgy0 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.relational)
Set up a Relational Software Observatory Consultancy (RSOC)]{.method
.descriptor} [Remember]{.remember .empty .descriptor}

- Collectivise research around hacking to save time.
- Self-articulate software needs as your own Operating (system)
perspective.
- Change the lens by looking at software through a time perspective.

[What: By paying a visit to our ethnomethodology interview practice
you'll learn to observe software from different angles / perspectives.
Our practitioners' passion is to turn the "what is your relation to
software" discussion into a service.]{.what .descriptor} [How: Reading
the signs. Considering the ever-changing nature of software development
and use, and its vast impact on globalized societies, it is necessary to
recognize some of the issues of how software is (often) either
passively perceived or actively observed, without an articulation of the
relations. We offer a method to read the signs of the relational aspect
of software observation. It's a crucial aspect of our guide. It will
give you another view on software that will shape your ability to
survive any kind of software disaster.]{.how .descriptor} [SHOW IMAGE
HERE: !\[Reading the signs. From: John \"Lofty\" Wiseman, SAS Survival
Handbook: The Ultimate Guide to Surviving Anywhere\](
http://gallery.constantvzw.org/index.php/Techno-Galactic-Software-Observatory/IMAG1319
)]{.tmp} [WARNING]{.warning .empty .descriptor} [SHOW IMAGE HERE: have a
advertising blob for the RSOC with a smiling doctor welcoming
image]{.tmp} [Example]{.example .empty .descriptor}

What follows is an example of a possible diagnostic questionnaire.

Sample Questionnaire
--------------------

**What to expect** You will obtain a cartography of software user
profiles. It will help you to shape your own relation to software. You
will be able to construct your own taxonomy and classification of
software users, which is needed in order to find a means of rescue in
case of a software catastrophe.

- SKILLS\
- What kind of user would you say that you are?
- What is your most frequently used type of software?
- How often do you install/experiment/learn new software?



- History
- What is your first recollection of software use?
- How often do / when did you last purchase software or pay for a
software service?



- Ethics
- What is the software feature you care about the most?
- Do you use any free software?
- if yes, then
- do you remember your first attempt at using this software
service? Do you still use it? If not, why?



- Do you pay for media distribution/streaming services?
- Do you remember your first attempt at using free software and how
did that make you feel?
- Have you used any of these software services: facebook, dating apps
(grindr, tinder, etc.), twitter, instagram or equivalent.



- Can you talk about your favorite apps or webtools that you use
regularly?
- What is the most popular software your friends use?



- SKILL
- Would you say that you are a specialised user?



- Have you ever used the command line?
- Do you know about scripting?
- Have you ever edited an HTML page? A CSS file? A PHP file? A
configuration file?
- Can you talk about your most technical encounter with your computer
/ telephone?



- ECONOMY\
- How do you pay for your software use?
- Please elaborate (for example, do you buy the software? /
contribute in kind / deliver services or support)
- What is the last software that you paid for using?
- What online services are you currently paying for?
- Is someone paying for your use of service?



- Personal
- What stories do you have concerning contracts and administration in
relation to your software, Internet or computer?
- How does software help you shape your relations with other people?
- From which countries does your software come / where does it reside?
How do you feel about that?
- Have you ever read the terms of a software service? What about one
that is not targeting the American market?

Sample questionnaire results
----------------------------

Possible/anticipated user profiles
----------------------------------

### \...meAsHardwareOwnerSoftwareUSER:

\"I did not own a computer personally until very very late as I did not
enjoy gaming as a kid or had interest in spending much time behind PC
beyond work (and work computer). My first was hence I think in 2005 and
it was a SGI workstation that was the computer of the year 2000 (cost
10.000USD) and I got it for around 300USD. Proprietary drivers for
unified graphics+RAM were never released, so it remained a software
dead-end in gorgeous blue curved chassis
http://www.sgidepot.co.uk/sgidepot/pics/vwdocs.jpg\"

### \...meAsSoftwareCONSUMER:

\"I payed/purchased software only twice in my life (totalling less then
25eur), as I could access most commercial software as widely pirated in
Balkans and later had more passion for FLOSS anyway, this made me relate
to software as material to exchange and work it, rather than commodity
goods I could or not afford.\"

### \...meAsSoftwareINVESTOR:

\"I did it as both of those apps were niche products in early beta (one
was Jeeper Elvis, real-time-non-linear-video-editor for BeOS) that
failed to reach market, but I think I would likely do it again and only
in that mode (supporting the bleeding edge and off-stream work), but
maybe with more than 25eur.\"

### \...meAsSoftwareUserOfOS:

\"I would spend most of 80s ignoring computers, 90ties figuring out
software from high-end to low-end, starting with OSF/DecAlpha and SunOS,
than IRIX and MacOS, finally Win 95/98 SE, that permanently pushed me
into niches (of montly LINUX distro install fests, or even QNX/Solaris
experiments and finally BeOS use).\"

### \...meAsSoftwareWEBSURFER:

\"I got used to websurfing in more than 15 windows on UNIX systems and
never got used to less than that ever since, furthermore with addition
of more browser options this number only multiplied (always wondered if
my first system was Windows 3.11 - would I be a more focused person and
how would that form my relations to browser windows\>tabs).\"

### \...meAsSoftwareUserOfProprietarySoftware:

"I signed one NDA contract in person, on paper and with ink, on a
rainy day while stopping off at a train station in north Germany, for
software that was later to be pulled from the market due to a
problematic licensing agreement (intuitively I knew it was wrong) - it
had too many unprofessional pixelated edges in its graphics."

### \...meAsSoftwareUserOfDatingWebsites:

\"I got one feature request implemented by a prominent dating website
(to search profiles by language they speak), however I was never
publicly acknowledged (though I tried to make use of it few times), that
made our relations feel a bit exploitative and underappreciated. \"

### \...meAsSoftwareUserTryingToGoPRO:

\"my only two attempts to get into the software company failed as they
insisted on full time commitments. Later I found out ones were
intimidated in interview and other gave it to a person that negotiated
to work part time with friend! My relation to professionalism is likely
equally complex and pervert as one to the software.\"

Case study : W. W.
------------------

\...ww.AsExperiencedAdventurousUSER - experiments with software every
two days as she uses FLOSS and GNU/Linux; cares most for the
malleability of the software - as a result she has big expectations of
flexibility even in a software category which is quite conventional and
stability-focused, like file-hosting.

\...ww.AsAnInvestorInSoftware - paid for a compiled version of FLOSS
audio software 5 years ago as she is supportive of the economy and work
around production, maintenance and support; but she also used closed
hardware/software where she had to agree to licences she finds unfair,
and then she hacked it in order to use it as an expert - when she had
time.

\...ww.AsCommunicationSoftwareUSER - she is not using commercial social
networks, so she is very conscious of information transfers and time
relations, but has no strong media/format/design focus.

Q: What is your first recollection of software use?\
A: ms dos in 1990 at school - i was 15 or 16. oh no 12. Basic in 1986.

Q: What are the emotions related to this use?\
A: fun. i'm good at this. empowering

Q: How often do / when did you last purchase software or pay for a
software service?\
A: I paid for ardour five years ago. I paid the developer directly. For
the compiled version. I paid for the service. I pay for my website and
email service at domaine public.

Q: What kind of user would you say you are?\
A: An experienced user drawing out the line. I don't behave.

Q: Is there a link between this and your issue?\
A: Even if it's been F/LOSS there is a lot of decision power in my
package.

Q: What is your most frequently used type of software?\
A: Web browser. email. firefox & thunderbird

Q: How often do you install/experiment/learn new software?\
A: Every two days. I reinstall all the time. my old lts system died.
stopped being supported last april. It was linux mint something.

Q: Do you know about scripting?\
A: I do automating scripts for any operation i have to do several times,
like format conversion.

Q: Can you talk about your most technical encounter with your computer /
telephone?\
A: I've tried to root it. but i didn't succeed.

Q: How much time do you wish to spend on such activities like hacking,
rooting your device?\
A: hours. you should take your time

Q: Did you ever sign a licence agreement you did not agree with? How
does that affect you?\
A: This is the first thing when you have a phone. it's obey or die.

Q: What is the software feature you care for the most?\
A: malleability. different ways to approach a problem, a challenge, an
issue.

Q: Do you use any free software?\
A: yes. there maybe are some proprietary drivers.

Q: Do you remember your first attempt at using free software and how did
that make you feel?\
A: Yes, i installed my dual boot in ... 10 years ago. scared and
powerful.

Q: Do you use any of these software services: facebook, dating apps
(grindr or the like), twitter, instagram or equivalent?\
A: Google, gmail, that's it

Q: Can you talk about your favorite apps or webtools that you use
regularly?\
A: Music player. vanilla music and f-droid. browser. I pay attention to
clearing my history, no cookies. I also have iceweasel. Https by
default. Even though i have nothing to hide.

Q: What stories do you have concerning contracts and administration in
relation to your software, internet or computer?\
A: Nothing comes to my mind. i'm not allowed to do, to install on the
phone. When it's an old phone, there is nothing left that is working,
you have to do it.

Q: How does software help you shape your relations with other people?\
A: It's a hard question. if it's communication software of course it's
its nature to be related to other people. there is an expectancy of
immediate reply, of information transfer... It's troubling your relation
with people in certain situations.

Q: From which countries does your software come / where does it reside?
How do you feel about that?\
A: i think i chose the netherlands as a mirror. you are hoping to
reflect well in this mirror.

Q: Have you ever read the terms of a software service; one that is not
targeting the American market?\
A: i have read them. no.

[TODO: RELATES TO]{.tmp} []{#mta1ntzm .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.agile.yoga)
Agile Sun Salutation]{.method .descriptor} [Remember]{.remember .empty
.descriptor}

> Agile software development describes a set of values and principles
> for software development under which requirements and solutions evolve
> through the collaborative effort of self-organizing cross-functional
> teams. It advocates adaptive planning, evolutionary development, early
> delivery, and continuous improvement, and it encourages rapid and
> flexible response to change. These principles support the definition
> and continuing evolution of many software development
> methods.^[10](#dbabcece)^

[What: You will be observing yourself]{.what .descriptor} [How]{.how
.empty .descriptor}

> Scrum is a framework for managing software development. It is designed
> for teams of three to nine developers who break their work into
> actions that can be completed within fixed duration cycles (called
> \"sprints\"), track progress and re-plan in daily 15-minute stand-up
> meetings, and collaborate to deliver workable software every sprint.
> Approaches to coordinating the work of multiple scrum teams in larger
> organizations include Large-Scale Scrum, Scaled Agile Framework (SAFe)
> and Scrum of Scrums, among others.^[11](#eefcbaac)^

[When: Anywhere where it\'s possible to lie on the floor]{.when
.descriptor} [Who]{.who .empty .descriptor}

> Self-organization and motivation are important, as are interactions
> like co-location and pair programming. It is better to have a good
> team of developers who communicate and collaborate well, rather than a
> team of experts each operating in isolation. Communication is a
> fundamental concept.^[12](#fbaeffab)^

[Urgency: Using Agile software development methods to develop a new path
into your professional and personal life towards creativity, focus and
health.]{.urgency .descriptor} [WARNING]{.warning .empty .descriptor}

> The agile movement is in some ways a bit like a teenager: very
> self-conscious, checking constantly its appearance in a mirror,
> accepting few criticisms, only interested in being with its peers,
> rejecting en bloc all wisdom from the past, just because it is from
> the past, adopting fads and new jargon, at times cocky and arrogant.
> But I have no doubts that it will mature further, become more open to
> the outside world, more reflective, and also therefore more
> effective.^[13](#edabeeaf)^

[Example]{.example .empty .descriptor} [SHOW IMAGE HERE:
https://mfr.osf.io/render?url=https://osf.io/ufdvb/?action=download%26direct%26mode=render&initialWidth=450&childId=mfrIframe]{.tmp}

Hello and welcome to the presentation of the agile yoga methodology. I
am Allegra, and today I\'m going to be your personal guide to YOGA, an
acronym for Why Organize? Go Agile! I\'ll be part of your team today and
we\'ll do a few exercises together as an introduction to a new path into
your professional and personal life towards creativity, focus and
health.

A few months ago, I was stressed, overwhelmed with my work, feeling
alone, inadequate, but since I started practicing agile yoga, I feel
more productive. I have many clients as an agile yoga coach, and I\'ve
seen new creative business opportunities coming to me as a software
developer.

For this first experience with the agile yoga method and before we do
physical exercises together, I would like to invite you to close your
eyes. Make yourself comfortable, lying on the floor, or sitting with
your back on the wall. Close your eyes, relax. Get comfortable. Feel the
weight of your body on the floor or on the wall. Relax.

Leave your troubles at the door. Right now, you are not procrastinating,
you are having a meeting at the World Trade Center,
a professional building dedicated to business, you are meeting yourself,
you are your own business partner, you are one. You are building your
future.

You are in a room standing with your team, a group of lean programmers.
You are watching a whiteboard together. You are starting your day, a
very productive day, as you are preparing to run a sprint together. Now
you turn towards each other, making a scrum with your team, you breathe
together, slowly, inhaling and exhaling together, slowly, feeling the
air in and out of your body. Now you all turn towards the sun to prepare
to do your ASSanas, the agile Sun Salutations or ASS, with the team\'s
dedicated ASS Master. She\'s guiding you. You start with Namaskar, the
Salute: your palms joined together, in prayer pose. You all reflect on
the first principle of the agile manifesto: your highest priority is to
satisfy the customer through early and continuous delivery of valuable
software.

The next pose is Ardha Chandrasana (Half Moon Pose). With a deep
inhalation, you raise both arms above your head and tilt slightly
backward, arching your back. You welcome changing requirements, even late
in development. Agile processes harness change for the customer\'s
competitive advantage. Then you all do Padangusthasana (Hand to Foot
Pose). With a deep exhalation, you bend forward and touch the mat, both
palms in line with your feet, forehead touching your knees. You deliver
working software frequently.

Surya Darshan (Sun Sight Pose). With a deep inhalation, you take your
right leg away from your body, in a big backward step. Both your hands
are firmly planted on your mat, your left foot between your hands. You
work daily throughout the project, business people and developers
together. Now you\'re flowing into Purvottanasana (Inclined Plane) with
a deep inhalation. You build projects around motivated
individuals. You give them the environment and support they need, and
you trust them to get the job done.

You\'re in Adho Mukha Svanasana (Downward Facing Dog Pose). With a deep
exhalation, you shove your hips and butt up towards the ceiling, forming
an upward arch. Your arms are straight and aligned with your head. The
most efficient and effective method of conveying information to and
within a development team is face-to-face conversation.

Then, Sashtang Dandawat (Forehead, Chest, Knee to Floor Pose). With a
deep exhalation, you lower your body down till your forehead, chest,
knees, hands and feet are touching the mat, your butt tilted up. Working
software is the primary measure of progress.

Next is Bhujangasana (Cobra Pose). With a deep inhalation, you slowly
snake forward till your head is up, your back arched concave, as much as
possible. Agile processes promote sustainable development. You are all
maintaining a constant pace indefinitely, sponsors, developers, and
users together.

Now back into Adho Mukha Svanasana (Downward Facing Dog Pose).
Continuous attention to technical excellence and good design enhances
agility.

And then again to Surya Darshan (Sun Sight Pose). Simplicity\--the art
of maximizing the amount of work not done\--is essential. Then to
Padangusthasana (Hand to Foot Pose). The best architectures,
requirements, and designs emerge from self-organizing teams.

You all do again Ardha Chandrasana (Half Moon Pose). At regular
intervals, you as the team reflect on how to become more effective, then
tune and adjust your behavior accordingly. You end your ASSanas session
with a salute to honor your agile yoga practices. You have just had a
productive scrum meeting. Now I invite you to open your eyes, move your
body around a bit, from the feet up to the head and back again.

Stand up on your feet and let\'s do a scrum together, if you\'re ok being
touched on the arms by someone else. If not, you can do it on your own.
So put your hands on the shoulders of the SCP around you. Now we\'re
joined together, let\'s look at the screen together as we inhale and
exhale, syncing our bodies to the rhythms of our own internal
software, modulating our oxygen level intake requirements to the oxygen
availability of our service facilities.

Now, let\'s do together a couple of exercises to protect and strengthen
our wrists. As programmers, as internauts, as entrepreneurs, they are
very crucial parts of the body to protect. In order to be able to type,
to swipe, to shake hands vigorously, we need them in good health. So
bring the hands towards each other in a prayer pose, around a book or a
brick. You can do it without, but I\'m using my extreme programming book
- embrace change - for that. So press the palms together firmly, press
the pads of your fingers together. Do that while breathing in and out
twice.

Now let\'s extend our arms in front of us, in the air, face and fingers
facing down, like we\'re typing. Make your shoulders round. Let\'s
breathe while visualizing in our heads the first agile mantra:
Individuals and interactions over processes and tools.

Now let\'s bring the arms back next to the body and raise them again.
And let\'s move our hands towards the ceiling this time, strengthening
our back. In our head, the second mantra: Working software over
comprehensive documentation. Now let\'s bring back the hands into the
standing position. Then again the first movement while visualizing the
third mantra: Customer collaboration over contract negotiation, and then
the second movement thinking about the fourth and last mantra:
Responding to change over following a plan. And of course we continue
breathing. Now, to finish this session, let\'s do a sprint together in
the corridor!

[SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/guide/agileyoga/8-Poses-Yoga-Your-Desk.contours.png
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/guide/agileyoga/gayolab-office-chair-for-yoga.contours.png
)]{.tmp} [TODO: RELATES TO]{.tmp} []{#mdu0mmji .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.blobservation)
Hand reading]{.method .descriptor} [How: Visit the Future Blobservation
Booth to have your fortunes read and derive life insight from the wisdom
of software.]{.how .descriptor} [What: Put your hand in the reading
booth and get your line read.]{.what .descriptor} [Why: The hand which
holds your mouse everyday hides many secrets.]{.why .descriptor}
[Example]{.example .empty .descriptor}

` {.verbatim .wrap}
* sample reading timeline:

* 15:00 a test user, all tests clear and systems are online
* a user who said goodbye to us
* another user
* a user who thought it'd be silly to say thank you to the machine but thank you very much
* another kind user who said thank you
* yet another kind user
* another user, no feedback
* a nice user who found the reading process relieving
* yet another kind user
* a scared user! took the hand out but ended up trusting the system. "so cool thanks guys"
* another user
* a young user! this is a funny computer
* 15:35 another nice user
* 15:40 another nice user
* 15:47 happy user (laughing)
* 15:51 user complaining about her fortune, saying it's not true. Found the reading process creepy but eased up quickly
* 15:59 another nice user: http://etherbox.local:9001/p/SCP.sedyst.md
* 16:06 a polite user
* 16:08 a friendly playful user (stephanie)
* 16:12 a very giggly user (wendy)
* 16:14 a playful user - found the reading process erotic - DEFRAGMENTING? NO! Thanks Blobservation http://etherbox.local:9001/p/SCP.loup.md
* 16:19 a curious user
* 16:27 a friendly user but oh no, we had a glitch and computer crashed. But we still delivered the fortune. We got a thank you anyway
* 16:40 a nice user, the printer jammed but it was sorted out quickly
* 16:42 another nice user
* 16:50 nice user (joak)
* 16:52 yet another nice user (jogi)
* 16:55 happy user! (peter w)
* 16:57 more happy user (pierre h)
* 16:58 another happy user
* 17:00 super happy user (peggy)
* 17:02 more happy user
`

[Example]{.example .empty .descriptor}

> Software time is not the same as human time. Computers will run for AS
> LONG AS THEY WILL BE ABLE TO, provided sufficient power is available.
> You, as a human, don\'t have the luxury of being always connected to
> the power grid and thus have to rely on your INTERNAL BATTERY. Be
> aware of your power cycles and set yourself to POWER-SAVING MODE
> whenever possible.

[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/resizes/Techno-Galactic-Software-Observatory/IMAG1407.jpg?m=1497344230]{.tmp}
[TODO: RELATES TO]{.tmp} []{#yznjodq3 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.dirty) Bug
reporting for sharing observations]{.method .descriptor} [What: Etherpad
had stopped working but it was unclear why. Where does etherpad
\'live\'?]{.what .descriptor} [How: Started by looking around the pi\'s
filesystem by reading /var/log/syslog in /opt/etherpad and in a
subdirectory named var/ there was dirty.db, and dirty it was.]{.how
.descriptor} [When: Monday morning]{.when .descriptor} [Urgency:
Software (etherpad) not working and the Walk-in Clinic was about to
start.]{.urgency .descriptor} [Note:
http://pad.constantvzw.org/p/observatory.inventory.jogi]{.note
.descriptor}

from jogi\@mur.at to \[Observatory\] When dirty.db gets dirty

Dear all,

as promised yesterday, here my little report regarding the broken
etherpad.

\#\#\# When dirty.db gets dirty

When I got to WTC on Monday morning the etherpad on etherbox.local was
defunct. Later someone said that in fact etherpad had stopped working
the evening before, but it was unclear why. So I started looking around
the pi\'s filesystem to find out what was wrong. It took me a while to find
the relevant lines in /var/log/syslog but it became clear that there was
a problem with the database. Which database? Where does etherpad
\'live\'? I found it in /opt/etherpad and in a subdirectory named var/
there it was: dirty.db, and dirty it was.

A first look at the file revealed no apparent problem. The last lines
looked like this:

`{"key":"sessionstorage:Ddy0gw7okwbkv5BzkR1DuSLCV_IA5_jQ","val":{"cookie ":{"path":"/","_expires":null,"originalMaxAge":null,"httpOnly":true,"secure":false}}} {"key":"sessionstorage:AU1cffgcTf_q6BV9aIdAvES2YyXM7Gm1","val":{"cookie ":{"path":"/","_expires":null,"originalMaxAge":null,"httpOnly":true,"secure":false}}} {"key":"sessionstorage:_H5SdUlDvQ3XCuPaZEXQ5lx0K6aAEJ9m","val":{"cookie ":{"path":"/","_expires":null,"originalMaxAge":null,"httpOnly":true,"se cure":false}}}`

What I did not see at the time was that there were some (AFAIR something
around 150) binary zeroes at the end of the file. I used tail for the
first look and that tool silently ignored the zeroes at the end of the
file. It was Martino who suggested using different tools (xxd in that
case) and that showed the cause of the problem. The file looked
something like this:

00013730: 6f6b 6965 223a 7b22 7061 7468 223a 222f okie":{"path":"/
00013740: 222c 225f 6578 7069 7265 7322 3a6e 756c ","_expires":nul
00013750: 6c2c 226f 7269 6769 6e61 6c4d 6178 4167 l,"originalMaxAg
00013760: 6522 3a6e 756c 6c2c 2268 7474 704f 6e6c e":null,"httpOnl
00013770: 7922 3a74 7275 652c 2273 6563 7572 6522 y":true,"secure"
00013780: 3a66 616c 7365 7d7d 7d0a 0000 0000 0000 :false}}}.......
00013790: 0000 0000 0000 0000 0000 0000 0000 0000 ................

So Anita, Martino and I stuck our heads together to come up with a
solution. Our first attempt to fix the problem went something like this:

dd if=dirty.db of=dirty.db.clean bs=1 count=793080162

which means: write the first 793080162 blocks of size 1 byte to a new
file. After half an hour or so I checked on the size of the new file and
saw that some 10% of the copying had been done. No way this would get
done in time for the walk-in-clinic. Back to the drawing board.

Using a text editor was no real option btw since even vim has a hard
time with binary zeroes and the file was really big. But there was
hexedit! Martino installed it and copied dirty.db onto his computer.
After some getting used to the various commands to navigate in hexedit
the unwanted zeroes were gone in an instant. The end of the file looked
like this now:

00013730: 6f6b 6965 223a 7b22 7061 7468 223a 222f okie":{"path":"/
00013740: 222c 225f 6578 7069 7265 7322 3a6e 756c ","_expires":nul
00013750: 6c2c 226f 7269 6769 6e61 6c4d 6178 4167 l,"originalMaxAg
00013760: 6522 3a6e 756c 6c2c 2268 7474 704f 6e6c e":null,"httpOnl
00013770: 7922 3a74 7275 652c 2273 6563 7572 6522 y":true,"secure"
00013780: 3a66 616c 7365 7d7d 7d0a :false}}}.

Martino asked about the trailing \'.\' character and I checked a
different copy of the file. No \'.\' there, so that had to go too. My
biggest mistake in a long time! The \'.\' we were seeing in Martino\'s
copy of the file was in fact a newline (\'\\n\', 0x0a)! We did not realize that,
copied the file back to etherbox.local and waited for etherpad to resume
its work. But no luck there, for obvious reasons.

We ended up making backups of dirty.db in various stages of deformation
and Martino started a brand-new pad so we could use pads for the walk-
in-clinic. The processing tool chain has been disabled btw. We did not
want to mess up any of the already generated .pdf, .html and .md files.

We still don\'t know why exactly etherpad stopped working sometime
Sunday evening or how the zeroes got into the file dirty.db. Anita
thought that she caused the error when she adjusted time on
etherbox.local, but the logfile does not reflect that. The last clean
entry in /var/log/syslog regarding nodejs/etherpad is recorded with a
timestamp of something along the line of \'Jun 10 10:17\'. Some minutes
later, around \'Jun 10 10:27\' the first error appears. These timestamps
reflect the etherbox\'s understanding of time btw, not \'real time\'.

It might be that the file just got too big for etherpad to handle.
The size of the repaired dirty.db file was already 757MB. That could btw
explain why etherpad was working somewhat sluggishly after some days.
There is still a chance that the time adjustment had an unwanted side
effect, but so far there is no obvious reason for what had happened.
\
\-- J.Hofmüller

http://thesix.mur.at/
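
A note in the margin of this report: copying 750MB with `dd bs=1` means
one read and one write system call per byte, which is why it crawled. A
faster route, sketched here under the assumption that the byte count
from the dd attempt was correct, would have been to truncate the file in
place or to copy it with buffered reads:

` {.verbatim}
# shrink the file in place to its first 793080162 bytes (coreutils truncate)
truncate -s 793080162 dirty.db
# or copy just the good bytes with buffered reads instead of dd with bs=1
head -c 793080162 dirty.db > dirty.db.clean
`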

[]{#ytu5y2qy .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.detournement)
Interface Détournement]{.method .descriptor} [Embodiment / body
techniques]{.grouping} []{#y2q4zju5 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.occupational)
Comportments of software (softwear)]{.method .descriptor}
[Remember]{.remember .empty .descriptor}

> The analysis of common sense, as opposed to the exercise of it, must
> then begin by redrawing this erased distinction between the mere
> matter-of-fact apprehension of reality\--or whatever it is you want to
> call what we apprehend merely and matter-of-factly\--and
> down-to-earth, colloquial wisdom, judgements, and assessments of it.

[What: Observe and catalog the common gestures, common comportments, and
common sense(s) surrounding software.]{.what .descriptor} [How: This can
be done through observation of yourself or others. Separate the
apprehended and matter of fact from the meanings, actions, reactions,
judgements, and assessments that the apprehension occasions. Step 1:
Begin by assembling a list of questions such as: When you see a software
application icon what are you most likely to do? When a software
application you are using presents you with a user agreement what are
you most likely to do? When a software application does something that
frustrates you what are you most likely to do? When a software
application you are using crashes what are you most likely to do? Step
2: Write down your responses and the responses of any subjects you are
observing. Step 3: For each question, think up three other possible
responses. Write these down. Step 4: (this step is only for the very
curious) Try the other possible responses out the next time you
encounter each of the given scenarios.]{.how .descriptor} [Note: The
common senses and comportments of software are of course informed and
conditioned by those of hardware and so perhaps this is more accurately
a method for articulating comportments of computing.]{.note .descriptor}
[WARNING: Software wears on both individual and collective bodies and
selves. Software may harm your physical and emotional health and that of
your society both by design and by accident.]{.warning .descriptor}
[TODO: RELATES TO Agile Sun Salutation, Natasha Schull\'s Addicted by
Design]{.tmp} [Flow-regulation, logistics, seamlessness]{.grouping}
[]{#mwrhm2y4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.continuousintegration)
Continuous integration]{.method .descriptor} [What: Continuous
integration is a sophisticated form of responsibility management: it is
the fascia of services. Continuous integration picks up after all other
services and identifies what needs to happen so that they can work in
concert. Continuous integration is a way of observing the evolution of
(micro)services through cybernetic (micro)management.]{.what
.descriptor} [How: Continuous integration keeps track of changes to all
services and allows everyone to observe if they still can work together
after all the moving parts are fitted together.]{.how .descriptor}
[When: Continuous integration comes to prominence in a world of
distributed systems where there are many parts being organized
simultaneously. Continuous integration is a form of observation that
helps (micro)services maintain a false sense of independence and
decentralization while constantly subjecting them to centralized
feedback.]{.when .descriptor} [Who: Continuous integration assumes that
all services will submit themselves to the feedback loops of continuous
integration. This could be a democratic process or not.]{.who
.descriptor} [Urgency: Continuous integration reconfigures divisions of
labor in the shadows of automation. How can we surface and question its
doings and undoings?]{.urgency .descriptor} [WARNING: When each service
does one thing well, the service makers tend to assume everybody else is
doing the things they do not want to do.]{.warning .descriptor}

At TGSO continuous integration was introduced as a service that responds
to integration hell when putting together a number of TGSO services for
a walk-in software clinic. Due to demand, the continuous integration
service was extended to do \"service discovery\" and \"load balancing\"
once the walk-in clinic was in operation.

Continuous integration worked by visiting the different services of the
walk-in clinic to check for updates, test the functionality and think
through implications of integration with other services. If the pieces
didn\'t fit, continuous integration delivered error messages and
solution options.
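
What such a round might look like as a script, a minimal sketch: the
services.txt file listing one service URL per line is hypothetical, and
curl stands in for the visiting, testing and error-messaging described
above.

` {.verbatim}
#!/bin/sh
# poll each service of the walk-in clinic and report on its integration
while true; do
    while read -r service; do
        if curl -fsS --max-time 5 "$service" > /dev/null 2>&1; then
            echo "$(date +%H:%M) $service integrates"
        else
            echo "$(date +%H:%M) $service needs attention"
        fi
    done < services.txt
    sleep 60
done
`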

When we noticed that software curious persons visiting the walk-in
clinic might have trouble finding the different services, and that some
services might be overloaded with software curious persons, continuous
integration was extended. We automated service registration using
colored tape and provided a lookup registry for software curious
persons.

http://gallery.constantvzw.org/index.php/Techno-Galactic-Software-Observatory/IMAG1404

Load balancing meant that software curious persons were forwarded to
services that had capacity. If all other services were full, the load
balancer defaulted to sending the software curious person to the [Agile
Sun
Salutation](http://pad.constantvzw.org/p/observatory.guide.agile.yoga)
service.

[WARNING: At TGSO the bundling of different functionalities into the
continuous integration service broke the \"do one thing well\"
principle, but saved the day (we register this as technical debt for the
next iteration of the walk-in clinic).]{.warning .descriptor} [Remember:
Continuous integration may be the string that holds your current software
galaxy together.]{.remember .descriptor}

\"More technically, I am interested in how things bounce around in
computer systems. I am not sure if these two things are related, but I
hope continuous integration will help me.\"

[]{#zdixmgrm .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.pipeline) make
make do]{.method .descriptor} [What: Makefile as a method for
quick/collective assemblages + observing amalgamates/pipelines]{.what
.descriptor} [Note:
http://observatory.constantvzw.org/etherdump/makefile.raw.html]{.note
.descriptor}

An etherpad-\>md-\>pdf-\>anything pipeline: the makefile as a method for
quick/collective assemblages and for observing amalgamates/pipelines. (CHRISTOPH)
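
One pass of that pipeline, written out as shell steps; a makefile would
re-run only the stages whose inputs changed. The pad name is
hypothetical, and pandoc plus the etherpad export endpoint are assumed
to be available.

` {.verbatim}
# fetch the pad as text, then render it to pdf
curl -s "http://etherbox.local:9001/p/mypad/export/txt" > mypad.md
pandoc mypad.md -o mypad.pdf
`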

[]{#zweymtni .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.ssogy)
Flowcharts (Flow of the chart -- chart of the flow on demand!)]{.method
.descriptor} [Example]{.example .empty .descriptor} [SHOW IMAGE HERE:
!\[\]( http://observatory.constantvzw.org/images/symbols/ibm-ruler.jpg
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/burroughs-ruler.jpg
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/rectangle.png )]{.tmp}
[SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/curly\_rec.png
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/curly\_rec-2.png
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/flag.png )]{.tmp}
[SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/trapec.png )]{.tmp}
[SHOW IMAGE HERE: !\[Claude Shannon Information Diagram Blanked: Silvio
Lorusso\](
http://silviolorusso.com/wp-content/uploads/2012/02/shannon\_comm\_channel.gif
)]{.tmp} [TODO: RELATES TO]{.tmp}
[Beingontheside/inthemiddle/behind]{.grouping} []{#ywfin2e4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.somethinginthemiddlemaybe)
Something in the Middle Maybe (SitMM)]{.method .descriptor} [What: The
network traffic gets observed. There is a variety of sniffing software
out there, differing in granularity and in how far the user can tailor
its functionality. SitMM builds on one of these tools called
[scapy](http://www.secdev.org/projects/scapy/).]{.what .descriptor}
[How: SitMM takes a closer look at the network traffic coming from/going
to a software curious person\'s device. The software curious person
using SitMM may ask to filter the traffic based on application or device
of interest.]{.how .descriptor} [Who]{.who .empty .descriptor}

The software curious person gets to observe their own traffic. Ideally,
observing one\'s own network traffic should be available to anyone, but
using such software can be deemed illegal under different jurisdictions.

For example, in the US wiretap laws limit packet-sniffing to parties
owning the network that is being sniffed, or require the consent of one
of the communicating parties. 18 U.S. Code § 2511 (2) (a) (i) says:

> It shall not be unlawful \... to intercept \... while engaged in any
> activity which is a necessary incident to the rendition of his service
> or to the protection of the rights or property of the provider of that
> service

See here for a
[paper](http://spot.colorado.edu/%7Esicker/publications/issues.pdf) on
the topic. Google went on a big legal spree to defend their right to
capture unencrypted wireless traffic with google street view cars. The
courts were concerned about wiretapping and infringements on the privacy
of users, and not with the leveraging of private and public WiFi
infrastructure for the gain of a for-profit company. The case raises
hard questions about the state, ownership claims and material reality of
WiFi signals. So, while WiFi sniffing is common and the tools like SitMM
are widely available, it is not always possible for software curious
persons to use them legally or to neatly filter out \"their traffic\"
from that of \"others\".

[When: SitMM can be used any time a software curious person feels the
weight of the (invisible) networks.]{.when .descriptor} [Why: SitMM is
intended to be a tool that gives artists, designers and educators an
easy-to-use custom WiFi router to work with networks and explore the
aspects of our daily communications that are exposed when we use WiFi.
The goal is to use the output to encourage open discussions about how we
use our devices online.]{.why .descriptor} [Example]{.example .empty
.descriptor}

Snippets of a Something In The Middle, Maybe - Report

` {.verbatim}
UDP 192.168.42.32:53649 -> 8.8.8.8:53
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP/HTTP 17.253.53.208:80 GET http://captive.apple.com/mDQArB9orEii/Xmql6oYqtUtn/f6xY5snMJcW8/CEm0Ioc1d0d8/9OdEOfkBOY4y.html
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP 192.168.42.32:49250 -> 17.253.53.208:80
UDP 192.168.42.32:63872 -> 8.8.8.8:53
UDP 192.168.42.32:61346 -> 8.8.8.8:53
...
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443

##################################################
Destination Address: 17.253.53.208
Destination Name: nlams2-vip-bx-008.aaplimg.com

Port: Connection Count
80: 6

##################################################
Destination Address: 17.134.127.79
Destination Name: unknown

Port: Connection Count
443: 2
##################################################
Destination Address: 17.248.145.76
Destination Name: unknown

Port: Connection Count
443: 16
`
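
A comparable observation can be improvised with stock tools. A minimal
sketch, assuming tcpdump is installed and the wireless interface is
named wlan0 (and keeping in mind the legal caveats above):

` {.verbatim}
# print source -> destination pairs for tcp and udp traffic
sudo tcpdump -i wlan0 -n -q 'tcp or udp'
`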

[TODO: RELATES TO]{.tmp} []{#ntlimgqy .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.whatisitliketobeanelevator)
What is it like to be AN ELEVATOR?]{.method .descriptor} [What:
Understanding software systems by becoming them]{.what .descriptor}
[TODO: extend this text \.... how to observe software in the world
around you. How to observe an everyday software experience and translate
this into a flowchart )]{.tmp} [How: Creating a flowchart to incarnate a
software system you use everyday]{.how .descriptor} [WARNING: Uninformed
members of the public may panic when confronted with a software
performance in a closed space.]{.warning .descriptor} [Example: What is
it like to be an elevator?]{.example .descriptor}

` {.verbatim}

what
is
it
like
to be
an
elevator?

"from 25th floor to 1st floor"

light on button light of 25th floor
check current floor
if current floor is 25th floor
no
if current floor is ...
go one floor up
... smaller than 25th floor
go one floor down
... bigger than 25th floor
stop elevator
turn button light off of 25th floor
turn door light on
open door of elevator
play sound opening sequence
yes
start
user pressed button of 25th floor
close door of elevator
if door is closed
user pressed 1st floor button
start timer for door closing
if timer is running more than three seconds
yes
yes
light on button
go one floor down
no
if current floor is 1st floor
update floor indicator
check current floor
stop elevator
no
yes
light off button
turn door light on
open door of elevator
play sound opening sequence
end
update floor indicator
`

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/joseph/flowchart.pdf]{.tmp}
[TODO: RELATES TO]{.tmp} []{#ndg2zte4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.sidechannel)
Side Channel Analysis]{.method .descriptor} [Urgency: Side Channel
attacks are possible by disregarding the abstraction of software into
pure logic: the physical effects of the running of the software become
backdoors to observe its functioning, both threatening the control of
processes and re-affirming the materiality of software.]{.urgency
.descriptor} [WARNING: **engineers are good guys!**]{.warning
.descriptor} [Example]{.example .empty .descriptor} [SHOW IMAGE HERE:
https://www.tek.com/sites/default/files/media/image/119-4146-00%20Near%20Field%20Probe%20Set.png.jpg]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/index.php/Techno-Galactic-Software-Observatory/PWFU3377]{.tmp}
[TODO: RELATES TO]{.tmp} [Collections / collecting]{.grouping}
[]{#njmzmjm1 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.bestiary)
Compiling a bestiary of software logos]{.method .descriptor} [What:
Since the early days of GNU/Linux and cemented through the ubiquitous
O\'Reilly publications, the visual culture of software relies heavily on
animal representations. But what kinds of animals, and to what
effect?]{.what .descriptor} [How]{.how .empty .descriptor}

Compile a collection of logos and note the metaphors for observation:

-   stethoscope
-   magnifying glass
-   long neck (giraffe)

[Example]{.example .empty .descriptor}

` {.verbatim}
% http://animals.oreilly.com/browse/
% [check Testing the testbed pads for examples]
% [something on bestiaries]
`

[TODO: RELATES TO]{.tmp} []{#mmy2zgrl .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.testingtestbed)
Testing the testbed: testing software with observatory ambitions
(SWOA)]{.method .descriptor} [WARNING: this method may make more sense
if you first take a look at the [Something in the Middle Maybe
(SitMM)](http://pad.constantvzw.org/p/observatory.guide.sitmm) which is
an instance of a SWOA]{.warning .descriptor} [How: The interwebs hosts
many projects that aim to produce software for observing software, (from
now on Software With Observatory Ambitions (SWOA)). A comparative
methodology can be produced by testing different SWOA to observe
software of interest. Example: use different sniffing software to
observe wireless networks, e.g., wireshark vs tcpdump vs SitMM.
Comparing SWOA reveals what is seen as worthy of observation (e.g., what
protocols, what space, which devices), the granularity of the
observation (e.g., how is the observation captured, in what detail), the
logo and conceptual framework of choice etc. This type of observation
may be turned into a service (See also: Something in the Middle Maybe
(SitMM)).]{.how .descriptor} [When: Ideally, SWOA can be used everywhere
and in every situation. In reality, institutions, laws and
administrators like to limit the use of SWOA on infrastructures to
people who are also administering these networks. Hence, we are
presented with the situation that the use of SWOA is condoned when it is
done by researchers and pen testers (e.g., they were hired) and shunned
when done by others (often subject to name calling as hackers or
attackers).]{.when .descriptor} [What: Deep philosophical moment: most
software has a recursive observatory ambition (it wants to be observed
in its execution, output etc.). Debuggers, logs, dashboards are all
instances of software with observatory ambitions and can not be
separated from software itself. Continuous integration is the act of
folding the whole software development process into one big feedback
loop. So, what separates SWOA from software itself? Is it the intention
of observing software with a critical, agonistic or adversarial
perspective vs one focused on productivity and efficiency that
distinguishes SWOA from software? What makes SWOA a critical practice
over other forms of software observation? If our methodology is testing
SWOA, then is it a meta critique of critique?]{.what .descriptor} [Who:
If you can run multiple SWOAs, you can do it. The question is: will
people like it if you turn your gaze on their SWOA based methods of
observation? Once again we find that observation can surface power
asymmetries and lead to defensiveness or desires to escape the
observation in the case of the observed, and an instinct to try to
conceal that observation is taking place.]{.who .descriptor} [Urgency:
If observation is a form of critical engagement in that it surfaces the
workings of software that are invisible to many, it follows that people
would develop software to observe (SWOAs). Testing SWOAs puts this form
of critical observation to test with the desire to understand how what
is made transparent through each SWOA also makes things invisible and
reconfigures power.]{.urgency .descriptor} [Note: Good SWOA software
usually uses an animal as a logo. :D]{.note .descriptor} [WARNING: Many
of the SWOA projects we looked at are promises more than running
software/available code. Much of it is likely to turn into obsolete
gradware, making testing difficult.]{.warning .descriptor} [TODO:
RELATES TO
http://pad.constantvzw.org/p/observatory.guide.bestiary]{.tmp} [TODO:
RELATES TO http://pad.constantvzw.org/p/observatory.guide.sitmm]{.tmp}
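
To put the comparative methodology described under How into practice, a
first test might look like this minimal sketch; it assumes tcpdump and
tshark are installed and a wireless interface named wlan0, and the point
is not identical captures but seeing what each SWOA chooses to show:

` {.verbatim}
sudo tcpdump -i wlan0 -c 20 -n > view.tcpdump 2>&1
sudo tshark -i wlan0 -c 20 > view.tshark 2>&1
diff view.tcpdump view.tshark
`
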
[]{#mmmzmmrh .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.reader)
Prepare a reader to think theory with software]{.method .descriptor}
[What: Compile a collection of texts about software.]{.what .descriptor}
[How: Choose texts from different realms. Software observations are
mostly done in the realm of the technological and the pragmatic. The
ecology of texts around software also includes, first and foremost,
manuals, technical documentation and academic papers by software
engineers, and these all \'live\' in different realms. More recently, the
field of software studies has opened up additional perspectives fuelled by
cultural studies and sometimes philosophy. By compiling a reader \...
ways of speaking/writing about. Proximity.]{.how .descriptor}
[Example]{.example .empty .descriptor}

` {.verbatim .wrap}
Pull some quotes from the reader, for example from the chapter: Observation and its consequences

Lilly Irani, Hackathons and the Making of Entrepreneurial Citizenship, 2015 http://sci-hub.bz/10.1177/0162243915578486

Kara Pernice (Nielsen Norman Group), Talking with Participants During a Usability Test, January 26, 2014, https://www.nngroup.com/articles/talking-to-users/

Matthew G. Kirschenbaum, Extreme Inscription: Towards a Grammatology of the Hard Drive. 2004 http://texttechnology.mcmaster.ca/pdf/vol13_2_06.pdf

Alexander R. Galloway, The Poverty of Philosophy: Realism and Post-Fordism, Critical Inquiry. 2013, http://cultureandcommunication.org/galloway/pdf/Galloway,%20Poverty%20of%20Philosophy.pdf
Edward Alcosser, James P. Phillips, Allen M. Wolk, How to Build a Working Digital Computer. Hayden Book Company, 1968. https://archive.org/details/howtobuildaworkingdigitalcomputer_jun67

Matthew Fuller, "It looks like you're writing a letter: Microsoft Word", Nettime, 5 Sep 2000. https://library.memoryoftheworld.org/b/xpDrXE_VQeeuDDpc5RrywyTJwbzD8eatYGHKmyT2A_HnIHKb

Barbara P. Aichinger, DDR Memory Errors Caused by Row Hammer. 2015 www.memcon.com/pdfs/proceedings2015/SAT104_FuturePlus.pdf

Fangfei Liu, Yuval Yarom, Qian Ge, Gernot Heiser, Ruby B. Lee. Last-Level Cache Side-Channel Attacks are Practical. 2015 http://palms.ee.princeton.edu/system/files/SP_vfinal.pdf
`

[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.samequestion]{.tmp}
[]{#ytjmmmni .anchor}

Colophon

The Guide to techno-galactic software observing was compiled by Carlin
Wing, Martino Morandi, Peggy Pierrot, Anita, Christoph Haag, Michael
Murtaugh, Femke Snelting

License: Free Art License

Support:

Sources:

Constant, February 2018

::: {.footnotes}
1. [[[Haraway]{.fname}, [Donna]{.gname}, [Galison]{.fname},
[Peter]{.gname} and [Stump]{.fname}, [David J]{.gname}: [Modest
Witness: Feminist Diffractions in Science Studies]{.title},
[Stanford University Press]{.publisher}, [1996]{.date}.
]{.collection} [-\>](#eeffecbe)]{#ebceffee}
2. [Worksessions are intensive transdisciplinary moments, organised
twice a year by Constant. They aim to provide conditions for
participants with different experiences and capabilities to

Constant
The Techno-Galactic Guide to Software Observation
2018


::: {.toc}
[Introduction](#mtljymuz) [Encounter several collections of historical
hardware back-to-back](#njm5zwm4) [Interview people about their
histories with software](#mguzmza4) [Ask several people from different
fields and age-groups the same question: \"***What is
software?***\"](#odfkotky) [FMEM and /DEV/MEM](#mzcxodix)
[Pan/Monopsychism](#m2mwogri) [Fountain refreshment](#ndawnmy5) [Create
\"nannyware\": Software that observes and addresses the user](#mtk5yjbl)
[Useless scroll against productivity](#yzuwmdq4) [Investigating how
humans and machines negotiate the experience of time](#m2vjndu3)
[Quine](#nmi5mgjm) [Glossaries as an exercise](#zwu0ogu0) [Adding
qualifiers](#mja0m2i5) [Searching \"software\" through
software](#mmmwmje2) [Persist in calling everyone a Software Curious
Person](#ndhkmwey) [Setup a Relational software observatory consultancy
(RSOC)](#mmu1mgy0) [Agile Sun Salutation](#mta1ntzm) [Hand
reading](#mdu0mmji) [Bug reporting for sharing observations](#yznjodq3)
[Interface Détournement](#ytu5y2qy) [Comportments of software
(softwear)](#y2q4zju5) [Continuous integration](#mwrhm2y4) [make make
do](#zdixmgrm) [Flowcharts (Flow of the chart -- chart of the flow on
demand!)](#zweymtni) [Something in the Middle Maybe (SitMM)](#ywfin2e4)
[What is it like to be AN ELEVATOR?](#ntlimgqy) [Side Channel
Analysis](#ndg2zte4) [Compiling a bestiary of software logos](#njmzmjm1)
[Testing the testbed: testing software with
observatory ambitions (SWOA)](#mmy2zgrl) [Prepare a reader to think
theory with software](#mmmzmmrh)
:::

[]{#mtljymuz .anchor}

A guide to techno-galactic software observation

> I am less interested in the critical practice of reflection, of
> showing once-again that the emperor has no clothes, than in finding a
> way to *diffract* critical inquiry in order to make difference
> patterns in a more worldly way.^[1](#ebceffee)^

The techno-galactic software survival guide that you are holding right
now was collectively produced as an outcome of the Techno-Galactic
Software Observatory. This guide proposes several ways to achieve
critical distance from the seemingly endless software systems that
surround us. It offers practical and fantastical tools for the tactical
(mis)use of software, empowering/enabling users to resist embedded
paradigms and assumptions. It is a collection of methods for approaching
software, experiencing its myths and realities, its risks and benefits.

With the rise of online services, the use of software has increasingly
been knitted into the production of software, even while the rhetoric,
rights, and procedures continue to suggest that use and production
constitute separate realms. This knitting together and its corresponding
disavowal have an effect on the way software is used and produced, and
radically alters its operative role in society. The shifts ripple across
galaxies, through social structures, working conditions and personal
relations, resulting in a profusion of apparatuses aspiring to be
seamless while optimizing and monetizing individual and collective flows
of information in line with the interests of a handful of actors. The
diffusion of software services affects the personal, in the form of
intensified identity shaping and self-management. It also affects the
public, as more and more libraries, universities and public
infrastructures as well as the management of public life rely on
\"solutions\" provided by private companies. Centralizing data flows in
the clouds, services blur the last traces of the thin line that
separates bio- from necro-politics.

Given how fast these changes resonate and reproduce, there is a growing
urgency to engage in a critique of software that goes beyond taking a
distance, and that deals with the fact that we are inevitably already
entangled. How can we interact, intervene, respond and think with
software? What approaches can allow us to recognize the agency of
different actors, their ways of functioning and their politics? What
methods of observation enable critical inquiry and affirmative discord?
What techniques can we apply to resurface software where it has melted
into the infrastructure and into the everyday? How can we remember that
software is always at work, especially where it is designed to disappear
into the background?

We adopted the term of observation for a number of reasons. We regard
observation as a way to approach software, as one way to organize
engagement with its implications. Observation, and the enabling of
observation through intensive data-centric feedback mechanisms, is part
of the cybernetic principles that underpin present day software
production. Our aim was to scrutinize this methodology in its many
manifestations, including in \"observatories\" \-- high-cost
infrastructures of observation
troubled by colonial, imperial traditions and their problematic
divisions of nature and culture \-- with the hope of opening up
questions about who gets to observe software (and how) and who is being
observed by software (and with what impact). It is a question of power,
one that we answer, at least in part, with critical play.

We adopted the term techno-galactic to match the advertised capability
of \"scaling up to the universe\" that comes in contemporary paradigms
of computation, and to address different scales of software communities
and related political economies that involve and require observation.

Drawing on theories of software and computation developed in academia
and elsewhere, we grounded our methods in hands-on exercises and
experiments that you can now try at home. This Guide to Techno-Galactic
Software Observation offers methods developed in and inspired by the
context of software production, hacker culture, software studies,
computer science research, Free Software communities, privacy activism,
and artistic practice. It invites you to experiment with ways to stay
with the trouble of software.

The Techno-Galactic Software Observatory
----------------------------------------

In the summer of 2017, around thirty people gathered in Brussels to
explore practices of proximate critique with and of software in the
context of a worksession entitled \"Techno-Galactic Software
Observatory\".^[2](#bcaacdcf)^ The worksession called for
software-curious people of all kinds to ask questions about software.
The intuition behind such a call was that different types of engagement
require a heterogeneous group of participants with different levels of
expertise, skill and background. During three sessions of two days,
participants collectively inspected the space-time of computation and
probed the universe of hardware-software separations through excursions,
exercises and conversations. They tried out various perspectives and
methods to look at the larger picture of software as a concept, as a
practice, and as a set of techniques.

The first two days of The Techno-Galactic Software Observatory included
visits to the Musée de l\'Informatique Pionnière en
Belgique^[3](#aaceaeff)^ in Namur and the Computermuseum
KULeuven^[4](#afbebabd)^. In the surroundings of these collections of
historical 'numerical artefacts', we started viewing software in a
long-term context. It offered us the occasion to reflect on the
conditions of its appearance, and allowed us to take on current-day
questions from a genealogical perspective. What is software? How did it
appear as a concept, in what industrial and governmental circumstances?
What happens to the material conditions of its production (minerals,
factory labor, hardware) when it evaporates into a cloud?

The second two days we focused on the space-time dimension of IT
development. The way computer programs and operating systems are
manufactured has changed tremendously through time, and so have their
production times and places. From military labs via the mega-corporation
cubicles to the open-space freelancer utopia, what ruptures and
continuities can be traced in the production, deployment, maintenance
and destruction of software? From time-sharing to user-space partitions
and containerization, what separations were and are at work? Where and
when is software made today?

The Walk-in Clinic
------------------

The last two days at the Techno-galactic software observatory were
dedicated to observation and its consequences. The development of
software encompasses a series of practices whose evocative names are
increasingly familiar: feedback, report, probe, audit, inspect, scan,
diagnose, explore, test \... What are the systems of knowledge and power
within which these activities take place, and what other types of
observation are possible? As a practical setting for our investigations, we
set up a walk-in clinic on the 25th floor of the World Trade Center,
where users and developers could arrive with software-questions of all
kinds.

> Do you suffer from the disappearance of your software into the cloud,
> feel oppressed by unequal user privilege, or experience the torment of
> software-ransom of any sort? Bring your devices and interfaces to the
> World Trade Center! With the help of a clear and in-depth session, at
> the Techno-Galactic Walk-In Clinic we guarantee immediate results. The
> Walk-In Clinic provides free hands-on observations to software curious
> people of all kinds. A wide range of professional and amateur
> practitioners will provide you with
> Software-as-a-Critique-as-a-Service on the spot. Available services
> range from immediate interface critique, collaborative code
> inspection, data dowsing, various forms of network analyses,
> unusability testing, identification of unknown viruses, risk
> assessment, opening of black-boxes and more. Free software
> observations provided. Last intake at 16:45.\
> (invitation to the Walk-In Clinic, June 2017)

On the following pages: Software as a Critique as a Service (SaaCaaS)
Directory and intake forms for Software Curious People (SCP).

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/masterlist\_twosides\_NEU.pdf]{.tmp}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/scprecord\_FINAL.pdf]{.tmp}
[]{#owqzmtdk .anchor}

Techno-Galactic Software Observation Essentials
=
**WARNING**

The survival techniques described in the following guide are to be used
at your own risk in case of emergency regarding software curiosity. The
publisher will not accept any responsibility in case of damages caused
by misuse, misunderstanding of instructions or lack of curiosity. By
trying the actions described in the guide, you accept the responsibility of
losing data or altering hardware, including hard disks, USB keys, cloud
storage, and screens by throwing them on the floor, or even falling on
the floor with your laptop by tangling your feet in an entanglement of
cables. No harm has been done to humans, animals, computers or plants
while creating the guide. No firearms or weapons of any kind are needed in
order to survive software.\
Just a little bit of patience.

**Software observation survival stresses**

**Physical fitness plays a great part in software observation. Be fit or
CTRL-Quit.**

When trying to observe software you might experience stresses such as:

-   *Anxiety*
-   *Sleep deprivation*
-   *Forgetting about eating*
-   *Loss of time tracking*

**Can you cope with software? You have to.**

> our methods for observation, like mapping, come with their luggage.

[Close encounters]{.grouping} []{#njm5zwm4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.visit)
Encounter several collections of historical hardware
back-to-back]{.method .descriptor} [How]{.how .empty .descriptor}

This can be done by identifying one or more computer museums and visiting
them with little time in-between. Visiting a friend with a large
basement and lots of left-over computer equipment can help. Seeing and
possibly touching hardware from different contexts
(state-administration, business, research, \...), periods of time,
cultural contexts (California, Germany, French-speaking Belgium) and
price ranges allows you to sense the interactions between hardware and
software development.

[Note: It\'s a perfect way to hear people speak about the objects and
their contexts, how they worked or not and how objects are linked to
one another. It also shows the economic and cultural aspects of
software.]{.note .descriptor} [WARNING: **DO NOT FOLD, SPINDLE OR
MUTILATE**]{.warning .descriptor} [Example: Spaghetti Suitcase]{.example
.descriptor}

At one point during the demonstration of a Bull computer, the guide
revealed the system\'s \"software\" \-- a suitcase sized module with
dozens of patch cords. She made the comment that the term \"spaghetti
code\" (a derogatory expression about early code usign many \"GOTO\"
statments) had its origin in this physical arrangement of code as
patchings.

Preserving old hardware in order to observe the physical manifestation of
software. See software here: we experienced the incredible
possibility of actually touching software.

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/images/wednesday/IMG\_20170607\_113634\_585.jpg]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/resizes/Techno-Galactic-Software-Observatory/IMG\_1163.JPG?m=1496916927]{.tmp}
[Example: Playing with the binary. Bull cards. Happy operator! Punch
card plays.]{.example .descriptor}

\"The highlight of the collection is to revive a real punch card
workshop of the 1960s.\"

[Example: Collection de la Maison des Écritures d\'Informatique & Bible,
Maredsous]{.example .descriptor}

The particularity of the collection lies in the fact that it conserves
multiple stages of the life of a piece of software, from its initial
computerization until today. The idea of introducing informatics into
the work of working with/on the Bible (versions in Hebrew, Greek, Latin,
and French) dates back to 1971, via punch card recordings and their
memorization on magnetic tape. Then came the step of analyzing texts
using computers.

[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/resizes/Preparing-the-Techno-galactic-Software-Observatory/DSC05019.JPG?m=1490635726]{.tmp}
[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.jean.heuns]{.tmp}
[]{#mguzmza4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.jean.heuns)
Interview people about their histories with software]{.method
.descriptor} [What: Observe personal narratives around software
history. Retrace the path of relation to software, how it changed during
the years and what are the human access memories that surround it. To
look at software through personal relations and emotions.]{.what
.descriptor} [How: Interviews are a good way to do it. Informal
conversations also.]{.how .descriptor}

Jean Heuns has been collecting servers, calculators, software, magnetic
tapes and hard disks for xxx years. He found an agreement for them to be
displayed in the department hallways. Department of Computer Science,
KU Leuven.

[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3350.JPG]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3361.JPG]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3356.JPG]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/albums/Techno-Galactic-Software-Observatory/PWFU3343.JPG]{.tmp}
[TODO: RELATES TO]{.tmp} []{#odfkotky .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.samequestion)
Ask several people from different fields and age-groups the same
question: \"***What is software?***\"]{.method .descriptor} [Remember:
The answers to this question will vary depending on who is asking it to
whom.]{.remember .descriptor} [What: By paying close attention to the
answers, and possibly logging them, observations on the ambiguous place
and nature of software can be made.]{.what .descriptor}
[Example]{.example .empty .descriptor}

Jean Huens (system administrator at the department of Computer Science,
KULeuven): \"*It is difficult to answer the question \'what is
software\', but I know what is good software*\"

Thomas Cnudde (hardware designer at ESAT - COSIC, Computer Security and
Industrial Cryptography, KULeuven): \"*Software is a list of sequential
instructions! Hardware for me is made of silicon, software a sequence of
bits in a file. But naturally I am biased: I\'m a hardware designer so I
like to consider it as unique and special*\".

Amal Mahious (Director of NAM-IP, Namur): \"*This, you have to ask the
specialists.*\"

` {.verbatim}
*what is software?
--the unix filesystem says: it's a file----what is a file?
----in the filesystem, if you ask xxd:
------ it's a set of hexadecimal bytes
-------what is hexadecimal bytes?
------ -b it's a set of binary 01s
----if you ask objdump
-------it's a set of instructions
--side channel researching also says:
----it's a set of instructions
--the computer glossary says:
----it's a computer's programs, plus the procedure for their use http://etherbox.local/home/pi/video/A_Computer_Glossary.webm#t=02:26
------ a computer's programs is a set of instrutions for performing computer operations
`
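
The same question can be put to the tools directly. A minimal sketch,
assuming any binary is at hand, for example /bin/ls:

` {.verbatim}
xxd /bin/ls | head -n 3      # a set of hexadecimal bytes
xxd -b /bin/ls | head -n 3   # a set of binary 01s
objdump -d /bin/ls | head    # a set of instructions
`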

[Remember: To answer the question \"*what is software*\" depends on the
situation, goal, time, and other contextual influences.]{.remember
.descriptor} [TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.everyonescp]{.tmp}
[]{#mzcxodix .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.devmem) FMEM
and /DEV/MEM]{.method .descriptor} [What: Different ways of exploring
your memory (RAM). Because in unix everything is a file, you can access
your memory as if it were a file.]{.what .descriptor} [Urgency: To try
and observe the operational level of software, getting closer to the
workings, the instruction-being of an executable/executing file, the way
it is when it is loaded into memory rather than when it sits in the
harddisk]{.urgency .descriptor} [Remember: In Unix-like operating
systems, a device file or special file is an interface for a device
driver that appears in a file system as if it were an ordinary file. In
the early days you could fully access your memory via the memory device
(`/dev/mem`), but over time access was more and more restricted to
prevent malicious processes from directly accessing kernel memory. The
kernel option CONFIG\_STRICT\_DEVMEM was introduced in kernel version
2.6 and later (2.6.36--2.6.39, 3.0--3.8, 3.8+HEAD). You will therefore
need the Linux kernel module fmem: this module creates a `/dev/fmem`
device that can be used to access physical memory without the limits of
/dev/mem (1MB/1GB, depending on the distribution).]{.remember
.descriptor}

`/dev/mem` tools to explore processes stored in the memory

    ps ax | grep process            # find the process id (PID)
    cd /proc/numberoftheprocess     # each process has a directory in /proc
    cat maps                        # list the memory regions it has mapped

\--\> check what it is using

The proc filesystem is a pseudo-filesystem which provides an interface
to kernel data structures. It is commonly mounted at `/proc`. Most of it
is read-only, but some files allow kernel variables to be changed.

dump to a file\--\>change something in the file\--\>dump new to a
file\--\>diff oldfile newfile

\"where am i?\"

to find read/write memory addresses of a certain process\
`awk -F "-| " '$3 ~ /rw/ { print $1 " " $2}' /proc/PID/maps`{.bash}

take the range and drop it to hexdump

    sudo dd if=/dev/mem bs=1 skip=$(( 16#b7526000 )) \
    count=$(( 16#b7528000 - 16#b7526000 )) | hexdump -C
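Note that /proc/PID/maps lists virtual addresses, while `/dev/mem`
exposes physical memory. A sketch that stays within one process\'s
virtual address space can use `/proc/PID/mem` instead; this variant is
our assumption, not part of the original notes:

` {.sourceCode .bash}
#!/bin/bash
# Hypothetical sketch: hexdump the first read/write region of a process,
# combining /proc/PID/maps (virtual addresses) with /proc/PID/mem.
# Requires root or ptrace rights over the target process.
PID=$1
read start end < <(awk -F'[- ]' '$3 ~ /rw/ { print $1, $2; exit }' /proc/$PID/maps)
sudo dd if=/proc/$PID/mem iflag=skip_bytes,count_bytes \
     skip=$(( 16#$start )) count=$(( 16#$end - 16#$start )) 2>/dev/null \
     | hexdump -C | head
`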

Besides opening the memory dump with an hex editor you can also try and
explore it with other tools or devices. You can open it as a raw image,
you can play it as a sound or perhaps send it directly to your
frame-buffer device (`/dev/fb0`).

[WARNING: Although your memory may look like/sound like/read like
gibberish, it may contain sensitive information about you and your
computer!]{.warning .descriptor} [Example]{.example .empty .descriptor}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/images/Screenshot\_from\_2017-06-07\_164407.png]{.tmp}
[TODO: BOX: Forensic and debugging tools can be used to explore and
problematize the layers of abstraction of computing.]{.tmp} [TODO:
RELATES TO
http://pad.constantvzw.org/p/observatory.guide.monopsychism]{.tmp}
[]{#m2mwogri .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.monopsychism)
Pan/Monopsychism]{.method .descriptor} [What: Reading and writing
sectors of memory from/to different computers]{.what .descriptor} [How:
Shell commands and fmem kernel module]{.how .descriptor} [Urgency:
Memory, even when it is volatile, is a trace of the processes happening
in your computer in the form of saved information, and is therefore more
similar to a file than to a process. Challenging the file/process
divide, sharing memory with others will allow a more intimate relation
with your own and others\' computers.]{.urgency .descriptor} [About:
Monopsychism is the philosophical/theological doctrine according to
which there exists but one intellect/soul, shared by all beings.]{.about
.descriptor} [TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.devmem]{.tmp} [Note: The
parallel allocation and observation of the same memory sector in two
different computers is in a sense the opposite process of machine
virtualization, where the localization of multiple virtual machines in
one physical computer can only happen by rigidly separating the memory
sectors dedicated to the different virtual machines.]{.note .descriptor}
[WARNING: THIS METHOD HAS NOT BEEN TESTED, IT CAN PROBABLY DAMAGE YOUR
RAM MEMORY AND/OR COMPUTER]{.warning .descriptor}

First load the fmem kernel module on both computers:

`sudo sh fmem/run.sh`{.bash}

Then load part of your computer memory into the other computer via dd
and ssh:

`dd if=/dev/fmem bs=1 skip=1000000 count=1000 | ssh user@othercomputer dd of=/dev/fmem`{.bash}

Or vice versa, load part of another computer\'s memory into yours:

`ssh user@othercomputer dd if=/dev/fmem bs=1 skip=1000000 count=1000 | dd of=/dev/fmem`{.bash}

Or even, exchange memory between two other computers:

`ssh user@firstcomputer dd if=/dev/fmem bs=1 skip=1000000 count=1000 | ssh user@secondcomputer dd of=/dev/fmem`{.bash}

` {.quaverbatim}
pan/monopsychism:
(Aquinas famously opposed Averroes, whose philosophy can be interpreted as monopsychist)

shared memory

copying the same memory to different computers

https://en.wikipedia.org/wiki/Reflection_%28computer_programming%29

it could cut through the memory like a worm

or it could go through the memory of different computers one after the other and take and leave something there
`

[Temporality]{.grouping} []{#ndawnmy5 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.fountain)
Fountain refreshment]{.method .descriptor} [What: Augmenting a piece of
standardised office equipment designed to dispense water to perform a
decorative function.]{.what .descriptor} [How: Rearranging space as
conditioning observations (WTC vs. Museum vs. University vs. Startup
Office vs. Shifting Walls that became Water Fountains)]{.how
.descriptor} [Who: Gaining access to standardised water dispensing
equipment turned out to be more difficult than expected as such
equipment is typically licensed / rented rather than purchased outright.
Acquiring a unit that could be modified required access to secondary
markets of second hand office equiment in order to purchase a disused
model.]{.who .descriptor} [Urgency: EU-OSHA (European Agency for Safety
and Health at Work) Directive 2003/10/EC describes the minimum health
and safety requirements regarding the exposure of workers to the risks
arising from physical agents (noise). However, no current European
guidelines exist on the potential beneficial uses of tactically
designed additive noise systems.]{.urgency .descriptor}

The Techno-Galactic Software Observatory -- Comfortable silence, one way
mirrors

A drinking fountain and screens of one-way mirrors as part of the work
session \"*The Techno-Galactic Software Observatory*\" organised by
Constant.

For the past 100 years the western ideal of a corporate landscape has
been moving like a pendulum, oscillating between grids of cubicles and
organic, open landscapes, in a near to perfect 25-year rhythm. These
days the changes in office organisation are supplemented by sound
design, in corporate settings mostly to create comfortable silence.
Increase the sound and the space becomes more intimate: the person at
the table next to you cannot immediately hear what you are saying. It
seems that actual silence in public and corporate spaces has not been
sought after since the start of the 20th century. Actual silence is not
at the moment considered comfortable. One of the visible symptoms of
our desire to take the edge off the silence is the appearance of
fountains in public space, their purpose being to give off neutral
sound, like white noise without the negative connotations. However,
since a sound engineer\'s definition of noise is unwanted sound, it all
depends on one\'s personal relation to the sound of dripping water.

This means that there needs to be a consistent inoffensiveness to create
comfortable silence.

In corporate architecture the arrival of glass buildings was originally
seen as a symbol of transparency, especially loved for governmental
buildings. Yet the reflectiveness of this shiny surface combined with
strong light -- known as the treason of the glass -- was only
completely embraced with the invention of one-way-mirror foil. And it
was the corporate business world that would come to be known for its
reflective glass skyscrapers. As the foil reacts to light, it appears
transparent to someone standing in the dark, while leaving the side
with the most light with an opaque surface. Using this foil as room
dividers in a room with changing light, what is hidden or visible will
vary throughout the day. So will the need for comfortable silence.

Disclaimer:\
Similar to the last 100 years of western office organisation,\
this fountain only has two modes:\
on or off

If it is on, it also offers two options:\
cold water and hot water

This fountain has been tampered with and has not in any way been
approved by a professional fountain cleaner. I do urge you to consider
this before you take the decision to drink from the fountain.

Should you choose to drink from the fountain, then I urge you to write
your name on your cup, in the designated area, for a customised
experience of my care for you.

I do want you to be comfortable.

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/mia6.gif]{.tmp} [SHOW
IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/FullSizeRender%2811%29.jpg]{.tmp}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/IMG\_5695.JPG]{.tmp}
[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/mia/IMG\_5698.JPG]{.tmp}
[TODO: RELATES TO]{.tmp} []{#mtk5yjbl .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.silvio) Create
\"nannyware\": Software that observes and addresses the user]{.method
.descriptor} [What]{.what .empty .descriptor}

Nannyware is software meant to protect users while limiting their space
of activity. It is software that passive-aggressively suggests or
enforces some kind of discipline. In other words, create a form of
parental control extended to adults by means of user experience / user
interfaces.

Nannyware is a form of Content-control software: software designed to
restrict or control the content a reader is authorised to access,
especially when utilised to restrict material delivered over the
Internet via the Web, e-mail, or other means. Content-control software
determines what content will be available or be blocked.

[How]{.how .empty .descriptor}

> \[\...\] Restrictions can be applied at various levels: a
> government can attempt to apply them nationwide (see Internet
> censorship), or they can, for example, be applied by an ISP to its
> clients, by an employer to its personnel, by a school to its students,
> by a library to its visitors, by a parent to a child\'s computer, or
> by an individual user to his or her own computer.^[5](#fcefedaf)^

[Who]{.who .empty .descriptor}

> Unlike filtering, accountability software simply reports on Internet
> usage. No blocking occurs. In setting it up, you decide who will
> receive the detailed report of the computer's usage. Web sites that
> are deemed inappropriate, based on the options you've chosen, will be
> red-flagged. Because monitoring software is of value only "after the
> fact", we do not recommend this as a solution for families with
> children. However, it can be an effective aid in personal
> accountability for adults. There are several available products out
> there.^[6](#bffbbeaf)^

[Urgency]{.urgency .empty .descriptor}

> As with all new lifestyle technologies that come along, in the
> beginning there is also some chaos until their impact can be assessed
> and rules put in place to bring order and respect to their
> implementation and use in society. When the automobile first came into
> being there was much confusion regarding who had the right of way, the
> horse or the car. There were no paved roads, speed limits, stop signs,
> or any other traffic rules. Many lives were lost and much property was
> destroyed as a result. Over time, government and society developed
> written and unwritten rules as to the proper use of the
> car.^[7](#bbfcbcfa)^

[WARNING]{.warning .empty .descriptor}

> Disadvantages of explicit proxy deployment include a user\'s ability
> to alter an individual client configuration and bypass the proxy. To
> counter this, you can configure the firewall to allow client traffic
> to proceed only through the proxy. Note that this type of firewall
> blocking may result in some applications not working
> properly.^[8](#ededebde)^

[Example]{.example .empty .descriptor}

> The main problem here is that the settings that are required are
> different from person to person. For example, I use workrave with a 25
> second micropause every two and a half minute, and a 10 minute
> restbreak every 20 minutes. I need these frequent breaks, because I\'m
> recovering from RSI. And as I recover, I change the settings to fewer
> breaks. If you have never had any problem at all (using the computer,
> that is), then you may want much fewer breaks, say 10 seconds
> micropause every 10 minutes, and a 5 minute restbreak every hour. It
> is very hard to give proper guidelines here. My best advice is to play
> around and see what works for you. Which settings \"feel right\".
> Basically, that\'s how Workrave\'s defaults evolve.^[9](#cfbbbfdd)^
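As a sketch of the genre, a few lines of shell suffice to make your
computer passive-aggressively care for you. This example is ours, not
from the sources quoted above; it assumes a desktop session with
notify-send (libnotify) available, and the 20-minute interval is
arbitrary:

` {.sourceCode .bash}
#!/bin/bash
# Minimal nannyware: remind the user to take a break, forever.
WORK_SECONDS=$((20 * 60))   # arbitrary work interval
while true; do
    sleep "$WORK_SECONDS"
    notify-send -u critical "Nannyware" \
        "You have been working for 20 minutes. Are you sure you want to continue?"
done
`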

[SHOW IMAGE HERE: !\[Content-control software\](
http://www.advicegoddess.com/archives/2008/05/03/nannyware.jpg )]{.tmp}
[SHOW IMAGE HERE: !\[A \"nudge\" from your music player
\](http://img.wonderhowto.com/img/10/25/63533437022064/0/disable-high-volume-warning-when-using-headphones-your-samsung-galaxy-s4.w654.jpg)]{.tmp}
[SHOW IMAGE HERE: !\[Emphasis on the body\]
(http://classicallytrained.net/wp-content/uploads/2014/10/take-a-break.jpg)]{.tmp}
[SHOW IMAGE HERE: !\[ \"Slack is trying to be my friend but it\'s more
like a slightly insensitive and slightly bossy acquaintance.\"
\@briecode \] (https://pbs.twimg.com/media/CuZLgV4XgAAYexX.jpg)]{.tmp}
[SHOW IMAGE HERE:
!\[\](https://images.duckduckgo.com/iu/?u=http%3A%2F%2Fi0.wp.com%2Fatherbeg.com%2Fwp-content%2Fuploads%2F2015%2F06%2FWorkrave-Restbreak-Shoulder.png&f=1)]{.tmp}

Facebook is working on an app to stop you from drunk-posting: \"Yann
LeCun, who oversees the lab, told Wired magazine that the program would
be like someone asking you, \'Uh, this is being posted publicly. Are you
sure you want your boss and your mother to see this?\'\"

[SHOW IMAGE HERE: !\[This Terminal Dashboard Reminds You to Take a Break
When You\'re Lost Deep Inside the Command
Line\](https://i.kinja-img.com/gawker-media/image/upload/s\--\_of0PoM2\--/c\_fit,fl\_progressive,q\_80,w\_636/eegvqork0qizokwrlemz.png)]{.tmp}
[SHOW IMAGE HERE: !\[\](http://waterlog.gd/images/homescreen.png)]{.tmp}
[SHOW IMAGE HERE:
!\[\](https://pbs.twimg.com/media/C6oKTduWcAEruIE.jpg:large)]{.tmp}
[TODO: RELATES TO]{.tmp} []{#yzuwmdq4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.scrollresistance)
Useless scroll against productivity]{.method .descriptor} []{#m2vjndu3
.anchor} [[Method:](http://pad.constantvzw.org/p/observatory.guide.time)
Investigating how humans and machines negotiate the experience of
time]{.method .descriptor} [What]{.what .empty .descriptor} [SHOW IMAGE
HERE:
http://observatory.constantvzw.org/images/Screenshot\_from\_2017-06-10\_172547.png]{.tmp}
[How: python script]{.how .descriptor} [Example]{.example .empty
.descriptor}

` {.verbatim}
# ends of time

https://en.wikipedia.org/wiki/Year_2038_problem

Exact moment of the overflow (when signed 32-bit UNIX time wraps):
03:14:07 UTC on 19 January 2038

## commands

local UNIX time of this machine
%BASHCODE: date +%s

UNIX time + 1
%BASHCODE: echo $((`date +%s` +1 ))

## goodbye unix time

while :
do
sleep 1
figlet $((2147483647 - `date +%s`))
done

# Sundial Time Protocol Group tweaks

printf 'Current Time in Millennium Unix Time: '
printf $((2147483647 - `date +%s`))
echo
sleep 2
echo $((`cat ends-of-times/idletime` + 2)) > ends-of-times/idletime
idletime=`cat ends-of-times/idletime`
echo
figlet "Thank you for having donated 2 seconds to our ${idletime} seconds of collective SSH pause "
echo
echo

http://observatory.constantvzw.org/etherdump/ends-of-time.html
`

[TODO: RELATES TO]{.tmp} [Languaging]{.grouping} []{#nmi5mgjm .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.quine)
Quine]{.method .descriptor} [What: A program whose function consists of
displaying its own code, also known as a \"self-replicating
program\".]{.what .descriptor} [Why: Quines show the tension between
\"software as language\" and \"software as operation\".]{.why
.descriptor} [How: By running a quine you will get your code back. You
may take a step further and wonder about functionality and aesthetics,
uselessness and performativity, data and code.]{.how .descriptor}
[Example: A quine (Python). When executed it outputs the same text as
the source:]{.example .descriptor}

` {.sourceCode .python}
s = 's = %r\nprint(s%%s)'
print(s%s)
`

[Example: A one-line unibash/etherpad quine, created during relearn
2017:]{.example .descriptor}

` {.quaverbatim}
wget -qO- http://192.168.73.188:9001/p/quine/export/txt | curl -F "file=@-;type=text/plain" http://192.168.73.188:9001/p/quine/import
`

[WARNING]{.warning .empty .descriptor}

The encounter with quines may deeply affect you. You may want to write
one and get lost in trying to make an ever shorter and more elegant one.
You may also take quines as a point of departure or as limit-ideas for
exploring software dualisms.

\"A quine is without why. It prints because it prints. It pays no
attention to itself, nor does it ask whether anyone sees it.\" \"Aquine
is aquine is aquine.\" Aquine is not a quine. This is not aquine.

[Remember: Although seemingly absolutely useless, quines can be used as
exploits.]{.remember .descriptor}

Exploring boundaries/tensions

databases treat their content as data (database punctualization); some
exploits manage to include operations in a database

[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.monopsychism]{.tmp}
[]{#zwu0ogu0 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.glossary)
Glossaries as an exercise]{.method .descriptor} [What: Use the technique
of psychoanalytic listening to compile (gather, collect, bring together)
a list of key words for understanding software.]{.what .descriptor}
[How: Create a shared document that participants can add words to as
their importance emerges. To do psychoanalytic listening, let your
attention float freely, hovering evenly, over a conversation or a text
until something catches your ear. Write down what your ear/eye catches.
When working in a collective context invite others to participate in
this project and describe the practice to them. Each individual may move
in and out of this mode of listening according to their interest and
desire and may add as many words to the list as they want. Use this list
to create an index of software observation.]{.how .descriptor} [When:
This is best done in a bounded context. In the case of the
Techno-Galactic Observatory, our bounded contexts include the six-day
work session and the pages and process of this publication.]{.when
.descriptor} [Who: The so-inclined within the group]{.who .descriptor}
[Urgency: Creating and troubling categories]{.urgency .descriptor}
[Note: Do not remove someone else\'s word from the glossary during the
accumulation phase. If an editing and cutting phase is desired this
should be done after the collection through collective consensus.]{.note
.descriptor} [WARNING: This method is not exclusive to and was not
developed for software observation. It may lead to awareness of
unconscious processes and to shifts in structures of feeling and
relation.]{.warning .descriptor} [Example]{.example .empty .descriptor}

` {.verbatim}
Agile
Code
Colonial
Command Line
Communication
Connectivity
Emotional
Galaxies
Green
Guide
Kernel
Imperial
Issues
Machine
Mantra
Memory
Museum
Observation
Power
Production
Programmers
Progress
Relational
Red
Scripting
Scrum
Software
Survival
Technology
Test
Warning
WhiteBoard
Yoga
`

[TODO: RELATES TO]{.tmp} []{#mja0m2i5 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.validation)
Adding qualifiers]{.method .descriptor} [Remember: \"\[V\]alues are
properties of things and states of affairs that we care about and strive
to attain\...values expressed in technical systems are a function of
their uses as well as their features and designs.\" Values at Play in
Digital Games, Mary Flanagan and Helen Nissenbaum]{.remember
.descriptor} [What: Bringing a moral, ethical, or otherwise
evaluative/adjectival/validating lens.]{.what .descriptor} [How:
Adjectives create subcategories. They narrow the focus by naming more
specifically the imagined object at hand and by implicitly excluding all
objects that do not meet the criteria of the qualifier. The more
adjectives that are added, the easier it becomes to answer the question
\"what is software\". Or so it seems. Consider what happens if you add the
words good, bad, bourgeois, queer, stable, or expensive to software. Now
make a list of adjectives and try it for yourself. Level two of this
exercise consists of observing a software application and deducing from
this the values of the individuals, companies, and societies that
produced it.]{.how .descriptor} [Note: A qualifier may narrow down
definitions to undesirable degrees.]{.note .descriptor} [WARNING: This
exercise may be more effective at identifying normative and ideological
assumptions at play in the making, distributing, using, and maintaining
of software than at producing a concise definition.]{.warning
.descriptor} [Example: \"This morning, Jan had difficulties to answer
the question \"what is software\", but he said that he could answer the
question \"what is good software\". What is good software?]{.example
.descriptor} [TODO: RELATES TO]{.tmp} []{#mmmwmje2 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.softwarethrough)
Searching \"software\" through software]{.method .descriptor} [What: A
quick way to sense the ambiguity of the term \'software\' is to go
through the manual files on your hard drive and observe in which cases
the term is used.]{.what .descriptor} [How: command-line oneliner]{.how
.descriptor} [Why: Software is a polymorphic term that takes on
different meanings and comes with different assumptions for the
different agents involved in its production, usage and all other forms
of encounter and subjection. From the situated point of view of the
software present on your machine, when and why does software call
itself such?]{.why
.descriptor} [Example]{.example .empty .descriptor}

so software exists only outside your computer? only in general terms?
checking for the word software in all man pages:

    grep -nr software /usr/local/man

!!!!

software appears only in terms of license:

    This program is free software
    This software is copyright (c)

we don\'t run software. we still run programs.\
nevertheless software is everywhere
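The same oneliner can be generalized to the standard manual path. A
sketch, assuming GNU grep with zgrep for compressed pages (paths differ
per distribution):

` {.sourceCode .bash}
# In how many of the installed man pages does 'software' appear at all?
zgrep -li software /usr/share/man/man*/* 2>/dev/null | wc -l
# And inside which phrases? Print the matching lines themselves.
zgrep -hi software /usr/share/man/man1/* 2>/dev/null | head
`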

[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.samequestion]{.tmp}
[]{#ndhkmwey .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.everyonescp)
Persist in calling everyone a Software Curious Person]{.method
.descriptor} [What: Persistence in naming is a method for changing a
person\'s relationship to software by (sometimes forcibly) calling
everyone a Software Curious Person.]{.what .descriptor} [How: Insisting
on curiosity as a relation, rather than for example \'fear\' or
\'admiration\', might help cut down the barriers between different
types of expertise and allow multiple stakeholders to feel entitled to
ask questions, to engage, to investigate and to observe.]{.how
.descriptor}
[Urgency: Software is too important to not be curious about.
Observations could benefit from recognising different forms of
knowledge. It seems important to engage with software through multiple
interests, not only by means of technical expertise.]{.urgency
.descriptor} [Example: This method was used to address each of the
visitors at the Techno-Galactic Walk-in Clinic.]{.example .descriptor}
[TODO: RELATES TO]{.tmp} [Healing]{.grouping} []{#mmu1mgy0 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.relational)
Set up a Relational software observatory consultancy (RSOC)]{.method
.descriptor} [Remember]{.remember .empty .descriptor}

- Collectivise research around hacking to save time.
- Self-articulate software needs as your own Operating (system)
perspective.
- Change the lens by looking at software through a time perspective.

[What: By paying a visit to our ethnomethodology interview practice
you\'ll learn to observe software from different angles / perspectives.
Our practitioners\' passion is to make the \"what is the relation to
software\" discussion into a service.]{.what .descriptor} [How: Reading
the signs. Considering the ever-changing nature of software development
and use and its vast impact on globalized societies, it is necessary to
recognize some of the issues of how software is (often) either
passively perceived or actively observed, without an articulation of
the relations. We offer a method to read the signs of the relational
aspect of software observation. It\'s a crucial aspect of our guide. It
will give you another view on software that will shape your ability to
survive any kind of software disaster.]{.how .descriptor} [SHOW IMAGE
HERE: !\[Reading the signs. From: John \"Lofty\" Wiseman, SAS Survival
Handbook: The Ultimate Guide to Surviving Anywhere\](
http://gallery.constantvzw.org/index.php/Techno-Galactic-Software-Observatory/IMAG1319
)]{.tmp} [WARNING]{.warning .empty .descriptor} [SHOW IMAGE HERE: have a
advertising blob for the RSOC with a smiling doctor welcoming
image]{.tmp} [Example]{.example .empty .descriptor}

What follows is an example of a possible diagnostic questionnaire.

Sample Questionnaire
--------------------

**What to expect** You will obtain a cartography of software user
profiles. It will help you to shape your own relation to software. You
will be able to construct your own taxonomy and classification of
software users, which is needed in order to find a means of rescue in
case of a software catastrophe.

- SKILLS\
- What kind of user would you say that you are?
- What is your most frequently used type of software?
- How often do you install/experiment/learn new software?



- History
- What is your first recollection of software use?
- How often do / when did you last purchase software or pay for a
software service?



- Ethics
- What is the software feature you care about the most?
- Do you use any free software?
- if yes than
- do you remember your first attempt at using this software
service? Do you still use it? If not why?



- Do you pay for media distribution/streaming services?
- Do you remember your first attempt at using free software and how
did that make you feel?
- Have you used any of these software services: facebook, a dating app
(grindr, tinder, etc.), twitter, instagram or equivalent?



- Can you talk about your favorite apps or webtools that you use
regularly?
- What is the most popular software your friends use?



- SKILL
- Would you say that you are a specialised user?



- Have you ever used the command line?
- Do you know about scripting?
- Have you ever edited an HTML page? A CSS file? A PHP file? A
configuration file?
- Can you talk about your most technical encounter with your computer
/ telephone?



- ECONOMY\
- How do you pay for your software use?
- Please elaborate (for example, do you buy the software? /
contribute in kind / deliver services or support)
- What is the last software that you paid for using?
- What online services are you currently paying for?
- Is someone paying for your use of service?



- Personal
- What stories do you have concerning contracts and administration in
relation to your software, Internet or computer?
- How does software help you shape your relations with other people?
- From which countries does your software come / where does it reside?
How do you feel about that?
- Have you ever read the terms of a software service? What about one
that is not targeting the American market?

Sample questionnaire results
----------------------------

Possible/anticipated user profiles
----------------------------------

### \...meAsHardwareOwnerSoftwareUSER:

\"I did not own a computer personally until very very late as I did not
enjoy gaming as a kid or had interest in spending much time behind PC
beyond work (and work computer). My first was hence I think in 2005 and
it was a SGI workstation that was the computer of the year 2000 (cost
10.000USD) and I got it for around 300USD. Proprietary drivers for
unified graphics+RAM were never released, so it remained a software
dead-end in gorgeous blue curved chassis
http://www.sgidepot.co.uk/sgidepot/pics/vwdocs.jpg\"

### \...meAsSoftwareCONSUMER:

\"I payed/purchased software only twice in my life (totalling less then
25eur), as I could access most commercial software as widely pirated in
Balkans and later had more passion for FLOSS anyway, this made me relate
to software as material to exchange and work it, rather than commodity
goods I could or not afford.\"

### \...meAsSoftwareINVESTOR:

\"I did it as both of those apps were niche products in early beta (one
was Jeeper Elvis, real-time-non-linear-video-editor for BeOS) that
failed to reach market, but I think I would likely do it again and only
in that mode (supporting the bleeding edge and off-stream work), but
maybe with more than 25eur.\"

### \...meAsSoftwareUserOfOS:

\"I would spend most of 80s ignoring computers, 90ties figuring out
software from high-end to low-end, starting with OSF/DecAlpha and SunOS,
than IRIX and MacOS, finally Win 95/98 SE, that permanently pushed me
into niches (of montly LINUX distro install fests, or even QNX/Solaris
experiments and finally BeOS use).\"

### \...meAsSoftwareWEBSURFER:

\"I got used to websurfing in more than 15 windows on UNIX systems and
never got used to less than that ever since, furthermore with addition
of more browser options this number only multiplied (always wondered if
my first system was Windows 3.11 - would I be a more focused person and
how would that form my relations to browser windows\>tabs).\"

### \...meAsSoftwareUserOfPropertarySoftware:

\"I signed one NDA contract in person on the paper and with ink on a
rainy day while stopping of at trainstaion in north Germany for the
software that was later to be pulled out of market due to problematic
licencing agreement (intuitivly I knew it was wrong) - it had too much
unprofessional pixeleted edges in its graphics.

### \...meAsSoftwareUserOfDatingWebsites:

\"I got one feature request implemented by a prominent dating website
(to search profiles by language they speak), however I was never
publicly acknowledged (though I tried to make use of it few times), that
made our relations feel a bit exploitative and underappreciated. \"

### \...meAsSoftwareUserTryingToGoPRO:

\"my only two attempts to get into the software company failed as they
insisted on full time commitments. Later I found out ones were
intimidated in interview and other gave it to a person that negotiated
to work part time with friend! My relation to professionalism is likely
equally complex and pervert as one to the software.\"

Case study : W. W.
------------------

\...ww.AsExperiencedAdventurousUSER - experiments with software every
two days as she uses FLOSS and Gnu/Linux; cares the most for the
malleability of the software - as a result she has big expectations of
flexibility even in a software category which is quite conventional and
stability-focused, like file-hosting.

\...ww.AsAnInvestorInSoftware - paid for a compiled version of FLOSS
audio software 5 years ago as she is supportive of the economy and work
around production, maintenance and support; but she also used closed
hardware/software where she had to agree to licences she finds unfair,
and then she was hacking it in order to use it as an expert - when she
had time.

\...ww.AsCommunicationSoftwareUSER - she is not using commercial social
networks, so she is very conscious of information transfers and time
relations, but has no strong media/format/design focus.

Q: What is your first recollection of software use?\
A: ms dos in 1990 at school \_ i was 15 or 16. oh no 12. Basic in 1986.

Q: What are the emotions related to this use?\
A: fun. i\'m good at this. empowering

Q: How often do / when did you last purchase software or pay for a
software service?\
A: I paid for ardour five years ago. I paid the developer directly. For
the compiled version. I paid for the service. I pay for my website and
email service at domaine public.

Q: What kind of user would you say you are?\
A: An experienced user drawing out the line. I don\'t behave.

Q: Is there a link between this and your issue?\
A: Even if it\'s been F/LOSS there is a lot of decision power in my
package.

Q: What is your most frequently used type of software?\
A: Web browser. email. firefox & thunderbird

Q: How often do you install/experiment/learn new software?\
A: Every two days. I reinstall all the time. my old lts system died. it
stopped being supported last april. It was linux mint something.

Q: Do you know about scripting?\
A: I do automating scripts for any operation i have to do several
times, like format conversion.

Q: Can you talk about your most technical encounter with your computer /
telephone?\
A: I\'ve tried to root it. but i didn\'t succeed.

Q: How much time do you wish to spend on activities like hacking or
rooting your device?\
A: hours. you should take your time

Q: Did you ever sign a licence agreement you did not agree with? How
does that affect you?\
A: This is the first thing you do when you have a phone. it\'s obey or
die.

Q: What is the software feature you care for the most?\
A: malleability. different ways to approach a problem, a challenge, an
issue.

Q: Do you use any free software?\
A: yes. there maybe are some proprietary drivers.

Q: Do you remember your first attempt at using free software and how did
that make you feel?\
A: Yes i installed my dual boot in \... 10 years ago. scared and
powerful.

Q: Do you use any of these software services: facebook, a dating app
(grindr or the sort), twitter, instagram or equivalent?\
A: Google, gmail that\'s it

Q: Can you talk about your favorite apps or webtools that you use
regularly?\
A: Music player. vanilla music and f-droid. browser. I pay attention to
clearing my history, no cookies. I also have iceweasel. Https by
default. Even though i have nothing to hide.

Q: What stories around contracts and administration do you have in
relation to your software, internet or computer?\
A: Nothing comes to my mind. i\'m not allowed to do, to install on the
phone. When it\'s an old phone, there is nothing left that is working,
so you have to do it.

Q: How does software help you shape your relations with other people?\
A: It\'s a hard question. if it\'s communication software of course it
is its nature to be related to other people. there is an expectancy of
immediate reply, of information transfer\... It\'s troubling your
relation with people in certain situations.

Q: From which countries does your software live / come from? How do
you feel about that?\
A: i think i chose the netherlands as a mirror. you are hoping to
reflect well in this mirror.

Q: Have you ever read the terms of a software service; one that is not
targeting the American market?\
A: i have read them. no.

[TODO: RELATES TO]{.tmp} []{#mta1ntzm .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.agile.yoga)
Agile Sun Salutation]{.method .descriptor} [Remember]{.remember .empty
.descriptor}

> Agile software development describes a set of values and principles
> for software development under which requirements and solutions evolve
> through the collaborative effort of self-organizing cross-functional
> teams. It advocates adaptive planning, evolutionary development, early
> delivery, and continuous improvement, and it encourages rapid and
> flexible response to change. These principles support the definition
> and continuing evolution of many software development
> methods.^[10](#dbabcece)^

[What: You will be observing yourself]{.what .descriptor} [How]{.how
.empty .descriptor}

> Scrum is a framework for managing software development. It is designed
> for teams of three to nine developers who break their work into
> actions that can be completed within fixed duration cycles (called
> \"sprints\"), track progress and re-plan in daily 15-minute stand-up
> meetings, and collaborate to deliver workable software every sprint.
> Approaches to coordinating the work of multiple scrum teams in larger
> organizations include Large-Scale Scrum, Scaled Agile Framework (SAFe)
> and Scrum of Scrums, among others.^[11](#eefcbaac)^

[When: Anywhere where it\'s possible to lie on the floor]{.when
.descriptor} [Who]{.who .empty .descriptor}

> Self-organization and motivation are important, as are interactions
> like co-location and pair programming. It is better to have a good
> team of developers who communicate and collaborate well, rather than a
> team of experts each operating in isolation. Communication is a
> fundamental concept.^[12](#fbaeffab)^

[Urgency: Using Agile software development methods to develop a new path
into your professional and personal life towards creativity, focus and
health.]{.urgency .descriptor} [WARNING]{.warning .empty .descriptor}

> The agile movement is in some ways a bit like a teenager: very
> self-conscious, checking constantly its appearance in a mirror,
> accepting few criticisms, only interested in being with its peers,
> rejecting en bloc all wisdom from the past, just because it is from
> the past, adopting fads and new jargon, at times cocky and arrogant.
> But I have no doubts that it will mature further, become more open to
> the outside world, more reflective, and also therefore more
> effective.^[13](#edabeeaf)^

[Example]{.example .empty .descriptor} [SHOW IMAGE HERE:
https://mfr.osf.io/render?url=https://osf.io/ufdvb/?action=download%26direct%26mode=render&initialWidth=450&childId=mfrIframe]{.tmp}

Hello and welcome to the presentation of the agile yoga methodology. I
am Allegra, and today I\'m going to be your personal guide to YOGA, an
acronym for why organize? Go agile! I\'ll be part of your team today and
we\'ll do a few exercises together as an introduction to a new path into
your professional and personal life towards creativity, focus and
health.

A few months ago, I was stressed, overwhelmed with my work, feeling
alone, inadequate, but since I started practicing agile yoga, I feel
more productive. I have many clients as an agile yoga coach, and I\'ve
seen new creative business opportunities coming to me as a software
developer.

For this first experience with the agile yoga method and before we do
physical exercises together, I would like to invite you to close your
eyes. Make yourself comfortable, lying on the floor, or sitting with
your back on the wall. Close your eyes, relax. Get comfortable. Feel the
weight of your body on the floor or on the wall. Relax.

Leave your troubles at the door. Right now, you are not procrastinating,
you are having a meeting at the \,
a professional building dedicated to business, you are meeting yourself,
you are your own business partner, you are one. You are building your
future.

You are in a room standing with your team, a group of lean programmers.
You are watching a white board together. You are starting your day, a
very productive day as you are preparing to run a sprint together. Now
you turn towards each other, making a scrum with your team, you breathe
together, slowly, inhaling and exhaling together, slowly, feeling the
air in and out of your body. Now you all turn towards the sun to prepare
to do your ASSanas, the agile Sun Salutations or ASS with the team
dedicated ASS Master. She\'s guiding you. You start with Namaskar, the
Salute. your palms joined together, in prayer pose. you all reflect on
the first principle of the agile manifesto. your highest priority is to
satisfy the customer through early and continuous delivery of valuable
software.

Next pose, is Ardha Chandrasana or (Half Moon Pose). With a deep
inhalation, you raise both arms above your head and tilt slightly
backward arching your back. you welcome changing requirements, even late
in development. Agile processes harness change for the customer\'s
competitive advantage. then you all do Padangusthasana (Hand to Foot
Pose). With a deep exhalation, you bend forward and touch the mat, both
palms in line with your feet, forehead touching your knees. you deliver
working software frequently.

Surya Darshan (Sun Sight Pose). With a deep inhalation, you take your
right leg away from your body, in a big backward step. Both your hands
are firmly planted on your mat, your left foot between your hands. you
work daily throughout the project, business people and developers
together. now, you\'re flowing into Purvottanasana (Inclined Plane) with
a deep inhalation by taking your right leg away from your body, in a big
backward step. Both your hands are firmly planted on your mat, your left
foot between your hands. you build projects around motivated
individuals. you give them the environment and support they need, and
you trust them to get the job done.

You\'re in Adho Mukha Svanasana (Downward Facing Dog Pose). With a deep
exhalation, you shove your hips and butt up towards the ceiling, forming
an upward arch. Your arms are straight and aligned with your head. The
most efficient and effective method of conveying information to and
within a development team is face-to-face conversation.

Then, Sashtang Dandawat (Forehead, Chest, Knee to Floor Pose). With a
deep exhalation, you lower your body down till your forehead, chest,
knees, hands and feet are touching the mat, your butt tilted up. Working
software is the primary measure of progress.

Next is Bhujangasana (Cobra Pose). With a deep inhalation, you slowly
snake forward till your head is up, your back arched concave, as much as
possible. Agile processes promote sustainable development. You are all
maintaining a constant pace indefinitely, sponsors, developers, and
users together.

Now back into Adho Mukha Svanasana (Downward Facing Dog Pose).
Continuous attention to technical excellence and good design enhances
agility.

And then again to Surya Darshan (Sun Sight Pose). Simplicity\--the art
of maximizing the amount of work not done\--is essential. Then to
Padangusthasana (Hand to Foot Pose). The best architectures,
requirements, and designs emerge from self-organizing teams.

You all do again Ardha Chandrasana (Half Moon Pose). At regular
intervals, you as the team reflect on how to become more effective, then
tune and adjust your behavior accordingly. you end your ASSanas session
with a salute to honor your agile yoga practices. you have just had a
productive scrum meeting. now i invite you to open your eyes, move your
body around a bit, from the feet up to the head and back again.

Stand up on your feet and let\'s do a scrum together if you\'re ok being
touched on the arms by someone else. if not, you can do it on your own.
so put your hands on the shoulder of the SCP around you. now we\'re
joined together, let\'s look at the screen together as we inhale and
exhale, syncing our bodies to the rhythms of our own internal
software, modulating our oxygen level intake requirements to the oxygen
availability of our service facilities.

Now, let\'s do a couple of exercises together to protect and strengthen
our wrists. As programmers, as internauts, as entrepreneurs, they are
very crucial parts of the body to protect. In order to be able to type,
to swipe, to shake hands vigorously, we need them in good health. So
bring the hands towards each other in a prayer pose, around a book, a
brick. You can do it without, but I\'m using my extreme programming book
- embrace change - for that. So press the palms together firmly, press
the pads of your fingers together. Do that while breathing in and out
twice.

Now let\'s extend our arms in front of us, in the air, palms and fingers
facing down, like we\'re typing. Make your shoulders round. Let\'s
breathe while visualizing in our heads the first agile mantra:
Individuals and interactions over processes and tools.

Now let\'s bring the arms back next to the body and raise them again.
And let\'s move our hands towards the ceiling this time, strengthening
our back. In our heads, the second mantra: Working software over
comprehensive documentation. Now let\'s bring the hands back to the
standing position. Then again the first movement while visualizing the
third mantra: Customer collaboration over contract negotiation, and then
the second movement thinking about the fourth and last mantra:
Responding to change over following a plan. And of course we continue
breathing. Now to finish this session, let\'s do a sprint together in
the corridor!

[SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/guide/agileyoga/8-Poses-Yoga-Your-Desk.contours.png
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/guide/agileyoga/gayolab-office-chair-for-yoga.contours.png
)]{.tmp} [TODO: RELATES TO]{.tmp} []{#mdu0mmji .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.blobservation)
Hand reading]{.method .descriptor} [How: Visit the Future Blobservation
Booth to have your fortunes read and derive life insight from the wisdom
of software.]{.how .descriptor} [What: Put your hand in the reading
booth and get your line read.]{.what .descriptor} [Why: The hand which
holds your mouse every day hides many secrets.]{.why .descriptor}
[Example]{.example .empty .descriptor}

` {.verbatim .wrap}
* sample reading timeline:

* 15:00 a test user, all tests clear and systems are online
* a user who said goodbye to us
* another user
* a user who thought it'd be silly to say thank you to the machine but thank you very much
* another kind user who said thank you
* yet another kind user
* another user, no feedback
* a nice user who found the reading process relieving
* yet another kind user
* a scared user! took the hand out but ended up trusting the system. "so cool thanks guys"
* another user
* a young user! this is a funny computer
* 15:35 another nice user
* 15:40 another nice user
* 15:47 happy user (laughing)
* 15:51 user complaining about her fortune, saying it's not true. Found the reading process creepy but eased up quickly
* 15:59 another nice user: http://etherbox.local:9001/p/SCP.sedyst.md
* 16:06 a polite user
* 16:08 a friendly playful user (stephanie)
* 16:12 a very giggly user (wendy)
* 16:14 a playful user - found the reading process erotic - DEFRAGMENTING? NO! Thanks Blobservation http://etherbox.local:9001/p/SCP.loup.md
* 16:19 a curious user
* 16:27 a friendly user but oh no, we had a glitch and computer crashed. But we still delivered the fortune. We got a thank you anyway
* 16:40 a nice user, the printer jammed but it was sorted out quickly
* 16:42 another nice user
* 16:50 nice user (joak)
* 16:52 yet another nice user (jogi)
* 16:55 happy user! (peter w)
* 16:57 more happy user (pierre h)
* 16:58 another happy user
* 17:00 super happy user (peggy)
* 17:02 more happy user
`

[Example]{.example .empty .descriptor}

> Software time is not the same as human time. Computers will run for AS
> LONG AS THEY WILL BE ABLE TO, provided sufficient power is available.
> You, as a human, don\'t have the luxury of being always connected to
> the power grid and thus have to rely on your INTERNAL BATTERY. Be
> aware of your power cycles and set yourself to POWER-SAVING MODE
> whenever possible.

[SHOW IMAGE HERE:
http://gallery.constantvzw.org/var/resizes/Techno-Galactic-Software-Observatory/IMAG1407.jpg?m=1497344230]{.tmp}
[TODO: RELATES TO]{.tmp} []{#yznjodq3 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.dirty) Bug
reporting for sharing observations]{.method .descriptor} [What: Etherpad
had stopped working but it was unclear why. Where does etherpad
\'live\'?]{.what .descriptor} [How: Started by looking around the pi\'s
filesystem by reading /var/log/syslog in /opt/etherpad and in a
subdirectory named var/ there was dirty.db, and dirty it was.]{.how
.descriptor} [When: Monday morning]{.when .descriptor} [Urgency:
Software (etherpad) not working and the Walk-in Clinic was about to
start.]{.urgency .descriptor} [Note:
http://pad.constantvzw.org/p/observatory.inventory.jogi]{.note
.descriptor}

from jogi\@mur.at to \[Observatory\] When dirty.db gets dirty

Dear all,

as promised yesterday, here my little report regarding the broken
etherpad.

\ \#\#\# When dirty.db gets dirty

When I got to WTC on Monday morning the etherpad on etherbox.local was
defunct. Later someone said that in fact etherpad had stopped working
the evening before, but it was unclear why. So I started looking around
the pi\'s filesystem to find out what was wrong. Took me a while to find
the relevant lines in /var/log/syslog but it became clear that there was
a problem with the database. Which database? Where does etherpad
\'live\'? I found it in /opt/etherpad and in a subdirectory named var/
there it was: dirty.db, and dirty it was.

A first look at the file revealed no apparent problem. The last lines
looked like this:

`{"key":"sessionstorage:Ddy0gw7okwbkv5BzkR1DuSLCV_IA5_jQ","val":{"cookie ":{"path":"/","_expires":null,"originalMaxAge":null,"httpOnly":true,"secure":false}}} {"key":"sessionstorage:AU1cffgcTf_q6BV9aIdAvES2YyXM7Gm1","val":{"cookie ":{"path":"/","_expires":null,"originalMaxAge":null,"httpOnly":true,"secure":false}}} {"key":"sessionstorage:_H5SdUlDvQ3XCuPaZEXQ5lx0K6aAEJ9m","val":{"cookie ":{"path":"/","_expires":null,"originalMaxAge":null,"httpOnly":true,"se cure":false}}}`

What I did not see at the time was that there were some (AFAIR something
around 150) binary zeroes at the end of the file. I used tail for the
first look and that tool silently ignored the zeroes at the end of the
file. It was Martino who suggested using different tools (xxd in that
case) and that showed the cause of the problem. The file looked
something like this:

00013730: 6f6b 6965 223a 7b22 7061 7468 223a 222f okie":{"path":"/
00013740: 222c 225f 6578 7069 7265 7322 3a6e 756c ","_expires":nul
00013750: 6c2c 226f 7269 6769 6e61 6c4d 6178 4167 l,"originalMaxAg
00013760: 6522 3a6e 756c 6c2c 2268 7474 704f 6e6c e":null,"httpOnl
00013770: 7922 3a74 7275 652c 2273 6563 7572 6522 y":true,"secure"
00013780: 3a66 616c 7365 7d7d 7d0a 0000 0000 0000 :false}}}.......
00013790: 0000 0000 0000 0000 0000 0000 0000 0000 ................

So Anita, Martino and I put our heads together to come up with a
solution. Our first attempt to fix the problem went something like this:

dd if=dirty.db of=dirty.db.clean bs=1 count=793080162

which means: write the first 793080162 blocks of size 1 byte to a new
file. After half an hour or so I checked on the size of the new file and
saw that some 10% of the copying had been done. No way this would get
done in time for the walk-in-clinic. Back to the drawing board.

Using a text editor was no real option btw since even vim has a hard
time with binary zeroes and the file was really big. But there was
hexedit! Martino installed it and copied dirty.db onto his computer.
After some getting used to the various commands to navigate in hexedit
the unwanted zeroes were gone in an instant. The end of the file looked
like this now:

00013730: 6f6b 6965 223a 7b22 7061 7468 223a 222f okie":{"path":"/
00013740: 222c 225f 6578 7069 7265 7322 3a6e 756c ","_expires":nul
00013750: 6c2c 226f 7269 6769 6e61 6c4d 6178 4167 l,"originalMaxAg
00013760: 6522 3a6e 756c 6c2c 2268 7474 704f 6e6c e":null,"httpOnl
00013770: 7922 3a74 7275 652c 2273 6563 7572 6522 y":true,"secure"
00013780: 3a66 616c 7365 7d7d 7d0a :false}}}.

Martino asked about the trailing \'.\' character and I checked a
different copy of the file. No \'.\' there, so that had to go too. My
biggest mistake in a long time! The \'.\' we were seeing in Martino\'s
copy of the file was in fact a newline (0a)! We did not realize that,
copied the file back to etherbox.local and waited for etherpad to resume
its work. But no luck there, for obvious reasons.

We ended up making backups of dirty.db in various stages of deformation
and Martino started a brand-new pad so we could use pads for the
walk-in-clinic. The processing tool chain has been disabled btw. We did
not want to mess up any of the already generated .pdf, .html and .md
files.

We still don\'t know why exactly etherpad stopped working sometime
Sunday evening or how the zeroes got into the file dirty.db. Anita
thought that she caused the error when she adjusted time on
etherbox.local, but the logfile does not reflect that. The last clean
entry in /var/log/syslog regarding nodejs/etherpad is recorded with a
timestamp of something along the line of \'Jun 10 10:17\'. Some minutes
later, around \'Jun 10 10:27\' the first error appears. These timestamps
reflect the etherbox\'s understanding of time btw, not \'real time\'.

It might be that the file just got too big for etherpad to handle.
The size of the repaired dirty.db file was already 757MB. That could btw
explain why etherpad was working somewhat sluggishly after some days.
There is still a chance that the time adjustment had an unwanted side
effect, but so far there is no obvious reason for what had happened.
\
\-- J.Hofmüller

http://thesix.mur.at/
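In hindsight, the same trim could have been done in one step with GNU
coreutils, sidestepping the slow bs=1 copy. A sketch, not what was done
at the time; the byte count is the one from the report above:

` {.sourceCode .bash}
# Keep only the first 793080162 bytes of dirty.db, dropping the
# trailing binary zeroes in one fast operation.
cp dirty.db dirty.db.clean
truncate -s 793080162 dirty.db.clean
`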

[]{#ytu5y2qy .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.detournement)
Interface Détournement]{.method .descriptor} [Embodiment / body
techniques]{.grouping} []{#y2q4zju5 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.occupational)
Comportments of software (softwear)]{.method .descriptor}
[Remember]{.remember .empty .descriptor}

> The analysis of common sense, as opposed to the exercise of it, must
> then begin by redrawing this erased distinction between the mere
> matter-of-fact apprehension of reality\--or whatever it is you want to
> call what we apprehend merely and matter-of-factly\--and
> down-to-earth, colloquial wisdom, judgements, and assessments of it.

[What: Observe and catalog the common gestures, common comportments, and
common sense(s) surrounding software.]{.what .descriptor} [How: This can
be done through observation of yourself or others. Separate the
apprehended and matter of fact from the meanings, actions, reactions,
judgements, and assessments that the apprehension occasions. Step 1:
Begin by assembling a list of questions such as: When you see a software
application icon what are you most likely to do? When a software
application you are using presents you with a user agreement what are
you most likely to do? When a software application does something that
frustrates you what are you most likely to do? When a software
application you are using crashes what are you most likely to do? Step
2: Write down your responses and the responses of any subjects you are
observing. Step 3: For each question, think up three other possible
responses. Write these down. Step 4: (this step is only for the very
curious) Try the other possible responses out the next time you
encounter each of the given scenarios.]{.how .descriptor} [Note: The
common senses and comportments of software are of course informed and
conditioned by those of hardware and so perhaps this is more accurately
a method for articulating comportments of computing.]{.note .descriptor}
[WARNING: Software wears on both individual and collective bodies and
selves. Software may harm your physical and emotional health and that of
your society both by design and by accident.]{.warning .descriptor}
[TODO: RELATES TO Agile Sun Salutation, Natasha Schull\'s Addicted by
Design]{.tmp} [Flow-regulation, logistics, seamlessness]{.grouping}
[]{#mwrhm2y4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.continuousintegration)
Continuous integration]{.method .descriptor} [What: Continuous
integration is a sophisticated form of responsibility management: it is
the fascia of services. Continuous integration picks up after all other
services and identifies what needs to happen so that they can work in
concert. Continuous integration is a way of observing the evolution of
(micro)services through cybernetic (micro)management.]{.what
.descriptor} [How: Continuous integration keeps track of changes to all
services and allows everyone to observe if they still can work together
after all the moving parts are fitted together.]{.how .descriptor}
[When: Continuous integration comes to prominence in a world of
distributed systems where there are many parts being organized
simultaneously. Continuous integration is a form of observation that
helps (micro)services maintain a false sense of independence and
decentralization while constantly subjecting them to centralized
feedback.]{.when .descriptor} [Who: Continuous integration assumes that
all services will submit themselves to the feedback loops of continuous
integration. This could be a democratic process or not.]{.who
.descriptor} [Urgency: Continuous integration reconfigures divisions of
labor in the shadows of automation. How can we surface and question its
doings and undoings?]{.urgency .descriptor} [WARNING: When each service
does one thing well, the service makers tend to assume everybody else is
doing the things they do not want to do.]{.warning .descriptor}
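
A minimal sketch of such a feedback loop, assuming each (micro)service
exposes a hypothetical /health endpoint; the registry of names, ports and
URLs is illustrative, not an actual TGSO configuration.

` {.verbatim}
# One centralized feedback pass over a hypothetical service registry.
import urllib.request

SERVICES = {
    "walk-in-clinic": "http://localhost:8001/health",
    "sitmm": "http://localhost:8002/health",
    "agile-sun-salutation": "http://localhost:8003/health",
}

def check(url):
    """Visit one service and observe whether it still responds."""
    try:
        with urllib.request.urlopen(url, timeout=2) as response:
            return response.getcode() == 200
    except OSError:
        return False

def integrate():
    """Subject every registered service to centralized feedback."""
    results = {name: check(url) for name, url in SERVICES.items()}
    for name, ok in results.items():
        print(name, "ok" if ok else "ERROR: does not fit, see solution options")
    return all(results.values())

integrate()
`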

At TGSO continuous integration was introduced as a service that responds
to integration hell when putting together a number of TGSO services for
a walk-in software clinic. Due to demand, the continuous integration
service was extended to do \"service discovery\" and \"load balancing\"
once the walk-in clinic was in operation.

Continuous integration worked by visiting the different services of the
walk-in clinic to check for updates, test the functionality and think
through implications of integration with other services. If the pieces
didn\'t fit, continuous integration delivered error messages and
solution options.

When we noticed that software curious persons visiting the walk-in
clinic may have troubles finding the different services, and that some
services may be overloaded with software curious persons, continuous
integration was extended. We automated service registration using
colored tape and provided a lookup registry for software curious
persons.

http://gallery.constantvzw.org/index.php/Techno-Galactic-Software-Observatory/IMAG1404

Load balancing meant that software curious persons were forwarded to
services that had capacity. If all other services were full, the load
balancer defaulted to sending the software curious person to the [Agile
Sun
Salutation](http://pad.constantvzw.org/p/observatory.guide.agile.yoga)
service.
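
In code, the load balancing logic amounted to something like the
following sketch; capacities and service names are illustrative, and at
TGSO the "registry" was colored tape, not a dictionary.

` {.verbatim}
# Illustrative registry: the real one was colored tape on the floor.
SERVICES = {
    "side-channel-analysis": {"capacity": 3, "visitors": 3},
    "sitmm": {"capacity": 2, "visitors": 1},
}
DEFAULT = "agile-sun-salutation"  # assumed to always have capacity

def forward(person):
    """Send a software curious person to a service with spare capacity."""
    for name, service in SERVICES.items():
        if service["visitors"] < service["capacity"]:
            service["visitors"] += 1
            return name
    return DEFAULT  # everything full: default to Agile Sun Salutation

print(forward("software curious person #1"))  # -> sitmm
`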

[WARNING: At TGSO the bundling of different functionalities into the
continuous integration service broke the \"do one thing well\"
principle, but saved the day (we register this as technical debt for the
next iteration of the walk-in clinic).]{.warning .descriptor} [Remember:
Continuous integration may be the string that holds your current software
galaxy together.]{.remember .descriptor}

\"More technically, I am interested in how things bounce around in
computer systems. I am not sure if these two things are related, but I
hope continuous integration will help me.\"

[]{#zdixmgrm .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.pipeline) make
make do]{.method .descriptor} [What: Makefile as a method for
quick/collective assemblages + observing amalgamates/pipelines]{.what
.descriptor} [Note:
http://observatory.constantvzw.org/etherdump/makefile.raw.html]{.note
.descriptor}

An etherpad -\> md -\> pdf -\> anything pipeline: the Makefile works as a
method for quick/collective assemblages and for observing amalgamates
and pipelines; a minimal sketch follows.
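
A sketch of such a Makefile, assuming pandoc is installed and that the
pad exposes etherpad's plain-text export; the pad URL and file names are
illustrative.

` {.verbatim}
# etherpad -> md -> pdf pipeline as a Makefile (illustrative pad URL)
PAD = http://pad.constantvzw.org/p/observatory.guide.pipeline

observation.md:
	curl -s $(PAD)/export/txt > $@

observation.pdf: observation.md
	pandoc $< -o $@

.PHONY: clean
clean:
	rm -f observation.md observation.pdf
`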

[]{#zweymtni .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.ssogy)
Flowcharts (Flow of the chart -- chart of the flow on demand!)]{.method
.descriptor} [Example]{.example .empty .descriptor} [SHOW IMAGE HERE:
!\[\]( http://observatory.constantvzw.org/images/symbols/ibm-ruler.jpg
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/burroughs-ruler.jpg
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/rectangle.png )]{.tmp}
[SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/curly\_rec.png
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/curly\_rec-2.png
)]{.tmp} [SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/flag.png )]{.tmp}
[SHOW IMAGE HERE: !\[\](
http://observatory.constantvzw.org/images/symbols/trapec.png )]{.tmp}
[SHOW IMAGE HERE: !\[Claude Shannon Information Diagram Blanked: Silvio
Lorusso\](
http://silviolorusso.com/wp-content/uploads/2012/02/shannon\_comm\_channel.gif
)]{.tmp} [TODO: RELATES TO]{.tmp}
[Beingontheside/inthemiddle/behind]{.grouping} []{#ywfin2e4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.somethinginthemiddlemaybe)
Something in the Middle Maybe (SitMM)]{.method .descriptor} [What: The
network traffic gets observed. There are different sniffing tools out
there, which differ in granularity and in how far the user can tailor
their functionality. SitMM builds on one of these tools, called
[scapy](http://www.secdev.org/projects/scapy/).]{.what .descriptor}
[How: SitMM takes a closer look at the network traffic coming from/going
to a software curious person\'s device. The software curious person
using SitMM may ask to filter the traffic based on application or device
of interest.]{.how .descriptor} [Who]{.who .empty .descriptor}

The software curious person gets to observe their own traffic. Ideally,
observing one's own network traffic should be available to anyone, but
using such software can be deemed illegal under different jurisdictions.

For example, in the US, wiretap law limits packet-sniffing to parties
owning the network that is being sniffed, or requires the consent of
one of the communicating parties. 18 U.S. Code § 2511 (2)
(a) (i) says:

> It shall not be unlawful \... to intercept \... while engaged in any
> activity which is a necessary incident to the rendition of his service
> or to the protection of the rights or property of the provider of that
> service

See here for a
[paper](http://spot.colorado.edu/%7Esicker/publications/issues.pdf) on
the topic. Google went on a big legal spree to defend its right to
capture unencrypted wireless traffic with Google Street View cars. The
courts were concerned with wiretapping and infringements on the privacy
of users, not with the leveraging of private and public WiFi
infrastructure for the gain of a for-profit company. The case raises
hard questions about the state, ownership claims and material reality of
WiFi signals. So, while WiFi sniffing is common and tools like SitMM
are widely available, it is not always possible for software curious
persons to use them legally or to neatly filter out \"their traffic\"
from that of \"others\".

[When: SitMM can be used any time a software curious person feels the
weight of the (invisible) networks.]{.when .descriptor} [Why: SitMM is
intended to be a tool that gives artists, designers and educators an
easy-to-use custom WiFi router to work with networks and explore the
aspects of our daily communications that are exposed when we use WiFi.
The goal is to use the output to encourage open discussions about how we
use our devices online.]{.why .descriptor} [Example]{.example .empty
.descriptor}

Snippets of a Something In The Middle, Maybe - Report

` {.verbatim}
UDP 192.168.42.32:53649 -> 8.8.8.8:53
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP/HTTP 17.253.53.208:80 GET http://captive.apple.com/mDQArB9orEii/Xmql6oYqtUtn/f6xY5snMJcW8/CEm0Ioc1d0d8/9OdEOfkBOY4y.html
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP 192.168.42.32:49250 -> 17.253.53.208:80
TCP 192.168.42.32:49250 -> 17.253.53.208:80
UDP 192.168.42.32:63872 -> 8.8.8.8:53
UDP 192.168.42.32:61346 -> 8.8.8.8:53
...
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443
TCP 192.168.42.32:49260 -> 17.134.127.97:443

##################################################
Destination Address: 17.253.53.208
Destination Name: nlams2-vip-bx-008.aaplimg.com

Port: Connection Count
80: 6

##################################################
Destination Address: 17.134.127.79
Destination Name: unknown

Port: Connection Count
443: 2
##################################################
Destination Address: 17.248.145.76
Destination Name: unknown

Port: Connection Count
443: 16
`
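
SitMM itself builds on scapy; a minimal sketch of how a report like the
one above could be produced. The device address is taken from the
snippet and would be set to the software curious person's device; the
script needs the privileges that sniffing requires.

` {.verbatim}
from collections import Counter
from scapy.all import sniff, IP, TCP, UDP

DEVICE = "192.168.42.32"        # device under observation (illustrative)
connections = Counter()

def observe(pkt):
    """Print one line per packet and count connections per destination."""
    if IP in pkt and pkt[IP].src == DEVICE:
        if TCP in pkt:
            proto, dport = "TCP", pkt[TCP].dport
        elif UDP in pkt:
            proto, dport = "UDP", pkt[UDP].dport
        else:
            return
        print(proto, pkt[IP].src, "->", str(pkt[IP].dst) + ":" + str(dport))
        connections[(pkt[IP].dst, dport)] += 1

# a BPF filter narrows the traffic to the device of interest
sniff(filter="host " + DEVICE, prn=observe, count=100)

for (dst, port), count in connections.items():
    print("Destination Address:", dst, "Port:", port, "Connection Count:", count)
`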

[TODO: RELATES TO]{.tmp} []{#ntlimgqy .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.whatisitliketobeanelevator)
What is it like to be AN ELEVATOR?]{.method .descriptor} [What:
Understanding software systems by becoming them]{.what .descriptor}
[TODO: extend this text \.... how to observe software in the world
around you. How to observe an everyday software experience and translate
this into a flowchart )]{.tmp} [How: Creating a flowchart to incarnate a
software system you use every day]{.how .descriptor} [WARNING: Uninformed
members of the public may panic when confronted with a software
performance in a closed space.]{.warning .descriptor} [Example: What is
it like to be an elevator?]{.example .descriptor}

` {.verbatim}

what
is
it
like
to be
an
elevator?
from 25th floor to 1st floor
light on button light of 25th floor
check current floor
if current floor is 25th floor
no
if current floor is ...
go one floor up
... smaller than 25th floor
go one floor down
... bigger than 25th floor
stop elevator
turn button light off of 25th floor
turn door light on
open door of elevator
play sound opening sequence
yes
start
user pressed button of 25th floor
close door of elevator
if door is closed
user pressed 1st floor button
start timer for door closing
if timer is running more than three seconds
yes
yes
light on button
go one floor down
no
if current floor is 1st floor
update floor indicator
check current floor
stop elevator
no
yes
light off button
turn door light on
open door of elevator
play sound opening sequence
end
update floor indicator
`
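
The flowchart above can also be incarnated the other way around, as a
small state machine. This sketch follows the chart's labels rather than
any real elevator controller.

` {.verbatim}
class Elevator:
    def __init__(self, floor=25):
        self.floor = floor

    def call(self, target):
        print("light on button light of floor", target)
        while self.floor != target:              # check current floor
            self.floor += 1 if self.floor < target else -1
            print("update floor indicator:", self.floor)
        print("stop elevator")
        print("turn button light off of floor", target)
        print("turn door light on; open door; play sound opening sequence")

elevator = Elevator(floor=25)
elevator.call(1)    # from 25th floor to 1st floor
`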

[SHOW IMAGE HERE:
http://observatory.constantvzw.org/documents/joseph/flowchart.pdf]{.tmp}
[TODO: RELATES TO]{.tmp} []{#ndg2zte4 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.sidechannel)
Side Channel Analysis]{.method .descriptor} [Urgency: Side Channel
attacks are possible by disregarding the abstraction of software into
pure logic: the physical effects of the running of the software become
backdoors to observe its functioning, both threatening the control of
processes and re-affirming the materiality of software.]{.urgency
.descriptor} [WARNING: **engineers are good guys!**]{.warning
.descriptor} [Example]{.example .empty .descriptor} [SHOW IMAGE HERE:
https://www.tek.com/sites/default/files/media/image/119-4146-00%20Near%20Field%20Probe%20Set.png.jpg]{.tmp}
[SHOW IMAGE HERE:
http://gallery.constantvzw.org/index.php/Techno-Galactic-Software-Observatory/PWFU3377]{.tmp}
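
As a software-only illustration of the principle (not one of the TGSO
probes pictured above): an early-exit string comparison leaks, through
time alone, how many leading characters of a guess are correct. The
secret and the delay are of course contrived.

` {.verbatim}
import time

SECRET = "observatory"          # hypothetical secret

def insecure_check(guess):
    """Compares character by character and exits early on a mismatch."""
    for a, b in zip(SECRET, guess):
        if a != b:
            return False
        time.sleep(0.001)       # exaggerate the per-character work
    return len(guess) == len(SECRET)

def measure(guess, runs=5):
    start = time.perf_counter()
    for _ in range(runs):
        insecure_check(guess)
    return (time.perf_counter() - start) / runs

# guesses with more correct leading characters take measurably longer
for guess in ["xbservatory", "oxservatory", "obxervatory"]:
    print(guess, round(measure(guess), 5), "seconds")
`
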
[TODO: RELATES TO]{.tmp} [Collections / collecting]{.grouping}
[]{#njmzmjm1 .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.bestiary)
Compiling a bestiary of software logos]{.method .descriptor} [What:
Since the early days of GNU-linux and cemented through the ubiquitous
O\'Reilly publications, the visual culture of software relies heavily on
animal representations. But what kinds of animals, and to what
effect?]{.what .descriptor} [How]{.how .empty .descriptor}

Compile a collection of logos and note the metaphors for observation:

-   stethoscope
-   magnifying glass
-   long neck (giraffe)

[Example]{.example .empty .descriptor}

` {.verbatim}
% http://animals.oreilly.com/browse/
% [check Testing the testbed pads for examples]
% [something on bestiaries]
`
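
A hedged sketch for starting such a collection from the O\'Reilly page
listed above. The page structure (animal names carried in image alt
text) is an assumption and may have changed; the script needs the
third-party requests and beautifulsoup4 packages.

` {.verbatim}
import requests
from bs4 import BeautifulSoup

html = requests.get("http://animals.oreilly.com/browse/").text
soup = BeautifulSoup(html, "html.parser")

# collect whatever alt text the cover images carry
bestiary = [img.get("alt", "").strip()
            for img in soup.find_all("img") if img.get("alt")]
for animal in sorted(set(bestiary)):
    print(animal)
`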

[TODO: RELATES TO]{.tmp} []{#njm5zwm4 .anchor} []{#mmy2zgrl .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.testingtestbed)
Testing the testbed: testing software with observatory ambitions
(SWOA)]{.method .descriptor} [WARNING: this method may make more sense
if you first take a look at the [Something in the Middle Maybe
(SitMM)](http://pad.constantvzw.org/p/observatory.guide.sitmm) which is
an instance of a SWOA]{.warning .descriptor} [How: The interwebs hosts
many projects that aim to produce software for observing software, (from
now on Software With Observatory Ambitions (SWOA)). A comparative
methodology can be produced by testing different SWOA to observe
software of interest. Example: use different sniffing software to
observe wireless networks, e.g., wireshark vs tcpdump vs SitMM.
Comparing SWOA reveals what is seen as worthy of observation (e.g., what
protocols, what space, which devices), the granularity of the
observation (e.g., how is the observation captured, in what detail), the
logo and conceptual framework of choice etc. This type of observation
may be turned into a service (See also: Something in the Middle Maybe
(SitMM)).]{.how .descriptor} [When: Ideally, SWOA can be used everywhere
and in every situation. In reality, institutions, laws and
administrators like to limit the use of SWOA on infrastructures to
people who are also administering these networks. Hence, we are
presented with the situation that the use of SWOA is condoned when it is
done by researchers and pen testers (e.g., they were hired) and shunned
when done by others (often subject to name calling as hackers or
attackers).]{.when .descriptor} [What: Deep philosophical moment: most
software has a recursive observatory ambition (it wants to be observed
in its execution, output etc.). Debuggers, logs, dashboards are all
instances of software with observatory ambitions and cannot be
separated from software itself. Continuous integration is the act of
folding the whole software development process into one big feedback
loop. So, what separates SWOA from software itself? Is it the intention
of observing software with a critical, agonistic or adversarial
perspective vs one focused on productivity and efficiency that
distinguishes SWOA from software? What makes SWOA a critical practice
over other forms of software observation? If our methodology is testing
SWOA, then is it a meta critique of critique?]{.what .descriptor} [Who:
If you can run multiple SWOAs, you can do it. The question is: will
people like it if you turn your gaze on their SWOA based methods of
observation? Once again we find that observation can surface power
asymmetries and lead to defensiveness or desires to escape the
observation in the case of the observed, and an instinct to try to
conceal that observation is taking place.]{.who .descriptor} [Urgency:
If observation is a form of critical engagement in that it surfaces the
workings of software that are invisible to many, it follows that people
would develop software to observe (SWOAs). Testing SWOAs puts this form
of critical observation to test with the desire to understand how what
is made transparent through each SWOA also makes things invisible and
reconfigures power.]{.urgency .descriptor} [Note: Good SWOA software
usually uses an animal as a logo.:D]{.note .descriptor} [WARNING: Many
of the SWOA projects we looked at are promises more than running
software/available code. Much of it is likely to turn into obsolete
gradware, making testing difficult.]{.warning .descriptor} [TODO:
RELATES TO
http://pad.constantvzw.org/p/observatory.guide.bestiary]{.tmp} [TODO:
RELATES TO http://pad.constantvzw.org/p/observatory.guide.sitmm]{.tmp}
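
One concrete shape such a comparative test could take: capture the same
number of packets with tcpdump and with scapy, then compare what each
one chooses to show. Interface name and packet count are illustrative,
and both tools need the privileges sniffing requires.

` {.verbatim}
import subprocess
from scapy.all import sniff

COUNT, IFACE = 20, "eth0"

# tcpdump: one summary line per packet, name resolution disabled (-n)
tcpdump_out = subprocess.run(
    ["tcpdump", "-c", str(COUNT), "-i", IFACE, "-n"],
    capture_output=True, text=True).stdout.splitlines()

# scapy: full packet objects, dissected into layers
scapy_pkts = sniff(count=COUNT, iface=IFACE)

print("tcpdump saw:", tcpdump_out[0] if tcpdump_out else "nothing")
print("scapy saw:  ", scapy_pkts[0].summary() if scapy_pkts else "nothing")
print("scapy layers:", scapy_pkts[0].layers() if scapy_pkts else [])
`
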
[]{#mmmzmmrh .anchor}
[[Method:](http://pad.constantvzw.org/p/observatory.guide.reader)
Prepare a reader to think theory with software]{.method .descriptor}
[What: Compile a collection of texts about software.]{.what .descriptor}
[How: Choose texts from different realms. Software observations are
mostly done in the realm of the technological and the pragmatic. The
ecology of texts around software, moreover, consists first and foremost
of manuals, technical documentation and academic papers by software
engineers, and these all \'live\' in different realms. More recently, the
field of software studies opened up additional perspectives fuelled by
cultural studies and sometimes philosophy. By compiling a reader \...
ways of speaking/writing about. Proximity.]{.how .descriptor}
[Example]{.example .empty .descriptor}

` {.verbatim .wrap}
Pull some quotes from the reader, for example from the chapter: Observation and its consequences

Lilly Irani, Hackathons and the Making of Entrepreneurial Citizenship, 2015 http://sci-hub.bz/10.1177/0162243915578486

Kara Pernice (Nielsen Norman Group), Talking with Participants During a Usability Test, January 26, 2014, https://www.nngroup.com/articles/talking-to-users/

Matthew G. Kirschenbaum, Extreme Inscription: Towards a Grammatology of the Hard Drive. 2004 http://texttechnology.mcmaster.ca/pdf/vol13_2_06.pdf

Alexander R. Galloway, The Poverty of Philosophy: Realism and Post-Fordism, Critical Inquiry. 2013, http://cultureandcommunication.org/galloway/pdf/Galloway,%20Poverty%20of%20Philosophy.pdf
Edward Alcosser, James P. Phillips, Allen M. Wolk, How to Build a Working Digital Computer. Hayden Book Company, 1968. https://archive.org/details/howtobuildaworkingdigitalcomputer_jun67

Matthew Fuller, "It looks like you're writing a letter: Microsoft Word", Nettime, 5 Sep 2000. https://library.memoryoftheworld.org/b/xpDrXE_VQeeuDDpc5RrywyTJwbzD8eatYGHKmyT2A_HnIHKb

Barbara P. Aichinger, DDR Memory Errors Caused by Row Hammer. 2015 www.memcon.com/pdfs/proceedings2015/SAT104_FuturePlus.pdf

Fangfei Liu, Yuval Yarom, Qian Ge, Gernot Heiser, Ruby B. Lee. Last-Level Cache Side-Channel Attacks are Practical. 2015 http://palms.ee.princeton.edu/system/files/SP_vfinal.pdf
`

[TODO: RELATES TO
http://pad.constantvzw.org/p/observatory.guide.samequestion]{.tmp}
[]{#ytjmmmni .anchor}

Colophon

The Guide to techno-galactic software observing was compiled by Carlin
Wing, Martino Morandi, Peggy Pierrot, Anita, Christoph Haag, Michael
Murtaugh, Femke Snelting

License: Free Art License

Support:

Sources:

Constant, February 2018

::: {.footnotes}
1. [[[Haraway]{.fname}, [Donna]{.gname}, [Galison]{.fname},
[Peter]{.gname} and [Stump]{.fname}, [David J]{.gname}: [Modest
Witness: Feminist Diffractions in Science Studies]{.title},
[Stanford University Press]{.publisher}, [1996]{.date}.
]{.collection} [-\>](#eeffecbe)]{#ebceffee}
2. [Worksessions are intensive transdisciplinary moments, organised
twice a year by Constant. They aim to provide conditions for
participants with different experiences and capabilities to
temporarily link their practice and to develop ideas, prototypes and
research projects together. For the worksessions, primarily Free,
Libre and Open Source software is used and material that is
available under ??? [-\>](#fcdcaacb)]{#bcaacdcf}
3. [http://www.nam-ip.be [-\>](#ffeaecaa)]{#aaceaeff}
4. [http://www.etwie.be/database/actor/computermuseum-ku-leuven
[-\>](#dbabebfa)]{#afbebabd}
5. [[contributors]{.fname}, [Wikipedia]{.gname}: [Content-control
software --- Wikipedia, The Free Encyclopedia]{.title},
[2018]{.date}. [-\>](#fadefecf)]{#fcefedaf}
6. [[UrbanMinistry.org]{.fname}, [TechMission]{.gname}:
[SafeFamilies.org \| Accountability Software: Encyclopedia of Urban
Ministry]{.title}, [2018]{.date}. [-\>](#faebbffb)]{#bffbbeaf}
7. [[Content Watch Holdings]{.fname}, [Inc]{.gname}: [Protecting Your
Family]{.title}, [2018]{.date}. [-\>](#afcbcfbb)]{#bbfcbcfa}
8. [[websense.com]{.fname}, []{.gname}: [Explicit and transparent proxy
deployments]{.title}, [2012]{.date}. [-\>](#edbedede)]{#ededebde}
9. [[workrave.org]{.fname}, []{.gname}: [Frequently Asked
Questions]{.title}, [2018]{.date}. [-\>](#ddfbbbfc)]{#cfbbbfdd}
10. [[contributors]{.fname}, [Wikipedia]{.gname}: [Agile software
development --- Wikipedia, The Free Encyclopedia]{.title},
[2018]{.date}. [-\>](#ececbabd)]{#dbabcece}
11. [[contributors]{.fname}, [Wikipedia]{.gname}: [Scrum (software
development) --- Wikipedia, The Free Encyclopedia]{.title},
[2018]{.date}. [-\>](#caabcfee)]{#eefcbaac}
12. [[contributors]{.fname}, [Wikipedia]{.gname}: [The Manifesto for
Agile Software Development]{.title}, [2018]{.date}.
[-\>](#baffeabf)]{#fbaeffab}
13. [[Kruchten]{.fname}, [Philippe]{.gname}: [Agile's Teenage
Crisis?]{.title}, [2011]{.date}. [-\>](#faeebade)]{#edabeeaf}
:::

Custodians
In solidarity with Library Genesis and Sci-Hub
2015


contact:
[little.prince@custodians.online](mailto:little.prince@custodians.online)

# In solidarity with [Library Genesis](http://libgen.io) and [Sci-Hub](http://sci-hub.io)

In Antoine de Saint Exupéry's tale the Little Prince meets a businessman who
accumulates stars with the sole purpose of being able to buy more stars. The
Little Prince is perplexed. He owns only a flower, which he waters every day.
Three volcanoes, which he cleans every week. "It is of some use to my
volcanoes, and it is of some use to my flower, that I own them," he says, "but
you are of no use to the stars that you own".

There are many businessmen who own knowledge today. Consider Elsevier, the
largest scholarly publisher, whose 37% profit margin[1] stands in sharp contrast
to the rising fees, expanding student loan debt and poverty-level wages for
adjunct faculty. Elsevier owns some of the largest databases of academic
material, which are licensed at prices so scandalously high that even Harvard,
the richest university of the global north, has complained that it cannot
afford them any longer. Robert Darnton, the past director of Harvard Library,
says "We faculty do the research, write the papers, referee papers by other
researchers, serve on editorial boards, all of it for free … and then we buy
back the results of our labour at outrageous prices."[2] For all the work
supported by public money benefiting scholarly publishers, particularly the
peer review that grounds their legitimacy, journal articles are priced such
that they prohibit access to science to many academics - and all non-academics
- across the world, and render it a token of privilege.[3]

Elsevier has recently filed a copyright infringement suit in New York against
Science Hub and Library Genesis claiming millions of dollars in damages.[4] This
has come as a big blow, not just to the administrators of the websites but
also to thousands of researchers around the world for whom these sites are the
only viable source of academic materials. The social media, mailing lists and
IRC channels have been filled with their distress messages, desperately
seeking articles and publications.

Even as the New York District Court was delivering its injunction, news came
of the entire editorial board of highly-esteemed journal Lingua handing in
their collective resignation, citing as their reason the refusal by Elsevier
to go open access and give up on the high fees it charges to authors and their
academic institutions. As we write these lines, a petition is doing the rounds
demanding that Taylor & Francis doesn't shut down Ashgate[5], a formerly
independent humanities publisher that it acquired earlier in 2015. It is
threatened to go the way of other small publishers that are being rolled over
by the growing monopoly and concentration in the publishing market. These are
just some of the signs that the system is broken. It devalues us, authors,
editors and readers alike. It parasites on our labor, it thwarts our service
to the public, it denies us access[6].

We have the means and methods to make knowledge accessible to everyone, with
no economic barrier to access and at a much lower cost to society. But closed
access’s monopoly over academic publishing, its spectacular profits and its
central role in the allocation of academic prestige trump the public interest.
Commercial publishers effectively impede open access, criminalize us,
prosecute our heroes and heroines, and destroy our libraries, again and again.
Before Science Hub and Library Genesis there was Library.nu or Gigapedia;
before Gigapedia there was textz.com; before textz.com there was little; and
before there was little there was nothing. That's what they want: to reduce
most of us back to nothing. And they have the full support of the courts and
law to do exactly that.[7]

In Elsevier's case against Sci-Hub and Library Genesis, the judge said:
"simply making copyrighted content available for free via a foreign website,
disserves the public interest"[8]. Alexandra Elbakyan's original plea put the
stakes much higher: "If Elsevier manages to shut down our projects or force
them into the darknet, that will demonstrate an important idea: that the
public does not have the right to knowledge."

We demonstrate daily, and on a massive scale, that the system is broken. We
share our writing secretly behind the backs of our publishers, circumvent
paywalls to access articles and publications, digitize and upload books to
libraries. This is the other side of 37% profit margins: our knowledge commons
grows in the fault lines of a broken system. We are all custodians of
knowledge, custodians of the same infrastructures that we depend on for
producing knowledge, custodians of our fertile but fragile commons. To be a
custodian is, de facto, to download, to share, to read, to write, to review,
to edit, to digitize, to archive, to maintain libraries, to make them
accessible. It is to be of use to, not to make property of, our knowledge
commons.

More than seven years ago Aaron Swartz, who spared no risk in standing up for
what we here urge you to stand up for too, wrote: "We need to take
information, wherever it is stored, make our copies and share them with the
world. We need to take stuff that's out of copyright and add it to the
archive. We need to buy secret databases and put them on the Web. We need to
download scientific journals and upload them to file sharing networks. We need
to fight for Guerilla Open Access. With enough of us, around the world, we'll
not just send a strong message opposing the privatization of knowledge — we'll
make it a thing of the past. Will you join us?"[9]

We find ourselves at a decisive moment. This is the time to recognize that the
very existence of our massive knowledge commons is an act of collective civil
disobedience. It is the time to emerge from hiding and put our names behind
this act of resistance. You may feel isolated, but there are many of us. The
anger, desperation and fear of losing our library infrastructures, voiced
across the internet, tell us that. This is the time for us custodians, being
dogs, humans or cyborgs, with our names, nicknames and pseudonyms, to raise
our voices.

Share this letter - read it in public - leave it in the printer. Share your
writing - digitize a book - upload your files. Don't let our knowledge be
crushed. Care for the libraries - care for the metadata - care for the backup.
Water the flowers - clean the volcanoes.

30 November 2015

Dusan Barok, Josephine Berry, Bodo Balazs, Sean Dockray, Kenneth Goldsmith,
Anthony Iles, Lawrence Liang, Sebastian Luetgert, Pauline van Mourik Broekman,
Marcell Mars, spideralex, Tomislav Medak, Dubravka Sekulic, Femke Snelting...

* * *

1. Larivière, Vincent, Stefanie Haustein, and Philippe Mongeon. “[The Oligopoly of Academic Publishers in the Digital Era.](http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502)” PLoS ONE 10, no. 6 (June 10, 2015): e0127502. doi:10.1371/journal.pone.0127502; “[The Obscene Profits of Commercial Scholarly Publishers.](http://svpow.com/2012/01/13/the-obscene-profits-of-commercial-scholarly-publishers/)” svpow.com. Accessed November 30, 2015.  ↩

2. Sample, Ian. “[Harvard University Says It Can’t Afford Journal Publishers’ Prices.](http://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices)” The Guardian, April 24, 2012, sec. Science. theguardian.com.  ↩
3. “[Academic Paywalls Mean Publish and Perish - Al Jazeera English.](http://www.aljazeera.com/indepth/opinion/2012/10/20121017558785551.html)” Accessed November 30, 2015. aljazeera.com.  ↩
4. “[Sci-Hub Tears Down Academia’s ‘Illegal’ Copyright Paywalls.](https://torrentfreak.com/sci-hub-tears-down-academias-illegal-copyright-paywalls-150627/)” TorrentFreak. Accessed November 30, 2015. torrentfreak.com.  ↩
5. “[Save Ashgate Publishing.](https://www.change.org/p/save-ashgate-publishing)” Change.org. Accessed November 30, 2015. change.org.  ↩
6. “[The Cost of Knowledge.](http://thecostofknowledge.com/)” Accessed November 30, 2015. thecostofknowledge.com.  ↩
7. In fact, with the TPP and TTIP being rushed through the legislative process, no domain registrar, ISP provider, host or human rights organization will be able to prevent copyright industries and courts from criminalizing and shutting down websites "expeditiously".  ↩
8. “[Court Orders Shutdown of Libgen, Bookfi and Sci-Hub.](https://torrentfreak.com/court-orders-shutdown-of-libgen-bookfi-and-sci-hub-151102/)” TorrentFreak. Accessed November 30, 2015. torrentfreak.com.  ↩
9. “[Guerilla Open Access Manifesto.](https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt)” Internet Archive. Accessed November 30, 2015. archive.org.  ↩

Dean, Dockray, Ludovico, Broekman, Thoburn & Vilensky
Materialities of Independent Publishing: A Conversation with AAAAARG, Chto Delat?, I Cite, Mute, and Neural
2013


Materialities of Independent Publishing: A Conversation with AAAAARG,
Chto Delat?, I Cite, Mute, and Neural

Jodi Dean, Sean Dockray, Alessandro Ludovico, Pauline van Mourik
Broekman, Nicholas Thoburn, and Dmitry Vilensky
Abstract: This text is a conversation among practitioners of independent political
media, focusing on the diverse materialities of independent publishing associated with
the new media environment. The conversation concentrates on the publishing projects
with which the participants are involved: the online archive and conversation platform
AAAAARG, the print and digital publications of artist and activist group Chto Delat?,
the blog I Cite, and the hybrid print/digital magazines Mute and Neural. Approaching
independent media as sites of political and aesthetic intervention, association, and
experimentation, the conversation ranges across a number of themes, including: the
technical structures of new media publishing; financial constraints in independent
publishing; independence and institutions; the sensory properties of paper and the
book; the politics of writing; design and the aesthetics of publishing; the relation
between social media and communicative capitalism; publishing as art; publishing as
self-education; and post-digital print.
Keywords: independent publishing, art publishing, activist publishing, digital
archive, blog, magazine, newspaper

BETWEEN DISCOURSE AND ACT
Nicholas Thoburn (NT) In one way or another all of you have an investment
in publishing as a political practice, where publishing might be understood
loosely as a political ‘gesture’ located ‘between the realm of discourse and the
material act’.1 And in large measure, this takes the path of critical intervention
in the form of the media with which you work - newspaper, blog, magazine,
and digital archive. That is, media come forward in your publishing practice
and writing as complex sets of materials, capacities, and effects, and as sites
of political intervention and critical reflection.
The aim of this conversation is to concentrate on these materials,
capacities, and effects of independent media (a term, ‘independent media’,
that I use advisedly, given its somewhat pre-digital associations and a
nagging feeling that it lacks purchase on the complexity of convergent media
environments). I’m keen as much as possible to keep each of your specific

1. Nat Muller and Alessandro Ludovico, ‘Of Process and Gestures: A Publishing Act’, in Alessandro Ludovico and Nat Muller (eds) The Mag.net Reader 3, London, OpenMute, p6.

publishing projects at the forefront of the conversation, to convey a strong
sense of their ‘materialities’: the technical and aesthetic forms and materials
they mobilise; what strategies of authorship, editorship, or collectivity
they employ; how they relate to publics, laws, media paradigms, financial
structures; how they model or represent their media form, and so on. To start
us off, I would like to invite each of you to introduce your publishing project
with a few sentences: its aims, the mediums it uses, where it’s located, when
established - that kind of thing.

2. Jodi Dean, Publicity’s Secret: How Technoculture Capitalizes on Democracy, London, Cornell University Press, 2002.

3. Alessandro Ludovico, Post-Digital Print: The Mutation of Publishing Since 1894, Eindhoven, Onomatopee, 2012.

Jodi Dean (JD) I started my blog, I Cite, in January 2005. It’s on the Typepad
platform. I pay about 20 dollars a year for some extra features.
I first started the blog so that I could ‘talk’ to people in a format that was
not an academic article or an email. Or maybe it’s better to say that I was
looking for a medium in which to write, where what I was writing was not
immediately constrained by the form of an academic piece, written alone,
appearing once and late, if at all, or by the form of an email which is generally
of a message sent to specific people, who may or may not appreciate being
hailed or spammed every time something occurs to me.
There was another reason for starting the blog, though. I had already
begun formulating my critique of communicative capitalism (in the book
Publicity’s Secret and in a couple of articles).2 I was critical of the way that
participatory media entraps people into a media mentality, a 24/7 mindset
of reaching an audience and competing with the mainstream press. I thought
that if my critique is going to be worth anything, I better have more firsthand
experience, from the very belly of the beast.
Alessandro Ludovico (AL) I’m the editor in chief of Neural, a printed and
online magazine established in 1993 in Bari (Italy) dealing with new media
art, electronic music and hacktivism. It’s a publication which beyond being
committed to its topics, always experimented with publishing in various ways.
Furthermore, I’m one of the founders (together with Simon Worthington of
Mute and a few others) of Mag.net, electronic cultural publishers, a network
of magazines related to new media art whose slogan is: ‘collaboration is
better than competition’. Finally, I’m finishing a book called Post-Digital
Print, about the historical and contemporary relationship between offline
and online publishing.3
Sean Dockray (SD) About five years ago, I wrote this description:
AAAARG is a conversation platform - at different times it performs as a school, or
a reading group, or a journal.
AAAARG was created with the intention of developing critical discourse outside
of an institutional framework. But rather than thinking of it like a new building,
imagine scaffolding that attaches onto existing buildings and creates new
architectures between them.
More straightforwardly, the project is a website where people share texts:
usually PDFs, anything from a couple of inspiring pages to a book or a
collection of essays. The people who use the site tend to be writers, artists,
organizers, activists, curators, architects, librarians, publishers, designers,
philosophers, teachers, or students themselves. Although the texts are most
often in the domain of critical or political theory, there are also technical
documents, legal decisions, works of fiction, government declarations, poetry
collections and so on. There is no moderation.
It’s hard to imagine it now as anything other than it is - which is really
a library, and not a school, a reading group, or a journal! Still, AAAARG
supports quite a few self-organised reading groups, it spawned a sister project
called The Public School, and now produces a small online publication,
‘Contents’. It’s used by many people in many ways, and even when that use is
‘finished,’ the texts remain available on the site for others to use as a shared
resource.
Dmitry Vilensky (DV) The workgroup Chto Delat? (What Is to Be Done?) has
been publishing a newspaper, of the same name, since 2003. The newspaper
was edited by myself and David Riff (2003-2008) in collaboration with the
workgroup Chto Delat?, and since 2008 is mostly edited by me in collaboration
with other members of the group.
The newspaper is bilingual (Russian and English), and appears on
an irregular basis (roughly 4-5 times a year). It varies between 16 and 24
pages (A3). Its editions (1,000-9,000 copies) are distributed for free at
different cultural events, exhibitions, social forums, political gatherings,
and universities, but it has no fixed network of distribution. At the moment,
with an on-line audience much bigger than that for the paper version of the
newspaper, we concentrate more on newspapers as part of the exhibition and
contextualisation of our work - a continuation of art by other means.
Each newspaper addresses a theme or problem central to the search for
new political subjectivities, and their impact on art, activism, philosophy, and
cultural theory. So far, the rubrics and sections of the paper have followed a
free format, depending on theme at hand. There are no exhibition reviews.
The focus is on the local Russian situation, which the newspaper tries to link
to a broader international context. Contributors include artists, art theorists,
philosophers, activists, and writers from Russia, Western Europe and the
United States.
It is also important to focus on the role of publication as translation
device, something that is really important in the Russian situation – to
introduce different voices and languages and also to have a voice in different
international debates from a local perspective.

Pauline van Mourik Broekman (PvMB) After so many years - we’ve been at it

4. See Pauline van Mourik Broekman (2011) ‘Mute’s 100% Cut by ACE - A Personal Consideration of Mute’s Defunding’, http://www.metamute.org/en/mute_100_per_cent_cut_by_ace

5. Régis Debray, ‘Socialism: A Life-Cycle’, New Left Review 46 (2007): 5-28.

for 17! - I seem to find it harder and harder to figure out what ‘Mute’ is. But
sticking to the basic narrative for the moment, it formed as an artist-initiated
publication engaging with the question of what new technologies (read:
the internet and convergent media) meant for artistic production; asking
whether, or to what degree, the internet’s promise of a radically democratised
space, where a range of gate-keepers might be challenged, would upset the
‘art system’ as was (and sadly, still is). Since that founding moment in 1994,
when Mute appeared appropriating the format of the Financial Times, as
producers we have gradually been forced to engage much more seriously
- and materially - with the realities of Publishing with a capital ‘P’. Having
tried out six different physical formats in an attempt to create a sustainable
niche for Mute’s critical content - which meanwhile moved far beyond its
founding questions - our production apparatus now finds itself strangely
distended across a variety of geographic, institutional, professional and social
spaces, ranging from the German Leuphana University (with whom we have
recently started an intensive collaboration), to a series of active email lists,
to a small office in London’s Soho. It will be interesting to see what effect
this enforced virtualisation, which is predominantly a response to losing our
core funding from Arts Council England, will have on the project overall.4
Our fantastic and long-serving editorial board are thankfully along for the
ride. These are: Josephine Berry Slater, Omar El-Khairy, Matthew Hyland,
Anthony Iles, Demetra Kotouza, Hari Kunzru, Stefan Szczelkun, Mira Mattar
and Benedict Seymour.
WRITING POLITICS
NT Many thanks for your introductory words; I’m very pleased - they set
us off in intriguing and promising directions. I’m struck by the different
capacities and aims that you’ve highlighted in your publishing projects.
Moving now to focus on their specific features and media forms, I’d like us
to consider first the question of political writing, which comes across most
apparently in the descriptions from Jodi and Dmitry of I Cite and Chto
Delat?. This conversation aims to move beyond a narrow focus on textual
communication, and we will do so soon, but writing is clearly a key component
of the materialities of publishing. Political writing published more or less
independently of corporate media institutions has been a central aspect of
the history of radical cultures. Régis Debray recently identified what he calls
the ‘genetic helix’ of socialism as the book, the newspaper, and the school/
party.5 He argues, not uniquely, that in our era of the screen and the image,
this nexus collapses, taking radical politics with it - it’s a gloomy prognosis.
  Jodi and Dmitry, whether or not you have some sympathy for Debray’s
diagnosis, I think it is true to say that political writing still holds for you some
kind of political power, albeit that the conjunction of writing and radicalism
has become most complicated. Dmitry, you talk of the themes of Chto Delat?
newspapers contributing to a ‘search for new political subjectivities’. Can you
discuss any specific examples of that practice - however tentative or precarious
they may be - from the concrete experience of publishing Chto Delat? Also, I’m
interested in the name of your group, ‘What Is to Be Done?’ What effect does a
name with such strong associations to the Russian revolutionary tradition have
in Russia - or indeed the US and elsewhere - today? I’m reminded of course
that it is in Lenin’s pamphlet of that title that he sets out his understanding
of the party newspaper as ‘collective organiser’ - not only in its distribution
and consumption, but in its production also. How do you relate to that model
of the political press?
  And Jodi, with regard to your comment about I Cite enabling a different
mode of ‘talk’ or ‘writing’ to that of academic writing or email, is there
a political dimension to this? Put another way, you have been exploring
the theme of ‘communism’ in your blog, but does this link up with the
communicative form of blog talk at all - or are blogs always and only in the
‘belly of the beast’?
JD Is there a political dimension to I Cite’s enabling a different mode of
‘talk’ or ‘writing’? This is hard. My first answer is no. That is, the fact of
blogging, that there are blogs and bloggers, is not in itself any more politically
significant than the fact that there is television, radio, film, and newspapers.
But saying this immediately suggests the opposite and I need to answer yes.
Just as with any medium, blogs have political effects. Much of my academic
writing is about the ways that networked communication supports and furthers
communicative capitalism, helping reformat democratic ideals into means for
the intensification of capitalism - and hence inequality. Media democracy, mass
participation in personal media, is the political form of neoliberal capitalism.
Many participate, a few profit thereby. The fact that I talk about communism
on my blog is either politically insignificant or significant in a horrible way.
As with the activity of any one blog or blogger, it exemplifies and furthers
the hold of capitalism as it renders political activity into individual acts of
participation. Politics becomes nothing but the individual’s contribution to
the flow of circulating media.
Well, this is a pretty unpleasant way for me to think about what I do on
I Cite, why I have kept track of the extremes of finance capital for over five
years, why I blog about Žižek’s writing, why I’ve undertaken readings of
Lenin, etc. And lately, since the Egyptian revolution, the mass protests in
Greece and Spain, and the movement around Occupy Wall Street in the
US, I’ve been wondering if I’ve been insufficiently dialectical or have overplayed the negative. What this amazing outpouring of revolutionary energy
has made me see is the collective dimension of blogs and social media. The
co-production of a left communicative common, that stretches across media
and is constituted through photos and videos uploaded from the occupations,
massive reposting, forwarding, tweeting, and lots of blog commentary, and
that includes mainstream journalistic outlets like the Guardian, Al Jazeera, and
the New York Times, this new left communicative common seems, for now at
any rate, to have an urgency and intensity irreducible to any one of its nodes.
It persists as the flow between them and the way that this flow is creating
something like its own media storm or front (I’m thinking in part here of
some of the cool visualisations of October 15 on Twitter - the modelling of
the number of tweets regarding demonstrations in Rome looks like some kind
of mountain or solar flare). I like thinking of I Cite as one of the thousands
of elements contributing to this left communicative common.
DV When I talk about a ‘search for new political subjectivities’ I mean, first
of all, that we see our main task as an educational process - to research certain
issues and try to open up the process of research to larger audiences who
could start to undertake their own investigations. Formally, we are located
in the art world, but we are trying to escape from the professional art public
and address the issues that we deal with to audiences outside of the art world.
We also have a very clear political identification embodied in the name of
our collective. The question of ‘What is to be done?’ is clearly marked by
the history of leftist struggle and thinking. The name of our group is an
actualisation of the history of the workers’ movement and revolutionary
theory in Russia. The name in itself is a gesture of actualisation of the past. I
was very glad when the last Documenta decided to choose the same title for
their leitmotif on education, so that now a rather broad public would know
that this question comes from a novel written by the Russian nineteenth
century writer Nikolai Chernyshevsky, and directly refers to the first socialist
workers’ self-organisation cells in Russia, which Lenin later actualised in his
famous 1902 pamphlet What Is to Be Done? Chto Delat? also sees itself as a
self-organizing collective structure that works through reflections on, and
redefinitions of, the political engagement of art in society.
To be engaged means for us that we practice art as a production of
knowledge, as a political and economic issue - and not a solitary contemplation
of the sublime or entertainment for the ruling class. It means to be involved
with all the complexities of contemporary social and political life and make
a claim that we, with all our efforts, are able to influence and change this
condition for the better. Whatever one means by ‘better’, we have an historical
responsibility to make the world more free, human and to fight alienation.
To openly display one’s leftism in the Russian historical moment of 2003
was not only a challenge in the sense of an artistic gesture; it also meant
adopting a dissident civic stance. For my generation, this was a kind of return
to Soviet times, when any honest artist was incapable of having anything to
do with official culture. In the same way, for us the contemporary Russian art
establishment had become a grotesque likeness of late-Soviet official culture,
to which it was necessary to oppose other values. So this was not a particularly
unique experience for us: we simply returned to our dissident youth. Yet at
the same time, in the 2000s, we had more opportunities to realise ourselves,
and we saw ourselves as part of an overall movement. Immediately after us,
other new civic initiatives arose with which it was interesting to cooperate:
among them, the Pyotr Alexeev Resistance Movement (2004), the Institute
for Collective Action (2004), the Vpered Socialist Movement (2005), and the
Russian Social Forum (2005). It was they who became our main reference
group: we still draw our political legitimacy from our relationships with them
and with a number of newer initiatives that have clearly arisen under our
influence.
At the same time, having positioned our project as international, we began
discovering new themes and areas of struggle: the theory of the multitude,
immaterial labour, social forums, the movement of movements, urban
studies, research into everyday life, etc. We also encountered past thinkers
(such as Cornelius Castoriadis and Henri Lefebvre) who were largely absent
from Russian intellectual discourse, as well as newer figures that were much
discussed at that time (such as Negri, Virno, and Rancière). There was a
strong sense of discovery, and this always gives one a particular energy. We
consciously strove to take the position of Russian cultural leftists who were
open-minded and focused on involvement in international cultural activist
networks, and we have been successful in realizing this aim.
MAGAZINE PLATFORM
NT I was a little concerned that starting a conversation about the ‘materialities’
of publishing with a question about writing and text might lead us in the wrong
direction, but as is clear from Jodi’s and Dmitry’s comments, writing is of
course a material practice with its own technological and publishing forms,
cognitive and affective patterns, temporal structures, and subjectifying powers.
With regard to the materialities of digital publishing, your description, Jodi,
of a ‘media storm’ emerging from the Occupy movement is very suggestive
of the way media flows can aggregate into a kind of quasi-autonomous entity,
taking on a life of its own that has agential effects as it draws participants up
into the event. In the past that might have been the function of a manifesto
or slogan, but with social media, as you suggest, the contributing parts to
this agential aggregate become many and various, including particular blogs,
still and moving image files, analytic frameworks, slogans or memes (‘We
are the 99%’), but also more abstract forms such as densities of reposting
and forwarding, and, in that wonderful ‘VersuS’ social media visualisation
you mention, cartographies of data flow. Here a multiplicity of social media
communications, each with their particular communicative function on the
day, are converted into a strange kind of collective, intensive entity, a digital
‘solar flare’ as you put it.6 Its creators, ‘Art is Open Source’, have made
some intriguing comments about how this intensive mapping might be used

6. Art Is Open Source, ‘VersuS - Rome, October 15th, the Riots on Social Networks’, http://www.artisopensource.net/2011/10/16/versus-rome-october15th-the-riots-onsocial-networks/

7. See http://upload.wikimedia.org/wikipedia/commons/2/24/Chartist_meeting%2C_Kennington_Common.jpg

tactically in real time and, subsequently, as a means of rethinking the nature
and representational forms of collective action - it would be interesting in this
regard to compare the representational effects of this Twitter visualisation with
the photograph of the 1848 ‘monster meeting’ of the Chartists in Kennington
Common, said to be the first photograph of a crowd.7
But returning to your own publishing projects, I’m keen to hear more from
Pauline and Sean about the technical and organizational structure of Mute
and AAAAARG. Pauline, as Mute has developed from a printed magazine to
the current ‘distended’ arrangement of different platforms and institutions,
has it been accompanied by changes in the way the editorial group have
characterised or imagined Mute as a project? And can you comment more on
how Mute’s publishing platforms and institutional structures are organised? I
would be interested to hear too if you see Mute as having any kind of agential
effects or quasi-autonomy, along the lines mentioned above - are there ways
in which the magazine itself serves to draw certain relations between people,
things, and events?
PvMB Reading across these questions I would say that, in Mute’s case, a
decisive role has been played by the persistently auto-didactic nature of the
project; also the way we tend to see-saw between extreme stubbornness and
extreme pragmatism. Overall, our desire has been, simply, to produce the
editorial content that feels culturally, socially, politically ‘necessary’ in the
present day (and of course this is historically and even personally contingent;
a fundamentally embodied thing), and to find and develop the forms in
which to do that. These forms range from textual and visual styles and idioms
(artistic, experimental, academic, journalistic), the physical carriers for them,
and then the software systems and infrastructures for which these are also
converted and adapted. It bears re-stating that these need to be ones we are
able to access, work with; and that grant us the largest possible audience for
our work.
If you mix this ‘simple’ premise with the cultural and economic context
in which we found ourselves in the UK, then you have to account for its
interaction with a whole raft of phenomena, ranging from the dot com
boom and yBa cultures of the ’90s; the New Labour era (with its Creative
Industries and Regeneration-centric funding programmes); the increasing
corporatisation of mainstream cultural institutions and media; the explosion
of cheap, digital tools and platforms; the evolution of anti-capitalist struggles
and modes of activism; state incursion into/control over all areas of the
social body; discourses around self-organisation; the financial crisis; and so
on and so forth. In this context, which was one of easy credit and relatively
generous state funding for culture, Mute for a long time did manage to eke
out a place for its activity, adapting its working model and organisational
economy in a spirit of - as I said - radical pragmatism. The complex material
and organisational form that has resulted from this (which, to some people’s
surprise, includes things like consultancy services in ‘digital strategy’ aimed
at the cultural sector, next to broadly leftist cultural critique) may indeed
have some kind of agential power, but it is really very hard to say what it is,
particularly since we resist systematic analysis of, and ‘singularising’ into,
homogenous categories of ‘audience’ or ‘client base’.
Listening to other small, independent publications analyse their
developmental process (like I recently did with, to name one example, the
journal Collapse), I think there are certain processes at play which recur in
many different settings.8 For me the most interesting and important of these
is the way that a journal or magazine can act as a kind of connection engine
with ‘strangers’, due to its function as a space of recognition, affinity, or
attractive otherness (with this I mean that it’s not just about recognising and
being semi-narcissistically drawn to an image of oneself, one’s own subjectivity
and proclivities; but the manner readers are drawn to ‘alien’ ideas that are
nonetheless compelling, troubling, or intriguing - hence drawing them
into the reader - and potentially even contributor - circle of that journal). If
there’s quite an intense editorial process at the ‘centre’ of the journal - like
there is, and has always been, with Mute - then this connection-engine draws
people in, propels people out, in a continual, dynamic process, which, due
to its intensity, very effectively blurs the lines of ‘professionalism’, friendship,
editorial, social, political praxis.
For fear of being too waffley or recherché about this, I’d say this was - if
any - the type of agential power Mute also had, and that this becomes heavily
internationalised by dint of its situation on the Internet. In terms of how
Editors then conjure that, each one would probably do it differently - some
seeing it more like a traditional (print) journal, some getting quite swallowed
up by discourses around openness/distributedness/community-participation.
Aspects of that characterisation have probably also changed over time, in the
sense that, circa 2006/7, we might have held onto a more strictly autonomous
figure for our project, which is something I don’t think even the most hopeful
are able to do now – given our partnerships with an ‘incubator’ project in
a university (Leuphana), or our state funding for a commercially oriented
publishing-technology project (Progressive Publishing System / PPS).9 Having
said all that, the minute any kind of direct or indirect manipulation of
content started to occur, our editors would cease to be interested, so whatever
institutional affiliations we might be open to now that we would not have been
several years ago, it remains a delicate balance.
ARCHIVE SCAFFOLDING
NT Sean, you talk very evocatively of AAAAARG as a generative ‘scaffolding’
between institutions. Can you say more about this? Does this image of
scaffolding relate to discourses of media ‘independence’ or ‘institutional
critique’? And if scaffolding is the more abstract aspect of AAAAARG - its
governing image - can you talk concretely about how specific aspects of the
AAAAARG platform function to further (and perhaps also obstruct) the
scaffolding? It would be interesting to hear too if this manner of existence
runs into any difficulties - do some institutions object to having scaffolding
constructed amidst them?

8. http://www.afterall.org/online/publicationsuniaayp; http://urbanomic.com/index.php

9. http://www.metamute.org/services/r-d/progressive-publishing-system
SD The image of scaffolding was simply a way of describing an orientation
with respect to institutions that was neither inside nor outside, dependent
nor independent, reformist or oppositional, etc. At the time, the institutions
I meant were specifically Universities, which seemed to have absorbed theory
into closed seminar rooms, academic formalities, and rarefied publishing
worlds. Especially after the momentum of the anti-globalisation movement
ran into the aftermath of September 11, criticality had more or less retreated,
exhausted within the well-managed circuits of the academy. ‘Scaffolding’ was
meant to allude to both networked communication media and to prefigurative,
improvisational quasi-institutions. It suggested the possibility of the office
worker who shuts her door and climbs out the window.
How did AAAAARG actually function with respect to this image? For
one, it circulated scans of books and essays outside of their normal paths
(trajectories governed by geographic distribution, price, contracts, etc.) so
that they became available for people who previously didn’t have access.
People eventually began to ask others for scans or copies of particular texts,
and when those scans were uploaded they stayed available on the site. When
a reading group uploaded a few texts as a way to distribute them among
members, those texts also stayed available. Everything stayed available. The
concept of ‘Issues’ provided a way for people to make subjective groupings
of texts, from ‘anti-austerity encampment movements’ to ‘DEPOSITORY TO
POST THE WRITTEN WORKS OF AMERICAN SOCIALISM. NO SOCIAL
SCIENCES PLEASE.’ These groupings could be shared so that anyone might
add a text into an Issue, an act of collective bibliography-making. The idea
was that AAAAARG would be an infinite resource, mobilised (and nurtured)
by reading groups, social movements, fringe scholars, temporary projects,
students, and so on.
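The ‘Issues’ mechanism is simple enough to picture as a tiny data model. The sketch below is a generic reconstruction in Python, not AAAAARG’s actual schema; the issue titles, file names, and the add_to_issue helper are invented for illustration.

    # A toy model of collective bibliography-making: anyone may file any
    # text under any 'Issue', and groupings only ever accumulate.
    from collections import defaultdict

    issues = defaultdict(list)  # issue title -> list of (text, contributor)

    def add_to_issue(issue, text, contributor):
        """Anyone may add a text to an Issue; nothing is removed."""
        issues[issue].append((text, contributor))

    add_to_issue("anti-austerity encampment movements", "some-text.pdf", "reader_a")
    add_to_issue("anti-austerity encampment movements", "another-text.pdf", "reader_b")
    print(issues["anti-austerity encampment movements"])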
My history is too general to be accurate and what I’m about to write is too
specific to be true, but I’ll continue anyway: due in part to the seductiveness of
The Coming Insurrection as well as the wave of student occupations beginning in
2009 (many accompanied by emphatic communiqués with a theoretical force
and refusal to make demands) it felt as though a plug had been pulled. Or
maybe that’s just my impression. But the chain of events - from the revolution
in Tunisia to Occupy Everything, but also the ongoing haemorrhaging of
social wealth into the financial industry - has certainly re-oriented political
discourse and one’s sense of what is possible.
As regards your earlier question, I’ve never felt as though AAAARG has had
any agential power because it’s never really been an agent. It didn’t speak or
make demands; it’s usually been more of a site of potential or vision of what’s
coming (for better or worse) than a vehicle for making change. Compared
to publishing bodies, it certainly never produced anything new or original,
rather it actively explored and exploited the affordances of asynchronous,
networked communication. But all of this is rather commonplace for what’s
called ‘piracy,’ isn’t it?
Anyway, yes, some entities did object to the site - AAAARG was ultimately
taken down by the publisher Macmillan over certain texts, including Beyond
Capital.
NT AAAAARG’s name has varied somewhat over time. Can you comment
on this? Does its variability relate at all to the structure and functionality of
the web?
SD When people say or write the name they have done it in all kinds of
different ways, adding (or subtracting) As, Rs, Gs, and sometimes Hs. It’s had
different names over time, usually adding on As as the site has had to keep
moving. Since this perpetual change seems to be part of the nature of the
project, my convention has been to be deliberately inconsistent with the name.
I think one part of what you’re referring to about the web is the way in
which data moves from place to place in two ways - one is that it is copied
between directories or computers; and the other is that the addressing is
changed. Although it seems fairly stable at this point, over time it changes
significantly with things slipping in and out of view. We rely on search engines
and the diligence of website administrators to maintain a semblance of stability
(through 301 redirects, for example) but the reality is quite the opposite. I’m
interested in how things (files or simply concepts) circulate within this system,
making use of both visibility and invisibility. Another related dimension would
be the ease of citation, the ways in which both official (executed internally) and
unofficial (accomplished from the outside) copies of entire sites are produced
and eventually confront one another. I’ve heard of people who have backed
up the entirety of AAAAARG, some of whom even initiate new library projects
(such as Henry Warwick’s Alexandria project). The inevitable consequence
of all of this seems to be that the library manifests itself in new places and in
new ways over time - sometimes with additional As, but not always.
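For readers unfamiliar with the mechanism Dockray mentions, a 301 redirect is simply a server response declaring that content has permanently moved. A toy responder, with placeholder example.org addresses standing in for any real site:

    # A minimal HTTP server issuing 301 ('moved permanently') redirects -
    # the administrative diligence that keeps old addresses pointing at
    # relocated content. Addresses are placeholders, purely illustrative.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(301)  # tell clients the move is permanent
            self.send_header("Location", "http://new.example.org" + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8080), RedirectHandler).serve_forever()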
EXPERIMENTING WITH MEDIA FORM
NT The expression ‘independent media’ may still have some tactical use to
characterise a publishing space and practice in distinction from commercial
media, but it’s clear from what Pauline and Sean say here that Mute and
AAAAARG have moved a long way from the analytic frameworks of media
‘independence’ as some kind of autonomous or liberated media space. We
might characterise these projects more as ‘topological’ media forms: neither
inside nor outside institutions, but emergent from the interaction of diverse
platforms, political conjunctures, contributors, readers, concepts, and
financial or legal structures. Media projects in this image of topology would be
immanent to those diverse material relations, not delimited and autonomous
bodies carved out from them. (Not, of course, that this kind of distributed
and mutable structure in itself guarantees progressive political effects.)
I’d like to continue with this discussion of media form and consider in
more detail some specific instances of experimentation with publishing
practice. It seems to me that it is significant that most of you have a relation
to art practice. The work that Humanities researchers and political activists
generate with poststructuralist or Marxist theory should necessarily be self-critical of its textual and media form, but it frequently fails to be so. Whereas
reflexive approaches would seem to be less easily avoided in art practice, at
least once it engages with the same body of theory - shoot me down if that’s
naive! In any case, I would venture that experimentation in publishing form
has a central place in the media projects we’re discussing. Alessandro, you
make that point, above, that Neural has ‘always experimented with publishing
in various ways’. Can you describe particular examples? It would be very
interesting to hear from you about Neural in this regard, but also about your
art projects ‘Amazon Noir’ and ‘Face to Facebook’.
AL Neural started surrounded by the thrills of the rising global ‘telematic’
networks in 1993, reflecting an interest in intertwining culture and technology
with publishing (whether cyberpunk science fiction, internet artworks, or hacker
technologies and practices) in both print and digital media. So, printing a
magazine about digital art and culture in that historical moment meant being
surrounded by stimuli that pushed beyond the usual structural design
forms and conceptual paradigms of publishing. After almost two decades we
can also recognise that that time was the beginning of the most important
mutation of publishing, through its new networked, screen-based and real-time
dimensions. And the printed page also started to take on a different role
in the late 2000s, though this role is still to be fully defined.
At that time, in the mid-1990s, Neural tried to experiment with publishing
through different perspectives. First, aesthetically: the page numbering was
strictly in binary numbers, just zeros and ones, even if the printer started to
complain that this was driving him crazy. But also sensorially: we referred
to optical art, publishing large ‘optical’ artworks in the centrefold; and we
published ‘stereograms’, apparently rude black-and-white images that, when
viewed from a different angle, revealed a three-dimensional picture, tricking
the readers’ eyes and drawing them into a new visual dimension for a while.
And finally, politically: in issue #18 we published a hacktivist fake, a double
page of fake stickers created by the Italian hacker laboratories’ network.
These fake stickers sarcastically simulated the real ones that are mandatory
on any book or CD/DVD sold in Italy, because of the strict law supporting the
national Authors’ and Musicians’ Society (SIAE). On the ones we published the
‘Unauthorized Duplication Prohibited’ sentence was replaced by: ‘Suggested
Duplication on any Media’.
As another example, in issue #30 we delivered ‘Notepad’ to all our
subscribers - an artwork by the S.W.A.M.P. duo. It was an apparently ordinary
yellow legal pad, but each ruled line, when magnified, reveals itself to be
‘microprinted’ text enumerating the full names, dates, and locations of each
Iraqi civilian death on record over the first three years of the Iraq War. And
in issue #40 we’ve printed and will distribute in the same way a leaflet of
the Newstweek project (a device which hijacks major online news websites,
changing them while you’re accessing the internet on a wireless network) that at
first glance seems to be a classic telco corporate leaflet ad. All these examples
try to expand the printed page to an active role that transcends its usual mode
of private reading.
With these and other experiments in publishing, we’ve tried to avoid the
ephemerality that is the norm in ‘augmented’ content, where it exists just for
the spectacular sake of it. Placing a shortcut to a video through a QR code
can be effective if the connection between the printed resource and the online
content is not going to disappear soon, otherwise the printed information
will remain but the augmentation will be lost. And instead of augmenting the
experience in terms of entertainment, I’m much more in favour of triggering
specific actions (like supporting the online processes) and changes (like
taking responsibility for activating new online processes) through the same
smartphone-based technologies.
Another feature of our experimentation concerns the archive. The printing
and distribution of paper content has become an intrinsic and passive form of
archiving, when this content is preserved somewhere by magazine consumers,
in contrast to the potential disposability of online content which can simply
disappear at any minute if the system administrator doesn’t secure enough
copies. This is why I’ve tried to develop both theoretically and practically the
concept of the ‘distributed archive’, a structure where people personally take
the responsibility to preserve and share printed content. There are already
plenty of ‘archipelagos’ of previously submerged archives that would emerge,
if collectively and digitally indexed, and shared with those who need to access
them. I’m trying to apply this to Neural itself in the ‘Neural Archive’ project,
an online database with all the data about the publications received by Neural
over the years, which should be part of a larger network of small institutions,
whose final goal would be to test and then formulate a viable model to easily
build and share these kinds of databases.
Turning to my projects outside of Neural, these social and commercial
aspects of the relation between the materiality of the printed page and the
manipulability of its digital embodiment were foregrounded in Amazon Noir,
an artwork which I developed with Paolo Cirio and Ubermorgen.10 This
work explored the boundaries of copyrighting text, examining the intrinsic
technological paradox of protecting a digital text from unauthorised copying,
especially when dealing with the monstrous amount of copyrighted content
buyable from Amazon.com. Amazon features a powerful and attractive
marketing tool called ‘Search Inside the Book’ which allows potential
customers to search the entire text of a book; Amazon Noir merely exploited
this mechanism by stretching it to its own logical conclusion. The software
script we used obtained the entire text and then automatically saved it as a
PDF file: once we had established the first sentence of the text, the software
then used the last words of this sentence as a search term for retrieving the
first words of the next sentence. By reiterating this process (a total of 2,000
to 3,000 queries for an average book) and automatically reconstructing the
fragments, the software ended up collecting the entire text. In order to better
visualise the process, we created an installation: two overhead projectors,
displaying the project’s logo and a diagram of the internal workings of our
software, as well as a medical incubator containing one of the ‘stolen’ (and
digitally reprinted) books. The book we chose to ‘steal’ was (of course) Steal
This Book, the American 1970s counterculture classic by the activist Abbie
Hoffman. In a sense, we literally ‘re-incarnated’ the book in a new, mutated
physical form. But we also put up a warning sign near the incubator:
The book inside the incubator is the physical embodiment of a complex Amazon.com
hacking action. It has been obtained exploiting the Amazon ‘Search Inside The Book’
tool. Take care because it’s an illegitimate and premature son born from the relationship
between Amazon and Copyright. It’s illegitimate because it’s an unauthorized print of a
copyright-protected book. And it’s premature because the gestation of this relationship’s
outcome is far from being mature.

10. http://amazonnoir.com/

We asked ourselves: what’s the difference between digitally scanning the text
of a book we already own, and obtaining it through Amazon Noir? In strictly
conceptual terms, there is no difference at all, other than the amount of time
we spent on the project. We wished to set up our own Amazon, definitively
circumventing the confusion of endless purchase-inducing stimuli. So we
stole the hidden and disjointed connections between the sentences of a text,
to reveal them for our own amusement and edification; we stole the digital
implementation of synaptic connections between memories (both human and
electronic) created by a giant online retailer in order to amuse and seduce us
into compulsive consumption; we were thieves of memory (in a McLuhanian
sense), stealing for the right to remember, the right to independently and
freely construct our own physical memory.
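To make the mechanics concrete, the stitching loop Ludovico describes can be sketched schematically as follows. This is a reading of his description, not the artwork’s actual script; search_inside is a hypothetical stand-in for whatever queried Amazon’s ‘Search Inside the Book’ tool.

    # Schematic of the Amazon Noir loop: the tail of the last recovered
    # sentence becomes the search term that retrieves the start of the next.
    def reconstruct_book(first_sentence, search_inside, max_queries=3000):
        fragments = [first_sentence]
        for _ in range(max_queries):  # 2,000-3,000 queries for an average book
            tail = " ".join(fragments[-1].split()[-5:])  # last words as query
            snippet = search_inside(tail)
            if not snippet or tail not in snippet:
                break  # no continuation found: assume the text is complete
            # keep only what follows the search term in the returned snippet
            fragments.append(snippet.split(tail, 1)[1].strip())
        return " ".join(fragments)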
Finally, in Face to Facebook (developed again with Paolo Cirio and part of
the ‘Hacking Monopolism’ trilogy together with Amazon Noir and Google
Will Eat Itself) we ‘stole’ 1 million Facebook profiles’ public data, filtering
them through their profile pictures with face-recognition software, and then
posted all the filtered data on a custom-made dating website, sorted by their
facial expression characteristics.11 In the installation we produced, we glued
more than 1,700 profile pictures on white-painted square wood panels,
and also projected the software diagram and an introductory video. Here
the ‘printed’ part deals more with materializing ‘stolen’ personal online
information. The ‘profile pictures’ treated as public data by Facebook, and
scraped with a script by Paolo and me, once properly printed are a terrific
proof of our online fragility and at the same time of how ‘printing’ is becoming
a contemporary form of ‘validation’. In fact we decided to print them on the
type of photographic paper once used for passport pictures (the ‘silk’ finish).
The amazing effect of all these faces together was completely different when
visualised in a video (‘overwhelming’ when zooming in and out), printed with
ink-jet printers (‘a huge amount of recognisable faces’), and on its proper
‘validating’ medium, photographic paper (giving the instant impression that
‘all those people are real’). What does it mean when the picture (with your
face) with which you choose to represent yourself in the potential arena of
700 million Facebook users is printed, re-contextualised, and exhibited
somewhere else, with absolutely no user control? Probably, it reinforces the
concept that print still has a strong role in giving information a specific status,
because more than five centuries of the social use of print have developed a
powerful instinctive attitude towards it.

11. http://www.face-to-facebook.net/
POST-DIGITAL PRINT AND THE FUTURE OF THE BOOK
NT What you say here Alessandro about Neural’s concern to ‘expand the
printed page’ is very suggestive of the possibilities of print in new media
environments. Could you comment more on this theme by telling us how
you understand ‘post-digital print’, the topic of your current book project?
AL Post-Digital Print: the Mutation of Publishing since 1894 is the outcome
of quite extensive research that I carried out at the Willem De Kooning
Academy as guest researcher in the Communication Design program run by
Florian Cramer. The concept behind it is to understand both historically and
strategically the new role of print in the 2010s, dealing with the prophets
of its death and its digital competitors, but also its history as something of a
perfect medium, the oldest still in use and the protagonist of countless media
experiments, not to mention its possible evolution and further mutations. The
concept of post-digital print can be better explained through a description of
a few of its chapters. In the first chapter, I analyze ten different moments in
history when the death of paper was announced (before the digital); of course,
it never happened, proving that perhaps even current pronouncements
will prove to be mistaken (by the way, the first one I’ve found dates back to
1894, which explains the subtitle). In the second chapter I’ve tried to track
a history of how avant-garde and underground movements have used print
tactically or strategically, reflecting or anticipating its evolutions. In the third
chapter I go deeper into analyzing the ‘mutation’ of paper in recent years, and
what ‘material paper represents in immaterial times’. And the sixth chapter
addresses the basis on which print can survive as an infrastructure and a
medium for sharing content and experience, and also as a way of generating
collective practice and alliances. Beyond this book, I’m continuing to research
the relationship between print and online in various forms, especially artistic
ones. Personally, I think this relationship will be one of the pivotal media
arenas of change (and so of new potential territories for experimentation
and innovation) in the coming years.


NT Taking a lead from some of these points, I’d like to turn to the material
forms of the book and the archive. Sensory form has historically played a key
role in constituting the body, experience, and metaphors of the book and the
archive. For both Adorno and Mallarmé, the physical and sensory properties of
the book are key to its promise, which lies to a large degree in its existence as
a kind of ‘monad’. For Adorno, the book is ‘something self-contained, lasting,
hermetic - something that absorbs the reader and closes the lid over him, as it
were, the way the cover of the book closes on the text’.12 And for Mallarmé, ‘The
foldings of a book, in comparison with the large-sized, open newspaper, have
an almost religious significance. But an even greater significance lies in their
thickness when they are piled together; for then they form a tomb in miniature
for our souls’.13 I find these to be very appealing characterisations of the book,
but today they come with a sense of nostalgia, and the strong emphasis they
place on the material form and physical characteristics of the printed book
appears to leave little room for a digital future of this medium. Sean, I want
to ask you two related questions on this theme. What happens to the sensory
properties of paper in AAAAARG - are they lost, reconfigured, replaced with
other sensory experiences? And what happens to the book in AAAAARG, once
it is digitised and becomes less a self-enclosed and autonomous object than, as
you put it, part of an ‘infinite resource’?

12. Theodor W. Adorno, ‘Bibliographical Musings’, in Notes to Literature, Volume 2, Shierry Weber Nicholsen (trans), Rolf Tiedemann (ed), New York, Columbia University Press, 1992, p20.

13. Stéphane Mallarmé, ‘The Book: A Spiritual Instrument’, Bradford Cook (trans), in Hazard Adams (ed), Critical Theory Since Plato, New York, Harcourt Brace, 1971, p691.
SD It is a romantic way of thinking about books - and a way that I also find
appealing - but of course it’s a characterisation that comes after the fact
of the book; it’s a way that Adorno, Mallarmé, and others have described
and generalised their own experiences with these objects. I see no reason
why future readers’ experiences with various forms of digital publishing
won’t cohere into something similar, feelings of attachment, enclosure,
impenetrability, and so on.
AAAARG is stuck in between both worlds. So many of the files on the site
are images of paper (usually taken with a scanner, but occasionally a camera)
packaged in a PDF. You can see it in the underlines, binding gradients,
folds, stains, and tears; and you can often, but not always, see the labour and
technology involved in making the transformation from physical to digital.
So one’s experience is often to be perhaps more aware of the paper that is
not there. Of course, there are other files which have completely divorced
themselves from any sense of the paper, whether because they are texts that
are native to the digital - or because of a particularly virtuosic scanning job.
There are problems with the nostalgia for books - a nostalgia that I am
most certainly stricken with. We can’t take the book object out of the political
economy of the book, and our attempts to recreate ‘the book’ in the digital will
very likely also import legal and economic structures that ought to be radically
reformulated or overthrown. In this context, as in others, there seem to be
a few ways that this is playing out, simultaneously: one is the replication of
existing territories and power structures by extending them into the digital;
another, in the spirit of the California Ideology, would be the attempt to use
the digital as a leading edge in reshaping the public, of subsuming it into
the market; and a third could be trying to make the best of this situation,
with access to tools and each other, in order to build new structures that are
more connected to those contesting the established and emerging forces of
control.
And what’s more, it seems like the physical book itself is becoming
something else - material is recombined and re-published and re-packaged
from the web, such that we now have many more books being published each
year than ever before - perhaps not as self-enclosed as it was for Adorno. I
don’t want to make equivalences between the digital and physical book - there
are very real physiological and psychical differences between holding ink on
paper versus holding a manufactured hard drive, coursing with radio waves
and emitting some frequency of light - but I think the break is really staggered
and imperfect. We’ll never really lose the book and the digital isn’t confined
to pixels on a screen.
WHATEVER BLOGGING
NT Turning to social media, I want to ask Jodi to comment more on the
technical structures of the blog. In Blog Theory you propose an intriguing
concept of ‘whatever blogging’ to describe the association of blogs with the
decline of symbolic efficiency, as expressions are severed from their content
and converted into quantitative values and graphic representations of
communication flow.14 The more we communicate, it seems, the more what is
communicated tends toward abstraction, and the evacuation of consequence
save for the perpetuation of communication. Can you describe the technical
features and affective qualities of this process, how the field of ‘whatever
blogging’ is constituted? And how might we oppose these tendencies? Can
we reaffirm writing as deliberation and meaning? Are there any ways to make
progressive use of the ‘whatever’ field?

14. Jodi Dean, Blog Theory: Feedback and Capture in the Circuits of Drive, Cambridge, Polity Press, 2010.
JD The basic features of blogs include posts (which are time-stamped,
permalinked, and archived), comments, and links. These features aren’t
necessarily separate insofar as posts have permalinks and can themselves
be comments; for example, that a specific blog has disabled its comment
feature doesn’t preclude the possibility of a discussion arising about that blog
elsewhere. Two further features of blogs arise from their settings: hits (that
is, viewers, visitors) and a kind of generic legibility, or, what we might call
the blog form (the standard visual features associated with but not exclusive
to popular platforms like Blogger and Typepad). I bring up the latter point
since so much of online content is now time-stamped, permalinked, and
archived, yet we would not call it a blog (the New York Times website has blogs
but these are sub-features of the site, not the site itself). All these features
enable certain kinds of quantification: bloggers can know how many hits we
get on a given day (even minute by minute), we can track which posts get
the most hits, which sites send us the most visitors, who has linked to us or
re-blogged our content, how popular we are compared to other blogs, etc.
Now, this quantification is interesting because it accentuates the way that,
regardless of its content, any post, comment, or link is a contribution; it is an
addition to a communicative field. Half the visitors to my blog could be right-wing bad guys looking for examples of left-wing lunacy - but each visitor counts
the same. Likewise, quantitatively speaking, there is no difference between
comments that are spam, from trolls, or seriously thoughtful engagements.
Each comment counts the same (as in post A got 25 comments; post B didn’t
get any). Each post counts the same (an assumption repeated in surveys of
bloggers - we are asked how many times we post a day). Most bloggers who
blog for pay are paid on the basis of the two numbers: how many posts and
how many comments per post. Whether the content is inane or profound is
irrelevant.
The standardisation and quantification of blogging induce a kind of
contradictory sensibility in some bloggers. On the one hand, our opinion
counts. We are commenting on matters of significance (at least to someone
- see, look, people are reading what we write! We can prove it; we’ve got the
numbers!). Without this promise or lure of someone, somewhere, hearing
our voice, reading our words, registering that we think, opine, and feel,
there wouldn’t be blogging (or any writing for another). On the other hand,
knowing that our blog is one among hundreds of millions, that we have very
few readers, and we can prove it - look, only 100 hits today and that was to
the kitty picture - provides a cover of anonymity, the feeling that one could
write absolutely anything and it would be okay, that we are free to express
what we want without repercussion. So bloggers (and obviously I don’t have
in mind celebrity bloggers or old-school ‘A-list bloggers’) persist in this
affective interzone of unique importance and liberated anonymity. It’s like
we can expose what we want without having to deal with any consequences
- exposure without exposure. Thus, a few years ago there were all sorts of
stories about people losing their jobs because of what they wrote on their
blogs. Incidentally, the same phenomenon occurs in other social media - the
repercussions of indiscretions that made their way to Facebook.
The overall field of social media, then, relies on this double sense of
exposing without being exposed, of being unique but indistinguishable. What
registers is the addition to the communicative field, the contribution, not the
content, not the meaning. Word clouds are great examples here - they are
graphic representations of word frequency. They can say how many times a
word is used, but not the context or purpose or intent or connotation of its
use. So a preacher could use the word ‘God’ as many times as the profaner;
the only difference is that the latter also uses the words ‘damn it.’
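Dean’s point about word clouds is easy to demonstrate: a frequency count registers occurrences and nothing else. A minimal sketch, with invented example strings:

    # Word frequency, the raw material of a word cloud, is blind to
    # context, purpose, intent, and connotation.
    from collections import Counter

    preacher = "god is great god is merciful"
    profaner = "god damn it god damn it"

    for text in (preacher, profaner):
        print(Counter(text.split())["god"])  # both print 2: the uses count alike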
Can this field where whatever is said counts the same as any other thing
that is said be used progressively? Not really; I mean only in a very limited
way. Sure, there are spam operations and ways to try to manipulate search
engine results. But if you think about it, most critical work relies on a level of
meaning. Satire, irony, comedy, deconstruction, détournement all invoke a
prior meaningful setting into which they intervene. Rather than ‘progressive
use of the whatever field’ I would urge a more direct and decisive assertion of
collective political will, something that cuts through the bland whateverness
without commitments, to recognise that this is nothing but the maintenance
of the malleable inhabitants of capitalism when what is really needed is the
discipline of communist collectives.
NEWSPAPER AS PEDAGOGY AND MONUMENT
NT Dmitry, the Chto Delat? group produces work across a range of media - film, radio, performance, installation, website, blog - but the media form of
the ‘newspaper’ has an especially significant place for you: Chto Delat? began
its collective work through the production of a newspaper and has continued
to produce newspapers as a key part of its exhibitions and interventions.
Many will argue that the newspaper is now a redundant or ‘retro’ media form,
given the superior distributive and interactive capacities of digital media.
But such assessments fail to appreciate the complex form and functionality
of the newspaper, which is not merely a means of information distribution.
It is noteworthy in this regard that the Occupy movement (which has been a
constant throughout this conversation) has been producing regular printed
newspapers from the precarious sites of occupation, when an exclusive focus
on new media might have been more practical.
So, I would like to ask you some questions about the appeal of the media
form of the newspaper. First, Chto Delat?’s emphasis on self-education is
influenced by Paulo Freire, but on this theme of the newspaper it is the
pedagogical practice of Jean Oury and Félix Guattari that comes to my mind.
For Oury and Guattari (building on work by Célestin Freinet on ‘institutional
pedagogy’) the collectively produced publication works as a therapeutic ‘third
object’, a mediator to draw out, problematise, and transversalise social and
libidinal relations among groups, be they psychiatric associations or political
collectives. Gary Genosko has published some fascinating work on this aspect
of Guattari’s praxis, and it comes across clearly in the Dosse biography of
Deleuze and Guattari.15 With this question of group pedagogy in mind, what is
the role of the newspaper in the self-organisation and self-education practice
of Chto Delat?

15. Gary Genosko, Félix Guattari: A Critical Introduction, Cambridge, Pluto Press, 2009; Genosko, ‘Busted: Félix Guattari and the Grande Encyclopédie des Homosexualités’, Rhizomes 11/12 (2005/6), http://www.rhizomes.net/issue11/genosko.html; François Dosse, Gilles Deleuze and Félix Guattari: Intersecting Lives, D. Glassman (trans), New York, Columbia University Press, 2010.
DV The interrelations between all the forms of our activity are very important;
Chto Delat? is conceived as an integral composition: we do research on a film project
and some materials of this research get published in the newspaper and in
our on-line journal (which is an on-line extension of the newspaper); we start to
work on the publication and its outcomes inspire work on a new installation;
we plan an action and build a collaboration with new actors and it triggers a
new publication and so on. But in general, the newspaper is used as a medium
of contextualisation and communication with the broader community, and as
an interventionist pressure on mainstream cultural production.
I did not know about Guattari’s ideas here, but I totally agree. Yes, for us
the newspaper is also a ‘third object’ which carries a therapeutic function -
when it is printed despite all the impossibilities of making it happen, after all
the struggle around content, finance, and so on, the collective gets a mirror
which confirms its own fragile and crisis-ridden existence.
NT If we turn to the more physical and formal qualities, does the existence of
the newspaper as an ‘object’ have any value or significance to you? Chto Delat?
has made enticing engagements with the Constructivist project - you talk of
‘actualising’ Constructivism in new circumstances. To that end, I wonder if the
newspaper may be a way of actualising the Constructivist theme of the object
as ‘comrade’, as Rodchenko put it, where the revolution is the liberation of the
human and the object, what Arvatov called the ‘intensive expressiveness’ of
matter?16 Another way of thinking this theme of the newspaper as a political
object is through what Deleuze and Guattari call a ‘monument’, a compound
of matter and sensation that ‘stands up by itself ’, independent of its creator,
as a product of the event and a projection into the future:
the monument is not something commemorating a past, it is a bloc of
present sensations that owe their preservation only to themselves and that
provide the event with the compound that celebrates it. The monument’s
action is not memory but fabulation … [I]t confides to the ear of the future
the persistent sensations that embody the event: the constantly renewed
suffering of men and women, their recreated protestations, their constantly
resumed struggle.17

16. Christina Kiaer, Imagine No Possessions: The Socialist Objects of Russian Constructivism, London, The MIT Press, 2005; Nicholas Thoburn, ‘Communist Objects and the Values of Printed Matter’, Social Text 28, 2 (2010): 1-30.

17. Gilles Deleuze and Félix Guattari, What Is Philosophy? G. Burchell and H. Tomlinson (trans), London, Verso, 1994, pp167-8, pp176-7.
DV Yes, the materiality (the ‘weight’) of the newspaper is really important.
You have to carry it for distribution, pass it from hand to hand; there is an
important pressure of piles of newspapers stocked in the exhibition halls as take-away artifacts (really monumental), or used as wallpaper for installations.
We love these qualities, and the way they organise a routine communication
inside the group: ‘Hi there! Do you have newspapers to distribute at the rally
tomorrow? How many? Should we post a new batch?’ At a more subjective
level, I love to get the freshly printed newspaper in my hands; yes, it is a drug,
particularly in my case, when all the processes of production come through
my hands - first the idea, then editorial communication, lay-out, graphics,
finance, and then print.
PRINT/ONLINE
NT On this theme, I want to ask Pauline if you can comment on the place
of printed paper in the history and future of Mute? I have in mind your
experiments with paper stock, the way paper interfaces with digital publishing
platforms (or fails to), the pleasures, pains, and constraints of producing a
printed product in the digital environment.
PvMB All this talk of newspapers is making me very nostalgic. It was the
first print format that we experimented with, and I agree it’s one of the most
powerful - both in terms of the historical resonances it can provoke, and
in terms of what you can practically do with it (which includes distributing
editorial to many people for quite low costs, being experimental with lay-out,
type, images; and yes, working through this ‘third object’, with all that that
might imply). The Scottish free-circulation newspaper, Variant, is testimony to
this, having hung onto the format much more doggedly than Mute did, and
continuing to go strong, in spite of all the difficult conditions for production
that all of us face.18 There again, where Variant has shown the potential power
and longevity of freely distributed critical content (which they also archive fully
on the web), the rise and rise of free newspapers - wherein editorial functions
as nothing more than a hook for advertising, targeted at different ‘segments’
of the market – shouldn’t be forgotten either, since this might represent the
dominant function this media form presently holds.

18. Since this conversation took place, Variant has lost its Creative Scotland funding and has (temporarily, one hopes) suspended publication. See http://www.variant.org.uk/publication
I shouldn’t take too much time talking about the specifics here, but the
shelf-display-and-sale model of distribution which Mute chose for its printed
matter - on the eve of the assault this suffered from free online editorial
- landed us in some kind of Catch-22 which, nearly two decades later, we
still can’t quite figure the exit to. Important coordinates here are: the costs
involved in developing high quality editorial (research, commissioning,
layout, proofing, printing; but also the maintenance of an organisation with
- apart from staff - reliable systems for admin, finance, legal, a constitutional
apparatus); the low returns you get on ‘specialist’ editorial via shelf-sales
(particularly if you can’t afford sustained Marketing/Distribution, and the
offline distribution infrastructure itself starts to crumble under the weight of
online behemoths like Amazon); and then finally the lure to publish online,
borne of promises of a global audience and the transcendence of a lot of
those difficulties.
Mute’s original newspaper format constituted an art-like gesture: it
encapsulated many things we wanted to speak about, but in ‘mute’, visual,
encoded form - epitomised by the flesh tones of the FT-style newspaper,
which insisted on the corporeal substrate of the digital revolution, as well as
its intimate relationship to speculation and investment finance (a condition,
we sought to imply, that it shared with all prior communications and
infrastructural revolutions). Thereafter, our experiments with paper were an
engagement with the ‘Catch-22’ described above, whose negative effects we
nevertheless perceived as mere obstacles to be negotiated, as we continued
hopefully, stubbornly, to project a global community of readers we might
connect with and solidarities we might forge - as everyone does, I guess.
We didn’t want to change our editorial to suit the market, so instead focused
on the small degrees of freedom and change afforded to us by its carrier,
i.e. the varying magazine formats at our disposal (quarterly/biannual, small/
large, full colour/mono, lush/ziney). In retrospect, we may have overplayed
the part played by desire in reading and purchasing habits (in the sense that
we thought we could sway potential purchasers to support Mute by plying
them with ever more ‘appealing’ objects). Be that as it may, it did push us
to mine this liminal zone between paper and pixel that Sean evokes so well
- particularly, I’d say, in the late ’90s/early 2000s, when questions over the
relationship between the ‘real’ and ‘virtual’ raged to nigh obsessional levels,
and magazines’ visual languages also grappled with their representation, or
integration.
Where we stand now, things are supposed to have stabilised somewhat.
The medial and conceptual hyper experimentation triggered by projected
‘digital futures’ has notionally died down, as mature social media and digital
publishing platforms are incorporated into our everyday lives, and the
behaviours associated with them normalised (the finger flicks associated with
the mobile or tablet touch screen, for example). Somewhere along the line you
asked about ePublishing. Well, things are very much up in the air on this front
currently, as independent publishers test the parameters and possibilities of
ePublishing while struggling to maintain commercial sustainability. Indeed, I
think the independent ePublishing situation, exciting though it undoubtedly
is, actually proves that this whole narrative of normalisation and integration
is a complete fiction; that, if there is any kind of ‘monument’ under collective
construction right now, it is one built under the sign of panic and distraction.
This conversation took place by email over the course of a few months from October
2011. Sponsorship was generously provided by CRESC (Centre for Research on Socio-Cultural Change), http://www.cresc.ac.uk/


Dekker & Barok
Copying as a Way to Start Something New A Conversation with Dusan Barok about Monoskop
2017


COPYING AS A WAY TO START SOMETHING NEW
A Conversation with Dusan Barok about Monoskop

Annet Dekker

Dusan Barok is an artist, writer, and cultural activist involved
in critical practice in the fields of software, art, and theory. After founding and organizing the online culture portal
Koridor in Slovakia from 1999–2002, in 2003 he co-founded
the BURUNDI media lab where he organized the Translab
evening series. A year later, the first ideas about building an
online platform for texts and media started to emerge and
Monoskop became a reality. More than a decade later, Barok
is well-known as the main editor of Monoskop. In 2016, he
began a PhD research project at the University of Amsterdam. His project, titled Database for the Documentation of
Contemporary Art, investigates art databases as discursive
platforms that provide context for artworks. In an extended
email exchange, we discuss the possibilities and restraints
of an online ‘archive’.
ANNET DEKKER

You started Monoskop in 2004, already some time ago. What
does the name mean?
DUSAN BAROK

‘Monoskop’ is the Slovak equivalent of the English ‘monoscope’, which means an electric tube used in analogue TV
broadcasting to produce images of test cards, station logotypes, and error messages, but also for calibrating cameras. Monoscopes were automated television announcers designed to
speak to both live and machine audiences about the status
of a channel, broadcasting purely phatic messages.
AD
Can you explain why you wanted to do the project and how it
developed to what it is now? In other words, what were your
main aims and have they changed? If so, in which direction
and what caused these changes?
DB

I began Monoskop as one of the strands of the BURUNDI
media lab in Bratislava. Originally, it was designed as a wiki
website for documenting media art and culture in the eastern part of Europe, whose backbone consisted of city entries
composed of links to separate pages about various events,
initiatives, and individuals. In the early days it was modelled
on Wikipedia (which had been running for two years when
Monoskop started) and contained biographies and descriptions of events from a kind of neutral point of view. Over
the years, the geographic and thematic boundaries have
gradually expanded to embrace the arts and humanities in
their widest sense, focusing primarily on lesser-known
phenomena.1 Perhaps the biggest change is the ongoing
shift from mapping people, events, and places towards
synthesizing discourses.

1. See for example https://monoskop.org/Features. Accessed 28 May 2016.
A turning point occurred during my studies at the
Piet Zwart Institute, in the Networked Media programme
from 2010–2012, which combined art, design, software,
and theory with support in the philosophy of open source
and prototyping. While there, I was researching aspects of
the networked condition and how it transforms knowledge,
sociality and economics: I wrote research papers on leaking
as a technique of knowledge production, a critique of the
social graph, and on the libertarian values embedded in the
design of digital currencies. I was ready for more practice.
When Aymeric Mansoux, one of the tutors, encouraged me
to develop my then side-project Monoskop into a graduation
work, the timing was good.
The website got its own domain, a redesign, and most
crucially, the Monoskop wiki was restructured from its
focus on media art and culture towards the much wider
embrace of the arts and humanities. It turned into a media
library of sorts. The graduation work also consisted of
a symposium about personal collecting and media archiving,2
which saw its loose follow-ups on media aesthetics (in
Bergen)3 and on knowledge classification and archives (in
Mons)4 last year.

2. https://monoskop.org/Symposium. Accessed 28 May 2016.

3. https://monoskop.org/The_Extensions_of_Many. Accessed 28 May 2016.

4. https://monoskop.org/Ideographies_of_Knowledge. Accessed 28 May 2016.

AD

Did you have a background in library studies, or have
you taken their ideas/methods of systemization and categorization (metadata)? If not, what are your methods
and how did you develop them?

DB

Besides the standard literature in information science (I
have a degree in information technologies), I read some
works of the documentation scientists Paul Otlet and Suzanne
Briet, historians such as W. Boyd Rayward and Ronald E.
Day, as well as translated writings of Michel Pêcheux and
other French discourse analysts of the 1960s and 1970s.
This interest was triggered in late 2014 by the confluence
of Femke’s Mondotheque project and an invitation to be an
artist-in-residence in Mons in Belgium at the Mundaneum,
home to Paul Otlet’s recently restored archive.
This led me to identify three tropes of organizing and
navigating written records, which has guided my thinking
about libraries and research ever since: class, reference,
and index. Classification entails tree-like structuring, such
as faceting the meanings of words and expressions, and
developing classification systems for libraries. Referencing
stands for citations, hyperlinking, and bibliographies. Indexing
ranges from the listing of occurrences of selected terms
to an ‘absolute’ index of all terms, enabling full-text search.
With this in mind, I have done a number of experiments.
There is an index of selected persons and terms from
across the Monoskop wiki and Log.5 There is a growing
list of wiki entries with bibliographies and institutional
infrastructures of fields and theories in the humanities.6
There is a lexicon aggregating entries from some ten
dictionaries of the humanities into a single page with
hyperlinks to each full entry (unpublished). There is an
alternative interface to the Monoskop Log, in which entries
are navigated solely through a tag cloud acting as
a multidimensional filter (unpublished). There is a reader
containing some fifty books whose mutual references are
turned into hyperlinks, and whose main interface consists
of terms specific to each text, generated through a tf-idf
algorithm (unpublished). And so on.

5. https://monoskop.org/Index. Accessed 28 May 2016.

6. https://monoskop.org/Humanities. Accessed 28 May 2016.

AD

Indeed, looking at the archive in many alternative ways has
been an interesting process, clearly showing the influence
of a changing back-end system. Are you interested in the
idea of sharing and circulating texts as a new way not just
of accessing and distributing but perhaps also of production—and
publishing? I’m thinking of how Aaaaarg started as
a way to share and exchange ideas about a text. In what
way do you think Monoskop plays (or could play) with these
kinds of mechanisms? Do you think it brings out a new
potential in publishing?

DB

The publishing market frames the publication as a singular
body of work, autonomous from other titles on offer, and
subjects it to the rules of the market—with a price tag and
copyright notice attached. But for scholars and artists, these
are rarely an issue. Most academic work is subsidized from
public sources in the first place, and many would prefer to
give their work away for free, since openness attracts more
citations. Why they opt to submit to the market is for quality
editing and an increase of their own symbolic value in direct
proportion to the ranking of their publishing house. This
is not dissimilar from the music industry. And indeed, for
many the goal is to compose chants that would gain popularity
across academia and get their place in the popular
imagination.
On the other hand, besides providing access, digital
libraries are also fit to provide context, by treating publications
as a corpus of texts that can be accessed through an
unlimited number of interfaces designed with an understanding
of the functionality of databases and an openness
to the imagination of the community of users. This can
be done by creating layers of classification, interlinking
bodies of texts through references, creating alternative
indexes of persons, things, and terms, making full-text
search possible, making visual search possible—across
the whole of the corpus as well as its parts, and so on. Isn’t
this what makes a difference? To be sure, websites such
as Aaaaarg and Monoskop have explored only the tip of
the iceberg of possibilities. There is much more to tinker
and hack around.
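One of the experiments mentioned above - the reader whose interface consists of ‘terms specific to each text, generated through a tf-idf algorithm’ - can be illustrated generically. This is a sketch of the weighting scheme itself, not Monoskop’s actual code:

    # A generic tf-idf pass: a term scores highly when it is frequent in
    # one text but rare across the corpus, i.e. 'specific' to that text.
    import math
    from collections import Counter

    def specific_terms(docs, top_n=5):
        """docs: dict of title -> text. Returns each text's top tf-idf terms."""
        tokens = {t: txt.lower().split() for t, txt in docs.items()}
        df = Counter()  # document frequency: how many texts contain a term
        for words in tokens.values():
            df.update(set(words))
        n = len(docs)
        out = {}
        for title, words in tokens.items():
            tf = Counter(words)
            score = {w: (c / len(words)) * math.log(n / df[w]) for w, c in tf.items()}
            out[title] = sorted(score, key=score.get, reverse=True)[:top_n]
        return out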

AD

It is interesting that whilst the accessibility and search potential has radically changed, the content, a book or any other
text, is still a particular kind of thing with its own characteristics and forms. Whereas the process of writing texts seems
hard to change, would you be interested in creating more
alliances between texts to bring out new bibliographies? In
this sense, starting to produce new texts, by including other
texts and documents, like emails, visuals, audio, CD-ROMs,
or even unpublished texts or manuscripts?

DB

Currently Monoskop is compiling more and more ‘source’
bibliographies, containing digital versions of the actual texts
they refer to. This has been very much in focus in the past
two or three years and Monoskop is now home to hundreds
of bibliographies of twentieth-century artists, writers, groups,
and movements, as well as of various theories and humanities
disciplines.7 As the next step I would like to move
on to enabling full-text search within each such bibliography.
This will make more apparent that the ‘source’
bibliography is a form of anthology, a corpus of texts
representing a discourse. Another issue is to activate
cross-references within texts—to turn page numbers in
bibliographic citations inside texts into hyperlinks leading
to other texts.
This is to experiment further with the specificity of digital
text, which is different both to oral speech and printed
books. These can be described as three distinct yet mutually
encapsulated domains. Orality emphasizes the sequence
and narrative of an argument, in which words themselves
are imagined as constituting meaning. Specific to writing,
on the other hand, is referring to the written record; texts
are brought together by way of references, which in turn
create context, also called discourse. Statements are ‘fixed’
to paper and meaning is constituted by their contexts—both
within a given text and within a discourse in which it is
embedded. What is specific to digital text, however, is that
we can search it in milliseconds. Full-text search is enabled
by the index—search engines operate thanks to bots that
assign each expression a unique address and store it in a
database. In this respect, the index usually found at the
end of a printed book is something that has been automated
with the arrival of machine search.
In other words, even though knowledge in the age of the
internet is still being shaped by the departmentalization of
academia and its related procedures and rituals of discourse
production, and its modes of expression are centred around
verbal rhetoric, the flattening effects of the index have really
transformed the ways in which we come to ‘know’ things.
To ‘write’ a ‘book’ in this context is to produce a searchable
database instead.

7. See for example https://monoskop.org/Foucault, https://monoskop.org/Lissitzky, https://monoskop.org/Humanities. All accessed 28 May 2016.
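Barok’s description of machine search - an index assigning each expression an address - is essentially the inverted index. A minimal sketch, with an invented two-document corpus:

    # A minimal inverted index: each term maps to the set of documents
    # containing it, automating the back-of-the-book index he compares it to.
    from collections import defaultdict

    def build_index(corpus):
        index = defaultdict(set)
        for doc_id, text in corpus.items():
            for term in text.lower().split():
                index[term].add(doc_id)
        return index

    corpus = {
        "doc-a": "the book as a spiritual instrument",
        "doc-b": "to write a book is to produce a searchable database",
    }
    index = build_index(corpus)
    print(sorted(index["book"]))  # one-step full-text lookup: ['doc-a', 'doc-b']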

AD

So, perhaps we finally have come to ‘the death of the author’,
at least in so far as automated mechanisms are becoming active agents in the (re)creation process. To return to
Monoskop in its current form, what choices do you make
regarding the content of the repositories, are there things
you don’t want to collect, or wish you could but have not
been able to?
DB

In a sense, I turned to a wiki and started Monoskop as
a way to keep track of my reading and browsing. It is a
by-product of a succession of my interests, obsessions, and
digressions. That it is publicly accessible is a consequence
of the fact that paper notebooks, text files kept offline and
private wikis proved to be inadequate at the moment when I
needed to quickly find notes from reading some text earlier.
It is not perfect, but it solved the issue of immediate access
and retrieval. Plus there is a bonus of having the body of
my past ten or twelve years of reading mutually interlinked
and searchable. An interesting outcome is that these ‘notes’
are public—one is motivated to formulate and frame them
so as to be readable and useful for others as well. A similar
difference is between writing an entry in a personal diary
and writing a blog post. That is also why the autonomy
of technical infrastructure is so important here. Posting
research notes on Facebook may increase one’s visibility
among peers, but the ‘terms of service’ say explicitly that
anything can be deleted by administrators at any time,
without any reason. I ‘collect’ things that I wish to be able
to return to, to remember, or to recollect easily.
AD

Can you describe the process, how do you get the books,
already digitized, or do you do a lot yourself? In other words,
could you describe the (technical) process and organizational aspects of the project?
DB

In the beginning, I spent a lot of time exploring other digital
libraries which served as sources for most of the entries on
Log (Gigapedia, Libgen, Aaaaarg, Bibliotik, Scribd, Issuu,
Karagarga, Google filetype:pdf). Later I started corresponding with a number of people from around the world (NYC,
Rotterdam, Buenos Aires, Boulder, Berlin, Ploiesti, etc.) who
contribute scans and links to scans on an irregular basis.
Out-of-print and open-access titles often come directly from
authors and publishers. Many artists’ books and magazines
were scraped or downloaded through URL manipulation
from online collections of museums, archives and libraries.
Needless to say, my offline archive is much bigger than
what is on Monoskop. I tend to put online the files I prefer
not to lose. The web is the best backup solution I have
found so far.
The Monoskop wiki is open for everyone to edit; any user
can upload their own works or scans and many do. Many of
those who spent more time working on the website ended up
being my friends. And many of my friends ended up having
an account as well :). For everyone else, there is no record
kept about what one downloaded, what one read and for
how long... we don’t care, we don’t track.

AD

In what way has the larger (free) publishing context changed
your project? There are currently several free text-sharing
initiatives around (some, like Textz.com or Aaaaarg, already
active before you started); how do you collaborate with, or
distinguish yourselves from, each other?
DB

It would not be an overstatement to say that while in the
previous decade Monoskop was shaped primarily by the
‘media culture’ milieu which it intended to document, the
branching out of its repository of highlighted publications,
Monoskop Log, in 2009, and the broadening of its focus to
also include the whole of the twentieth and twenty-first
centuries situate it more firmly in the context of online
archives, and especially digital libraries.
I only got to know others in this milieu later. I approached
Sean Dockray in 2010, Marcell Mars approached me the
following year, and then in 2013 he introduced me to Kenneth Goldsmith. We are in steady contact, especially through
public events hosted by various cultural centres and galleries.
The first large one was held at Ljubljana’s hackerspace Kiberpipa in 2012. Later came the conferences and workshops
organized by Kuda at a youth centre in Novi Sad (2013), by
the Institute of Network Cultures at WORM, Rotterdam (2014),
WKV and Akademie Schloss Solitude in Stuttgart (2014),
Mama & Nova Gallery in Zagreb (2015), ECC at Mundaneum,
Mons (2015), and most recently by the Media Department
of the University of Malmo (2016) (for more information see
https://monoskop.org/Digital_libraries#Workshops_and_conferences,
accessed 28 May 2016).
The leitmotif of all these events was the digital library,
and their atmosphere can be described as the spirit of
early hacker culture that eventually left the walls of a
computer lab. Only rarely have there been professional
librarians, archivists, and publishers among the speakers, even though the voices represented were quite diverse.
To name just the more frequent participants... Marcell
and Tom Medak (Memory of the World) advocate universal
access to knowledge informed by the positions of the Yugoslav
Marxist school Praxis; Sean’s work is critical of the militarization and commercialization of the university (in the
context of which Aaaaarg will always come as secondary, as
an extension of The Public School in Los Angeles); Kenneth
aims to revive the literary avant-garde while standing on the
shoulders of his heroes documented on UbuWeb; Sebastian
Lütgert and Jan Berger are the most serious software developers among us, while their projects such as Textz.com and
Pad.ma should be read against critical theory and Situationist cinema; Femke Snelting has initiated the collaborative
research-publication Mondotheque about the legacy of the
early twentieth century Brussels-born information scientist
Paul Otlet, triggered by the attempt of Google to rebrand him
as the father of the internet.
I have been trying to identify the implications of digital-networked textuality for knowledge production, including humanities research, while speaking from the position
of a cultural worker who spent his formative years in the
former Eastern Bloc, experiencing freedom as that of unprecedented access to information via the internet following
the fall of the Berlin Wall. In this respect, Monoskop is a way
to bring into 'archival consciousness' what the East had
missed out on during the Cold War; and also, more generally,
what the non-West had missed out on in the polarized world,
and vice versa, what was invisible in the formal Western
cultural canons.
There have been several attempts to develop new projects,
and the collaborative efforts have materialized in shared
infrastructure and introductions of new features in respective platforms, such as PDF reader and full-text search on
Aaaaarg. Marcell and Tom along with their collaborators have
been steadily developing the Memory of the World library and
Sebastian resuscitated Textz.com. Besides that, there are
overlaps in titles hosted in each library, and Monoskop bibliographies extensively link to scans on Libgen and Aaaaarg,
while artists’ profiles on the website link to audio and video
recordings on UbuWeb.


AD

It is interesting to hear that there weren't any archivists or
professional librarians involved (yet), what is your position
towards these professional and institutional entities and
persons?
DB

As the recent example of Sci-Hub showed, in the age of
digital networks, for many researchers libraries are primarily free proxies to corporate repositories of academic
journals (for more information see
www.sciencemag.org/news/2016/04/whos-downloading-pirated-papers-everyone,
accessed 28 May 2016). Their other emerging role is that of a digital
repository of works in the public domain (the role pioneered
in the United States by Project Gutenberg and the Internet
Archive). There have been too many attempts
to transpose librarians' techniques from the paperbound
world into the digital domain. Yet, as I said before, there
is much more to explore. Perhaps the most exciting inventive approaches can be found in the field of classics, for
example in the Perseus Digital Library & Catalog and the
Homer Multitext Project. Perseus combines digital editions
of ancient literary works with multiple lexical tools in a way
that even a non-professional can check and verify a disputable translation of a quote. Something that is hard to
imagine being possible in print.
AD

I think it is interesting to see how Monoskop and other
repositories like it have gained different constituencies
globally; for one, you can see a shift in the texts
being put up. From the start you tried to bring in a strong
'eastern European voice'; nevertheless, at the moment the
content of the repository reflects a very western perspective on critical theory. What are your future goals, and do
you think it would be possible to include other voices? For
example, have you ever considered the possibility of users
uploading and editing texts themselves?
DB

The site certainly started with the primary focus on east-central European media art and culture, which I considered
myself to be part of in the early 2000s. I was naive enough
to attempt to make a book on the theme between 2008–2010.
During that period I came to notice the ambivalence of the
notion of medium in an art-historical and technological
sense (thanks to Florian Cramer). My understanding of
media art was that it is an art specific to its medium, very
much in Greenbergian terms, extended to the more recent
‘developments’, which were supposed to range from neo-geometrical painting through video art to net art.
At the same time, I implicitly understood art in the sense
of ‘expanded arts’, as employed by the Fluxus in the early
1960s—objects as well as events that go beyond the (academic) separation between the arts to include music, film,
poetry, dance, design, publishing, etc., which in turn made
me also consider such phenomena as experimental film,
electro-acoustic music and concrete poetry.
Add to it the geopolitically unstable notion of East-Central
Europe and the striking lack of research in this area and
all you end up with is a headache. It took me a while to
realize that there’s no point even attempting to write a coherent narrative of the history of media-specific expanded
arts of East-Central Europe of the past hundred years. I
ended up with a wiki page outlining the supposed milestones
along with a bibliography (https://monoskop.org/CEE and
https://monoskop.org/Central_and_Eastern_Europe_Bibliography,
both accessed 28 May 2016).
For this strand, the wiki served as the main notebook,
leaving behind hundreds of wiki entries. The Log was
more or less a 'log' of my research path, and the presence
of 'western' theory is to a certain extent a by-product of
my search for a methodology and theoretical references.
As an indirect outcome, a new wiki section was
launched recently. Instead of writing a history of media-specific 'expanded arts' in one corner of the world, it takes
a somewhat different approach. Not a sequential text, not
even an anthology, it is an online single-page annotated
index, a ‘meta-encyclopaedia’ of art movements and styles,
intended to offer an expansion of the art-historical canonical
prioritization of the western painterly-sculptural tradition
to also include other artists and movements around the
world (https://monoskop.org/Art, accessed 28 May 2016).
AD

Can you say something about the longevity of the project?
You briefly mentioned before that the web was your best
backup solution. Yet, it is of course known that websites
and databases require a lot of maintenance, so what will
happen to the type of files that you offer? More and more
voices are saying that, for example, the PDF format is far
from stable. How do you deal with such challenges?
DB

Surely, in the realm of bits, nothing is designed to last
forever. The uncritical adoption of Flash has turned out to be
perhaps the worst tragedy so far. And while there were
certainly saner alternatives, if one was OK with renouncing its emblematic visual effects and the aesthetics that went
with them, with PDF it is harder. There are EPUBs, but scholarly publications are simply unthinkable without page
numbers, which are not supported in that format. Another
challenge EPUB faces comes from artists' books and other
design- and layout-conscious publications—its simplified
HTML format does not match the range of possibilities for
typography and layout one is used to from designing for
paper. Another open-source solution, PNG tarballs, is not
a viable alternative for sharing books.
The main schism between PDF and HTML is that one represents the domain of print (easily portable, with a fixed
page size), while the other represents the domain of the web (embedded
within it by hyperlinks pointing in both directions, and with
a flexible page size). EPUB was developed with the intention of
synthesizing the two into a single format, but instead
it reduces them into a third container, which is doomed to
reinvent the whole thing once again.
It is unlikely that an ultimate convertor between PDF and
HTML will ever appear, simply because of the specificities
of print and the web and the fact that they overlap only in
some respects. Monoskop tends to provide HTML formats

223

COPYING AS A WAY TO START SOMETHING NEW

next to PDFs where time allows. And if the PDF were to
suddenly be doomed, there would be a big conversion party.
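(An illustrative aside: what such a conversion pass might look like. A minimal Python sketch, assuming poppler's pdftohtml command-line tool is installed; the folder name is hypothetical, and layout fidelity varies by document, which is part of the schism described above.)

    import subprocess
    from pathlib import Path

    # Batch-convert PDFs to standalone HTML pages with pdftohtml.
    for pdf in Path("library").glob("*.pdf"):
        subprocess.run(
            ["pdftohtml", "-s", "-noframes", str(pdf), str(pdf.with_suffix(""))],
            check=True,  # raise if a conversion fails
        )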
On the side of audio and video, most media files on
Monoskop are in open formats—OGG and WEBM. There
are many other challenges: keeping up-to-date with PHP
and MySQL development, with the MediaWiki software
and its numerous extensions, and the mysterious ICANN
organization that controls the web domain.
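(An illustrative aside: re-encoding into open formats with the widely used ffmpeg tool. A minimal Python sketch; the codec choices, VP9 with Opus for WEBM and Vorbis for OGG, are an assumption, not a description of Monoskop's actual pipeline.)

    import subprocess
    from pathlib import Path

    def to_webm(src: Path) -> None:
        # Re-encode a video into the open WEBM container (VP9 video, Opus audio).
        subprocess.run(
            ["ffmpeg", "-i", str(src), "-c:v", "libvpx-vp9", "-c:a", "libopus",
             str(src.with_suffix(".webm"))],
            check=True,
        )

    def to_ogg(src: Path) -> None:
        # Re-encode an audio file into the open OGG container (Vorbis audio).
        subprocess.run(
            ["ffmpeg", "-i", str(src), "-c:a", "libvorbis",
             str(src.with_suffix(".ogg"))],
            check=True,
        )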


AD

What were your biggest challenges besides technical ones?
For example, have you ever been in trouble regarding copyright issues, or if not, how would you deal with such a
situation?
DB

Monoskop operates on the assumption of making transformative use of the collected material. The fact of bringing
it into certain new contexts, in which it can be accessed,
viewed and interpreted, adds something that bookstores
don’t provide. Time will show whether this can be understood as fair use. It is an opt-out model and it proves to
be working well so far. Takedowns are rare, and if they are
legitimate, we comply.
AD

Perhaps related to this question, what is your experience
with user engagement? I remember Sean (from Aaaaarg,
in conversation with Matthew Fuller, Mute 2011) saying
that some people mirror or download the whole site, not
so much in an attempt to 'have everything' but as a way
to make sure that the content remains accessible. It is a
conscious decision, because one knows that one day everything might be taken down. This is of course particularly
pertinent now: while we're doing this interview,
Sean and Marcell are being sued by a Canadian publisher.
DB

That is absolutely true, and any of these websites can disappear at any time. Archives like Aaaaarg, Monoskop or UbuWeb
are created by makers rather than guardians, and it comes
as an imperative to us to embrace redundancy, to promote
spreading their contents across as many nodes and sites
as anyone wishes. We may look at copying not as merely
mirroring or making backups, but as opening up possibilities to start new libraries, new platforms, new databases.
That is how these came about as well. Let there be Zzzzzrgs,
Ůbuwebs and Multiskops.


Bibliography
Fuller, Matthew. 'In the Paradise of Too Many Books: An Interview with
Sean Dockray'. Mute, 4 May 2011.
www.metamute.org/editorial/articles/paradise-too-many-books-interview-seandockray.
Accessed 31 May 2016.
Online digital libraries
Aaaaarg, http://aaaaarg.fail.
Bibliotik, https://bibliotik.me.
Issuu, https://issuu.com.
Karagarga, https://karagarga.in.
Library Genesis / LibGen, http://gen.lib.rus.ec.
Memory of the World, https://library.memoryoftheworld.org.
Monoskop, https://monoskop.org.
Pad.ma, https://pad.ma.
Scribd, https://scribd.com.
Textz.com, https://textz.com.
UbuWeb, www.ubu.com.


Dockray
The Scan and the Export
2010


The Scan and the Export

The scan is an ambivalent image. It oscillates back and forth: between a physical page and a digital file, between one reader and another, between an economy of objects and an economy of data. Scans are failures in terms of quality, neither as "readable" as the original book nor the inevitable e-book, always containing too much visual information or too little.

Technically speaking, it is by scanning that one can make a digital representation of a physical object, such as a book. When a representation of that representation (the image) appears on a digital display device, it hovers like a ghost, one world haunting another. But it is not simply the object asserting itself in the milieu of light, information, and electricity. Much more is encoded in the image: indexes of past readings and the act of scanning itself.

An incomplete inventory of modifications to the book through reading and other typical events in the life of the thing: folded pages, underlines, marginal notes, erasures, personal symbolic systems, coffee spills, signatures, stamps, tears, etc. Intimacy between reader and text marking the pages, suggesting some distant future palimpsest in which the original text has finally given way to a mass of negligible marks.

Whereas the effects of reading are cumulative, the scan is a singular event. Pages are spread and pressed flat against a sheet of glass. The binding stretches, occasionally to the point of breaking. A camera driven by a geared-down motor slides slowly down the surface of the page. Slight movement by the person scanning (who is also a scanner; this is a man-machine performance) before the scan is complete produces a slight motion blur, the type goes askew, maybe a finger enters the frame of the image. The glass is rarely covered in its entirety by the book, and these windows into the actual room where the scanning is done are ultimately rendered as solid, censored black. After the physical scanning process comes post-production. Software—automated or not—straightens the image, corrects the contrast, crops out the useless bits, sharpens the text, and occasionally even attempts to read it. All of this computation wants to repress any traces of reading and scanning, with the obvious goal of returning to the pure book, or an even more Platonic form.

That purified, originary version of the text might be the e-book. Publishers are occasionally skipping the act of printing altogether and selling the files themselves, such that the words reserved for "well-scanned" books ultimately describe e-books: clean, searchable, small (i.e., file size). Although it is perfectly understandable for a reader to prefer aligned text without smudges or other markings, where "paper" is nothing but a pure, bright white, this movement towards the clean has its consequences. Distinguished as a form by the fact that it is produced, distributed, and consumed digitally, the e-book never leaves the factory.

A minimal gap is, however, created between the file that the producer uses and the one that the consumer uses—imagine the cultural chaos if the typical way of distributing books were as Word documents!—through the process of exporting. Whereas scanning is a complex process and material transformation (which includes exporting at the very end), exporting is merely converting formats. But however minor an act, this conversion is what puts a halt to the writing and turns the file into a product for reading. It is also at this stage that forms of "digital rights management" are applied in order to restrict copying and printing of the file.

Sharing and copying texts is as old as books themselves—actually, one could argue that this is almost a definition of the book—but computers and the Internet have only accelerated this activity. From transcription to tracing to photocopying to scanning, the labour and material costs involved in producing a copy have fallen to nothing in our present digital file situation. Once the scan has generated a digitized version of some kind, say a PDF, it easily replicates and circulates. This is not aberrant behaviour, either, but normative computer use: copy and paste are two of the first choices in any contextual menu. Personal file storage has slowly been migrating onto computer networks, particularly with the growth of mobile devices, so
one's files are not always located on one's equipment. The act of storing and retrieving shuffles data across machines and state lines.

A public space is produced when something is shared—which is to say, made public—but this space is not the same everywhere or in all circumstances. When music is played for a room full of people, or rather when all those people are simply sharing the room, something is being made public. Capitalism itself is a massive mechanism for making things public, for appropriating materials, people, and knowledge and subjecting them to its logic. On the other hand, a circulating library, or a library with a reading room, creates a public space around the availability of books and other forms of material knowledge. And even books being sold through shops create a particular kind of public, which is quite different from the public that is formed by bootlegging those same books.

It would appear that publicness is not simply a question of state control or the absence of money. Those categorical definitions offer very little to help think about digital files and their native tendency to replicate and travel across networks. What kinds of public spaces are these, coming into the foreground by an incessant circulation of data?

Two paradigmatic forms of publicness can be described through the lens of the scan and the export, two methods for producing a digital text. Although neither method necessarily results in a file that must be distributed, such files typically are. In the case of the export, the system of distribution tends to be through official, secure digital repositories; limited previews provide a small window into the content, which is ultimately accessible only through the interface of the shopping cart. On the other hand, the scan is created by and moves between individuals, often via improvised and itinerant distribution systems. The scan travels from person to person, like a virus. As long as it passes between people, that common space between them stays alive. That space might be contagious; it might break out into something quite persuasive, an intimate publicness becoming more common.

The scan is an image of a thing and is therefore different from the thing (it is digital, not physical, and it includes indexes of reading and scanning), whereas a copy of the export is essentially identical to the export. Here is one reason there will exist many variations of a scan for a particular text, while there will be one approved version (always a clean one) of the export. A person may hold in his or her possession a scan of a book but, no matter what publishers may claim, the scan will never be the book. Even if one were to inspect two files and find them to be identical in every observable and measurable quality, it may be revealed that these are in fact different after all: one is a legitimate copy and the other is not. Legitimacy in this case has nothing whatsoever to do with internal traits, such as fidelity to the original, but with external ones, namely, records of economic transactions in customer databases.

In practical terms, this means that a digital book must be purchased by every single reader. Unlike the book, which is commonly purchased, read, then handed off to a friend (who then shares it with another friend and so on until it comes to rest on someone's bookshelf), the digital book is not transferable, by design and by law. If ownership is fundamentally the capacity to give something away, these books are never truly ours. The intimate, transient publics that emerge out of passing a book around are here eclipsed by a singular, more inclusive public in which everyone relates to his or her individual (identical) file.

Recently, with the popularization of digital book readers (a device for another man-machine pairing), the picture of this kind of publicness has come into greater definition. Although a group of people might all possess the same file, they will be viewing that file through their particular readers, which means, surprisingly, that they might all be seeing something different. With variations built into the device (in resolution, size, colour, display technology) or afforded to the user (perhaps to change font size or other flexible design elements), familiar forms of orientation within the writing disappear as it loses the historical structure of the book and becomes pure, continuous text. For example, page numbers give way to the more abstract concept of a "location" when the file is derived from the export as opposed to the scan, from the text data as opposed to the physical object. The act of reading in a group is also
different—"Turn to page 24" is followed by the sound of a race of collective page flipping, while "Go to location 2136" leads to finger taps and caresses on plastic. Factions based on who has the same edition of a book are now replaced by factions of people who have the same reading device.

If historical structures within the book are made abstract, then so are those organizing structures outside of the book. In other words, it's not simply that the book has become the digital book reader, but that the reader now contains the library itself! Public libraries are on the brink of being outmoded; books are either not being acquired or they are moving into deep storage; and physical spaces are being reclaimed as cafes, restaurants, auditoriums, and gift shops. Even the concept of donation is thrown into question: when most public libraries were being initiated a century ago, it was often women's clubs that donated their collections to establish the institution; it is difficult to imagine a corresponding form of cultural sharing of texts within the legal framework of the export. Instead, publishers might enter into a contract directly with the government to allow access to files from computers within the premises of the library building. This fate seems counter-intuitive, considering the potential for distribution latent in the underlying technology, but even more so when compared to the "traveling libraries" at the turn of the twentieth century, which were literally small boxes that brought books to places without libraries (most often, rural communities).

Many scans, in fact, are made from library books, which are identified through a stamp or a sticker somewhere. (It is not difficult to see how the scan is closely related to the photocopy, such that they are now mutually evolving technologies.) Although it circulates digitally, like the export, the scan is rooted in the object and is never complete. In a basic sense, scanning is slow and time-consuming (photocopies were slow and expensive), and it requires that choices are made about what to focus on. A scan of an entire book is rare—really a labour of love and endurance; instead, scanners excerpt from books, pulling out the most interesting, compelling, difficult-to-find, or useful bits. They skip pages. The scan is partial, subjective. You and I will scan the same book in different ways. An analogy: they are not prints from the same negative, but entirely different photographs of the same subject. Our scans are variations, perhaps competing (if we scanned the same pages from the same edition), but, more likely, functioning in parallel.

Completists prefer the export, which has a number of advantages from their perspective: the whole book is usually kept intact as one unit, the file; file sizes are smaller because the files are based more on the text than an image; the file is found by searching (the Internet) as opposed to searching through stacks, bookstores, and attics; it is at least theoretically possible to have every file. Each file is complete and the same everywhere, such that there should be no need for variations. At present, there are important examples of where variations do occur, notably efforts to improve metadata, transcode out of proprietary formats, and to strip DRM restrictions. One imagines an imminent future where variations proliferate based on an additive reading—a reader makes highlights, notations, and marginal arguments and then redistributes the file such that someone's "reading" of a particular text would generate its own public, the logic of the scan infiltrating the export.

About the Author

Sean Dockray is a Los Angeles-based artist. He is a co-director of Telic Arts Exchange and has initiated several collaborative projects including AAAARG.ORG and The Public School. He recently co-organized There is nothing less passive than the act of fleeing, a 13-day seminar at various sites in Berlin organized through The Public School that discussed the promises, pitfalls, and possibilities for extra-institutionality.


... often the starting-point is an idea composed of a group of centrally aroused sensations due to simultaneous excitation of a group ... This would probably in every case be in large part the result of association by contiguity in terms of the older classification, although there might be some part played by the immediate excitation of the separate elements by an external stimulus. Starting from this given mass of central elements, all change comes from the fact that some of the elements disappear and are replaced by others through a second series of associations by contiguity. The parts of the original idea which remain serve as the excitants for the new elements which arise. The nature of the process is exactly like that by which the elements of the first idea were excited, and no new process comes in. These successive associations are thus really in their mechanism but a series of simultaneous associations in which the elements that make up the different ideas are constantly changing, but with some elements that persist from idea to idea. There is thus a constant flux of the ideas, but there is always a part of each idea that persists over into the next and serves to start the mechanism of revival. There is never an entire stoppage in the course of the ideas, never an absolute break in the series, but the second idea is joined to the one that precedes by an identical element in each.


A short time later, this control of urban noise had been implemented almost
everywhere, or at least in the politically best-controlled cities, where repetition
is most advanced.
We see noise reappear, however, in exemplary fashion at certain ritualized
moments: in these instances, the horn emerges as a derivative form of violence
masked by festival. All we have to do is observe how noise proliferates in echo
at such times to get a hint of what the epidemic proliferation of the essential
violence can be like. The noise of car horns on New Year's Eve is, to my mind,
for the drivers an unconscious substitute for Carnival, itself a substitute for the
Dionysian festival preceding the sacrifice. A rare moment, when the hierarchies
are masked behind the windshields and a harmless civil war temporarily breaks
out throughout the city.
Temporarily. For silence and the centralized monopoly on the emission,
audition and surveillance of noise are afterward reimposed. This is an essential
control, because if effective it represses the emergence of a new order and a
challenge to repetition.


Thus, with the ball, we are all possible victims; we all expose ourselves
to this danger and we escape back and forth of "I."
The "I" in the game is a token exchanged. And
this passing, this network of passes, these vicariances of subjects weave
the collection. I am I now, a subject, that is to say, exposed to being
thrown down, exposed to falling, to being placed beneath the compact
mass of the others; then you take the relay, you are substituted for "I"
and become it; later on, it is he who gives it to you, his work done, his
danger finished, his part of the collective constructed. The "we" is made
by the bursts and occultations of the "I." The "we" is made by passing
the "I." By exchanging the "I." And by substitution and vicariance of
the "I."
That immediately appears easy to think about. Everyone carries
his stone, and the wall is built. Everyone carries his "I," and the "we" is
built. This addition is idiotic and resembles a political speech. No.


But then let them say it clearly:

The practice of happiness is subversive when it becomes collective.

Our will for happiness and liberation is their terror, and they react by terrorizing us with prison, when the repression of work, of the patriarchal family, and of sexism is not enough.

But then let them say it clearly:

To conspire means to breathe together.

And that is what we are accused of, they want to prevent us from breathing because we have refused to breathe in isolation, in their asphyxiating places of work, in their individuating familial relationships, in their atomizing houses.

There is a crime I confess I have committed:

It is the attack against the separation of life and desire, against sexism in interindividual relationships, against the reduction of life to the payment of a salary.


Counterpublics

The stronger modification of ... analysis—one in which he has shown little interest, though it is clearly of major significance in the critical analysis of gender and sexuality—is that some publics are defined by their tension with a larger public. Their participants are marked off from persons or citizens in general. Discussion within such a public is understood to contravene the rules obtaining in the world at large, being structured by alternative dispositions or protocols, making different assumptions about what can be said or what goes without saying. This kind of public is, in effect, a counterpublic: it maintains at some level, conscious or not, an awareness of its subordinate status. The sexual cultures of gay men or of lesbians would be one kind of example, but so would camp discourse or the media of women's culture. A counterpublic in this sense is usually related to a subculture, but there are important differences between these concepts. A counterpublic, against the background of the public sphere, enables a horizon of opinion and exchange; its exchanges remain distinct from authority and can have a critical relation to power; its extent is in principle indefinite, because it is not based on a precise demography but mediated by print, theater, diffuse networks of talk, commerce, and ...


The term slang, which is less broad than language variety, is described by ... as a label that is frequently used to denote certain informal or faddish usages of nearly anyone in the speech community. However, slang, while subject to rapid change, is widespread and familiar to a large number of speakers, unlike Polari. The terms jargon and argot perhaps signify more what Polari stands for, as they are associated with group membership and are used to serve as affirmation or solidarity with other members. Both terms refer to "obscure or secret language" or language of a particular occupational group ... While jargon tends to refer to an occupational sociolect, or a vocabulary particular to a field, argot is more concerned with language varieties where speakers wish to conceal either themselves or aspects of their communication from non-members. Although argot is perhaps the most useful term considered so far in relation to Polari, there exists a more developed theory that concentrates on stigmatised groups, and could have been created with Polari specifically in mind: anti-language.

For ..., anti-language was to anti-society what language was to society. An anti-society is a counter-culture, a society within a society, a conscious alternative to society, existing by resisting either passively or by more hostile, destructive means. Anti-languages are generated by anti-societies and in their simplest forms are partially relexicalised languages, consisting of the same grammar but a different vocabulary ... in areas central to the activities of subcultures. Therefore a subculture based around illegal drug use would have words for drugs, the psychological effects of drugs, the police, money and so on. In anti-languages the social values of words and phrases tend to be more emphasised than in mainstream languages.

... found that 41 per cent of the criminals he interviewed gave "the need for secrecy" as an important reason for using an anti-language, while 38 per cent listed "verbal art". However ... in his account of the anti-language or grypserka of Polish prisoners, describes how, for the prisoners, their identity was threatened and the creation of an anti-society provided a means by which an alternative social structure (or reality) could be constructed, becoming the source of a second identity for the prisoners.


Streetwalker theorists cultivate the ability to sustain and create hangouts by hanging out. Hangouts are highly fluid, worldly, nonsanctioned, communicative, occupations of space, contestatory retreats for the passing on of knowledge, for the tactical-strategic fashioning of multivocal sense, of enigmatic vocabularies and gestures, for the development of keen commentaries on structural pressures and gaps, spaces of complex and open-ended recognition. Hangouts are spaces that cannot be kept captive by the private/public split. They are worldly, contestatory concrete spaces within geographies sieged by and in defiance of logics and structures of domination. The streetwalker theorist walks in illegitimate refusal to legitimate oppressive arrangements and logics.

Common


As we apprehend it, the process of instituting communism can only take the form of a collection of acts of communisation, of making common such-and-such space, such-and-such machine, such-and-such knowledge. That is to say, the elaboration of the mode of sharing that attaches to them. Insurrection itself is just an accelerator, a decisive moment in the process.

... is a collection of places, infrastructures, communised means; and the dreams, bodies, murmurs, thoughts, desires that circulate among those places, the use of those means, the sharing of those infrastructures.

The notion of ... responds to the necessity of a minimal formalisation, which makes us accessible as well as allows us to remain invisible. It belongs to the communist way that we explain to ourselves and formulate the basis of our sharing. So that the most recent arrival is, at the very least, the equal of the elder.

Whatever singularity, which wants to appropriate belonging itself, its own being-in-language, and thus rejects all identity and every condition of belonging, is the principal enemy of the State. Wherever these singularities peacefully demonstrate their being in common there will be a Tiananmen, and, sooner or later, the tanks will appear.



Dockray
Interface Access Loss
2013


Interface Access Loss

I want to begin this talk at the end -- by which I mean the end of property, at least according to
the cyber-utopian account of things, where digital file sharing and online communication liberate
culture from corporations and their drive for profit. This is just one of the promised forms of
emancipation -- property, in a sense, was undone. People, on a massive scale, used their
computers and their internet connections to share digitized versions of their objects with each
other, quickly producing a different, common form of ownership. The crisis that this provoked is
well-known -- it could be described in one word: Napster. What is less recognized -- because it is
still very much in process -- is the subsequent undoing of property, of both the private and common
kind. What follows is one story of "the cloud" -- the post-dot-com-bubble techno-super-entity which sucks up property, labor, and free time.

Object, Interface

It's debated whether the growing automation of production leads to global structural
unemployment or not -- Karl Marx wrote that "the self-expansion of capital by means of machinery
is thenceforward directly proportional to the number of the workpeople, whose means of
livelihood have been destroyed by that machinery" - but the promise is, of course, that when
robots do the work, we humans are free to be creative. Karl Kautsky predicted that increasing
automation would actually lead, not to a mass surplus population or widespread creativity, but
something much more mundane: the growth of clerks and bookkeepers, and the expansion of
unproductive sectors like "the banking system, the credit system, insurance empires and
advertising."

Marx was analyzing the number of people employed by some of the new industries in the middle
of the 19th century: "gas-works, telegraphy, photography, steam navigation, and railways." The
facts were that these industries were incredibly important, expansive and growing, highly
mechanized.. and employed a very small number of people. It is difficult not to read his study of
these technologies of connection and communication against the background of our present
moment, in which the rise of the Internet has been accompanied by the deindustrialization of
cities, increased migrant and mobile labor, and jobs made obsolete by computation.

There are obvious examples of the impact of computation on the workplace: at factories and
distribution centers, robots engineered with computer-vision can replace a handful of workers,
with a savings of millions of dollars per robot over the life of the system. And there are less
apparent examples as well, like algorithms determining when and where to hire people and for
how long, according to fluctuating conditions.
Both examples have parallels within computer programming, namely reuse and garbage
collection. Code reuse refers to the practice of writing software in such a way that the code can be
used again later, in another program, to perform the same task. It is considered wasteful to give the
same time, attention, and energy to a function twice, because the development environment is not an
assembly line -- a programmer shouldn't repeat. Such repetition then gives way to copy-and-pasting (or merely calling). The analogy here is to the robot, to the replacement of human labor
with technology.

Now, when a program is in the midst of being executed, the computer's memory fills with data -but some of that is obsolete, no longer necessary for that program to run. If left alone, the memory
would become clogged, the program would crash, the computer might crash. It is the role of the
garbage collector to free up memory, deleting what is no longer in use. And here, I'm making the
analogy with flexible labor, workers being made redundant, and so on.
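(An illustrative aside: the two practices just described, in a minimal Python sketch; the function and data are invented.)

    import gc

    # Code reuse: the routine is written once and merely called again
    # wherever the same task recurs -- the analogy to the robot.
    def normalize(title: str) -> str:
        """Strip whitespace and lower-case a book title."""
        return title.strip().lower()

    titles = ["  The Scan ", "THE EXPORT"]
    cleaned = [normalize(t) for t in titles]  # calling, not rewriting

    # Garbage collection: once nothing references the list any longer,
    # its memory can be reclaimed -- the analogy to labor made redundant.
    cleaned = None
    found = gc.collect()  # force a collection pass; returns a count
    print(f"unreachable objects found this pass: {found}")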

In Object-Oriented Programming, a programmer designs the software that she is writing around
“objects,” where each object is conceptually divided into “public” and “private” parts. The public
parts are accessible to other objects, but the private ones are hidden to the world outside the
boundaries of that object. It's a “black box” - a thing that can be known through its inputs and
outputs - even in total ignorance of its internal mechanisms. What difference does it make if the
code is written in one way versus another.. if it behaves the same? As William James wrote, "If no
practical difference whatever can be traced, then the alternatives mean practically the same thing,
and all dispute is idle.”

By merely having a public interface, an object is already a social entity. It makes no sense to even
provide access to the outside if there are no potential objects with which to interact! So to

understand the object-oriented program, we must scale up - not by increasing the size or
complexity of the object, but instead by increasing the number and types of objects such that their
relations become more dense. The result is an intricate machine with an on and an off state, rather
than a beginning and an end. Its parts are interchangeable -- provided that they reliably produce
the same behavior, the same inputs and outputs. Furthermore, this machine can be modified:
objects can be added and removed, changing but not destroying the machine; and it might be,
using Gerald Raunig’s appropriate term, “concatenated” with other machines.

Inevitably, this paradigm for describing the relationship between software objects spread outwards,
subsuming more of the universe outside of the immediate code. External programs, powerful
computers, banking institutions, people, and satellites have all been “encapsulated” and
“abstracted” into objects with inputs and outputs. Is this a conceptual reduction of the richness
and complexity of reality? Yes, but only partially. It is also a real description of how people,
institutions, software, and things are being brought into relationship with one another according to
the demands of networked computation.. and the expanding field of objects are exactly those
entities integrated into such a network.

Consider a simple example of decentralized file-sharing: its diagram might represent an object-oriented piece of software, but here each object is a person-computer, shown in potential relation
to every other person-computer. Files might be sent or received at any point in this machine,
which seems particularly oriented towards circulation and movement. Much remains private, but a
collection of files from every person is made public and opened up to the network. Taken as a
whole, the entire collection of all files -- which on the one hand exceeds the storage capacity of
any one person's technical hardware -- is on the other hand entirely available to every person-computer. If the files were books.. then this collective collection would be a public library.
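(An illustrative aside: the arithmetic of such a collective collection, in a minimal Python sketch; the peers and file names are invented.)

    # Each person-computer makes part of its collection public; the
    # "library" is the union, held in its entirety by no single node.
    peers = {
        "peer_a": {"adorno.pdf", "serres.pdf"},
        "peer_b": {"serres.pdf", "attali.pdf"},
        "peer_c": {"warner.pdf"},
    }

    library = set().union(*peers.values())
    print(len(library), "titles available collectively")

    for title in sorted(library):
        holders = [name for name, files in peers.items() if title in files]
        print(title, "held by", holders)  # any holder can serve the file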

In order for a system like this to work, for the inputs and the outputs to actually engage with one
another to produce action or transmit data, there needs to be something in place already to enable
meaningful couplings. Before there is any interaction or any relationship, there must be some
common ground in place that allows heterogenous objects to ‘talk to each other’ (to use a phrase
from the business casual language of the Californian Ideology). The term used for such a common
ground - especially on the Internet - is platform, a word for that which enables and anticipates

future action without directly producing it. A platform provides tools and resources to the objects
that run “on top” of the platform so that those objects don't need to have their own tools and
resources. In this sense, the platform offers itself as a way for objects to externalize (and reuse)
labor. Communication between objects is one of the most significant actions that a platform can
provide, but it requires that the objects conform some amount of their inputs and outputs to the
specifications dictated by the platform.

But haven’t I only introduced another coupling, instead of between two objects, this time between
the object and the platform? What I'm talking about with "couplings" is the meeting point between
things - in other words, an “interface.” In the terms of OOP, the interface is an abstraction that
defines what kinds of interaction are possible with an object. It maps out the public face of the
object in a way that is legible and accessible to other objects. Similarly, computer interfaces like
screens and keyboards are designed to meet with human interfaces like fingers and eyes, allowing
for a specific form of interaction between person and machine. Any coupling between objects
passes through some interface and every interface obscures as much as it reveals - it establishes
the boundary between what is public and what is private, what is visible and what is not. The
dominant aesthetic values of user interface design actually privilege such concealment as “good
design,” appealing to principles of simplicity, cleanliness, and clarity.
Cloud, Access

One practical outcome of this has been that there can be tectonic shifts behind the interface -- where entire systems are restructured or revolutionized -- without any interruption, as long as the
interface itself remains essentially unchanged. In Pragmatism’s terms, a successful interface keeps
any difference (in back) from making a difference (in front). Using books again as an example: for
consumers to become accustomed to the initial discomfort of purchasing a product online instead
of from a shop, the interface needs to make it so that “buying a book” is something that could be
interchangeably accomplished either by a traditional bookstore or the online "marketplace"
equivalent. But behind the interface is Amazon, which through low prices and wide selection is
the most visible platform for buying books and uses that position to push retailers and publishers
both to, at best, the bare minimum of profitability.

In addition to selling things to people and collecting data about its users (what they look at and
what they buy) to personalize product recommendations, Amazon has also made an effort to be a
platform for the technical and logistical parts of other retailers. Ultimately collecting data from
them as well, Amazon realizes a competitive advantage from having a comprehensive, up-to-the-minute perspective on market trends and inventories. This volume of data is so vast and valuable
that warehouses packed with computers are constructed to store it, protect it, and make it readily
available to algorithms. Data centers, such as these, organize how commodities circulate (they run
business applications, store data about retail, manage fulfillment) but also - increasingly - they
hold the commodity itself - for example, the book. Digital book sales started the millennium very
slowly but by 2010 had overtaken hardcover sales.

Amazon’s store of digital books (or Apple’s or Google’s, for that matter) is a distorted reflection of
the collection circulating within the file-sharing network, displaced from personal computers to
corporate data centers. Here are two regimes of digital property: the swarm and the cloud. For
swarms (a reference to swarm downloading, where a single file can be downloaded in parallel
from multiple sources) property is held in common between peers; in the cloud, however, property is
positioned out of reach, accessible only through an interface that has absorbed legal
and business requirements.

It's just half of the story, however, to associate the cloud with mammoth data centers; the other
half is to be found in our hands and laps. Thin computing, including tablets and e-readers, iPads
and Kindles, and mobile phones have co-evolved with data centers, offering powerful, lightweight
computing precisely because so much processing and storage has been externalized.

In this technical configuration of the cloud, the thin computer and the fat data center meet through
an interface, inevitably clean and simple, that manages access to the remote resources. Typically,
a person needs to agree to certain “terms of service,” have a unique, measurable account, and
provide payment information; in return, access is granted. This access is not ownership in the
conventional sense of a book, or even the digital sense of a file, but rather a license that gives the
person a “non-exclusive right to keep a permanent copy… solely for your personal and non-commercial use,” contradicting the First Sale Doctrine, which gives the “owner” the right to sell,
lease, or rent their copy to anyone they choose at any price they choose. The doctrine,

established within America's legal system in 1908, separated the rights of reproduction from
distribution, as a way to "exhaust" the copyright holder's control over the commodities that people
purchased.. legitimizing institutions like used book stores and public libraries. Computer software
famously attempted to bypass the First Sale Doctrine with its "shrink wrap" licenses that restricted
the rights of the buyer once she broke through the plastic packaging to open the product. This
practice has only evolved and become ubiquitous over the last three decades as software began
being distributed digitally through networks rather than as physical objects in stores. Such
contradictions are symptoms of the shift in property regimes, or what Jeremy Rifkin called “the age
of access.” He writes that “property continues to exist but is far less likely to be exchanged in
markets. Instead, suppliers hold on to property in the new economy and lease, rent, or charge an
admission fee, subscription, or membership dues for its short-term use.”

Thinking again of books, Rifkin’s description gives the image of a paid library emerging as the
synthesis of the public library and the marketplace for commodity exchange. Considering how, on
the one side, traditional public libraries are having their collections deaccessioned, hours of
operation cut, and are in some cases being closed down entirely, and on the other side, the
traditional publishing industry finds its stores, books, and profits dematerialized, the image is
perhaps appropriate. Server racks, in photographs inside data centers, bear an eerie resemblance
to library stacks -- while e-readers are consciously designed to look and feel something like a
book. Yet, when one peers down into the screen of the device, one sees both the book - and the
library.

Like a Facebook account, which must uniquely correspond to a real person, the e-reader is an
individualizing device. It is the object that establishes trusted access with books stored in the cloud
and ensures that each and every person purchases their own rights to read each book. The only
transfer that is allowed is of the device itself, which is the thing that a person actually does own.
But even then, such an act must be reported back to the cloud: the hardware needs to be deregistered and then re-registered with credit card and authentication details about the new owner.

This is no library - or it's only a library in the most impoverished sense of the word. It is a new
enclosure, and it is a familiar story: things in the world (from letters, to photographs, to albums, to
books) are digitized (as emails, JPEGs, MP3s, and PDFs) and subsequently migrate to a remote

location or service (Gmail, Facebook, iTunes, Kindle Store). The middle phase is the biggest
disruption, when the interface does the poorest job concealing the material transformations taking
place, when the work involved in creating those transformations is most apparent, often because
the person themselves is deeply involved in the process (of ripping vinyl, for instance). In the third
phase, the user interface becomes easier, more “frictionless,” and what appears to be just another
application or folder on one’s computer is an engorged, property-and-energy-hungry warehouse a
thousand miles away.

Capture, Loss

Intellectual property's enclosure is easy enough to imagine in warehouses of remote, secure hard
drives. But the cloud internalizes processing as well as storage, capturing the new forms of cooperation and collaboration characterizing the new economy and its immaterial labor. Social
relations are transmuted into database relations on the "social web," which absorbs self-organization as well. Because of this, the cloud impacts as strongly on the production of
publications as on their consumption, in the traditional sense.

Storage, applications, and services offered in the cloud are marketed for consumption by authors
and publishers alike. Document editing, project management, and accounting are peeled slowly
away from the office staff and personal computers into the data centers; interfaces are established
into various publication channels from print on demand to digital book platforms. In the fully
realized vision of cloud publishing, the entire technical and logistical apparatus is externalized,
leaving only the human labor.. and their thin devices remaining. Little distinguishes the author-object from the editor-object from the reader-object. All of them.. maintain their position in the
network by paying for lightweight computers and their updates, cloud services, and broadband
internet connections.
On the production side of the book, the promise of the cloud is a recovery of the profits “lost” to
file-sharing, as all that exchange is disciplined, standardized and measured. Consumers are finally
promised the access to the history of human knowledge that they had already improvised by
themselves, but now without the omnipresent threat of legal prosecution. One has the sneaking
suspicion though.. that such a compromise is as hollow.. as the promises to a desperate city of the

jobs that will be created in a newly constructed data center -- and that pitting "food on the table"
against “access to knowledge” is both a distraction from and a legitimation of the forms of power
emerging in the cloud. It's a distraction because it's by policing access to knowledge that the
middle-man platform can extract value from publication, both on the writing and reading sides of
the book; and it's a legitimation because the platform poses itself as the only entity that can resolve
the contradiction between the two sides.

When the platform recedes behind the interface, these two sides are the most visible
antagonism -- in a tug-of-war with each other -- yet neither the "producers" nor the "consumers" of
publications are becoming more wealthy, or working less to survive. If we turn the picture
sideways, however, a new contradiction emerges, between the indebted, living labor - of authors,
editors, translators, and readers - on one side, and on the other.. data centers, semiconductors,
mobile technology, expropriated software, power companies, and intellectual property.
The talk in the data center industry of the “industrialization” of the cloud refers to the scientific
approach to improving design, efficiency, and performance. But the term also recalls the basic
narrative of the Industrial Revolution: the movement from home-based manufacturing by hand to
large-scale production in factories. As desktop computers pass into obsolescence, we shift from a
networked, but small-scale, relationship to computation (think of “home publishing”) to a
reorganized form of production that puts the accumulated energy of millions to work through
these cloud companies and their modernized data centers.

What kind of buildings are these blank superstructures? Factories for the 21st century? An engineer
named Ken Patchett described the Facebook data center that way in a television interview, “This is
a factory. It’s just a different kind of factory than you might be used to.” Those factories that we’re
“used to,” continue to exist (at Foxconn, for instance) producing the infrastructure, under
recognizably exploitative conditions, for a “different kind of factory,” - a factory that extends far
beyond the walls of the data center.

But the idea of the factory is only part of the picture - this building is also a mine.. and the
dispersed workforce devote most of their waking hours to mining-in-reverse, packing it full of data,
under the expectation that someone - soon - will figure out how to pull out something valuable.

Both metaphors rely on the image of a mass of workers (dispersed as it may be) and leave aside a darker
and more difficult possibility: the data center is like the hydroelectric plant, damming up property,
sociality, creativity and knowledge, while engineers and financiers look for the algorithms that will
release the accumulated cultural and social resources on demand, as profit.

This returns us to the interface, site of the struggles over the management and control of access to
property and infrastructure. Previously, these struggles were situated within the computer-object
and the implied freedom provided by its computation, storage, and possibilities for connection
with others. Now, however, the eviscerated device is more interface than object, and it is exactly
here at the interface that the new technological enclosures have taken form (for example, see
Apple's iOS products, Google's search box, and Amazon's "marketplace"). Control over the
interface is guaranteed by control over the entire techno-business stack: the distributed hardware
devices, centralized data centers, and the software that mediates the space between. Every major
technology corporation must now operate on all levels to protect against any loss.

There is a centripetal force to the cloud and this essay has been written in its irresistible pull. In
spite of the sheer mass of capital that is organized to produce this gravity and the seeming
insurmountability of it all, there is no chance that the system will absolutely manage and control
the noise within it. Riots break out on the factory floor; algorithmic trading wreaks havoc on the
stock market in an instant; data centers go offline; 100 million Facebook accounts are discovered
to be fake; the list will go on. These cracks in the interface don't point to any possible future, or
any desirable one, but they do draw attention to openings that might circumvent the logic of
access.

"What happens from there is another question." This is where I left things off in the text when I
finished it a year ago. It's a disappointing ending: we just have to invent ways of occupying the
destruction, violence and collapse that emerge out of economic inequality, global warming,
dismantled social welfare, and so on. And there's not much that's happened since then to make us
very optimistic - maybe here I only have to mention the NSA. But as I began with an ending, I
really should end at a beginning.
I think we were obliged to adopt a negative, critical position in response to the cyber-utopianism of
the last twenty years or so, whether in its naive or cynical forms. We had to identify and theorize the
darker side of things. But it can become habitual, and when the dark side materializes, as it has
over the past few years - so that everyone knows the truth - then the obligation flips around,
doesn’t it? To break out of habitual criticism as the tacit, defeated acceptance of what is. But what
could be? Where do we find new political imaginaries? Not to ask what the bright side is, or what
we can do to cope, but what are the genuinely emancipatory possibilities that are somehow still
latent, buried under the present - or emerging within its ruptures? I can’t make it all
the way to a happy ending, to a happy beginning, but at least it’s a beginning and not the end.

Dockray
Openings and Closings
2013


Militarization of campuses

Early on a recent November morning, 400 Military Police with tear gas and helicopters arrested 72 people, almost all of them students of the University of Sao Paulo. They were occupying the Rectory in response to another arrest - of three fellow students - which was itself a consequence of a contract the university administration had signed with the MP, an agreement inviting the police back onto campus after decades in which their presence was essentially prohibited. University “autonomy” had been established by Article 207 of the 1988 Brazilian Constitution to close a chapter on Brazil’s military rule, during which the Military Police enforced a series of decrees aimed at eliminating opposition to the dictatorship, including the local articulation of the 1960s student movement. The 1964 Suplicy de Lacerda law forbade student organizations from engaging in politics; in 1968, Institutional Act No. 5 did away with the writ of habeas corpus; and Decree 477 a year later gave university and education authorities the right to expel students and professors involved in protests.

A similar provision of “university asylum” restricted the access of police onto Greece’s campuses for 35 years. Like Brazil’s, this measure was adopted after the fall of a military regime that had violently crushed student uprisings and, like Brazil’s, the prohibition on police incursions into campuses collapsed in 2011, when Greek politicians abolished the law in order to more effectively implement austerity measures imposed by European financial interests.

Ten days after the raid at the University of Sao Paulo, the chancellor of the University of California, Davis ordered police to clear a handful of tents from a campus quadrangle. Because the peaceful demonstration was planned in solidarity with other actions on UC campuses drawing inspiration from the “occupy movement,” police swiftly and forcefully dismantled the encampment. Students were pepper-sprayed at close range by a well-armored policeman wearing little concern. Such examples of the militarization of university campuses have become more common, especially in the context of growing social unrest. In California, they demonstrate the continued influence of Ronald Reagan, not simply for implementing neoliberal policies that have slashed public programs, produced a trillion dollars in US student loan debt and contracted the middle class, but also for campaigning for governor of California in 1966 on a promise to crack down on campus activists, forming partnerships with conservative school officials and the FBI in order to “clean out left-wing elements” from the University of California. Linda Katehi - that UC Davis chancellor - was also an author of a 2011 report that recommended to the Greek government the termination of university asylum. The report noted that “the politicization of students… represents a beyond-reasonable involvement in the political process,” continuing on to state that “Greek University campuses are not secure” because of “elements that seek political instability.”

Mobilization of books

After the Military Police operation in Sao Paulo, the rector appeared on television to accuse the students of preparing Molotov cocktails; independent media, on the other hand, described the students carrying left-wing books. Even more recently at UC Riverside, a contingent faculty member holding a cardboard shield painted to look like the cover of Figures of the Thinkable, by Cornelius Castoriadis, was dragged across the pavement by police and charged with a felony, “assault with a deadly weapon.” In Berkeley, students covered a plaza in books, open and facedown, after their tent occupation was broken up. Many of the crests and seals of universities feature a book, no doubt drawing on the book as both a symbol of knowledge and an actual repository for it. And by extension, books have been mobilized at various moments in recent occupations and protests to make material reference to education and the metastasizing knowledge economy. No doubt the use of radical theory literalizes an attempt to bridge theory and practice, while evoking a utopian imaginary, or simply taking Deleuze’s words at face value: “There is no need to fear or hope, but only to look for new weapons.” Against a background of library closures and cutbacks, as well as the concomitant demands for the humanities and social sciences to justify themselves in economic terms, it is as if books have come into view, desperately, like a rat in daylight. There is almost nowhere for them to go - the space in the remaining libraries is being given over to audio-visual material, computer terminals, public programming, and cafes, while publishers are shifting to digital distribution models that are designed to circumvent libraries entirely. So books have come out onto the street.

Militarization of books

When knowledge does escape the jurisdiction of both the state and the market, it is often at the hands of students (both the officially enrolled and the autodidacts). For example, returning to Brazil, the average cost of required reading material for a freshman is more than six months of minimum-wage pay, with up to half of the texts either not available in Brazil or simply out of print. Unsurprisingly, a system of copy shops provides on-demand chapters, course readers, and other texts; but the Brazilian Association of Reprographic Rights has been particularly hostile to the practice. One year before Sao Paulo, at the Federal University in Rio de Janeiro, seven armed police officers in three cars (accompanied by the Chief of the Delegation for the Repression of Immaterial Property Crimes) raided the Xerox room of the School of Social Work, arresting the operator of the machines and confiscating all illegitimate copies. Similar shows of force have proliferated ever since Brazilian copyright law was amended in 1998 to eliminate the exceptions that had previously afforded the right to copy books for educational purposes. This act of reproduction, felt by students and faculty to be inextricably linked to university autonomy and the right to education, coalesced into a movement by 2006: Copiar Livro é Direito! (Copying Books is a Right!) Illicit copies, when confiscated, are usually destroyed. In this sense, it is worthwhile to understand such an event as a contemporary form of censorship - certainly not out of any ideological disapproval of the publication’s actual content, but rather out of an objection to the manner of its circulation.
Many books banned (and burned) during the dictatorship were obviously a matter of content - those that could “destroy society’s moral base” might “put into practice a subversive plan that places national security in danger.” Even if explicit sexuality, crime, and drug use within literature are generally tolerated today (not everywhere, of course), the rhetoric contained within the 1970 decree that instituted censorship is still alive in matters of circulation. During negotiations for the multinational Anti-Counterfeiting Trade Agreement (ACTA), the Bush and Obama administrations withheld information from the public, stating that it was “properly classified in the interest of national security.” Certain parts of the negotiations became known through Wikileaks, and ACTA was revealed as a vehicle for exporting American intellectual property enforcement. Protecting intellectual property is essential, politicians claim, to maintaining the American “way of life,” although today this has less to do with the moral base of the country than with its economic base - workers and corporations. America, Obama said in reference to ACTA, would use the “full arsenal of tools available to crack down on practices that blatantly harm our businesses.” Universities, those institutions for the production of knowledge, are deeply embedded in struggles over intellectual property, and moreover deployed as instruments of national security. The National Security Higher Education Advisory Board, which includes (surprise) Linda Katehi, brings together select university presidents and chancellors with the FBI, CIA, and other agencies several times per year. Developed to address intellectual property at the level of cyber-theft (preventing sensitive research from falling into the hands of terrorists), the congenial relationship between university administrations and the FBI raises the spectre of US government spying on student activists in the 1960s and early 1970s.

Financialization of publishing

To the side of such partnerships with the state, the forces of financialization have been absorbed into universities, again with the welcome of administrations. Towering US student loan debt is one very clear index; another, less apparent but growing, is the highly controlled circulation of academic publishing, especially journals and textbooks. Although apparently marginal (or niche) in topic, the vertical structure of the corporations behind most journals is surprisingly large. The Dutch company Elsevier, for example, publishes 250,000 articles per year, and earned $1.6 billion (a profit margin of 36%) in 2010. Texts are written, peer reviewed, and edited on a voluntary basis (labor costs are usually externalized, for example to the state or university). They are sold back to university libraries at extraordinarily high prices, and the libraries are obliged to pay because their constituency relies on access to research as a material for further research. When the website library.nu was taken down recently for providing access to over 400,000 digital texts, it was not the major “commercial” publishers that were behind the action, but a coalition of 17 educational publishers, including Elsevier, Springer, Taylor and Francis, the Cambridge University Press, and Macmillan. On the heels of FBI raids on prominent torrent sites, and with a similar level of coordination, this publishers’ alliance hired private investigators to deploy software “specifically developed by IT experts” to secure evidence.
In order to expand their profitability, corporate academic publishers exploit and reinforce entrenched hierarchies within academia. Compensation comes in the form of CV lines and disciplinary visibility; and it is that very validation that individuals need to find and secure employment at research institutions. “Publish or die” is nothing new, but as employment grows increasingly temporary and managerial systems for assessing and quantifying productivity proliferate, it has grown more ominous. Beyond intensifying internal hierarchies, this publishing situation has widened the gap between the university and the rest of the world (even as it subjects the exchange of knowledge to the logic of the stock market); publications are meant for current students and faculty only, and their legitimacy is regularly checked against ID cards, passwords, and other credentials. Those without such legitimacy find themselves on the wrong side of a paywall. Here, we discover the quotidian dimension of the militarization of the university, in the inconveniences of proximity cards, accounts, and login screens. If our contemporary forms of censorship are focused on the manner of a thing’s circulation, then systems of control will be oriented towards policing access. Reprographic machines and file-sharing software are obvious targets but, with the advent of tablet computers (such as Apple’s iPad, marketed heavily towards students), so are actual textbooks. The practice of reselling textbooks, a yearly student money-saving ritual that is perfectly legal under the first-sale doctrine, has long represented lost revenue to publishers. So many, including Macmillan and McGraw-Hill, have moved strongly into the e-textbook market, which allows them to shut the door on the secondary market, because students are no longer buying the things themselves, but only temporary access to the things.

Opening of Access

Open Access publishing articulates an alternative, in order to circumvent the entire parasitical apparatus and ultimately deliver texts to readers and researchers for free. In large part, its success depends on whether or not researchers choose to publish their work with OA journals or with pay-for-access ones. If many have chosen the latter, it is because of factors like reputation and the interrelation between publishing and departmental structures of advancement and power. Interestingly, it is the institutions with the strongest reputations that are also pushing for more ‘openness.’ Princeton University formally adopted an open-access policy in 2011 (the sixth Ivy League school to do so) in order to discourage the “fruits of [their] scholarship” from languishing “artificially behind a pay wall.” MIT has long promoted openness of its materials, from OpenCourseWare (2002) to its own Open Access policy (2009), to a new online learning infrastructure, MITx. Why is it that elite, private schools are so motivated to open themselves to the world? Would this not dilute their status? The answer is obvious: opening up their research gives their faculty more exposure; it produces a positive image of an institution that is generous and genuinely interested in generating knowledge; and ultimately it builds the university’s brand. They are not giving away degrees, and certainly not research positions - rather, they are mobilizing their intellectual capital to attract publicity, students, donations, and contracts. We can guess what ‘opening up the university’ means for the institution and the faculty, but what about for the students, including those who may not have the proper title, those learners not enrolled? MITx is an adaptation of the common practice of distance learning, which has a century-and-a-half-long history, beginning with the University of London’s External Programme. There are populist overtones (Charles Dickens called the External Programme the “People’s University”) to distance learning that coincide with the promises of public education more generally, namely making higher education available to those traditionally without means for it. History has provided us with less than desirable motivations for distance education - the Free University of Iran was said to have been desirable to the Shah’s regime because the students would never gather - but current western programs are influenced by other concerns. Beyond publicity and social conscience, many of these online learning programs are driven by economics. At the University of California, the Board of Regents launched a pilot program as part of a plan to close a 4.7 billion dollar budget gap, with the projection that such a program could add 25,000 students at 1.1% of the normal cost. Aside from MITx’s free component (it brings in revenue as well, if people want to actually get “credit”), most of these distance-learning offerings are immaterial commodities. UCLA Extension is developing curricula and courses for Encore Career Institute, a for-profit venture bringing together Hollywood, Silicon Valley, and a chairwoman of the UC Regents, whose goal is to “deliver some of the fantastic intellectual property that UC has.” And even MITx is not without its restricted-access bedfellows; its pilot online course requires a textbook, which is owned by Elsevier. Students are here truly conceived of as consumers of a product, and education has become a subgenre of publication.
The classroom and library are seen as inefficient mechanisms for delivering education to the masses or, for that matter, for delivering the masses to creditors, advertisers, and content providers. Clearly, classrooms will continue to exist, especially in the centers for the reproduction of the elite, such as those proponents of Open Access mentioned previously. But everywhere else, post-classroom (and post-library) education is exploding. Students do not gather here and they certainly don’t sit in or take over buildings; they don’t argue outside during a break, over a cigarette, nor do they pass books between themselves. I am not, however, a fatalist on this point - these may be networks of access managed by capital and policed by the state, but new collective forms and subjectivities are already emerging, exploiting or evading the logic of accounts, passwords, and access. They find each other across borders and across disciplines.

Negating Access

After the capitalist restructuring of the 1970s and 1980s, how do we understand the scenes of people clashing with police formations; the revival of campus occupations as a tactic; the disappearance of university autonomy; the withdrawal of learning into the disciplined walls of the academy - in short, how do we understand a situation that looks so much like the one that came before? One way - the theme of this essay - would be through the notion of “access.” If access has moved from a question of rights (who has access?) to a matter of legality and economics (what are the terms and price of access, for a particular person?), then over the past few decades we have witnessed access being turned inside out, in a manner reminiscent of Marx’s “double freedom” of the proletariat: having access to academic resources while not being able to access each other. Library cards, passwords, and keys are assigned to individuals; and so are contracts, degrees, loans, and grades. Students (and faculty) are individuated at every turn, perhaps nowhere more clearly than in online learning, where each body collapses into its own profile. Access is not so much a passage into a space as it is an apparatus enclosing the individual. (In this sense, Open Access is one configuration of this apparatus.) Two projects that I have worked within over the past seven years - a file-sharing website for texts, AAAARG.ORG, and a proposal-based learning platform, The Public School - are ongoing efforts to escape this regime of access in order to create some room to actually understand all of these conditions, their connection to larger processes, and the possibilities for future action. The Public School has no curriculum, no degrees, and nothing to do with the public school system. It is simply a framework within which people propose ideas for things they want to learn about with others; a rotating committee might organize a proposal into an actual class, bringing together a group of strangers and friends who find a way to teach each other. AAAARG.ORG is a collective library comprised of scans, excerpts, and exports that members of its public have found important enough to share with each other. Both are premised, in part, on the proposition that making these kinds of spaces is an active, contingent process requiring the coordination, invention, and self-reflection of many people over time. The creation of these kinds of spaces involves a negation of access, often bringing conflict to the surface. What this means is that the spaces are not territories on which pedagogy happens, but rather that the collective activity of making and defending these spaces is pedagogical.

In the militarized raids of campus occupations and knowledge-sharing assemblages, the state is acting to both produce and defend a structure that generates wealth from the process of education. While there are occasional clashes over content, almost any content that circulates through this structure is acceptable, and the very failure to circulate (to attract grant funding, attention, or feedback) becomes the operative, soft form of suppression. A resistant pedagogy should look for openings - and if they don’t exist, break them open - where space grows from a refusal of access and circulation, of borders and disciplines; from an improvised diffusion that generates its own laws and dynamics. But a cautionary note: any time new social relations are born out of such an opening in space and time, a confrontation with power is not far behind.

Dockray, Forster & Public Office
README.md
2018


## Introduction

How might we ensure the survival and availability of community libraries,
individual collections and other precarious archives? If these libraries,
archives and collections are unwanted by official institutions or, worse,
buried beneath good intentions and bureaucracy, then what tools and platforms
and institutions might we develop instead?

While trying to both formulate and respond to these questions, we began making
Dat Library and HyperReadings:

**Dat Library** distributes libraries across many computers so that many
people can provide disk space and bandwidth, sharing in the labour and
responsibility of the archival infrastructure.

**HyperReadings** implements ‘reading lists’ or a structured set of pointers
(a list, a syllabus, a bibliography, etc.) into one or more libraries,
_activating_ the archives.

## Installation

The easiest way to get started is to install [Dat Library as a desktop
app](http://dat-dat-dat-library.hashbase.io), but there is also a programme
called ‘[datcat](http://github.com/sdockray/dat-cardcat)’, which can be run on
the command line or included in other NodeJS projects.

## Accidents of the Archive

The 1996 UNESCO publication [Lost Memory: Libraries and Archives Destroyed in
the Twentieth Century](http://www.stephenmclaughlin.net/ph-
library/texts/UNESCO%201996%20-%20Lost%20Memory_%20Libraries%20and%20Archives%20Destroyed%20in%20the%20Twentieth%20Century.pdf)
makes the fragility of historical repositories startlingly clear. “[A]cidified
paper that crumbles to dust, leather, parchment, film and magnetic media
attacked by light, heat, humidity or dust” all assault archives. “Floods,
fires, hurricanes, storms, earthquakes” and, of course, “acts of war,
bombardment and fire, whether deliberate or accidental” wiped out significant
portions of many hundreds of major research libraries worldwide. When
expanding the scope to consider public, private, and community libraries, that
number becomes uncountable.

Published during the early days of the World Wide Web, the report acknowledges
the emerging role of digitization (“online databases, CD-ROM etc.”), but today
we might reflect on the last twenty years, which have also introduced new
forms of loss.

Digital archives and libraries are subject to a number of potential hazards:
technical accidents like disk failures, accidental deletions, misplaced data
and imperfect data migrations, as well as political-economic accidents like
defunding of the hosting institution, deaccessioning parts of the collection
and sudden restrictions of access rights. Immediately after library.nu was
shut down on the grounds of copyright infringement in 2012, [Lawrence Liang
wrote](https://kafila.online/2012/02/19/library-nu-r-i-p/) of feeling “first
and foremost a visceral experience of loss.”

Whatever its legal status, the abrupt absence of a collection of 400,000 books
appears to follow a particularly contemporary pattern. In 2008, Aaron Swartz
moved millions of US federal court documents out from behind a paywall,
resulting in a trial and an FBI investigation. Three years later he was
arrested and indicted for a similar gesture, systematically downloading
academic journal articles from JSTOR. That year, Kazakhstani scientist
Alexandra Elbakyan began [Sci-Hub](https://en.wikipedia.org/wiki/Sci-Hub) in
response to scientific journal articles that were prohibitively expensive for
scholars based outside of Western academic institutions. (See “When everyone
is librarian, library is everywhere” for further analysis and an alternative
approach to the same issues.) The repository, which grew to more than 60
million papers, was sued in 2015 by Elsevier for $15 million, resulting in a
permanent injunction. Library Genesis, another library of comparable scale,
finds itself in a similar legal predicament.

Arguably one of the largest digital archives of the “avant-garde” (loosely
defined), UbuWeb is transparent about this fragility. In 2011, its founder
[Kenneth Goldsmith wrote](http://www.ubu.com/resources/): “by the time you
read this, UbuWeb may be gone. […] Never meant to be a permanent archive, Ubu
could vanish for any number of reasons: our ISP pulls the plug, our university
support dries up, or we simply grow tired of it.” Even the banality of
exhaustion is a real risk to these libraries.

The simple fact is that some of these libraries are among the largest in the
world yet are subject to sudden disappearance. We can only begin to guess at
what the contours of “Lost Memory: Libraries and Archives Destroyed in the
Twenty-First Century” will be when it is written ninety years from now.

## Non-profit, non-state archives

Cultural and social movements have produced histories which are only partly
represented in state libraries and archives. Often they are deemed too small
or insignificant or, in some cases, dangerous. Most frequently, they are not
deemed to be anything at all — they are simply neglected. While the market,
eager for new resources to exploit, might occasionally fill in the gaps, it is
ultimately motivated by profit and not by responsibility to communities or
archives. (We should not forget the moment [Amazon silently erased legally
purchased copies of George Orwell’s
1984](http://www.nytimes.com/2009/07/18/technology/companies/18amazon.html)
from readers’ Kindle devices because of a change in the commercial agreement
with the publisher.)

So, what happens to these minor libraries? They are innumerable, but for the
sake of illustration let’s say that each could be represented by a single
book. Gathered together, these books would form a great library (in terms of
both importance and scale). But to extend the metaphor, the current reality
could be pictured as these books flying off their shelves to the furthest
reaches of the world, their covers flinging open and the pages themselves
scattering into bookshelves and basements, into the caring hands of relatives
or small institutions devoted to passing these words on to future generations.

While the massive digital archives listed above (library.nu, Library Genesis,
Sci-Hub, etc.) could play the role of the library of libraries, they tend to
be defined more as sites for [biblioleaks](https://www.jmir.org/2014/4/e112/).
Furthermore, given the vulnerability of these archives, we ought to look for
alternative approaches that do not rule out using their resources, but which
also do not _depend_ on them.

Dat Library takes the concept of “a library of libraries” not to manifest it
in a single, universal library, but to realise it progressively and partially
with different individuals, groups and institutions.

## Archival properties

So far, the emphasis of this README has been on _durability_, and the
“accidents of the archive” have been instances of destruction and loss. The
persistence of an archive is, however, no guarantee of its _accessibility_, a
common reality in digital libraries where access management is ubiquitous.
Official institutions police access to their archives vigilantly for the
ostensible purpose of preservation, but ultimately create a rarefied
relationship between the archives and their publics. Disregarding this
precious tendency toward preciousness, we also introduce _adaptability_ as a
fundamental consideration in the making of the projects Dat Library and
HyperReadings.

To adapt is to fit something for a new purpose. It emphasises that the archive
is not a dead object of research but a set of possible tools waiting to be
activated in new circumstances. This is always a possibility of an archive,
but we want to treat this possibility as desirable, as the horizon towards
which these projects move. We know how infrastructures can attenuate desire
and simply make things difficult. We want to actively encourage radical reuse.

In the following section, we don’t define these properties but rather discuss
how we implement (or fail to implement) them in software, while highlighting
some of the potential difficulties introduced.

### Durability

In 1964, in the midst of the “loss” of the twentieth century, Paul Baran’s
RAND Corporation publication [On Distributed
Communications](https://www.rand.org/content/dam/rand/pubs/research_memoranda/2006/RM3420.pdf)
examined “redundancy as one means of building … highly survivable and reliable
communications systems”, thus midwifing the military foundations of the
digital networks that we operate within today. While the underlying framework
of the Internet generally follows distributed principles, the client–server/
request–response model of the HTTP protocol is highly centralised in practice
and is only as durable as the server.

Capitalism places a high value on originality and novelty, as exemplified in
art, where the ultimate insult would be the label “redundant”. Worse than
being derivative or merely unoriginal, being redundant means having no reason
to exist — a uselessness that art can’t tolerate. It means wasting a perfectly
good opportunity to be creative or innovative. In a relational network, on the
other hand, redundancy is a mode of support. It doesn’t stimulate competition
to capture its effects, but rather it is a product of cooperation. While this
attitude of redundancy arose within a Western military context, one can’t help
but notice that the shared resources, mutual support, and common
infrastructure seem fundamentally communist in nature. Computer networks are
not fundamentally exploitative or equitable, but they are used in specific
ways and they operate within particular economies. A redundant network of
interrelated, mutually supporting computers running mostly open-source
software can be the guts of an advanced capitalist engine, like Facebook. So,
could it be possible to organise our networked devices, embedded as they are
in a capitalist economy, in an anti-capitalist way?

Dat Library is built on the [Dat
Protocol](https://github.com/datproject/docs/blob/master/papers/dat-paper.md),
a peer-to-peer protocol for syncing folders of data. It is not the first
distributed protocol ([BitTorrent](https://en.wikipedia.org/wiki/BitTorrent)
is the best known and is noted as an inspiration for Dat), nor is it the only
new one being developed today ([IPFS](https://ipfs.io) or the Inter-Planetary
File System is often referenced in comparison), but it is unique in its
foundational goals of preserving scientific knowledge as a public good. Dat’s
provocation is that by creating custom infrastructure it will be possible to
overcome the accidents that restrict access to scientific knowledge. We would
specifically acknowledge here the role that the Dat community — or any
community around a protocol, for that matter — has in the formation of the
world that is built on top of that protocol. (For a sense of the Dat
community’s values — see its [code of conduct](https://github.com/datproject
/Code-of-Conduct/blob/master/CODE_OF_CONDUCT.md).)

When running Dat Library, a person sees their list of libraries. These can be
thought of as similar to a
[torrent](https://en.wikipedia.org/wiki/Torrent_file), where items are stored
across many computers. This means that many people will share in the provision
of disk space and bandwidth for a particular library, so that when someone
loses electricity or drops their computer, the library will not also break.
Although this is a technical claim — one that has been made in relation to
many projects, from Baran to BitTorrent — it is more importantly a social
claim: the users and lovers of a library will share the library. More than
that, they will share in the work of ensuring that it will continue to be
shared.

This is not dissimilar to the process of reading generally, where knowledge is
distributed and maintained through readers sharing and referencing the books
important to them. As [Peter Sloterdijk
describes](https://rekveld.home.xs4all.nl/tech/Sloterdijk_RulesForTheHumanZoo.pdf),
written philosophy is “reinscribed like a chain letter through the
generations, and despite all the errors of reproduction — indeed, perhaps
because of such errors — it has recruited its copyists and interpreters into
the ranks of brotherhood (sic)”. Or its sisterhood — but the point remains
clear: the reading / writing / sharing of texts binds us together, even in
disagreement.

### Accessibility

In the world of the web, durability is synonymous with accessibility — if
something can’t be accessed, it doesn’t exist. Here, we disentangle the two in
order to consider _access_ independent from questions of resilience.

##### Technically Accessible

When you create a new library in Dat, a unique 64-digit “key” will
automatically be generated for it. An example key is
`6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf`, which
points to a library of texts. In order for someone else to see the library you
have created, you must provide them with your library’s unique key (by email,
chat, on paper, or by publishing it on your website). In short, _you_ manage
access to the library by copying that key, and then every key holder also
manages access _ad infinitum_.
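
To make the mechanics concrete, here is a minimal sketch - not part of Dat
Library itself, and assuming the documented callback API of the companion
`dat-node` NodeJS package - that syncs and lists the example library above:

```js
// A minimal sketch, assuming the documented callback API of the `dat-node`
// package (`npm install dat-node`); the key is the example key above, and
// the folder name is arbitrary.
const Dat = require('dat-node')

const key =
  '6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf'

// Sync the library into ./library and join the peer-to-peer swarm.
Dat('./library', { key: key, sparse: true }, (err, dat) => {
  if (err) throw err
  dat.joinNetwork() // find peers who already hold this library
  dat.archive.readdir('/', (err, names) => {
    if (err) throw err
    console.log(names) // the library's top-level contents
  })
})
```

Anyone who runs such a script is not only a reader of the library but, for as
long as the process runs, one of its hosts.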

At the moment this has its limitations. A Dat is only writable by a single
creator. If you want to collaboratively develop a library or reading list, you
need to have a single administrator managing its contents. This will change in
the near future with the integration of
[hyperdb](https://github.com/mafintosh/hyperdb) into Dat’s core. At that
point, the platform will enable multiple contributors and the management of
permissions, and our single key will become a key chain.

How is this key any different from knowing the domain name of a website? If a
site isn’t indexed by Google and has a suitably unguessable domain name, then
isn’t that effectively the same degree of privacy? Yes, and this is precisely
why the metaphor of the key is so apt (with whom do you share the key to your
apartment?) but also why it is limited. With the key, one not only has the
ability to _enter_ the library, but also to completely _reproduce_ the
library.

##### Consenting Accessibility

When we say “accessibility”, some hear “information wants to be free” — but
our idea of accessibility is not about indiscriminate open access to
everything. While we do support, in many instances, the desire to increase
access to knowledge where it has been restricted by monopoly property
ownership, or the urge to increase transparency in delegated decision-making
and representative government, we also recognise that Indigenous knowledge
traditions often depend on ownership, control, consent, and secrecy in the
hands of the traditions’ people. [see [“Managing Indigenous Knowledge and
Indigenous Cultural and Intellectual
Property”](https://epress.lib.uts.edu.au/system/files_force/Aus%20Indigenous%20Knowledge%20and%20Libraries.pdf?download=1),
p. 83] Accessibility understood in merely quantitative terms isn’t able to
reconcile these positions, which is why we refuse to limit “access” to a
question of technology.

While “digital rights management” technologies have been developed almost
exclusively for protecting the commercial interests of capitalist property
owners within Western intellectual property regimes, many of the assumptions
and technological implementations are inadequate for the protection of
Indigenous knowledge. Rather than describing access in terms of commodities
and ownership of copyright, it might be defined by membership, status or role
within a community, and the rules of access would not be managed by a
generalised legal system but by the rules and traditions of the people and
their knowledge. [[“The Role of Information Technologies in Indigenous
Knowledge
Management”](https://epress.lib.uts.edu.au/system/files_force/Aus%20Indigenous%20Knowledge%20and%20Libraries.pdf?download=1),
101-102] These rights would not expire, nor would they be bought and sold,
because they are shared, i.e., held in common.

It is important, while imagining the possibilities of a technological
protocol, to also consider how different _cultural protocols_ might be
implemented and protected through the life of a project like Dat Library.
Certain aspects of this might be accomplished through library metadata, but
ultimately it is through people hosting their own archives and libraries
(rather than, for example, having them hosted by a state institution) that
cultural protocols can be translated and reproduced. Perhaps we should flip
the typical question of how a culture might exist within digital networks and
instead ask how digital networks should operate within cultural protocols.

### Adaptability (ability to use/modify as one’s own)

Durability and accessibility are the foundations of adaptability. Many would
say that this is a contradiction: that adaptation is about use and
transformation, and those qualities operate against the preservationist grain
of durability, so that one must always come at the expense of the other. We
say: perhaps that is true, but it is a risk we’re willing to take, because we
don’t want to be making monuments and cemeteries that people approach with
reverence or fear. We want tools and stories that we use and adapt and are
always making new again. But we also say: it is through use that something
becomes invaluable; use may change or distort a thing, but it will not destroy
it — this is the practical definition of durability. S.R. Ranganathan’s very
first Law of Library Science was [“BOOKS ARE FOR
USE”](https://babel.hathitrust.org/cgi/pt?id=uc1.$b99721;view=1up;seq=37),
which we would extend to the library itself, such that when he arrives at his
final law, [“THE LIBRARY IS A LIVING
ORGANISM”](https://babel.hathitrust.org/cgi/pt?id=uc1.$b99721;view=1up;seq=432),
we note that to live means not only to change, but also to live _in the
world_.

To borrow and gently distort another of Ranganathan’s concepts, namely that of
‘[Infinite
Hospitality](http://www.dextersinister.org/MEDIA/PDF/InfiniteHospitality.pdf)’,
it could be said that we are interested in ways to construct a form of
infrastructure that is infinitely hospitable. By this we mean infrastructure
that accommodates the needs and desires of new users/audiences/communities and
allows them to enter and contort the technology to their own uses. We really
don’t see infrastructure as aimed at a single specific group; rather, it
should generate spaces that people can inhabit as they wish. The poet Jean
Paul once wrote that books are thick letters to friends. Books as
infrastructure enable authors to find their friends. This is how we ideally
see Dat Library and HyperReadings working.

## Use cases

We began work on Dat Library and HyperReadings with a range of exemplary use
cases, real-world circumstances in which these projects might intervene. Not
only would the use cases make demands on the software we were and still are
beginning to write, but they would also give us demands to make on the Dat
protocol, which is itself still in the formative stages of development. And,
crucially, in an iterative feedback loop, this process of design produces
transformative effects on those situations described in the use cases
themselves, resulting in further new circumstances and new demands.

### Thorunka

Wendy Bacon and Chris Nash made us aware of Thorunka and Thor.

_Thorunka_ and _Thor_ were two underground papers of the early 1970s that
spewed out from a censorship controversy surrounding the University of New
South Wales student newspaper _Tharunka_. Between 1971 and 1973, the student
magazine was under focused attack from the NSW state police, with several
arrests made on charges of obscenity and indecency. Far from quelling dissent,
the charges prompted a large and sustained political protest from Sydney
activists, writers, lawyers, students and others, to which _Thorunka_ and
_Thor_ were central.

> “The campaign contested the idea of obscenity and the legitimacy of the
legal system itself. The newspapers campaigned on the war in Vietnam,
Aboriginal land rights, women’s and gay liberation, and the violence of the
criminal justice system. By 1973 the censorship regime in Australia was
broken. Nearly all the charges were dropped.” – [Quotation from the 107
Projects Event](http://107.org.au/event/tharunka-thor-journalism-politics-
art-1970-1973/).

Although the collection of issues of _Tharunka_ is largely accessible [via
Trove](http://trove.nla.gov.au/newspaper/page/24773115), the subsequent issues
of _Thorunka_, and later _Thor_, are not. For us, this demonstrates clearly
how collections themselves can encourage modes of reading. If you focus on
_Tharunka_ as a singular and long-standing periodical, this significant
political moment is rendered almost invisible. On the other hand, if the
issues are presented together, with commentary and surrounding publications,
the political environment becomes palpable. Wendy and Chris have kindly
allowed us to make their personal collection available via Dat Library (the
key is: 73fd26846e009e1f7b7c5b580e15eb0b2423f9bea33fe2a5f41fac0ddb22cbdc), so
you can discover this for yourself.

### Academia.edu alternative

Academia.edu, started in 2008, has raised tens of millions of dollars as a
social network for academics to share their publications. As a for-profit
venture, it is rife with metrics and it attempts to capitalise on the innate
competition and self-promotion of precarious knowledge workers in the academy.
It is simultaneously popular and despised: popular because it fills an obvious
desire to share the fruits of one’s intellectual work, but despised for the
neoliberal atmosphere that pervades every design decision and automated
correspondence. It is, however, just trying to provide a return on investment.

[Gary Hall has written](http://www.garyhall.info/journal/2015/10/18/does-
academiaedu-mean-open-access-is-becoming-irrelevant.html) that “its financial
rationale rests … on the ability of the angel-investor and venture-capital-
funded professional entrepreneurs who run Academia.edu to exploit the data
flows generated by the academics who use the platform as an intermediary for
sharing and discovering research”. Moreover, he emphasises that in the open-
access world (outside of the exploitative practice of for-profit publishers
like Elsevier, who charge a premium for subscriptions), the privileged
position is to be the one “_who gate-keeps the data generated around the use
of that content_”. This lucrative position has been produced by recent
“[recentralising tendencies](http://commonstransition.org/the-revolution-will-
not-be-decentralised-blockchains/)” of the internet, which in Academia’s case
captures various, scattered open access repositories, personal web pages, and
other archives.

Is it possible to redecentralise? Can we break free of the subjectivities that
Academia.edu is crafting for us as we are interpellated by its infrastructure?
It is incredibly easy for any scholar running Dat Library to make a library of
their own publications and post the key to their faculty web page, Facebook
profile or business card. The tricky — and interesting — thing would be to
develop platforms that aggregate thousands of these libraries in direct
competition with Academia.edu. This way, individuals would maintain control
over their own work; their peer groups would assist in mirroring it; and no
one would be capitalising on the sale of data related to their performance and
popularity.

We note that Academia.edu is a typically centripetal platform: it provides no
tools for exporting one’s own content, so an alternative would necessarily be
a kind of centrifuge.

This alternative is becoming increasingly realistic. With open-access journals
already paving the way, there has more recently been a [call for free and open
access to citation data](https://www.insidehighered.com/news/2017/12/06
/scholars-push-free-access-online-citation-data-saying-they-need-and-deserve-
access). [The Initiative for Open Citations (I4OC)](https://i4oc.org) is
mobilising against the privatisation of data and working towards the
unrestricted availability of scholarly citation data. We see their new
database of citations as making this centrifugal force a possibility.

### Publication format

In writing this README, we have strung together several references. This
writing might be published in a book, and the references would then be listed
as notes at the bottom of the page or at the end of the text. But the writing
might just as well be published as a HyperReadings object, providing the
reader with an archive of all the things we referred to and an editable
version of this text.
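
As a thought experiment only (HyperReadings’ real data model is an RDF graph,
and none of the field names below are its actual vocabulary), such an object
might pair each reference with a pointer into a library:

```js
// Hypothetical sketch of a reading list whose entries point into libraries.
// The dat key is the example text library from earlier in this README; the
// path inside it is invented for illustration.
const readingList = {
  title: 'README.md',
  entries: [
    {
      citation: 'Paul Baran, On Distributed Communications (1964)',
      dat: '6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf',
      path: '/baran/on-distributed-communications.pdf' // invented path
    },
    {
      citation: 'UNESCO, Lost Memory (1996)', // a fallback pointer to the web
      url: 'http://www.stephenmclaughlin.net/ph-library/texts/UNESCO%201996%20-%20Lost%20Memory_%20Libraries%20and%20Archives%20Destroyed%20in%20the%20Twentieth%20Century.pdf'
    }
  ]
}
```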

A new text editor could be created for this new publication format, not to
mention a new form of publication, which bundles together a set of
HyperReadings texts, producing a universe of texts and references. Each
HyperReadings text might reference others, of course, generating something
that begins to feel like a serverless World Wide Web.

It’s not even necessary to develop a new publication format, as any book might
be considered as a reading list (usually found in the footnotes and
bibliography) with a very detailed description of the relationship between the
consulted texts. What if the history of published works were considered in
this way, such that we might always be able to follow a reference from one
book directly into the pages of another, and so on?

### Syllabus

The syllabus is the manifesto of the twenty-first century. From [Your
Baltimore “Syllabus”](https://apis4blacklives.wordpress.com/2015/05/01/your-
baltimore-syllabus/), to
[#StandingRockSyllabus](https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/),
to [Women and gender non-conforming people writing about
tech](https://docs.google.com/document/d/1Qx8JDqfuXoHwk4_1PZYWrZu3mmCsV_05Fe09AtJ9ozw/edit),
syllabi are being produced as provocations, or as instructions for
reprogramming imaginaries. They do not announce a new world but they point out
a way to get there. As a programme, the syllabus shifts the burden of action
onto the readers, who will either execute the programme on their own fleshy
operating system — or not. A text that by its nature points to other texts,
the syllabus is already a relational document acknowledging its own position
within a living field of knowledge. It is decidedly not self-contained;
however, it often circulates as if it were.

If a syllabus circulated as a HyperReadings document, then it could point
directly to the texts and other media that it aggregates. But just as easily
as it circulates, a HyperReadings syllabus could be forked into new versions:
the syllabus is changed because there is a new essay out, or because of a
political disagreement, or because following the syllabus produced new
suggestions. These forks become a family tree where one can follow branches
and trace epistemological mutations.
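
Continuing the hypothetical sketch from the previous section (again, not
HyperReadings’ actual vocabulary), a fork is just a copy that records its
parent, which is what makes the family tree traceable:

```js
// Hypothetical: forking a syllabus by copying it and recording its origin.
const original = {
  title: 'A syllabus',
  key: '<64-digit dat key of the original>', // placeholder, not a real key
  entries: [{ citation: 'An essay the syllabus has long assigned' }]
}

const fork = {
  ...original,
  title: original.title + ' (forked)',
  parent: original.key, // the lineage link that makes a family tree possible
  entries: original.entries.concat([
    { citation: 'The new essay that prompted this fork' }
  ])
}
```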

## Proposition (or Presuppositions)

While the software that we have started to write is a proposition in and of
itself, there is no guarantee as to _how_ it will be used. But when writing,
we _are_ imagining exactly that: we are making intuitive and hopeful
presuppositions about how it will be used, presuppositions that amount to a
set of social propositions.

### The role of individuals in the age of distribution

Different people have different technical resources and capabilities, but
everyone can contribute to an archive. By simply running the Dat Library
software and adding an archive to it, a person is sharing their disk space and
internet bandwidth in the service of that archive. At first, it is only the
archive’s index (a list of the contents) that is hosted, but if the person
downloads the contents (or even just a small portion of the contents) then
they are sharing in the hosting of the contents as well. Individuals, as
supporters of an archive or members of a community, can organise together to
guarantee the durability and accessibility of an archive, saving a future
UbuWeb from ever having to worry about its ISP ‘pulling the plug’. As
supporters of many archives, as members of many communities, individuals can
use Dat Library to perform this function many times over.
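
Sketched again against `dat-node`’s callback API (an assumption, as above):
one person co-hosting several archives at once, lending a little disk space
and bandwidth to each.

```js
// Co-hosting sketch: the first key is the example library above; the second
// is the Thorunka collection shared earlier in this README.
const Dat = require('dat-node')

const keys = [
  '6f963e59e9948d14f5d2eccd5b5ac8e157ca34d70d724b41cb0f565bc01162bf',
  '73fd26846e009e1f7b7c5b580e15eb0b2423f9bea33fe2a5f41fac0ddb22cbdc'
]

keys.forEach((key) => {
  // One folder per archive; whatever has been downloaded is re-shared.
  Dat('./mirrors/' + key.slice(0, 8), { key: key }, (err, dat) => {
    if (err) throw err
    dat.joinNetwork() // advertise this machine as a host for the archive
  })
})
```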

On the Web, individuals are usually users or browsers — they use browsers. In
spite of the ostensible interactivity of the medium, users are kept at a
distance from the actual code, the infrastructure of a website, which is run
on a server. With a distributed protocol like Dat, applications such as
[Beaker Browser](https://beakerbrowser.com) or Dat Library eliminate the
central server, not by destroying it, but by distributing it across all of the
users. Individuals are then not _just_ users, but also hosts. What kind of
subject is this user-host, especially as compared to the user of the server?
Michel Serres writes in _The Parasite_:

> “It is raining; a passer-by comes in. Here is the interrupted meal once
more. Stopped for only a moment, since the traveller is asked to join the
diners. His host does not have to ask him twice. He accepts the invitation and
sits down in front of his bowl. The host is the satyr, dining at home; he is
the donor. He calls to the passer-by, saying to him, be our guest. The guest
is the stranger, the interrupter, the one who receives the soup, agrees to the
meal. The host, the guest: the same word; he gives and receives, offers and
accepts, invites and is invited, master and passer-by… An invariable term
through the transfer of the gift. It might be dangerous not to decide who is
the host and who is the guest, who gives and who receives, who is the parasite
and who is the table d’hote, who has the gift and who has the loss, and where
hospitality begins with hospitality.” — Michel Serres, The Parasite (Baltimore
and London: The Johns Hopkins University Press), 15–16.

Serres notes that _guest_ and _host_ are the same word in French; we might say
the same for _client_ and _server_ in a distributed protocol. And we will
embrace this multiplying hospitality, giving and taking without measure.

### The role of institutions in the age of distribution

David Cameron launched a doomed initiative in 2010 called the Big Society,
which paired large-scale cuts in public programmes with a call for local
communities to voluntarily self-organise to provide these essential services
for themselves. This is not the political future that we should be working
toward: since 2010, austerity policies have resulted in [120,000 excess deaths
in England](http://bmjopen.bmj.com/content/7/11/e017722). In other words,
while it might seem as though _institutions_ might be comparable to
_servers_, inasmuch as both are centralised infrastructures, we should not give them up
or allow them to be dismantled under the assumption that those infrastructures
can simply be distributed and self-organised. On the contrary, institutions
should be defended and organised in order to support the distributed protocols
we are discussing.

One simple way for a larger, more established institution to help ensure the
durability and accessibility of diverse archives is through the provision of
hardware, network capability and some basic technical support. It can back up
the archives of smaller institutions and groups within its own community while
also giving access to its own archives so that those collections might be put
to new uses. A network of smaller institutions, separated by great distances,
might mirror each other’s archives, both as an expression of solidarity and
positive redundancy and also as a means of circulating their archives,
histories and struggles amongst each of the others.

It was the simultaneous recognition that some documents are too important to
be privatised or lost to the threats of neglect, fire, mould, insects, etc.,
that prompted the development of national and state archives (See page 39 in
[Beredo, B. C., Import of the archive: American colonial bureaucracy in the
Philippines, 1898-1916](http://hdl.handle.net/10125/101724)). As public
institutions they were, and still are, tasked with often competing efforts to
house and preserve while simultaneously also ensuring access to public
documents. Fire and unstable weather understandably have given rise to large
fire-proof and climate-controlled buildings as centralised repositories,
accompanied by highly regulated protocols for access. But in light of new
technologies and their new risks, as discussed above, it is compelling to
argue now that, in order to fulfil their public duty, public archives should
be distributing their collections where possible and providing their resources
to smaller institutions and community groups.

Through the provision of disk space, office space, grants, technical support
and employment, larger institutions can materially support smaller
organisations, individuals and their archival afterlives. They can provide
physical space and outreach for dispersed collectors, gathering and piecing
together a fragmented archive.

But what happens as more people and collections are brought in? As more
institutional archives are allowed to circulate outside of institutional
walls? As storage is cut loose from its dependency on the corporate cloud and
moves into forms of interdependency, such as mutual support networks? Could
this open up spaces for new forms of not-quite-organisations and
queer-institutions? These would be almost-organisations that uncomfortably
exist somewhere between the common categorical markings of the individual and
the institution. In our thinking, it’s not important what these future forms
will look like exactly. Rather, as discussed above, what is important to us is
that in writing software we open up spaces for the unknown, and allow others
the agency to build the forms that work for them. It is only in such an
atmosphere of infinite hospitality that we see the future of community
libraries, individual collections and other precarious archives.

## A note on this text

This README was, and still is being, collaboratively written in a
[Git](https://en.wikipedia.org/wiki/Git)
[repository](https://en.wikipedia.org/wiki/Repository_\(version_control\)).
Git is a free and open-source tool for version control used in software
development. All the code for Hyperreadings, Dat Library and their numerous
associated modules is managed openly using Git and hosted on GitHub under
open source licenses. In a real way, Git’s specification formally binds our
collaboration as well as the open invitation for others to participate. As
such, the form of this README reflects its content. Like this text, these
projects are, by design, works in progress that are malleable to circumstances
and open to contributions, for example by opening a pull request on this
document or raising an issue on our GitHub repositories.
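For readers unfamiliar with that workflow, a contribution might look roughly like the following. This is a hypothetical sketch of the usual GitHub fork model; the repository URL and branch name are placeholders, not the projects' actual addresses.

```sh
# Hypothetical pull-request workflow (fork model); the URL and branch
# name below are placeholders.
git clone https://github.com/your-username/hyperreadings.git
cd hyperreadings
git checkout -b readme-suggestion      # a branch for the proposed change
# ... edit README.md ...
git commit -am "Suggest a clarification to the README"
git push origin readme-suggestion
# then open a pull request against the upstream repository on GitHub,
# or raise an issue there if discussion is more appropriate
```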

Dockray & Liang
Sharing Instinct: An Annotation of the Social Contract Through Shadow Libraries
2015


# Sean Dockray & Lawrence Liang — Sharing Instinct: An Annotation of the
Social Contract Through Shadow Libraries

![](/site/assets/files/1289/timbuktu_ng_ancient-manuscripts.jpg) Abdel Kader
Haïdara, a librarian who smuggled hundreds of thousands of manuscripts from
jihadist-occupied Timbuktu to safety in Bamako, stands with ancient volumes
from Timbuktu packed into metal trunks. Photo: Brent Stirton/Getty Images.

_Foederis aequas Dicamus leges_

(Let us make fair terms for the compact.)

—Virgil’s _Aeneid_, XI

Man was born free, and everywhere he is in chains.1

1. All excerpts from _The Social Contract_ are from Jean-Jacques Rousseau, _The Social Contract: And, The First and Second Discourses_, ed. Susan Dunn and Gita May (New Haven, CT: Yale University Press, 2002).

> _June 30, 2015_

>

> _Dear Sean,_

>

> _I have been asked by Raqs Media Collective to contribute to a special ongoing issue of_ e-flux journal _that is part of the Venice Biennale. Raqs’s
section in the issue rethinks Rousseau’s social contract and the possibility
of its being rewritten, as a way of imagining social bonds and solidarities
that can help instigate and affirm a vision of the world as a space of
potential._

>

> _I was wondering if you would join me in a conversation on shadow libraries
and social contracts. The entire universe of the book-sharing communities
seems to offer the possibility of rethinking the terms of the social contract
and its associated terms (consent, general will, private interest, and so on).
While the rise in book sharing is at one level a technological phenomenon (a
library of 100,000 books put in PDF format can presently fit on a one-terabyte
drive that costs less than seventy-five dollars), it is also about how we
think of transformations in social relations mediated by sharing books._

>

> _If the striking image of books in preprint revolution was of being “in
chains,” as Rousseau puts it, I am prompted to wonder about the contemporary
conflict between the digital and mechanisms of control. Are books born free
but are everywhere in chains, or is it the case that they have been set free?
In which case are they writing new social contracts?_

>

> _I was curious about whether you, as the founder of_ [Aaaaarg.org](http://aaaaarg.org/)_, had the idea of a social contract in mind, or even a community, when you started?_

>

> _Lawrence_



**Book I, Chapter VI: The Social Pact**

‘‘To find a form of association that may defend and protect with the whole force
of the community the person and property of every associate, and by means of
which each, joining together with all, may nevertheless obey only himself, and
remain as free as before.’’ Such is the fundamental problem to which the
social contract provides the solution.

We can reduce it to the following terms: ‘‘Each of us puts in common his
person and all his power under the supreme direction of the general will; and
in return each member becomes an indivisible part of the whole.’’

> _June 30, 2015_

>

> _Dear Lawrence,_

>

> _I am just listing a few ideas to put things out there and am happy to try
other approaches:_

>

> _—To think about the two kinds of structure that digital libraries take:
either each library is shared by many user-librarians or there is a library
for each person, shared with all the others. It’s a technological design
question, yes, but it also suggests different social contracts?_

>

> _—What is subtracted when we subtract your capacity/right to share a book
with others, when every one of us must approach the market anew to come into
contact with it? But to take a stab at misappropriating the terms you’ve
listed, consent, what libraries do I consent to? Usually the consent needs to
come from the library, in the form of a card or something, but we don’t ask
enough what we want, maybe. Also what about a social contract of books? Does a
book consent to being in a library? What rights does it have or expect?_

>

> _I really loved the math equation Rousseau used to arrive at the general
will: if you subtract the pluses and minuses of particular wills that cancel
each other out, then the general will is the sum of the differences! But why
does the general need to be the lowest common denominator—certainly there are
more appropriate mathematical concepts that have been developed in the past
few hundred years?_

>

> _Sean_



**Book I, Chapter II: Primitive Societies**

This common liberty is a consequence of man’s nature. His first law is to
attend to his own survival, his first concerns are those he owes to himself;
and as soon as he reaches the age of rationality, being sole judge of how to
survive, he becomes his own master.

It is the relation of things and not of men that constitutes war; and since
the state of war cannot arise from simple personal relations, but only from
real relations, private war—war between man and man—cannot exist either in the
state of nature, where there is no settled ownership, or in the social state,
where everything is under the authority of the laws.

> _July 1, 2015_

>

> _Dear Lawrence,_

>

> _Unlike a logic of exchange, or of offer and return with its demands for
reciprocity, the logic of sharing doesn’t ask its members for anything in
return. There are no guarantees that the one who gives a book will get back
anything, whether that is money, an equivalent book, or even a token of
gratitude. Similarly, there is nothing to prevent someone from taking without
giving. I think a logic of sharing will look positively illogical across the
course of its existence. But to me, this is part of the appeal: that it can
accommodate behaviors and relationships that might be impossible within the
market._

>

> _But if there is a lack of a contract governing specific exchanges, then
there is something at another level that defines and organizes the space of
sharing, that governs its boundaries, and that establishes inclusions and
exclusions. Is this something ethics? Identity? Already I am appealing to
something that itself would be shared, and would this sharing precede the
material sharing of, for example, a library? Or would the shared
ethics/identity/whatever be a symptom of the practice of sharing? Well, this
is perhaps the conclusion that anthropologists might come to when trying to
explain the sharing practices of hunter-gatherer societies, but a library?_

>

> _Sean_

>

>

>

> _July 1, 2015_

>

> _Hi Sean,_

>

> _I liked your question of what might account for a sharing instinct when it
comes to books, and whether we appeal to something that already exists as a
shared ethics or identity, or is sharing the basis of a shared
ethics/identity? I have to say that while I have never thought of my own book-
collecting through the analogy of hunter-gatherers, the more I think about it,
the more sense it makes to me. Linguistically we always speak of going on book
hunts and my daily trawling through the various shadow libraries online does
seem to function by way of a hunting-gathering mentality._

>

> _Often I download books that I know I will never personally read, either because they may be of interest to someone else, or because the place of a library is the cave where one gathers what one has hunted down, not just for oneself but for others. I also like that we are using so-called primitive
metaphors to account for twenty-first-century digital practices, because it
allows us the possibility of linking these practices to a primal instinct of
sharing, which precedes our encounter with the social norms that classify and
partition that instinct (legal, illegal, authorized, and so on)._

>

> _I don’t know if you remember the meeting that we had in Mumbai a few years
ago—among the other participants, we had an academic from Delhi as an
interlocutor. He expressed an absolute terror at what he saw as the “tyranny
of availability” in online libraries. In light of the immense number of books
available in electronic copies and on our computers or hard discs, he felt
overwhelmed and compared his discomfort with that of being inside a large
library and not knowing what to do. Interestingly, he regularly writes asking
me to supply him with books that he can’t find or does not have access to._

>

> _This got me thinking about the idea of a library and what it may mean, in
its classical sense and its digital sense. An encounter with any library,
especially when it manifests itself physically, is one where you encounter
your own finitude in the face of what seems like the infinity of knowledge.
But personally this sense of awe has also been tinged with an immense
excitement and possibility. The head rush of wanting to jump from a book on
forgotten swear words to an intellectual biography of Benjamin, and the
tingling anticipation as you walk out of the library with ten books, captures
for me more than any other experience the essence of the word potential._

>

> _I have a modest personal library of around four thousand books, which I
know will be kind of difficult for me to finish in my lifetime even if I stop
adding any new books, and yet the impulse to add books to our unending list
never fades. And if you think about this in terms of the number of books that
reside on our computers, then the idea of using numbers becomes a little
pointless, and we need some other way or measure to make sense of our
experience._

>

> _Lawrence_



**Book I, Chapter VII: The Sovereign**

Every individual can, as a man, have a particular will contrary to, or
divergent from, the general will which he has as a citizen; his private
interest may appear to him quite different from the common interest; his
absolute and naturally independent existence may make him envisage what he
owes to the common cause as a gratuitous contribution, the loss of which would
be less harmful to others than the payment of it would be onerous to him.

> _July 12, 2015_

>

> _Hi Sean,_

>

> _There is no symbol that to my mind captures the regulated nature of the
library more than that of the board that hushes you with its capitalized
SILENCE. Marianne Constable says, “One can acknowledge the figure of silence
in the library and its persistence, even as one may wonder what a silent
library would be, whether libraries ever are silent, and what the various
silences—if any—in a library could be.”_

>

> _If one thinks about the nature of the social contract and the possibilities of its rewriting from the site of the library, one encounters another set of silent rules and norms. If social contracts are narrative
compacts that establish a political community under the sign of a sovereign
collective called the people, libraries also aspire to establish an authority
in the name of the readers and to that extent they share a common constitutive
character. But just as there is a foundational scandal of absence at the heart
of the social contract that presumes our collective consent (what Derrida
describes as the absence of the people and the presence of their signature)
there seems to be a similar silence in the world of libraries where readers
rarely determine the architecture, the logic, or the rules of the library._

>

> _So libraries have often mirrored, rather than inverted, the power relations that underlie the social contracts that they almost underwrite. In contrast, I am wondering whether the various shadow libraries that have burgeoned online, and the portable personal libraries that are shared offline, reimagine the social contract of libraries and try to create a more insurgent imagination of the library._

>

> _Lawrence_

>

>

>

> _July 13, 2015_

>

> _Hi Lawrence,_

>

> _As you know, I’m very interested in structures that allow the people within them to meaningfully reconfigure them. This is distinct from participation or
interaction, where the structures are inquisitive or responsive, but not
fundamentally changeable._

>

> _I appreciate the idea that a library might have, not just a collection of
books or a system of organizing, but its own social contract. In the case of
Aaaaarg, as you noticed, it is not explicit. Not only is there no statement as
such, there was never a process prior to the library in which something like a
social contract was designed._

>

> _I did ask users to write out a short statement of their reason for joining
Aaaaarg and have around fifty thousand of these expressions of intention. I
think it’s more interesting to think of the social contract, or at least a
"general will," in terms of those. If Rousseau distinguished between the will
of all and the general will, in a way that could be illustrated by the catalog
of reasons for joining Aaaaarg. Whereas the will of all might be a sum of all
the reasons, the general will would be the sum of what remains after you "take
away the pluses and minuses that cancel one another." I haven’t done the math,
but I don’t think the general will, the general reason, goes beyond a desire
for access._

>

> _To summarize a few significant groupings:_

>

> _—To think outside institutions;_
> _—To find things that one cannot find;_
> _—To have a place to share things;_
> _—To act out a position against intellectual property;_
> _—A love of books (in whatever form)._

>

> _What I do see as common across these groupings is that the desire for
access is, more specifically, a desire to have a relationship with texts and
others that is not mediated by market relations._

>

> _In my original conception of the site, it would be something like a collective commonplace book. As with commonplacing, the excerpts that people would keep were those parts of texts that seemed particularly useful, that produced a spark that one wanted to share. This is important: it was the experience of being electrified in some way that people were sharing, and not a book as such. Over time, things changed and the shared objects became more complete, so to speak, and less “subjective,” but I hope that there is still that
spark. But, at this point, I realize that I am just another one of the many
wills, and just one designer of whatever social contract is underlying the
library._

>

> _So, again—What is the social contract? It wasn’t determined in advance and
it is not written in any about section or FAQ. I would say that it is, like
the library itself, something that is growing and evolving over time, wouldn’t
you?_

>

> _Sean_



**Book II, Chapter VIII: The People**

As an architect, before erecting a large edifice, examines and tests the soil
in order to see whether it can support the weight, so a wise lawgiver does not
begin by drawing up laws that are good in themselves, but considers first
whether the people for whom he designs them are fit to maintain them.

> _July 15, 2015_

>

> _Lawrence,_

>

> _There are many different ways of organizing a library, of structuring it,
and it’s the same for online libraries. I think the most interesting
conversation would not be to bemoan the digital for overloading our ability to
be discerning, or to criticize it for not conforming to the kind of economy
that we expected publishing to have, or to become nostalgic for book smells; but to really wonder what it is that could make these libraries great,
places that will be missed in the future if they go away. To me, this is the
most depressing thing about the unfortunate fact that digital shadow libraries
have to operate somewhat below the radar: it introduces a precariousness that
doesn’t allow imagination to really expand, as it becomes stuck on techniques
of evasion, distribution, and redundancy. But what does it mean when a library
functions transnationally? When its contents can be searched? When reading
interfaces aren’t bound by the book form? When its contents can be referenced
from anywhere?_

>

> _What I wanted when building Aaaaarg.org the first time was to make it
useful, in the absolute fullest sense of the word, something for people who
saw books not just as things you buy to read because they’re enjoyable, but as
things you need to have a sense of self, of orientation in the world, to learn
your language and join in the conversation you are a part of—a library for
people who related to books like that._

>

> _Sean_

>

>

>

> _July 17, 2015_

>

> _Hi Sean_,

>

> _To pick up on the reasons that people give for joining Aaaaarg.org: even
though Aaaaarg.org is not bound by a social contract, we do see the
outlines—through common interests and motivations—of a fuzzy sense of a
community. And the thing with fuzzy communities is that they don’t necessarily
need to be defined with the same clarity as enumerated communities, like
nations, do. Sudipta Kaviraj, who used the term fuzzy communities, also speaks
of a “narrative contract”—perhaps a useful way to think about how to make
sense of the bibliophilic motivations and intentions, or what you describe as
the “desire to have a relationship with texts and others that is not mediated
by market relations.”_

>

> _This seems a perfectly reasonable motivation, except that it is one that would be deemed impossible at the very least, and absurd at worst, by those for whom the world of books and ideas can only be mediated by the market. And it’s
this idea of the absurd and the illogical that I would like to think a little
bit about via the idea of the ludic, a term that I think might be useful to
deploy while thinking of ways of rewriting the social contract: a ludic
contract, if you will, entered into through routes allowed by ludic libraries._

>

> _If we trace the word ludic back to its French and Latin roots, we find it going back to the idea of playing (from the Latin_ ludere _"to play" or the French_ ludique _“spontaneously playful”), but in its most popular usage today it has mutated into ludicrous, generally used for an idea so impossible that it seems absurd. And more often than not the term conveys an absurdity associated
with a deviation from well-established norms including utility, seriousness,
purpose, and property._

>

> _But what if our participation in various forms of book sharing was less like an invitation to enter a social contract, and more like an invitation to play? But play what, you may ask, since the term play has childish and sometimes frivolous connotations to it? And we are talking here about serious business. Gadamer proposes that, rather than the idea of fun and games, we can think with the analogy of a cycle, suggesting that it was important not to tighten the nuts on the axle too much, or else the wheel could not turn. “It has to have some play in it … and not too much play, or the wheel will fall off. It was all about_ spielraum_, ‘play-room,’ some room for play. It needs space.”_

>

> _The ludic, or the invitation to the ludic in this account, is first and foremost a necessary relief—just as playing is—from constraining situations and circumstances. These constraints could be physical or monetary, or a matter of sheer nonavailability (thus the desire for access could be thought of as a tactical maneuver to create openings). They could be philosophical constraints (epistemological, disciplinary) or social constraints (divisions of class, work, and leisure time). At any rate, all efforts at participating in shadow libraries seem propelled by an instinct to exceed the boundaries of the self, however defined, and to make some room for play or to create a “ludic spaciousness,” as it were._

>

> _The spatial metaphor is also related to the bounded/unbounded (another name for freedom, I guess), and to the extent that the unbounded allows us a way into our impossible selves, it shares a space with dreams; but rarely do we think of the violation of the right to access as fundamentally being a violation of our right to dream. Your compilation of the reasons that people wanted to join Aaaaarg may well be thought of as an archive of one-sentence-long dreams of the ludic library._

>

> _If for Bachelard the house protects the dreamer, the library for me is a ludic shelter, which brings me back to an interesting coincidence. I don’t know what it is that prompted you to choose the name Aaaaarg.org; I don’t know if you are aware that it binds you irrevocably (to use the legal language of contracts) with one of the very few theorists of the ludic, the Dutch philosopher Johan Huizinga, who coined the term_ homo ludens _(as against the more scientific homo sapiens or the functional homo faber). In his 1938 text Huizinga observes that “the fun of playing, resists all analysis, all logical interpretation,” and that as a concept it cannot be reduced to any other mental category. He feels that no language really has an exact equivalent to the word fun, but the closest he comes in his own language is the Dutch word_ aardigkeit_, so the line between aaaarg and aaard may well have been dreamt of before Aaaaarg.org even started._

>

> _More soon,_

>

> _Lawrence_



© 2015 e-flux and the author

Dockray, Pasquinelli, Smith & Waldorf
There is Nothing Less Passive than the Act of Fleeing
2010


# There is Nothing Less Passive than the Act of Fleeing

[The Public School](/web/20170523052416/http://journalment.org/author/public-
school)

What follows is a condensed and edited version of a text for a panel that was presented at UCIRA’s _Future Tense: Alternative Arts and Economies in the University_ conference held in San Diego, California on November 18, 2010. The panel shared the same name as a 13-day itinerant seminar in Berlin organized by Dockray, Waldorf, and Fiona Whitton earlier that year, in July. The seminar began with an excerpt from Tiqqun’s _Introduction to Civil War_, which was co-translated into English by Smith; and later read a chapter from Pasquinelli’s _Animal Spirits: A Bestiary of the Commons_. Both authors have also participated in meetings at The Public School in Los Angeles and Berlin.
Both the panel and the seminar developed out of longer conversations at The
Public School in Los Angeles, which began in late 2007 under Telic Arts
Exchange. The Public School is a school with no curriculum, where classes are
proposed and organized by the public.


## The Education Factory

The University, as I understand it, has been a threshold between youth and the labor market. Or it has been a threshold between a general education and a more specialized one. In its more progressive form, it’s been a zone of transition into an expanding middle class. But does this form still exist? I’m inclined to think just the opposite: that the University is becoming a means for filtering people out of the middle class via student loan debt, which now exceeds credit card debt. The point of these questions for me is simply: what is the point of the University? What are we fighting for or defending?

The next question might be, do students work? The University is a crucial site
in the reproduction of class relations; we know that students are consumers;
we know the student is a future worker who will be compelled to work, and work
in a specific way, because she/he is crushed by debt contracted during her/his
tenure as a student; we know that students work while attending school, and
that for many students school and work eerily begin to resemble one another.
But asking whether students work is to ask something more specific: do
students produce value and, therefore, surplus-value? If we can assume, for the
moment, that students are a factor in the “knowledge production” that takes
place in the University, is this production of knowledge also the production
of value? We confront, maybe, a paradox: all social activity has become
“productive”—captured, absorbed—at the very moment value becomes unmeasurable.

What does this have to do with students, and their work? The thesis of the
social factory was supplemented by the assumption that knowledge had become a
central mode in the production of value in post-Fordist environments. Wouldn’t
this mean that the university could become an increasingly important
flashpoint in social struggles, now that it has become not simply the site of
the reproduction of the capital relation, but involved in the immediate
production process, directly productive of value? Would we have to understand
students themselves as, if not knowledge producers, an irreplaceable moment or
function within that process? None of this remains clear. The question is not
only a sociological one, it is also a political one. The strategy of
reconceptualizing students as workers is rooted in the classical Marxist
identification of revolt with the point of production, that is, exploitation.
To declare all social activity to be productive is another way of saying that
social war can be triggered at any site within society, even among the
precarious, the unemployed, and students.

_Knowledge is tied to struggle. To truly know is to hate truly. This is why
the working class can know and possess everything of capital, as it is enemy
to itself as capital._
—Tronti, 1966

That form of “hate” mentioned by Tronti suggests an interesting form of political passion and a new modus operandi. The relation between hate and knowledge, suggested by Tronti, is the opposite of the cynical detachment of the new social figure of the entrepreneur-artist; it is a joyful hate of our condition. In order to educate ourselves we should hate our very own environment and the social network in which we were educated—the university. The positions the artist can take in their work and in the performance of themselves (often no different) are manifold. There are histories for all of these
postures that can be referenced and adopted. They are all acceptable tactics
as long as we keep doing and churning out more. But where does this get us,
both within the confines of the arts and the larger social structure? We are
taught that the artist is always working, thinking, observing. We have learned
the tricks of communication, performance and adaptability. We can go anywhere,
react to anything, respond in a thoughtful and creative way to all problems.
And we do this because while there is opportunity, we should take it. “We
shouldn’t complain, others have it much worse.” But it doesn’t mean that we
shouldn’t imagine something else. To begin thinking this way means a refusal to deliver an event, to perform on demand. Maybe we need a kind of
inflexibility, of obstruction, of non-conductivity. After all, what exactly
are we producing and performing for? Can we try to think about these talents
of performance, of communication? If so, could this be the basis for an
intimacy, a friendship… another institution?


## Alternative pedagogical models

Let’s consider briefly the desire for “new pedagogical models” and “new forms
of knowledge production”. When articulated by the University, this simply
means new forms of instruction and new forms of research. Liberal faculty and
neoliberal politicians or administrators find themselves joined in this hunt
for future models and forms. On the one hand, faculty imagines that these new techniques can provide space for continuing their good work. On the other hand, investors, politicians, and administrators look for any means to make the University profitable: using unpaid labour, eliminating non-productive physical spaces, and creating new markets. Symptomatically, there is very little
resistance to this search for new forms and new models for the simple reason
that there is a consensus that the University should and will continue.

It’s also important to note that many of the so-called new forms and new
models being considered lie beyond the walls and payroll of the institution,
therefore both low-cost and low-risk. It is now a familiar story: the
institution attempts to renew itself by importing its own critique. The Public
School is not a new model and it’s not going to save the University. It is not
even a critique of the University any more or less than it is a critique of
the field of art or of capitalist society. It is not “the next university”
because it is a practice of leaving the University to the side. It would be a
mistake to think that this means isolation or total detachment.

Today, the forms of university governance cannot allow themselves to uproot
self-education. To the contrary, self-education constitutes a vital sap for
the survival of the institutional ruins, snatched up and rendered valuable in
the form of revenue. Governance is the trap, hasty and flexible, of the
common. Instead of countering us frontally, the enemy follows us. We must
immediately reject any weak interpretation of the theme of autonomous
institutions, according to which the institution is a self-governed structure
that lives between the folds of capitalism, without excessively bothering it.
The institutionalisation of self-education doesn’t mean being recognized as
one actor among many within the education market, but the capacity to organize
living knowledge’s autonomy and resistance.

One of the most important “new pedagogical models” to emerge over the past year in the struggles around the implosion of the “public” university is the occupations that took place in the Fall of 2009. Unlike other forms of action,
which tend to follow the timetable and cadence of the administration, to the
point of mirroring it, these actions had their own temporality, their own
initiative, their own internal logic. They were not at all concerned with
saving a university that was already in ruins, but rather with creating a
space at the heart of the University within which something else, some future,
could be risked, elaborated, prefigured. Everything had to be improvised, from
moment to moment, and in these improvisations new knowledges were developed
and shared. This improvisation was demanded by the aleatory quality of the
types of relations that emerged within these spaces, relations no longer
regulated by the social alibis that assign everyone her/his place. When
students occupy university buildings—here in California, in NYC, in Puerto
Rico, in Europe and the UK, everywhere—they do so not because they want to
save their universities. They do so because they know the university for what
it is, as something to be at once seized and abandoned. They know that they
can only rely on and learn from one another.


## The Common and The Public

What is really so disconcerting about this antinomy between the logic of the
common and the logic of the social or the public? For Jacotot, it means the
development of a communist politics that is neither reformist nor seditious2.
It proposes the formation of common spaces at a distance from—if not outside
of—the public sphere and its communicative reason: “whoever forsakes the
workings of the social machine has the opportunity to make the electrical
energy of the emancipation machine.”

What does it mean to forsake the social machine? That is the major political
question facing us today. Such a forsaking would require that our political
energies organize themselves around spaces of experimentation at a distance
not only from the university and what is likely its slow-motion, or sudden,
collapse, but also from an entire imaginary inherited from the workers
movement: the task of a future social emancipation and vectors and forms of
struggle such a task implies. Perhaps what is required is not to put off equality for the future, but to presuppose the common, to affirm the common as a fact, a given, which must nevertheless be verified, created, not by a social body, not by a collective force, but by a power of the common, now.

School is not University. Neither is it Academy or College or even Institute.
We are all familiar with the common meaning of the word: it is a place for
learning. In another sense, it also refers to organized education in general,
which is made most clear by the decision to leave, to “drop out of school”.
Alongside these two stable, almost architectural definitions, the word
gestures to composition and movement—the school of bodies, moving
independently, together; the school only exists as long as that collective
movement does. The school takes shape in this oscillation between form and
formlessness, not through the act of constructing a wall but by the process of
realizing its boundary through practice.

Perhaps this is a way to think of how to develop what Felix Guattari called
“the associative sector” in 1982: “everything that isn’t the state, or private
capital, or even cooperatives”3. At first glance, the associative sector is only a name for the remainder, the already outside; but, in the language of a school, it is a constellation of relationships, affinities, new subjectivities, and movements, flickering into existence through life and use. It is an “engaged withdrawal” that simultaneously creates an exit and institutes something in the act of passing through. This might bring us back to school, to the Greek etymology of school, skhole, “a holding back”, a “keeping clear” of
space for reflective distance. On the one hand, perhaps this reflective space
simply allows theoretical knowledge to shape or affect performative action;
but on the other hand, the production of this “clearing” is not given,
certainly not now and certainly not by the institutions that claim to give it.
Reflective space is not the precondition for performative action. On the
contrary: performative action is the precondition for reflective space—or,
more appropriately, space and action must be coproduced.

Is the University even worth “saving”? We are right to respond with
indignation, or better, with an array of tactics—some procedural, some more
“direct”—against these incursions, which always seem to authorize themselves
by appeals to economic austerity, budget shortfalls, and tightened belts.
Perhaps what is being destroyed in this process is the very notion of the public sphere itself. It is easy to succumb to the illusion
that the only possible result of this destruction of the figure of the public
is privatization. But what if the figure of the public was to be set off
against not only the private and property relations, but against a figure of
the “common” as well? What if, in other words, the notion of the public has
always been an unstable, mediating term between privatization and
communization, and what if the withering of this mediation left these two
processes openly at odds with each other? Perhaps, then, it is not simply a
question of saving a university and, more broadly, a public space that is
already withering away; maybe our energies and our intelligence, our
collective or common intellectual forces, should be devoted to organizing and
articulating just this sort of counter-transition, at a distance from the
public and the private.


## Authorship and new forms of knowledge

For decades we have spoken about the “death of the author”. The most sustained
critiques of authorship have been made from the spheres of art and education,
but not coincidentally, these spheres have the most invested in the notion.
Credit and accreditation are the mechanisms for attaching symbolic capital to
individuals via degrees and other lines on CVs. The curriculum vitæ is an
inverted credit report, evidence of underpaid work, kept orderly with an
expectation of some future return.

All of this work, this self-documentation, this fidelity between ourselves and
our papers, is for what, for whom? And what is the consequence of a world
where every person is armed with their vitæ, other than “the war of all
against all?” It’s that sensation that there are no teams but everyone has got
their own jersey.

The idea behind the project The Public School is to teach each other in a very
horizontal way. No curriculum, no hierarchy. But is The Public School able to
produce new knowledge and new content by itself? Can The Public School
become a sort of autonomous collective author? Or, is The Public School just
about exchanges and social networking?

In the recent history of university struggles, some collectives started to refresh the idea of coresearch: a form of knowledge production that generates new subjectivities through the act of researching, new subjectivities that produce new knowledge, and new knowledge that produces new subjectivities. If knowledge comes only from conflict, knowledge goes back to conflict in order to produce new autonomy and subjectivities.

### The Public School

Sean Dockray, Matteo Pasquinelli, Jason Smith and Caleb Waldorf are founding
members of and collaborators at The Public School. Initiated in 2007 under
Telic Arts Exchange (literally in the basement) in Los Angeles, The Public
School is a school with no curriculum. At the moment, it operates as follows:
first, classes are proposed by the public; then, people have the opportunity
to sign up for the classes; finally, when enough people have expressed
interest, the school finds a teacher and offers the class to those who signed
up. The Public School is not accredited, it does not give out degrees, and it
has no affiliation with the public school system. It is a framework that
supports autodidactic activities, operating under the assumption that
everything is in everything. The Public School currently exists in Los
Angeles, New York, Berlin, Brussels, Helsinki, Philadelphia, Durham, San Juan,
and is still expanding.


Elbakyan
Why Science is Better with Communism? The Case of Sci-Hub (transcript and translation)
2016


# Transcript and translation of Sci-Hub presentation

_The University of North Texas's [Open Access Symposium
2016](/symposium/2016/) included [a presentation via Skype by Alexandra
Elbakyan](/symposium/2016/why-science-better-communism-case-sci-hub), the
founder of Sci-Hub. [Elbakyan's
slides](http://digital.library.unt.edu/ark:/67531/metadc850001/) (and those of
other presenters) have been archived in the UNT Digital Library, and [video of
this presentation](https://youtu.be/hr7v5FF5c8M) (and others) is now available
on YouTube and soon in the UNT Digital Library._

_The presentation was entitled "Why Science is Better with Communism? The Case
of Sci-Hub." Below is an edited transcript of the presentation produced by
Regina Anikina and Kevin Hawkins, with a translation by Kevin Hawkins and Anna
Pechenina._

**Martin Halbert** : We have a recent addition to our lineup of speakers that
we'll start off the day with: Alexandra Elbakyan. As many of you know,
Alexandra is a Kazakhstani graduate student, computer programmer, and the
creator of the controversial Sci-Hub site. The New York Times has compared her
to Edward Snowden for leaking information and because she avoids American law,
but Ars Technica has compared her to Aaron Swartz--so a controversial figure.
We thought it was very important to include her in the dialog about open
access because we want, in this symposium series, to include all the different
perspectives on copyright, intellectual property, open access, and access to
scholarly information. So I'm delighted that we're actually able to have her
here via Skype to present.

---

**Alexandra Elbakyan** : First of all, thank you for inviting me to share my
views. My name is Alexandra. As you might have guessed, I represent the site
Sci-Hub. It was founded in 2011 and immediately became popular among the local community. Almost from the start it was providing access to about 40 articles an hour, and now it provides more than 200,000.

It has to be said that over the course of the site's development it was
strongly supported by donations, and when for various reasons we had to
suspend the service, there were many displeased users who clamored for the
project to return so that the work in their laboratory could continue.

This is the case not just in poor countries; I can say that in rich countries
the public also doesn't have access to scholarly articles. And not all
universities have subscriptions to those resources that are required for
research.

A few of our users insisted that we start charging users, for example, by
allowing one or two articles to be downloaded for free but charging for more,
so that the service would be supported by those who really need it. But I
didn't end up doing that because the goal of the resource is knowledge for
all.

Certain open-access advocates criticize the site, saying that what we really
need is for articles to be in open access from the start, by changing the
business models of publishers. I can respond by saying that the goal of the
project is first and foremost the dissemination of scholarly knowledge in
society, and we have to work in the conditions we find ourselves in. Of
course, if scholarly publishers had a different business model, then perhaps
this project wouldn't be necessary. We can also imagine that if humans had
wings, we wouldn't need airplanes. But in any case we need to fly, so we make
airplanes.

Scholarly publishers quickly dubbed the work of Sci-Hub as piracy. Admittedly
Sci-Hub violates the laws of copyright, but copyright is related to the rights
of intellectual property. That is, scholarly articles are the property of
publishers, and reading them for free turns out to be something like theft
according to the current law.

The concept of intellectual property itself is not new, although it can seem
otherwise. The history of copyright goes back to around the 18th century,
although the first mentions of something similar can be found in the Talmud.
It's just that recently copyright has been found at the center of passionate
debate since some are trying to forbid the free distribution of information on the internet.

However, the central focus of the debate is on censorship and privacy. The
defense of intellectual property on the internet requires censorship of
websites, and that is consequently a violation of freedom of speech. This also
raises a question of interference in private life - that is, when the
government in some way monitors users who violate copyright. In principle this
is also an intrusion in communication.

However, the very essence of copyright - that is, the concept of intellectual
property - is almost never questioned. That is, whether knowledge can be
someone's property is rarely discussed.

However, our ancestors were even more daring. They did not just question
intellectual property but property in general. That is, there are works in
which we can find the appearance of the idea of communism. There's Thomas
More's _Utopia_ from the 16th century, but actually such works arose much
earlier, even in Ancient Greece, where these questions were already being discussed in 391 BCE.

If we look at the slogans of communism, we see that one of the core concepts
is the struggle against inequality, the revolt of the suppressed classes,
whose members don't have any power against those who have concentrated basic
resources and power in their hands, with the goal of redistributing these
resources.

We can see that even today there is a certain informational inequality, when,
for example, only students and employees of the most wealthy universities have
full access to scholarly information, while access can be completely lacking
for institutions at the next lower tier and for the general public.

An idea arises: if there isn't private property, then there's no basis for
unequal distribution of wealth. In our case as well: if there's no private
intellectual property and all scholarly publications are nationalized, then
all people will have equal access to knowledge.

However, a question arises: if there is no private property, then what can
stimulate a person to work? One of the ideas is that under communism, rather
than greed or aspiration for wealth being a stimulus for work, a person would
aspire to self-development and learning for the betterment of the world.

Even if such values can't be applied to society as a whole, they at least work
in the world of scholarship. Therefore in the Soviet Union there was a true
cult of science - statues were even erected to the glory of science - and
perhaps thanks to this our country was one of the first to go into space.

However, it's one thing to have a revolution, when there's a mass
redistribution of property in society, but an act of theft is another thing.
This, of course, is not yet a revolution, but it's a small protest against property rights and the unequal distribution of wealth. Theft as protest has
always been welcomed and approved of in all eras of society. For example, we
all know about Robin Hood, but there have actually been quite a few noble
bandits in history. I've listed just a few of them.

I think that if the state works well, then accordingly it has a working tax
system and a certain system of redistribution of wealth, and then,
accordingly, there's no cause for revolution, for example. But if for some
reasons the state works poorly, then people begin to solve the problem for
themselves. In this way, Sci-Hub is an appropriate response to the inequality
that has arisen due to lack of access to information.

Pictured is Aldar Köse, a Kazakh folk hero who used his cunning to deceive
wealthy beys and take possession of their property. It's interesting to note
that beys are always depicted as greedy and stupid. And if you look at what's
written in the blogosphere today about scholarly publishers, you can find
these same characteristics.

There's also the interesting figure of the ancient Greek god Hermes, the
patron of thieves. That is, theft was a sufficiently respected activity that
it had its own god.

There's a researcher named Norman Brown who wrote an academic work called
_Hermes the Thief: The Evolution of a Myth_. It turns out that this myth is
related to a certain revolution in ancient Greek society, when the lower
classes, which lacked property, began to rise up.

For example, the poet Theognis of Megara wrote that "those who were nothing
became everything" and vice versa. This is essentially one of the most well-
known communist slogans.

For the ancient Greeks this was related, again as Brown says, to the
appearance of trade. Trade was identified with theft. There was no clear
distinction between the exchange of legal and illegal goods - that is, trade
was just as much considered theft as what we call piracy today.

Why did it turn out this way? Because Hermes was originally a god of
boundaries and transitions. Therefore, we can think that property is related
to keeping something within boundaries. At the same time, the things that
Hermes protected - theft, trade and communication - are related to boundary-
crossing.

If we think about scholarly journals, then any journal is first of all a means
of communication, and therefore it's apparent that keeping journals in closed
access contradicts the essence of what they were intended for.

This is, of course, not even the most interesting thing.

Hermes actually evolved - that is, while he was once an intellectual deity, he
later came to be interpreted as the same as Thoth, the Egyptian god of
knowledge, and further came to oversee such things as astrology, alchemy, and
magic - that is, the things from which, you might say, contemporary sciences
arose. So we can say that contemporary science arose from theft.

Of course, someone can object, saying that contemporary science is very
different from esoterica, such as astrology and alchemy, but if we look at the
history of science, we see that contemporary science differs from the ancient
arts in the former being more open.

That is, when the movement towards greater openness appeared, contemporary
science also appeared. Once again this is not an argument in support of
scholarly publishers.

Indeed, in the cultural consciousness science and the process of learning have
always been closely associated with theft, beginning with the legend of Adam
and Eve and the forbidden tree, which is called simply "the tree of
knowledge." And it's interesting that Elsevier's logo depicts some kind of
tree, which, accordingly, raises associations with this tree in the Garden of
Eden - the tree of knowledge - from which it was forbidden to eat the fruit.

Likewise we can recall the well-known legend of Prometheus, a part of our
cultural consciousness, who stole some knowledge and brought it to humans.
Once again we see the connection between science and theft.

Nowadays, many scholars have described science as the knowledge of secrets.
However, if we look closely, we have to ask: what is a secret? A secret is
something private, in essence private property. Accordingly, the disclosure of
the secret signifies that it ceases to be property. Once again we see the
contradiction between scholarship and property rights.

We can recall Robert Merton, who studied research institutes and revealed four
basic ethical norms that in his opinion are important for their successful
functioning. One of them is communism - that is, knowledge is shared.

Accordingly, if we look at certain traditional communities, then we find that
those communities that function within a caste system (dividing people by
occupation) usually turn out to have certain castes of people with
intellectual occupations, and if you look at the ethical norms of such castes,
you find that they are also communistic. You can find this, for example, in
Plato. Or even if you look at India, you find the accumulation of wealth is
usually the occupation of another caste.

To sum up, we have the following take-aways. Science, as a part of culture, is
in conflict with private property. Accordingly, scholarly communication is a
dual conflict. What open access is doing is returning science to its essential
roots.

**Audience question** : I'm a former university press director. I'd just like
to point out also that "property is theft" is the watchword of French
anarchism, a famous phrase from Pierre-Joseph Proudhon, so perhaps anarchism
and science are also inseparable. But my main question really has to do with a
challenge that a librarian named Rick Anderson posted on the Scholarly Kitchen
blog two days ago, and that has to do with the fact that evidently Sci-Hub
relies a lot on the access codes that faculty have given to Sci-Hub in one way
or another so that Sci-Hub can gain access to the electronic materials that it
then uses to post on its own site. What Anderson points out is that if
that information falls into the wrong hands, there are all sorts of terrible
things that can be done because those access codes provide access to personal
information, to student data, to all sorts of other things that could be badly
misused. So my question to you is: what assurances can you give us that that kind of information will not fall into the wrong hands?

**Elbakyan** : Well, first of all I doubt that it's possible to gain access to
all the information that is listed in the post on the Scholarly Kitchen. As a
rule, these logins and passwords can only be used for access to the proxy
server through which you can download articles, whereas for access to other
things, such as email, the login and password won't work. [ _Audience reacts
with skepticism._ ]

**Audience question** : Earlier this week a number of us participated in a
panel presentation on scholarly publishing and social justice, and one of the
primary points that came out of that was that the people who create the
published product - not necessarily the scientist but the people who actually
do the work that results in the published product - deserve to be paid for
their labor, and there is definitely labor involved. So if you're replacing
the market for these publications and eliminating these people's opportunities
to make money, where is the appropriate distribution of wealth?

**Elbakyan** : First of all, we shouldn't confuse the compensation that a
person receives for their labor with the excessive profits that publishers
wring out by limiting access to information. For example, Sci-Hub also does a
fair amount of work and has high expenses, but these expenses are for some
reason covered by donations - that is, there's no need to close access to
information - that is, it's a red herring to say that if articles are
distributed for free, people won't have anything to eat. One does not follow
from the other. In my opinion, though, an optimal system for funding would
consist of grants, donations, and membership fees.

**Audience question** : You've spoken so far exclusively about Sci-Hub. I
wonder if you could comment just briefly on LibGen and whether you see the two
models as identical or whether there are any material differences between
LibGen and Sci-Hub.

**Elbakyan** : Well, LibGen is primarily a repository. It doesn't download
new articles but is more aimed at preserving that which has already been
downloaded.



USDC
Complaint: Elsevier v. SciHub and LibGen
2015


Case 1:15-cv-04282-RWS Document 1 Filed 06/03/15 Page 1 of 16

UNITED STATES DISTRICT COURT
SOUTHERN DISTRICT OF NEW YORK

Index No. 15-cv-4282 (RWS)
COMPLAINT

ELSEVIER INC., ELSEVIER B.V., ELSEVIER LTD.
Plaintiffs,

v.

SCI-HUB d/b/a WWW.SCI-HUB.ORG, THE LIBRARY GENESIS PROJECT d/b/a LIBGEN.ORG, ALEXANDRA ELBAKYAN, JOHN DOES 1-99,
Defendants.

Plaintiffs Elsevier Inc, Elsevier B.V., and Elsevier Ltd. (collectively “Elsevier”),
by their attorneys DeVore & DeMarco LLP, for their complaint against www.sci-hub.org,
www.libgen.org, Alexandra Elbakyan, and John Does 1-99 (collectively the “Defendants”),
allege as follows:

NATURE OF THE ACTION

1. This is a civil action seeking damages and injunctive relief for: (1) copyright infringement under the copyright laws of the United States (17 U.S.C. § 101 et seq.); and (2) violations of the Computer Fraud and Abuse Act, 18 U.S.C. § 1030, based upon Defendants’ unlawful access to, use, reproduction, and distribution of Elsevier’s copyrighted works. Defendants’ actions in this regard have caused and continue to cause irreparable injury to Elsevier and its publishing partners (including scholarly societies) for which it publishes certain journals.


PARTIES

2. Plaintiff Elsevier Inc. is a corporation organized under the laws of Delaware, with its principal place of business at 360 Park Avenue South, New York, New York 10010.

3. Plaintiff Elsevier B.V. is a corporation organized under the laws of the Netherlands, with its principal place of business at Radarweg 29, Amsterdam, 1043 NX, Netherlands.

4. Plaintiff Elsevier Ltd. is a corporation organized under the laws of the United Kingdom, with its principal place of business at 125 London Wall, EC2Y 5AS United Kingdom.

5. Upon information and belief, Defendant Sci-Hub is an individual or organization engaged in the operation of the website accessible at the URL “www.sci-hub.org,” and related subdomains, including but not limited to the subdomain “www.sciencedirect.com.sci-hub.org,”
“www.elsevier.com.sci-hub.org,” “store.elsevier.com.sci-hub.org,” and various subdomains
incorporating the company and product names of other major global publishers (collectively with www.sci-hub.org the “Sci-Hub Website”). The sci-hub.org domain name is registered by
“Fundacion Private Whois,” located in Panama City, Panama, to an unknown registrant. As of
the date of this filing, the Sci-Hub Website is assigned the IP address 31.184.194.81. This IP address is part of a range of IP addresses assigned to Petersburg Internet Network Ltd., a webhosting company located in Saint Petersburg, Russia.

6. Upon information and belief, Defendant Library Genesis Project is an organization which operates an online repository of copyrighted materials accessible through the website located at the URL “libgen.org” as well as a number of other “mirror” websites (collectively the “Libgen Domains”). The libgen.org domain is registered by “Whois Privacy Corp.,” located at Ocean Centre, Montagu Foreshore, East Bay Street, Nassau, New Providence, Bahamas, to an unknown registrant. As of the date of this filing, libgen.org is assigned the IP address 93.174.95.71. This IP address is part of a range of IP addresses assigned to Ecatel Ltd., a web-hosting company located in Amsterdam, the Netherlands.

7. The Libgen Domains include “elibgen.org,” “libgen.info,” “lib.estrorecollege.org,” and “bookfi.org.”

8. Upon information and belief, Defendant Alexandra Elbakyan is the principal owner and/or operator of Sci-Hub. Upon information and belief, Elbakyan is a resident of Almaty, Kazakhstan.

9. Elsevier is unaware of the true names and capacities of the individuals named as Does 1-99 in this Complaint (together with Alexandra Elbakyan, the “Individual Defendants”),
and their residence and citizenship is also unknown. Elsevier will amend its Complaint to allege the names, capacities, residence and citizenship of the Doe Defendants when their identities are learned.

10. Upon information and belief, the Individual Defendants are the owners and operators of numerous websites, including Sci-Hub and the websites located at the various Libgen Domains, and a number of e-mail addresses and accounts at issue in this case.

11. The Individual Defendants have participated in, exercised control over, and benefited from the infringing conduct described herein, which has resulted in substantial harm to the Plaintiffs.

JURISDICTION AND VENUE

12. This is a civil action arising from the Defendants’ violations of the copyright laws of the United States (17 U.S.C. § 101 et seq.) and the Computer Fraud and Abuse Act (“CFAA”), 18 U.S.C. § 1030. Therefore, the Court has subject matter jurisdiction over this action pursuant to 28 U.S.C. § 1331.

13. Upon information and belief, the Individual Defendants own and operate computers and Internet websites and engage in conduct that injures Plaintiff in this district, while
also utilizing instrumentalities located in the Southern District of New York to carry out the acts complained of herein.

14. Defendants have affirmatively directed actions at the Southern District of New York by utilizing computer servers located in the District without authorization and by
unlawfully obtaining access credentials belonging to individuals and entities located in the
District, in order to unlawfully access, copy, and distribute Elsevier's copyrighted materials
which are stored on Elsevier’s ScienceDirect platform.
15. Defendants have committed the acts complained of herein through unauthorized access to Plaintiffs’ copyrighted materials which are stored and maintained on computer servers located in the Southern District of New York.
16. Defendants have undertaken the acts complained of herein with knowledge that such acts would cause harm to Plaintiffs and their customers in both the Southern District of New York and elsewhere. Defendants have caused the Plaintiff injury while deriving revenue from interstate or international commerce by committing the acts complained of herein. Therefore, this Court has personal jurisdiction over Defendants.
17. Venue in this District is proper under 28 U.S.C. § 1391(b) because a substantial part of the events giving rise to Plaintiffs’ claims occurred in this District and because the property that is the subject of Plaintiffs’ claims is situated in this District.


FACTUAL ALLEGATIONS
Elsevier’s Copyrights in Publications on ScienceDirect
18. Elsevier is a world leading provider of professional information solutions in the Science, Medical, and Health sectors. Elsevier publishes, markets, sells, and licenses academic textbooks, journals, and examinations in the fields of science, medicine, and health. The majority of Elsevier’s institutional customers are universities, governmental entities, educational institutions, and hospitals that purchase physical and electronic copies of Elsevier’s products and access to Elsevier’s digital libraries. Elsevier distributes its scientific journal articles and book chapters electronically via its proprietary subscription database “ScienceDirect” (www.sciencedirect.com). In most cases, Elsevier holds the copyright and/or exclusive distribution rights to the works available through ScienceDirect. In addition, Elsevier holds trademark rights in “Elsevier,” “ScienceDirect,” and several other related trade names.
19. The ScienceDirect database is home to almost one-quarter of the world’s peer-reviewed, full-text scientific, technical and medical content. The ScienceDirect service features sophisticated search and retrieval tools for students and professionals which facilitate access to over 10 million copyrighted publications. More than 15 million researchers, health care professionals, teachers, students, and information professionals around the globe rely on ScienceDirect as a trusted source of nearly 2,500 journals and more than 26,000 book titles.
20. Authorized users are provided access to the ScienceDirect platform by way of non-exclusive, non-transferable subscriptions between Elsevier and its institutional customers. According to the terms and conditions of these subscriptions, authorized users of ScienceDirect must be users affiliated with the subscriber (e.g., full-time and part-time students, faculty, staff and researchers of subscriber universities, and individuals using computer terminals within the library facilities at the subscriber for personal research, education or other non-corporate use).
21. A substantial portion of American research universities maintain active subscriptions to ScienceDirect. These subscriptions, under license, allow the universities to provide their faculty and students access to the copyrighted works within the ScienceDirect database.
22. Elsevier stores and maintains the copyrighted material available in ScienceDirect on servers owned and operated by a third party whose servers are located in the Southern District of New York and elsewhere. In order to optimize performance, these third-party servers collectively operate as a distributed network which serves cached copies of Elsevier’s copyrighted materials by way of particular servers that are geographically close to the user. For example, a user that accesses ScienceDirect from a university located in the Southern District of New York will likely be served that content from a server physically located in the District.

Authentication of Authorized University ScienceDirect Users
23. Elsevier maintains the integrity and security of the copyrighted works accessible on ScienceDirect by allowing only authenticated users access to the platform. Elsevier authenticates educational users who access ScienceDirect through their affiliated university’s subscription by verifying that they are able to access ScienceDirect from a computer system or network previously identified as belonging to a subscribing university.
24. Elsevier does not track individual educational users’ access to ScienceDirect. Instead, Elsevier verifies only that the user has authenticated access to a subscribing university.
25. Once an educational user authenticates his computer with ScienceDirect on a university network, that computer is permitted access to ScienceDirect for a limited amount of time without re-authenticating. For example, a student could access ScienceDirect from their laptop while sitting in a university library, then continue to access ScienceDirect using that laptop from their dorm room later that day. After a specified period of time has passed, however, a user will have to re-authenticate his or her computer’s access to ScienceDirect by connecting to the platform through a university network.
26. As a matter of practice, educational users access university networks, and thereby authenticate their computers with ScienceDirect, primarily through one of two methods. First, the user may be physically connected to a university network, for example by taking their computer to the university’s library. Second, the user may connect remotely to the university’s network using a proxy connection. Universities offer proxy connections to their students and faculty so that those users may access university computing resources – including access to research databases such as ScienceDirect – from remote locations which are unaffiliated with the university. This practice facilitates the use of ScienceDirect by students and faculty while they are at home, travelling, or otherwise off-campus.
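Paragraphs 23 through 26 amount to a description of network-origin authentication with a time-limited grace period: a device is trusted while it appears inside a subscriber’s address space (reached physically or through the university proxy), and stays trusted off-network until its session expires. A schematic sketch of that logic, with invented address ranges and timeout; it illustrates the mechanism the complaint describes, not Elsevier’s actual implementation:

```python
import time
from ipaddress import ip_address, ip_network

# Hypothetical subscriber address ranges (documentation-only TEST-NET blocks).
SUBSCRIBER_NETWORKS = [ip_network("192.0.2.0/24"), ip_network("198.51.100.0/24")]
SESSION_TTL = 8 * 3600  # assumed re-authentication window, in seconds

sessions: dict[str, float] = {}  # device id -> time of last on-network authentication

def is_authorized(device_id: str, client_ip: str) -> bool:
    """Trust requests from a subscriber network; honor a fresh session otherwise."""
    if any(ip_address(client_ip) in net for net in SUBSCRIBER_NETWORKS):
        sessions[device_id] = time.time()  # on-network: (re)authenticate the device
        return True
    # Off-network (para. 25): allowed only until the previous session expires.
    return time.time() - sessions.get(device_id, float("-inf")) < SESSION_TTL
```

On this model, a proxy connection (paragraph 26) needs no special handling: traffic relayed through the university emerges from a subscriber address and is indistinguishable from on-campus use, which is what makes the credential abuse alleged in paragraph 28 effective.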
Defendants’ Unauthorized Access to University Proxy Networks to Facilitate Copyright
Infringement
27. Upon information and belief, Defendants are reproducing and distributing unauthorized copies of Elsevier’s copyrighted materials, unlawfully obtained from ScienceDirect, through Sci-Hub and through various websites affiliated with the Library Genesis Project. Specifically, Defendants utilize their websites located at sci-hub.org and at the Libgen Domains to operate an international network of piracy and copyright infringement by circumventing legal and authorized means of access to the ScienceDirect database. Defendants’ piracy is supported by the persistent intrusion and unauthorized access to the computer networks of Elsevier and its institutional subscribers, including universities located in the Southern District of New York.
28. Upon information and belief, Defendants have unlawfully obtained and continue to unlawfully obtain student or faculty access credentials which permit proxy connections to universities which subscribe to ScienceDirect, and use these credentials to gain unauthorized access to ScienceDirect.
29. Upon information and belief, Defendants have used and continue to use such access credentials to authenticate access to ScienceDirect and, subsequently, to obtain copyrighted scientific journal articles therefrom without valid authorization.
30. The Sci-Hub website requires user interaction in order to facilitate its illegal copyright infringement scheme. Specifically, before a Sci-Hub user can obtain access to copyrighted scholarly journals, articles, and books that are maintained by ScienceDirect, he must first perform a search on the Sci-Hub page. A Sci-Hub user may search for content using either (a) a general keyword-based search, or (b) a journal, article or book identifier (such as a Digital Object Identifier, PubMed Identifier, or the source URL).
31. When a user performs a keyword search on Sci-Hub, the website returns a proxied version of search results from the Google Scholar search database.[1] When a user selects one of the search results, if the requested content is not available from the Library Genesis Project, Sci-Hub unlawfully retrieves the content from ScienceDirect using the access previously obtained. Sci-Hub then provides a copy of that article to the requesting user, typically in PDF format. If, however, the requested content can be found in the Library Genesis Project repository, upon information and belief, Sci-Hub obtains the content from the Library Genesis Project repository and provides that content to the user.

[1] Google Scholar provides its users the capability to search for scholarly literature, but does not provide the full text of copyrighted scientific journal articles accessible through paid subscription services such as ScienceDirect. Instead, Google Scholar provides bibliographic information concerning such articles along with a link to the platform through which the article may be purchased or accessed by a subscriber.
32. When a user searches on Sci-Hub for an article available on ScienceDirect using a journal or article identifier, the user is redirected to a proxied version of the ScienceDirect page where the user can download the requested article at no cost. Upon information and belief, Sci-Hub facilitates this infringing conduct by using unlawfully-obtained access credentials to university proxy servers to establish remote access to ScienceDirect through those proxy servers. If, however, the requested content can be found in the Library Genesis Project repository, upon information and belief, Sci-Hub obtains the content from it and provides it to the user.
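Read together, paragraphs 31, 32, 35, and 36 describe a cache-first retrieval loop: serve a requested article from the Library Genesis repository if it is already there, otherwise fetch it from the publisher through a proxied session and deposit a duplicate in the repository. A schematic reconstruction of that flow as alleged (function and variable names are invented; the complaint documents behaviour, not code):

```python
def serve_request(identifier: str, repository: dict[str, bytes]) -> bytes:
    """Return the PDF for a DOI-like identifier, per the alleged flow."""
    if identifier in repository:       # paras. 32/36: reuse the stored copy
        return repository[identifier]
    pdf = fetch_via_proxy(identifier)  # para. 32: proxied download from the publisher
    repository[identifier] = pdf       # para. 35: deposit a duplicate in the repository
    return pdf

def fetch_via_proxy(identifier: str) -> bytes:
    # Placeholder for the proxied retrieval step; the filing describes the
    # behaviour of this step but not its implementation.
    raise NotImplementedError
```

The division of labour this implies - Sci-Hub as the live fetcher, Library Genesis as the accumulating store - is exactly the “act in concert” theory the complaint advances in paragraph 36.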
33. Upon information and belief, Sci-Hub engages in no activity other than the illegal reproduction and distribution of digital copies of Elsevier’s copyrighted works and the copyrighted works of other publishers, and the encouragement, inducement, and material contribution to the infringement of the copyrights of those works by third parties – i.e., the users of the Sci-Hub website.
34. Upon information and belief, in addition to the blatant and rampant infringement of Elsevier’s copyrights as described above, the Defendants have also used the Sci-Hub website to earn revenue from the piracy of copyrighted materials from ScienceDirect. Sci-Hub has at various times accepted funds through a variety of payment processors, including PayPal, Yandex, WebMoney, QiWi, and Bitcoin.
Sci-Hub’s Use of the Library Genesis Project as a Repository for Unlawfully-Obtained
Scientific Journal Articles and Books
35. Upon information and belief, when Sci-Hub pirates and downloads an article from ScienceDirect in response to a user request, in addition to providing a copy of that article to that user, Sci-Hub also provides a duplicate copy to the Library Genesis Project, which stores the article in a database accessible through the Internet. Upon information and belief, the Library Genesis Project is designed to be a permanent repository of this and other illegally obtained content.
36. Upon information and belief, in the event that a Sci-Hub user requests an article which has already been provided to the Library Genesis Project, Sci-Hub may provide that user access to a copy provided by the Library Genesis Project rather than re-download an additional copy of the article from ScienceDirect. As a result, Defendants Sci-Hub and Library Genesis Project act in concert to engage in a scheme designed to facilitate the unauthorized access to and wholesale distribution of Elsevier’s copyrighted works legitimately available on the ScienceDirect platform.
The Library Genesis Project’s Unlawful Distribution of Plaintiff’s Copyrighted Works
37. Access to the Library Genesis Project’s repository is facilitated by the website “libgen.org,” which provides its users the ability to search, download content from, and upload content to, the repository. The main page of libgen.org allows its users to perform searches in various categories, including “LibGen (Sci-Tech),” and “Scientific articles.” In addition to searching by keyword, users may also search for specific content by various other fields, including title, author, periodical, publisher, or ISBN or DOI number.
38. The libgen.org website indicates that the Library Genesis Project repository contains approximately 1 million “Sci-Tech” documents and 40 million scientific articles. Upon information and belief, the large majority of these works are subject to copyright protection and are being distributed through the Library Genesis Project without the permission of the applicable rights-holder. Upon information and belief, the Library Genesis Project serves primarily, if not exclusively, as a scheme to violate the intellectual property rights of the owners of millions of copyrighted works.
39. Upon information and belief, Elsevier owns the copyrights in a substantial number of copyrighted materials made available for distribution through the Library Genesis Project. Elsevier has not authorized the Library Genesis Project or any of the Defendants to copy, display, or distribute through any of the complained of websites any of the content stored on ScienceDirect to which it holds the copyright. Among the works infringed by the Library Genesis Project are the “Guyton and Hall Textbook of Medical Physiology,” and the article “The Varus Ankle and Instability” (published in Elsevier’s journal “Foot and Ankle Clinics of North America”), each of which is protected by Elsevier’s federally-registered copyrights.
40. In addition to the Library Genesis Project website accessible at libgen.org, users may access the Library Genesis Project repository through a number of “mirror” sites accessible through other URLs. These mirror sites are similar, if not identical, in functionality to libgen.org. Specifically, the mirror sites allow their users to search and download materials from the Library Genesis Project repository.
FIRST CLAIM FOR RELIEF
(Direct Infringement of Copyright)
41. Elsevier incorporates by reference the allegations contained in paragraphs 1-40 above.

42. Elsevier’s copyright rights and exclusive distribution rights to the works available on ScienceDirect (the “Works”) are valid and enforceable.

43. Defendants have infringed on Elsevier’s copyright rights to these Works by knowingly and intentionally reproducing and distributing these Works without authorization.


44. The acts of infringement described herein have been willful, intentional, and purposeful, in disregard of and indifferent to Plaintiffs’ rights.

45. Without authorization from Elsevier, or right under law, Defendants are directly liable for infringing Elsevier’s copyrighted Works pursuant to 17 U.S.C. §§ 106(1) and/or (3).

46. As a direct result of Defendants’ actions, Elsevier has suffered and continues to suffer irreparable harm for which Elsevier has no adequate remedy at law, and which will continue unless Defendants’ actions are enjoined.

47. Elsevier seeks injunctive relief and costs and damages in an amount to be proven at trial.
SECOND CLAIM FOR RELIEF
(Secondary Infringement of Copyright)
48. Elsevier incorporates by reference the allegations contained in paragraphs 1-40 above.

49. Elsevier’s copyright rights and exclusive distribution rights to the works available on ScienceDirect (the “Works”) are valid and enforceable.

50. Defendants have infringed on Elsevier’s copyright rights to these Works by knowingly and intentionally reproducing and distributing these Works without license or other authorization.

51. Upon information and belief, Defendants intentionally induced, encouraged, and materially contributed to the reproduction and distribution of these Works by third party users of websites operated by Defendants.

52. The acts of infringement described herein have been willful, intentional, and purposeful, in disregard of and indifferent to Elsevier’s rights.


53. Without authorization from Elsevier, or right under law, Defendants are directly liable for third parties’ infringement of Elsevier’s copyrighted Works pursuant to 17 U.S.C. §§ 106(1) and/or (3).

54. Upon information and belief, Defendants profited from third parties’ direct infringement of Elsevier’s Works.

55. Defendants had the right and the ability to supervise and control their websites and the third party infringing activities described herein.

56. As a direct result of Defendants’ actions, Elsevier has suffered and continues to suffer irreparable harm for which Elsevier has no adequate remedy at law, and which will continue unless Defendants’ actions are enjoined.

57. Elsevier seeks injunctive relief and costs and damages in an amount to be proven at trial.
THIRD CLAIM FOR RELIEF
(Violation of the Computer Fraud & Abuse Act)
58. Elsevier incorporates by reference the allegations contained in paragraphs 1-40 above.

59. Elsevier’s computers and servers, the third-party computers and servers which store and maintain Elsevier’s copyrighted works for ScienceDirect, and Elsevier’s customers’ computers and servers which facilitate access to Elsevier’s copyrighted works on ScienceDirect, are all “protected computers” under the Computer Fraud and Abuse Act (“CFAA”).

60. Defendants (a) knowingly and intentionally accessed such protected computers without authorization and thereby obtained information from the protected computers in a transaction involving an interstate or foreign communication (18 U.S.C. § 1030(a)(2)(C)); and (b) knowingly and with an intent to defraud accessed such protected computers without authorization and obtained information from such computers, which Defendants used to further the fraud and obtain something of value (18 U.S.C. § 1030(a)(4)).
61. Defendants’ conduct has caused, and continues to cause, significant and irreparable damages and loss to Elsevier.

62. Defendants’ conduct has caused a loss to Elsevier during a one-year period aggregating at least $5,000.

63. As a direct result of Defendants’ actions, Elsevier has suffered and continues to suffer irreparable harm for which Elsevier has no adequate remedy at law, and which will continue unless Defendants’ actions are enjoined.

64. Elsevier seeks injunctive relief, as well as costs and damages in an amount to be proven at trial.
PRAYER FOR RELIEF
WHEREFORE, Elsevier respectfully requests that the Court:
A. Enter preliminary and permanent injunctions, enjoining and prohibiting Defendants,
their officers, directors, principals, agents, servants, employees, successors and
assigns, and all persons and entities in active concert or participation with them, from
engaging in any of the activity complained of herein or from causing any of the injury
complained of herein and from assisting, aiding, or abetting any other person or
business entity in engaging in or performing any of the activity complained of herein
or from causing any of the injury complained of herein;
B. Enter an order that, upon Elsevier’s request, those in privity with Defendants and
those with notice of the injunction, including any Internet search engines, Web
Hosting and Internet Service Providers, domain-name registrars, and domain name
registries or their administrators that are provided with notice of the injunction, cease
facilitating access to any or all domain names and websites through which Defendants
engage in any of the activity complained of herein;
C. Enter an order that, upon Elsevier’s request, those organizations which have
registered Defendants’ domain names on behalf of Defendants shall disclose
immediately to Plaintiffs all information in their possession concerning the identity of
the operator or registrant of such domain names and of any bank accounts or financial
accounts owned or used by such operator or registrant;
D. Enter an order that, upon Elsevier’s request, the TLD Registries for the Defendants’
websites, or their administrators, shall place the domain names on
registryHold/serverHold as well as serverUpdate, ServerDelete, and serverTransfer
prohibited statuses, for the remainder of the registration period for any such website.
E. Enter an order canceling or deleting, or, at Elsevier’s election, transferring the domain
name registrations used by Defendants to engage in the activity complained of herein
to Elsevier’s control so that they may no longer be used for illegal purposes;
F. Enter an order awarding Elsevier its actual damages incurred as a result of
Defendants’ infringement of Elsevier’s copyright rights in the Works and all profits
Defendant realized as a result of its acts of infringement, in amounts to be determined
at trial; or in the alternative, awarding Elsevier, pursuant to 17 U.S.C. § 504, statutory
damages for the acts of infringement committed by Defendants, enhanced to reflect
the willful nature of the Defendants’ infringement;
G. Enter an order disgorging Defendants’ profits;


USDC
Opinion: Elsevier against SciHub and LibGen
2015


UNITED STATES DISTRICT COURT
SOUTHERN DISTRICT OF NEW YORK
----------------------------------------

15 Civ. 4282(RWS)
OPINION

ELSEVIER INC., ELSEVIER B.V., and ELSEVIER LTD.,

Plaintiffs,

- against -

WWW.SCI-HUB.ORG, THE LIBRARY GENESIS PROJECT, d/b/a LIBGEN.ORG, ALEXANDRA ELBAKYAN, and JOHN DOES 1-99,

Defendants.

----------------------------------------

APPEARANCES

Attorneys for the Plaintiffs

DEVORE & DEMARCO LLP
99 Park Avenue, Suite 1100
New York, NY 10016
By:
Joseph DeMarco, Esq.
David Hirschberg, Esq.
Urvashi Sen, Esq.

Pro Se

Alexandra Elbakyan
Almaty, Kazakhstan


Sweet, D.J.,

Plaintiffs Elsevier Inc., Elsevier B.V., and Elsevier, Ltd. (collectively, "Elsevier" or the "Plaintiffs") have moved for a preliminary injunction preventing defendants Sci-Hub, Library Genesis Project (the "Project"), Alexandra Elbakyan ("Elbakyan"), Bookfi.org, Elibgen.org, Estrorecollege.org, and Libgen.info (collectively, the "Defendants") from distributing works to which Elsevier owns the copyright. Based upon the facts and conclusions below, the motion is granted and the Defendants are prohibited from distributing the Plaintiffs' copyrighted works.

Prior Proceedings

Elsevier, a major publisher of scientific journal articles and book chapters, brought this action on June 2, 2015, alleging that the Defendants, a series of websites affiliated with the Project (the "Website Defendants") and their owner and operator, Alexandra Elbakyan, infringed Elsevier's copyrighted works and violated the Computer Fraud and Abuse Act. (See generally Complaint, Dkt. No. 1.) Elsevier filed the instant motion for a preliminary injunction on June 11, 2015, via an Order to Show Cause. (Dkt. Nos. 5-13.) On June 18, 2015, the Court granted Plaintiffs' Order to Show Cause and authorized service on the Defendants via email. (Dkt. No. 15.) During the following week, the Plaintiffs served the Website Defendants via email and Elbakyan via email and postal mail. (See Dkt. Nos. 24-31.)

On July 7, 2015, the Honorable Ronnie Abrams, acting as Part One Judge, held a telephone conference with the Plaintiffs and Elbakyan, during which Elbakyan acknowledged receiving the papers concerning this case and declared that she did not intend to obtain a lawyer. (See Transcript, Dkt. No. 38.) After the conference, Judge Abrams issued an Order directing Elbakyan to notify the Court whether she wished assistance in obtaining pro bono counsel, and advising her that while she could proceed pro se, the Website Defendants, not being natural persons, must obtain counsel or risk default. (Dkt. No. 36.) A second telephonic conference was held on July 14, 2015, during which Elbakyan stated that she needed additional time to find a lawyer. (See Transcript, Dkt. No. 42.) Judge Abrams granted the request, but warned Elbakyan that "you have to move quickly both in attempting to retain an attorney and you'll have to stick to the schedule that is set once it's set." (Id. at 6.) After the telephone conference, Judge Abrams issued another Order setting the preliminary injunction hearing for September 16 and directing Elbakyan to inform the Court by July 21 if she wished assistance in obtaining pro bono counsel. (Dkt. No. 40.)

The motion for a preliminary injunction was heard on September 16, 2015. None of the Defendants appeared at the hearing, although Elbakyan sent a two-page letter to the court the day before. (Dkt. No. 50.)

Applicable Standard

Preliminary injunctions are "extraordinary and drastic remed[ies] that should not be granted unless the movant, by a clear showing, carries the burden of persuasion." Mazurek v. Armstrong, 520 U.S. 968, 972 (1997). In a copyright case, a district court may, at its discretion, grant a preliminary injunction when the plaintiffs demonstrate 1) a likelihood of success on the merits, 2) irreparable harm in the absence of an injunction, 3) a balance of the hardships tipping in their favor, and 4) that issuance of an injunction would not do a disservice to the public interest. WPIX, Inc. v. ivi, Inc., 691 F.3d 275, 278 (2d Cir. 2012).

The Motion is Granted

With the exception of Elbakyan, none of the Defendants filed any opposition to the instant motion, participated in any hearing or telephone conference, or in any other way appeared in the case. Although Elbakyan acknowledges that she is the "main operator of sci-hub.org website" (Dkt. No. 50 at 1.), she may only represent herself pro se; since the Website Defendants are not natural persons, they may only be represented by an attorney admitted to practice in federal court. See Max Cash Media, Inc. v. Prism Corp., No. 12 Civ. 147, 2012 WL 2861162, at *1 (S.D.N.Y. July 9, 2012); Jones v. Niagara Frontier Transp. Auth., 722 F.2d 20, 22 (2d Cir. 1983) (stating reasons for the rule and noting that it is "venerable and widespread"). Because the Website Defendants did not retain an attorney to defend this action, they are in default.

However, the Website Defendants' default does not automatically entitle the Plaintiffs to an injunction, nor does the fact that Elbakyan's submission raises no merits-based challenge to the Plaintiffs' claims. See Thurman v. Bun Bun Music, No. 13 Civ. 5194, 2015 WL 2168134, at *4 (S.D.N.Y. May 7, 2015). Instead, notwithstanding the default, the Plaintiffs must present evidence sufficient to establish that they are entitled to injunctive relief. See id.; Gucci Am., Inc. v. Curveal Fashion, No. 09 Civ. 8458, 2010 WL 308303, at *2 (S.D.N.Y. Jan 20, 2010); CFTC v. Vartuli, 228 F.3d 94, 98 (2d Cir. 2000).

A. Likelihood of Success on the Merits

Elsevier has established that the Defendants have reproduced and distributed its copyrighted works, in violation of the exclusive rights established by 17 U.S.C. § 106. (See Complaint, Dkt. No. 1, at 11-13.) In order to prevail on a claim for infringement of copyright, "two elements must be proven: (1) ownership of a valid copyright, and (2) copying of constituent elements of the work that are original." Arista Records, LLC v. Doe 3, 604 F.3d 110, 117 (2d Cir. 2010) (quoting Feist Publ'ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 361 (1991)).
Elsevier has made a substantial evidentiary showing, documenting the manner in which the Defendants access its ScienceDirect database of scientific literature and post copyrighted material on their own websites free of charge. According to Elsevier, the Defendants gain access to ScienceDirect by using credentials fraudulently obtained from educational institutions, including educational institutions located in the Southern District of New York, which are granted legitimate access to ScienceDirect. (See Declaration of Anthony Woltermann (the "Woltermann Dec."), Dkt. No. 8, at 13-14.) As an attachment to one of the supporting declarations to this motion, Elsevier includes a sequence of screenshots showing how a user could go to www.sci-hub.org, one of the Website Defendants, search for information on a scientific article, get a set of search results, click on a link, and be redirected to a copyrighted article on ScienceDirect, via a proxy. (See Woltermann Dec. at 41-44 and Ex. U.)

Elsevier also points to a Twitter post (in Russian) indicating that whenever an article is downloaded via this method, the Defendants save a copy on their own servers. (See Declaration of David M. Hirschberg, Dkt. No. 12, Ex. B.) As specific examples, Elsevier includes copies of two of its articles accessed via the Defendants' websites, along with their copyright registrations. (Declaration of Paul F. Doda, Dkt. No. 9, Exs. B-D.) This showing demonstrates a likelihood of success on Elsevier's copyright infringement claims.
Elsevier also shows a likelihood of success on its claim under the Computer Fraud and Abuse Act ("CFAA"). The CFAA prohibits, inter alia, obtaining information from "any protected computer" without authorization, 18 U.S.C. § 1030(a)(2)(C), and obtaining anything of value by accessing any protected computer with intent to defraud. Id. § (a)(4). The definition of "protected computer" includes one "which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States." Id. § (e)(2)(B); Nexans Wires S.A. v. Sark-USA, Inc., 166 F. App'x 559, 562 n.5 (2d Cir. 2006). Elsevier's ScienceDirect database is located on multiple servers throughout the world and is accessed by educational institutions and their students, and qualifies as a computer used in interstate commerce, and therefore as a protected computer under the CFAA. (See Woltermann Dec. at 2-3.) As found above, Elsevier has shown that the Defendants' access to ScienceDirect was unauthorized and accomplished via fraudulent university credentials. While the CFAA requires a civil plaintiff to have suffered over $5,000 in damage or loss, see Register.com, Inc. v. Verio, Inc., 356 F.3d 393, 439 (2d Cir. 2004), Elsevier has made the necessary showing since it documented between 2,000 and 8,500 of its articles being added to the LibGen database each day (Woltermann Dec. at 8, Exs. G & H) and because its articles carry purchase prices of between $19.95 and $41.95 each. Id. at 2; see Millennium TGA, Inc. v. Leon, No. 12 Civ. 1360, 2013 WL 5719079, at *10 (E.D.N.Y. Oct. 18, 2013).[1]

[1] While Elsevier's articles are likely sufficient on their own to qualify as "[]thing[s] of value" under the CFAA, Elbakyan acknowledges in her submission that the Defendants derive revenue from their website. (Letter, Dkt. No. 50, at 1 ("That is true that website collects donations, however we do not pressure anyone to send them.").)
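The court's finding that the $5,000 CFAA threshold is met follows from back-of-envelope arithmetic on the two figures just cited: even the lowest documented daily volume multiplied by the lowest quoted price exceeds the statutory minimum many times over in a single day. A quick check using only the numbers quoted in the opinion:

```python
articles_per_day = (2_000, 8_500)   # daily additions documented in the Woltermann Dec.
price_per_article = (19.95, 41.95)  # quoted per-article purchase prices, USD

low = articles_per_day[0] * price_per_article[0]   # 39,900.00 per day
high = articles_per_day[1] * price_per_article[1]  # 356,575.00 per day
print(f"${low:,.2f} to ${high:,.2f} per day, against a $5,000 one-year threshold")
```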

Elsevier's evidence is also buttressed by Elbakyan's submission, in which she frankly admits to copyright infringement. (See Dkt. No. 50.) She discusses her time as a student at a university in Kazakhstan, where she did not have access to research papers and found the prices charged to be just insane. (Id. at 1.) She obtained the papers she needed "by pirating them," and found many similar students and researchers, predominantly in developing countries, who were in similar situations and helped each other illicitly obtain research materials that they could not access legitimately or afford on the open market. (Id.) As Elbakyan describes it, "I could obtain any paper by pirating it, so I solved many requests and people always were very grateful for my help. After that, I created sci-hub.org website that simply makes this process automatic and the website immediately became popular." (Id.)

Given Elsevier's strong evidentiary showing and Elbakyan's admissions, the first prong of the preliminary injunction test is firmly established.

B. Irreparable Harm

Irreparable harm is present "where, but for the grant of equitable relief, there is a substantial chance that upon final resolution of the action the parties cannot be returned to the positions they previously occupied." Brenntag Int'l Chems., Inc. v. Bank of India, 175 F.3d 245, 249 (2d Cir. 1999). Here, there is irreparable harm because it is entirely likely that the damage to Elsevier could not be effectively quantified. See Register.com, 356 F.3d at 404 ("irreparable harm may be found where damages are difficult to establish and measure."). It would be difficult, if not impossible, to determine how much money the Plaintiffs have lost due to the availability of thousands of their articles on the Defendant websites; some percentage of those articles would no doubt have been paid for legitimately if they were not downloadable for free, but there appears to be no way of determining how many that would be. There is also the matter of harm caused by "viral infringement," where Elsevier's content could be transmitted and retransmitted by third parties who acquired it from the Defendants even after the Defendants' websites were shut down. See WPIX, Inc., 765 F. Supp. 2d 594, 620 (S.D.N.Y. 2011), aff'd 691 F.3d 275 (2d Cir. 2012). "[C]ourts have tended to issue injunctions in this context because 'to prove the loss of sales due to infringement is . . . notoriously difficult.'" Salinger v. Colting, 607 F.3d 68, 81 (2d Cir. 2010) (quoting Omega Importing Corp. v. Petri-Kine Camera Co., 451 F.2d 1190, 1195 (2d Cir. 1971) (Friendly, J.)).

Additionally, the harm done to the Plaintiffs is likely irreparable because the scale of any money damages would dramatically exceed Defendants' ability to pay. Brenntag, 175 F.3d at 249-50 (explaining that even where money damages can be quantified, there is irreparable harm when a defendant will be unable to cover the damages). It is highly likely that the Defendants' activities will be found to be willful - Elbakyan herself refers to the websites' activities as "pirating" (Dkt. No. 50 at 1) - in which case they would be liable for between $750 and $150,000 in statutory damages for each pirated work. See 17 U.S.C. § 504(c); HarperCollins Publishers LLC v. Open Road Integrated Media, LLP, 58 F. Supp. 3d 380, 387 (S.D.N.Y. 2014). Since the Plaintiffs credibly allege that the Defendants infringe an average of over 3,000 new articles each day (Woltermann Dec. at 7), even if the Court were to award damages at the lower end of the statutory range the Defendants' liability could be extensive. Since the Defendants are an individual and a set of websites supported by voluntary donations, the potential damages are likely to be far beyond the Defendants' ability to pay.

C. Balance of Hardships

The balance of hardships clearly tips in favor of the Plaintiffs. Elsevier has shown that it is likely to succeed on the merits, and that it continues to suffer irreparable harm due to the Defendants' making its copyrighted material available for free. As for the Defendants, "it is axiomatic that an infringer of copyright cannot complain about the loss of ability to offer its infringing product." WPIX, 691 F.3d at 287 (quotation omitted). The Defendants cannot be legally harmed by the fact that they cannot continue to steal the Plaintiff's content, even if they tried to do so for public-spirited reasons. See id.

D. Public Interest

To the extent that Elbakyan mounts a legal challenge to the motion for a preliminary injunction, it is on the public interest prong of the test. In her letter to the Court, she notes that there are "lots of researchers . . . especially in developing countries" who do not have access to key scientific papers owned by Elsevier and similar organizations, and who cannot afford to pay the high fees that Elsevier charges. (Dkt. No. 50, at 1.) Elbakyan states in her letter that Elsevier operates by racket: if you do not send money, you will not read any papers. On my website, any person can read as many papers as they want for free, and sending donations is their free will. Why Elsevier cannot work like this, I wonder? (Id.) Elbakyan also notes that researchers do not actually receive money in exchange for granting Elsevier a copyright. (Id.) Rather, she alleges they give Elsevier ownership of their works "because Elsevier is an owner of so-called 'high-impact' journals. If a researcher wants to be recognized, make a career - he or she needs to have publications in such journals." (Id. at 1-2.) Elbakyan notes that prominent researchers have made attempts to boycott Elsevier and states that "[t]he general opinion in research community is that research papers should be distributed for free (open access), not sold. And practices of such companies like Elsevier are unacceptable, because they limit distribution of knowledge." (Id. at 2.)

Elsevier contends that the public interest favors the issuance of an injunction because doing so will "protect the delicate ecosystem which supports scientific research worldwide." (Pl.'s Br., Dkt. No. 6, at 21.) It states that the money it generates by selling access to scientific research is used to support new discoveries, to create new journals, and to maintain a "definitive and accurate record of scientific discovery." (Id.) It also argues that allowing its articles to be widely distributed risks the spread of bad science - while Elsevier corrects and retracts articles whose conclusions are later found to be flawed, it has no way of doing so when the content is taken out of its control. (Id. at 22.) Lastly, Elsevier argues that injunctive relief against the Defendants is important to deter "cyber-crime," while failing to issue an injunction will incentivize pirates to continue to publish copyrighted works.

It cannot be denied that there is a compelling public interest in fostering scientific achievement, and that ensuring broad access to scientific research is an important component of that effort. As the Second Circuit has noted, "[c]opyright law inherently balances [] two competing public interests . . . the rights of users and the public interest in broad accessibility of creative works, and the rights of copyright owners and the public interest in rewarding and incentivizing creative efforts (the 'owner-user balance')." WPIX, 691 F.3d at 287. Elbakyan's solution to the problems she identifies, simply making copyrighted content available for free via a foreign website, disserves the public interest. As the Plaintiffs have established, there is a "delicate ecosystem which supports scientific research worldwide," (Pl.'s Br., Dkt. No. 6 at 21), and copyright law plays a critical function within that system. "Inadequate protections for copyright owners can threaten the very store of knowledge to be accessed; encouraging the production of creative work thus ultimately serves the public's interest in promoting the accessibility of such works." WPIX, 691 F.3d at 287. The existence of Elsevier shows that publication of scientific research generates substantial economic value.

The public's interest in the broad diffusion of scientific knowledge is sustained by two critical exceptions in copyright law. First, the "idea/expression dichotomy" ensures that while a scientific article may be subject to copyright, the ideas and insights within that article are not. See 17 U.S.C. § 102(b) ("In no case does copyright protection for an original work of authorship extend to any idea, procedure, process, system, method of operation, concept, principle, or discovery"). "Due to this distinction, every idea, theory, and fact in a copyrighted work becomes instantly available for public exploitation at the moment of publication." Eldred v. Ashcroft, 537 U.S. 186, 219 (2003). So while Elsevier may be able to keep its actual articles behind a paywall, the discoveries within them are fair game for anyone. Secondly, the "fair use" doctrine, codified at 17 U.S.C. § 107, allows the public to use expressions, as well as ideas, "for purposes such as criticism, comment, news reporting, teaching . . . scholarship, or research" without being liable for copyright infringement (emphasis added). Under this doctrine, Elsevier's articles themselves may be taken and used, but only for legitimate purposes, and not for wholesale infringement. See Eldred, 537 U.S. at 219.[2] The public interest in the broad dissemination and use of scientific research is protected by the idea/expression dichotomy and the fair use doctrine.

[2] The public interest in wide dissemination of scientific works is also served by the fact that copyrights are given only limited duration. See Golan v. Holder, 132 S. Ct. 873, 890 (2012); Sony Corp. of Am. v. Universal City Studios, Inc., 464 U.S. 417, 431-32 (1984); Eldred, 537 U.S. at 219.

Given the importance of scientific research and the critical role that copyright plays in promoting it, the public interest weighs in favor of an injunction.

Conclusion

For the reasons set forth above, the motion for a preliminary injunction is granted. It is hereby ordered that:

1. The Defendants, their officers, directors, principals, agents, servants, employees, successors and assigns, and all persons and entities in active concert or participation with them, are hereby temporarily restrained from unlawful access to, use, reproduction, and/or distribution of Elsevier's copyrighted works and from assisting, aiding, or abetting any other person or business entity in engaging in unlawful access to, use, reproduction, and/or distribution of Elsevier's copyrighted works.

2. Upon the Plaintiffs' request, those organizations which have registered Defendants' domain names on behalf of Defendants shall disclose immediately to the Plaintiffs all information in their possession concerning the identity of the operator or registrant of such domain names and of any bank accounts or financial accounts owned or used by such operator or registrant.

3. Defendants shall not transfer ownership of the Defendants' websites during the pendency of this Action, or until further Order of the Court.

4. The TLD Registries for the Defendants' websites, or their administrators, shall place the domain names on registryHold/serverHold as well as serverUpdate, serverDelete, and serverTransfer prohibited statuses, until further Order of the Court.

5. The Defendants shall preserve copies of all computer files relating to the use of the websites and shall take all necessary steps to retrieve computer files relating to the use of the websites that may have been deleted before entry of this Order.

6. That security in the amount of $5,000 be posted by the Plaintiffs within one week of the entry of this Order. See Fed. R. Civ. P. 65(c).

It is so ordered.

New York, NY
October 2015

ROBERT W. SWEET
U.S.D.J.


Fuller
The Indexalist
2016


## The Indexalist

### From Mondotheque


[Matthew Fuller](/wiki/index.php?title=Matthew_Fuller "Matthew Fuller")

I first spoke to the patient in the last week of that August. That evening the
sun was tender in drawing its shadows across the lines of his face. The eyes
gazed softly into a close middle distance, as if composing a line upon a
translucent page hung in the middle of the air, the hands tapping out a stanza
or two of music on legs covered by the brown folds of a towelling dressing
gown. He had the air of someone who had seen something of great amazement but
yet lacked the means to put it into language. As I got to know the patient
over the next few weeks I learned that this was not for the want of effort.

In his youth he had dabbled with the world-speak language Volapük, one
designed to do away with the incompatibility of tongues, to establish a
standard in which scientific intercourse might be conducted with maximum
efficiency and with minimal friction in movement between minds, laboratories
and publications. Latin biological names, the magnificent table of elements,
metric units of measurement, the nomenclature of celestial objects from clouds
to planets, anatomical parts and medical conditions all had their own systems
of naming beyond any specific tongue. This was an attempt to bring reason into
speech and record, but there were other means to do so when reality resisted
these early measures.

The dabbling, he reflected, had become a little more than that. He had
subscribed to journals in the language, he wrote letters to colleagues and
received them in return. A few words of world-speak remained readily on his
tongue, words that he spat out regularly into the yellow-wallpapered lounge of
the sanatorium with a disgust that was lugubriously palpable.

According to my records, and in piecing together the notes of previous
doctors, there was something else however, something more profound that the
language only hinted at. Just as the postal system did not require the
adoption of any language in particular but had its formats that integrated
them into addressee, address line, postal town and country, something that
organised the span of the earth, so there was a sense of the patient as having
sustained an encounter with a fundamental form of organisation that mapped out
his soul. More thrilling than the question of language indeed was that of the
system of organisation upon which linguistic symbols are inscribed. I present
for the reader’s contemplation some statements typical of those he seemed to
mull over.

“The index card system spoke to my soul. Suffice it to say that in its use I
enjoyed the highest form of spiritual pleasure, and organisational efficiency,
a profound flowering of intellect in which every thought moved between its
enunciation, evidence, reference and articulation in a mellifluous flow of
ideation and the gratification of curiosity.” This sense of the soul as a
roving enquiry moving across eras, across forms of knowledge and through the
serried landscapes of the vast planet and cosmos was returned to over and
over, a sense that an inexplicable force was within him yet always escaping
his touch.

“At every reference stood another reference, each more interesting than the
last. Each the apex of a pyramid of further reading, pregnant with the threat
of digression, each a thin high wire which, if not observed might lead the
author into the fall of error, a finding already found against and written
up.” He mentions too, a number of times, the way the furniture seemed to
assist his thoughts - the ease of reference implied by the way in which the
desk aligned with the text resting upon the pages of the off-print, journal,
newspaper, blueprint or book above which further drawers of cards stood ready
in their cabinet. All were integrated into the system. And yet, amidst these
frenetic recollections there was a note of mourning in his contemplative
moods, “The superposition of all planes of enquiry and of thought in one
system repels those for whom such harmonious speed is suspicious.” This
thought was delivered with a stare that was not exactly one of accusation, but
that lingered with the impression that there was a further statement to follow
it, and another, queued up ready to follow.

As I gained the trust of the patient, there was a sense in which he estimated
me as something of a junior collaborator, a clerk to his natural role as
manager. A lucky, if slightly doubtful, young man whom he might mentor into
efficiency and a state of full access to information. For his world, there was
not the corruption and tiredness of the old methods. Ideas moved faster in his
mind than they might now across the world. To possess a register of thoughts
covering a period of some years is to have an asset, the value of which is
almost incalculable. That it can answer any question respecting any thought
about which one has had an enquiry is but the smallest of its merits. More
important is the fact that it continually calls attention to matters requiring
such attention.

Much of his discourse was about the optimum means of arrangement of the
system, there was an art to laying out the cards. As the patient further
explained, to meet the objection that loose cards may easily be mislaid, cards
may be tabbed with numbers from one to ten. When arranged in the drawer, these
tabs proceed from left to right across the drawer and the absence of a single
card can thus easily be detected. The cards are further arranged between
coloured guide cards. As an alternative to tabbed cards, signal flags may be
used. Here, metal clips may be attached to the top end of the card and that
stand out like guides. For use of the system in relation to dates of the
month, the card is printed with the numbers 1 to 31 at the top. The metal clip
is placed as a signal to indicate the card is to receive attention on the
specified day. Within a large organisation a further card can be drawn up to
assign responsibility for processing that date’s cards. There were numerous
means of working the cards, special techniques for integrating them into any
type of research or organisation, means by which indexes operating on indexes
could open mines of information and expand the knowledge and capabilities of
mankind.

As he pressed me further, I began to experiment with such methods myself by
withdrawing data from the sanatorium’s records and transferring it to cards in
the night. The advantages of the system are overwhelming. Cards, cut to the
right mathematical degree of accuracy, arrayed readily in drawers, set in
cabinets of standard sizes that may be added to at ease, may be apportioned
out amongst any number of enquirers, all of whom may work on them
independently and simultaneously. The bound book, by contrast, may only be
used by one person at a time and that must stay upon a shelf itself referred
to by an index card system. I began to set up a structure of rows of mirrors
on chains and pulleys and a set of levered and hinged mechanical arms to allow
me to open the drawers and to privately consult my files from any location
within the sanatorium. The clarity of the image is however so far too much
effaced by the diffusion of light across the system.

It must further be borne in mind that a system thus capable of indefinite
expansion obviates the necessity for hampering a researcher with furniture or
appliances of a larger size than are immediately required. The continuous and
orderly sequence of the cards may be extended further into the domain of
furniture and to the conduct of business and daily life. Reasoning, reference
and the order of ideas emerging as they embrace and articulate a chaotic world
and then communicate amongst themselves turning the world in turn into
something resembling the process of thought in an endless process of
consulting, rephrasing, adding and sorting.

For the patient, ideas flowed like a force of life, oblivious to any unnatural
limitation. Thought became, with the proper use of the system, part of the
stream of life itself. Thought moved through the cards not simply at the
superficial level of the movement of fingers and the mechanical sliding and
bunching of cards, but at the most profound depths of the movement between
reality and our ideas of it. The organisational grace to be found in
arrangement, classification and indexing still stirred the remnants of his
nervous system until the last day.

Last Revision: 2.08.2016

Retrieved from
[https://www.mondotheque.be/wiki/index.php?title=The_Indexalist&oldid=8448](https://www.mondotheque.be/wiki/index.php?title=The_Indexalist&oldid=8448)

Fuller & Dockray
In the Paradise of Too Many Books: An Interview with Sean Dockray
2011


# In the Paradise of Too Many Books: An Interview with Sean Dockray

By Matthew Fuller, 4 May 2011


If the appetite to read comes with reading, then open text archive Aaaaarg.org
is a great place to stimulate and sate your hunger. Here, Matthew Fuller talks
to long-term observer Sean Dockray about the behaviour of text and
bibliophiles in a text-circulation network

Sean Dockray is an artist and a member of the organising group for the LA
branch of The Public School, a geographically distributed and online platform
for the self-organisation of learning.1 Since its initiation by Telic Arts, an
organisation which Sean directs, The Public School has also been taken up as a
model in a number of cities in the USA and Europe.2

We met to discuss the growing phenomenon of text-sharing. Aaaaarg.org has
developed over the last few years as a crucial site for the sharing and
discussion of texts drawn from cultural theory, politics, philosophy, art and
related areas. Part of this discussion is about the circulation of texts,
scanned and uploaded to other sites that it provides links to. Since
participants in The Public School often draw from the uploads to form readers
or anthologies for specific classes or events series, this project provides a
useful perspective from which to talk about the nature of text in the present
era.

**Sean Dockray** **:** People usually talk about three key actors in
discussions about publishing, which all play fairly understandable roles:
readers; publishers; and authors.

**Matthew Fuller:** Perhaps it could be said that Aaaaarg.org suggests some
other actors that are necessary for a real culture of text; firstly that books
also have some specific kind of activity to themselves, even if in many cases
it is only a latent quality, of storage, of lying in wait and, secondly, that
within the site, there is also this other kind of work done, that of the
public reception and digestion, the response to the texts, their milieu, which
involves other texts, but also systems and organisations, and platforms, such
as Aaaaarg.

![](/sites/www.metamute.org/files/u73/Roland_Barthes_web.jpg)

Image: A young Roland Barthes, with space on his bookshelf

**SD:** Where even the three actors aren't stable! The people that are using
the site are fulfilling some role that usually the publisher has been doing or
ought to be doing, like marketing or circulation.

**MF:** Well it needn't be seen as promotion necessarily. There's also this
kind of secondary work with critics, reviewers and so on - which we can say is
also taken on by universities, for instance, and reading groups, magazines,
reviews - that gives an additional life to the text or brings it particular
kinds of attention, certain kind of readerliness.

**SD:** Situates it within certain discourses, makes it intelligible in a way,
in a different way.

**MF:** Yes, exactly, there's this other category of life to the book, which
is that of the kind of milieu or the organisational structure in which it
circulates and the different kind of networks of reference that it implies and
generates. Then there's also the book itself, which has some kind of agency,
or at least resilience and salience, when you think about how certain books
have different life cycles of appearance and disappearance.

**SD:** Well, in a contemporary sense, you have something like _Nights of
Labour_, by Rancière - which is probably going to be republished or
reprinted imminently - but has been sort of invisible, out of print, until, by
surprise, it becomes much more visible within the art world or something.

**MF:** And it's also been interesting to see how the art world plays a role
in the reverberations of text which isn't the same as that in cultural theory
or philosophy. Certainly _Nights of Labour_ , something that is very close to
the role that cultural studies plays in the UK, but which (cultural studies)
has no real equivalent in France, so then, geographically and linguistically,
and therefore also in a certain sense conceptually, the life of a book
exhibits these weird delays and lags and accelerations, so that's a good
example. I'm interested in what role Aaaaarg plays in that kind of
proliferation, the kind of things that books do, where they go and how they
become manifest. So I think one of the things Aaaaarg does is to make books
active in different ways, to bring out a different kind of potential in
publishing.

**SD:** Yes, the debate has tended so far to get stuck in those three actors
because people tend to end up picking a pair and placing them in opposition to
one another, especially around intellectual property. The discussion is very
simplistic and ends up in that way, where it's the authors against readers, or
authors against their publishers, with the publishers often introducing
scarcity, where the authors don't want it to be - that's a common argument.
There's this situation where the record industry is suing its own audience.
That's typically the field now.

**MF:** So within that kind of discourse of these three figures, have there
been cases where you think it's valid that there needs to be some form of
scarcity in order for a publishing project to exist?

**SD:** It's obviously not for me to say that there does or doesn't need to be
scarcity but the scarcity that I think we're talking about functions in a
really specific way: it's usually within academic publishing, the book or
journal is being distributed to a few libraries and maybe 500 copies of it are
being printed, and then the price is something anywhere from $60 to $500, and
there's just sort of an assumption that the audience is very well defined and
stable and able to cope with that.

**MF:** Yeah, which recognises that the audiences may be stable as an
institutional form, but not that over time the individual parts of say that
library user population change in their relationship to the institution. If
you're a student for a few years and then you no longer have access, you lose
contact with that intellectual community...

**SD:** Then people just kind of have to cling to that intellectual community.
So when scarcity functions like that, I can't think of any reason why that
_needs_ to happen. Obviously it needs to happen in the sense that there's a
relatively stable balance that wants to perpetuate itself, but what you're
asking is something else.

**MF:** Well there are contexts where the publisher isn't within that academic
system of very high costs, sustained by volunteer labour by academics, the
classic peer review system, but if you think of more of a trade publisher like
a left or a movement or underground publisher, whose books are being
circulated on Aaaaarg...

**SD:** They're in a much more precarious position obviously than a university
press, whose economics are quite different, with the volunteer labour and
the authors being subsidised by salary - you have to look at the entire
system rather than just the publication. But in a situation where the
publisher is much more precarious and relying on sales and a swing in one
direction or another makes them unable to pay the rent on a storage facility,
one can definitely see why some sort of predictability is helpful and
necessary.

**MF:** So that leads me to wonder whether there are models of publishing that
are emerging that work with online distribution, or with the kind of thing
that Aaaaarg does specifically. Are there particular kinds of publishing
initiatives that really work well in this kind of context where free digital
circulation is understood as an a priori, or is it always in this kind of
parasitic or cyclical relationship?

**SD:** I have no idea how well they work actually; I don't know how well,
say, Australian publisher re.press, works for example.3 I like a lot of what
they publish, it's given visibility when re.press distributes it and that's a
lot of what a publisher's role seems to be (and what Aaaaarg does as well).
But are you asking how well it works in terms of economics?

**MF:** Well, just whether there are new forms of publishing emerging that
work well in this context, that cut out some of the problems?

**SD:** Well, there's also the blog. Certain academic discourses, philosophy
being one, that are carried out on blogs really work to a certain extent, in
that there is an immediacy to ideas, their reception and response. But there's
other problems, such as the way in which, over time, the posts quickly get
forgotten. In this sense, a publication, a book, is kind of nice. It
crystallises and stays around.

**MF:** That's what I'm thinking, that the book is a particular kind of thing
which has its own quality as a form of media. I also wonder whether there
might be intermediate texts, unfinished texts, draft texts that might
circulate via Aaaaarg for instance or other systems. That, at least to me,
would be kind of unsatisfactory but might have some other kind of life and
readership to it. You know, as you say, the blog is a collection of relatively
occasional texts, or texts that are a work in progress, but something like
Aaaaarg perhaps depends upon texts that are finished, that are absolutely the
crystallisation of a particular thought.

![](/sites/www.metamute.org/files/u73/tree_of_knowledge_web.jpg)

Image: The Tree of Knowledge as imagined by Hans Sebald Beham in his 1543
engraving _Adam and Eve_

**SD:** Aaaaarg is definitely not a futuristic model. I mean, it occurs at a
specific time, which is while we're living in a situation where books exist
effectively as a limited edition. They can travel the world and reach certain
places, and yet the readership is greatly outpacing the spread and
availability of the books themselves. So there's a disjunction there, and
that's obviously why Aaaaarg is so popular. Because often there are maybe no
copies of a certain book within 400 miles of a person that's looking for it,
but then they can find it on that website, so while we're in that situation it
works.

**MF:** So it's partly based on a kind of asymmetry, that's spatial, that's
about the territories of publishers and distributors, and also a kind of
asymmetry of economics?

**SD:** Yeah, yeah. But others too. I remember when I was affiliated with a
university and I had JSTOR access and all these things and then I left my job
and then at some point not too long after that my proxy access expired and I
no longer had access to those articles which now would cost $30 a pop just to
even preview. That's obviously another asymmetry, even though, geographically
speaking, I'm in an identical position, just that my subject position has
shifted from affiliated to unaffiliated.

**MF:** There's also this interesting way in which Aaaaarg has gained
different constituencies globally, you can see the kind of shift in the texts
being put up. It seems to me anyway there are more texts coming from non-
western authors. This kind of asymmetry generates a flux. We're getting new
alliances between texts and you can see new bibliographies emerge.

**SD:** Yeah, the original community was very American and European and
gradually people were signing up at other places in order to have access to a
lot of these texts that didn't reach their libraries or their book stores or
whatever. But then there is a danger of US and European thought becoming
central. A globalisation where a certain mode of thought ends up just erasing
what's going on already in the cities where people are signing up, that's a
horrible possible future.

**MF:** But that's already something that's _not_ happening in some ways?

**SD:** Exactly, that's what seems to be happening now. It goes on to
translations that are being put up and then texts that are coming from outside
of the set of US and western authors and so, in a way, it flows back in the
other direction. This hasn't always been so visible, maybe it will begin to
happen some more. But think of the way people can list different texts
together as ‘issues' - a way that you can make arbitrary groupings - and
they're very subjective, you can make an issue named anything and just lump a
bunch of texts in there. But because, with each text, you can see what other
issues people have also put it in, it creates a trace of its use. You can see
that sometimes the issues are named after the reading groups, people are using
the issues format as a collecting tool, they might gather all Portuguese
translations, or The Public School uses them for classes. At other times it's
just one person organising their dissertation research but you see the wildly
different ways that one individual text can be used.
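
Structurally, the ‘issues' mechanism described here is a many-to-many grouping with a reverse trace: a text can sit in any number of subjectively named lists, and each text shows every list it has been filed under. A minimal, purely illustrative sketch of that structure (in Python; the titles and issue names are hypothetical, and this is not Aaaaarg's actual code) might look like:

```python
from collections import defaultdict

# Issues are arbitrary, subjectively named groupings of texts; each
# text also carries a reverse trace of every issue it was filed under.
issues = defaultdict(set)  # issue name -> titles filed under it
texts = defaultdict(set)   # title -> issues the text appears in

def add_to_issue(issue: str, title: str) -> None:
    """File a text under an issue and record the trace on the text."""
    issues[issue].add(title)
    texts[title].add(issue)

# Hypothetical entries, echoing the uses mentioned above
add_to_issue("portuguese translations", "A Sociedade do Espetáculo")
add_to_issue("public school: messianic time", "Theses on the Philosophy of History")
add_to_issue("dissertation research", "Theses on the Philosophy of History")

# The trace of use: every issue one individual text has been put in
print(sorted(texts["Theses on the Philosophy of History"]))
```

The point of the design shows in the last line: because the mapping runs both ways, any single text exposes the wildly different contexts it has been used in.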

**MF:** So the issue creates a new form of paratext to the text, acting as a
kind of meta-index, they're a new form of publication themselves. To publish a
bibliography that actively links to the text itself is pretty cool. That also
makes me think within the structures of Aaaaarg it seems that certain parts of
the library are almost at breaking point - for instance the alphabetical
structure.

**SD:** Which is funny because it hasn't always been that alphabetical
structure either, it used to just be everything on one page, and then at some
point it was just taking too long for the page to load up A-Z. And today A is
as long as the entire index used to be, so yeah these questions of density and
scale are there but they've always been dealt with in a very ad hoc kind of
way, dealing with problems as they come. I'm sure that will happen. There
hasn't always been a search and, in a way, the issues, along with
alphabetising, became ways of creating more manageable lists, but even now the
list of issues is gigantic. These are problems of scale.

**MF:** So I guess there's also this kind of question that emerges in the
debate on reading habits and reading practices, this question of the breadth
of reading that people are engaging in. Do you see anything emerging in
Aaaaarg that suggests a new consistency of handling reading material? Is there
a specific quality, say, of the issues? For instance, some of them seem quite
focused, and others are very broad. They may provide insights into how new
forms of relationships to intellectual material may be emerging that we don't
quite yet know how to handle or recognise. This may be related to the lament
for the classic disciplinary road of deep reading of specific materials with a
relatively focused footprint whereas, it is argued, the net is encouraging a
much wider kind of sampling of materials with not necessarily so much depth.

**SD:** It's partially driven by people simply being in the system, in the
same way that the library structures our relationship to text, the net does it
in another way. One comment I've heard is that there's too much stuff on
Aaaaarg, which wasn't always the case. It used to be that I read every single
thing that was posted because it was slow enough and the things were short
enough that my response was, ‘Oh something new, great!' and I would read it.
But now, obviously that is totally impossible, there's too much; but in a way
that's just the state of things. It does seem like certain tactics of making
sense of things, of keeping things away and letting things in and queuing
things for reading later become just a necessary part of even navigating. It's
just the terrain at the moment, but this is only one instance. Even when I was
at the university and going to libraries, I ended up with huge stacks of books
and I'd just buy books that I was never going to read just to have them
available in my library, so I don't think feeling overwhelmed by books is
particularly new, just maybe the scale of it is. In terms of how people
actually conduct themselves and deal with that reality, it's difficult to say.
I think the issues are one of the few places where you would see any sort of
visible answers on Aaaaarg, otherwise it's totally anecdotal. At The Public
School we have organised classes in relationship to some of the issues, and
then we use the classes to also figure out what texts we are going to be
reading in the future, to make new issues and new classes. So it becomes an
organising group, reading and working its way through subject matter and
material, then revisiting that library and seeing what needs to be there.

**MF:** I want to follow that kind of strand of habits of accumulation,
sorting, deferring and so on. I wonder, what is a kind of characteristic or
unusual reading behaviour? For instance are there people who download the
entire list? Or do you see people being relatively selective? How does the
mania of the net, with this constant churning of data, map over to forms of
bibliomania?

**SD:** Well, in Aaaaarg it's again very specific. Anecdotally again, I have
heard from people how much they download and sometimes they're very selective,
they just see something that's interesting and download it, other times they
download everything and occasionally I hear about this mania of mirroring the
whole site. What I mean about being specific to Aaaaarg is that a lot of the
mania isn't driven by just the need to have everything; it's driven by the
acknowledgement that the source is going to disappear at some point. That
sense of impending disappearance is always there, so I think that drives a lot
of people to download everything because, you know, it's happened a couple
times where it's just gone down or moved or something like that.

**MF:** It's true, it feels like something that is there even for a few weeks
or a few months. By a sheer fluke it could last another year, who knows.

**SD:** It's a different kind of mania, and usually we get lost in this
thinking that people need to possess everything but there is this weird
preservation instinct that people have, which is slightly different. The
dominant sensibility of Aaaaarg at the beginning was the highly partial and
subjective nature of the contents and that is something I would want to
preserve, which is why I never thought it to be particularly exciting to have
lots of high quality metadata - it doesn't have the publication date, it
doesn't have all the great metadata that say Amazon might provide. The system
is pretty dismal in that way, but I don't mind that so much. I read something
on the Internet which said it was like being in the porn section of a video
store with all black text on white labels, it was an absolutely beautiful way
of describing it. Originally Aaaaarg was about trading just those particular
moments in a text that really struck you as important, that you wanted other
people to read so it would be very short, definitely partial, it wasn't a
completist project, although some people maybe treat it in that way now. They
treat it as a thing that wants to devour everything. That's definitely not the
way that I have seen it.

**MF:** And it's so idiosyncratic I mean, you know it's certainly possible
that it could be read in a canonical mode, you can see that there's that
tendency there, of the core of Adorno or Agamben, to take the a's for
instance. But of the more contemporary stuff it's very varied, that's what's
nice about it as well. Alongside all the stuff that has a very long-term
existence, like historical books that may be over a hundred years old, what
turns up there is often unexpected, but certainly not random or
uninterpretable.

![](/sites/www.metamute.org/files/u1/malraux_web3_0.jpg)

Image: French art historian André Malraux lays out his _Musée Imaginaire_ ,
1947

**SD:** It's interesting to think a little bit about what people choose to
upload, because it's not easy to upload something. It takes a good deal of
time to scan a book. I mean obviously some things are uploaded which are, have
always been, digital. (I wrote something about this recently about the scan
and the export - the scan being something that comes out of a labour in
relationship to an object, to the book, and the export is something where the
whole life of the text has sort of been digital from production to circulation
and reception). I happen to think of Aaaaarg in the realm of the scan and the
bootleg. When someone actually scans something they're potentially spending
hours, because they're doing the work on the book, they're doing something
with software, they're uploading.

**MF:** Aaaaarg hasn't introduced file quality thresholds either.

**SD:** No, definitely not. Where would that go?

**MF:** You could say with PDFs they have to be searchable texts?

**SD:** I'm sure a lot of people would prefer that. Even I would prefer it a
lot of the time. But again there is the idiosyncratic nature of what appears,
and there is also the idiosyncratic nature of the technical quality and
sometimes it's clear that the person that uploads something just has no real
experience of scanning anything. It's kind of an inevitable outcome. There are
movie sharing sites that are really good about quality control both in the
metadata and what gets up; but I think that if you follow that to the end,
then basically you arrive at the exported version being the Platonic text, the
impossible, perfect, clear, searchable, small - totally eliminating any trace
of what is interesting, the hand of reading and scanning, and this is what you
see with a lot of the texts on Aaaaarg. You see the hand of the person who's
read that book in the past, you see the hand of the person who scanned it.
Literally, their hand is in the scan. This attention to the labour of both
reading and redistributing, it's important to still have that.

**MF:** You could also find that in different ways for instance with a pdf, a
pdf that was bought directly as an ebook that's digitally watermarked will
have traces of the purchaser coded in there. So then there's also this work of
stripping out that data which will become a new kind of labour. So it doesn't
have this kind of humanistic refrain, the actual hand, the touch of the
labour. This is perhaps more interesting, the work of the code that strips it
out, so it's also kind of recognising that code as part of the milieu.

**SD:** Yeah, that is a good point, although I don't know that it's more
interesting labour.
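
As a concrete aside on that labour of stripping: the declared metadata of a purchased PDF can be inspected and cleared with standard tools. The following is a minimal, purely illustrative sketch (in Python, using the pikepdf library; `watermarked.pdf` is a hypothetical file name), which empties the two places where purchaser details are most commonly declared. Watermarks baked into the page content or images themselves would survive it untouched.

```python
import pikepdf

# An illustrative sketch: clears only *declared* metadata; marks
# embedded in page content or images are not touched.
with pikepdf.open("watermarked.pdf") as pdf:  # hypothetical input file
    # The classic DocInfo dictionary (/Author, /Producer, custom keys)
    for key in list(pdf.docinfo.keys()):
        del pdf.docinfo[key]
    # The XMP metadata stream, another place purchaser data is written
    with pdf.open_metadata() as meta:
        for key in list(meta.keys()):
            del meta[key]
    pdf.save("stripped.pdf")
```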

**MF:** On a related note, The Public School as a model is interesting in that
it's kind of a convention, it has a set of rules, an infrastructure, a
website, it has a very modular being. Participants operate with a simple
organisational grammar which allows them to say ‘I want to learn this' or ‘I
want to teach this' and to draw in others on that basis. There's lots of
proposals for classes, some of them don't get taken up, but it's a process and
a set of resources which allow this aggregation of interest to occur. I just
wonder how you saw that kind of ethos of modularity in a way, as a set of
minimum rules or a set of minimum capacities that allow a particular set of
things to occur?

**SD:** This may not respond directly to what you were just talking about, but
there are various points of entry to the school, and also having something
that people feel they can take on as their own. I think the minimal structure
invites quite a lot of projection as to what that means and what's possible
with it. If it's not doing what you want it to do or you think, ‘I'm not sure
what it is', there's the sense that you can somehow redirect it.

**MF:** It's also interesting that projection itself can become a technical
feature so in a way the work of the imagination is done also through this kind
of tuning of the software structure. The governance that was handled by the
technical infrastructure actually elicits this kind of projection, elicits the
imagination in an interesting way.

**SD:** Yeah, yeah, I totally agree and, not to put too much emphasis on the
software, although I think that there's good reason to look at both the
software and the conceptual diagram of the school itself, but really in a way
it would grind to a halt if it weren't for the very traditional labour of
people - like an organising committee. In LA there's usually around eight of
us (now Jordan Biren, Solomon Bothwell, Vladada Gallegos, Liz Glynn, Naoko
Miyano, Caleb Waldorf, and me) who are deeply involved in making that
translation of these wishes - thrown onto the website that somehow attract the
other people - into actual classes.

**MF:** What does the committee do?

**SD:** Even that's hard to describe and that's what makes it hard to set up.
It's always very particular to even a single idea, to a single class proposal.
In general it'd be things like scheduling, finding an instructor if an
instructor is what's required for that class. Sometimes it's more about
finding someone who will facilitate, other times it's rounding up materials.
But it could be helping an open proposal take some specific form. Sometimes
it's scanning things and putting them on Aaaaarg. Sometimes, there will be a
proposal - I proposed a class in the very, very beginning on messianic time, I
wanted to take a class on it - and it didn't happen until more than a year and
a half later.

**MF:** Well that's messianic time for you.

**SD:** That and the internet. But other times it will be only a week later.
You know we did one on the Egyptian revolution and its historical context,
something which demanded a very quick turnaround. Sometimes the committee is
going to classes and there will be a new conflict that arises within a class,
that they then redirect into the website for a future proposal, which becomes
another class: a point of friction where it's not just like next, and next,
and next, but rather it's a knot that people can't quite untie, something that
you want to spend more time with, but you may want to move on to other things
immediately, so instead you postpone that to the next class. A lot of The
Public School works like that: it's finding momentum then following it. A lot
of our classes are quite short, but we try and string them together. The
committee are the ones that orchestrate that. In terms of governance, it is
run collectively, although with the committee, every few months people drop
off and new people come on. There are some people who've been on for years.
Other people who stay on just for that point of time that feels right for
them. Usually, people come on to the committee because they come to a lot of
classes, they start to take an interest in the project and before they know it
they're administering it.

**Matthew Fuller's <[m.fuller@gold.ac.uk](mailto:m.fuller@gold.ac.uk)> most
recent book, _Elephant and Castle_, is forthcoming from Autonomedia.**


**Footnotes**

1

2 [http://telic.info/](http://telic.info/)

3


Giorgetta, Nicoletti & Adema
A Conversation on Digital Archiving Practices
2015


# A Conversation on Digital Archiving Practices

A couple of months ago Davide Giorgetta and Valerio Nicoletti (both ISIA
Urbino) did an interview with me for their MA in Design of Publishing. Silvio
Lorusso was so kind as to publish the interview on the fantastic
[p-dpa.net](http://p-dpa.net/a-conversation-on-digital-archiving-practices-
with-janneke-adema/). I am reblogging it here.

* * *

[Davide Giorgetta](http://p-dpa.net/creator/davide-giorgetta/) and [Valerio
Nicoletti](http://p-dpa.net/creator/valerio-nicoletti/) are both students from
[ISIA Urbino](http://www.isiaurbino.net/home/), where they attend the Master
Course in Design for Publishing. They are currently investigating the
independent side of digital archiving practices within the scope of the
publishing world.

As part of their research, they asked some questions to Janneke Adema, who is
Research Fellow in Digital Media at Coventry University, with a PhD in Media
(Coventry University) and a background in History (MA) and Philosophy (MA)
(both University of Groningen) and Book and Digital Media Studies (MA) (Leiden
University). Janneke’s PhD thesis focuses on the future of the scholarly book
in the humanities. She has been conducting research for the
[OAPEN](http://project.oapen.org/index.php/about-oapen) project, and
subsequently the OAPEN foundation, from 2008 until 2013 (including research
for OAPEN-NL and DOAB). Her research for OAPEN focused on user needs and
publishing models concerning Open Access books in the Humanities and Social
Sciences.

**Davide Giorgetta & Valerio Nicoletti: Is there a way out of the debate
between publishers and independent digital libraries (Monoskop Log, UbuWeb,
Aaaaarg.org) in terms of copyright? An alternative solution able to resolve
the issue and to provide equal opportunities to everyone? Would publishers’
fear of a possible reduction of income be legitimate if access to their
digital publications were open and free?**

Janneke Adema: This is an interesting question, since for many academics this
‘way out’ (at least in so far as it concerns scholarly publications) has been
envisioned in or through the open access movement and the use of Creative
Commons licenses. However, the open access movement, a rather plural and
loosely defined group of people, institutions and networks, in its more
moderate instantiations tends to distance itself from piracy and copyright
infringement or copy(far)left practices. Through its use and favoring of
Creative Commons licenses one could even argue that it has been mainly
concerned with a reform of copyright rather than a radical critique and
rethinking of the common and the right to copy (Cramer 2013, Hall
2014).[1](http://p-dpa.net/a-conversation-on-digital-archiving-practices-
with-janneke-adema/#fn:1) Nonetheless, in its more radical
guises open access can be more closely aligned with the practices associated
with digital pirate libraries such as the ones listed above, for instance
through Aaron Swartz’s notion of [Guerilla Open
Access](https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt):

> We need to take information, wherever it is stored, make our copies and
share them with the world. We need to take stuff that’s out of copyright and
add it to the archive. We need to buy secret databases and put them on the
Web. We need to download scientific journals and upload them to file sharing
networks. We need to fight for Guerilla Open Access. (Swartz 2008)

However, whatever form or vision of open access you prefer, I do not think it
is a ‘solution’ to any problem—such as the copyfight—but I would rather
see it, as I have written
[elsewhere](http://blogs.lse.ac.uk/impactofsocialsciences/2014/11/18
/embracing-messiness-adema-pdsc14/), ‘as an ongoing processual and critical
engagement with changes in the publishing system, in our scholarly
communication practices and in our media and technologies of communication.’
And in this sense open access practices offer us the possibility to critically
reflect upon the politics of knowledge production, including copyright and
piracy, openness and the commons, indeed, even upon the nature of the book
itself.

With respect to the second part of your question, again, where it concerns
scholarly books, [research by Ronald
Snijder](https://scholar.google.com/citations?view_op=view_citation&hl=en&user=PuDczakAAAAJ&citation_for_view=PuDczakAAAAJ:u-x6o8ySG0sC)
shows no decline in sales or income for publishers once they release their
scholarly books in open access. The open availability does however lead to
more discovery and online consultation, meaning that it actually might lead to
more ‘impact’ for scholarly books (Snijder 2010).

**DG, VN: In which way, if any, are digital archiving practices stimulating
new publishing phenomena? Are there any innovative outcomes, apart from the
obvious relation to p.o.d. (print-on-demand) tools? (Or interesting new
projects in this field?)**

JA: Beyond extending access, I am mostly interested in how digital archiving
practices have the potential to stimulate the following practices or phenomena
(which in no way are specific to digital archiving or publishing practices, as
they have always been a potential part of print publications too): reuse and
remix; processual research and iterative publishing; and collaborative forms
of knowledge production. These practices interest me mainly as they have the
potential to critique the way the (printed) book has been commodified and
essentialised over the centuries, in a bound, linear and fixed format, a
practice which is currently being replicated in a digital context. Indeed, the
book has been fixed in this way both discursively and through a system of
material production within publishing and academia—which includes our
institutions and practices of scholarly communication—that prefers book
objects as quantifiable and auditable performance indicators and as marketable
commodities and objects of symbolic value exchange. The practices and
phenomena mentioned above, i.e. remix, versioning and collaboration, have the
potential to help us to reimagine the bound nature of the book and to explore
both a spatial and temporal critique of the book as a fixed object; they can
aid us to examine and experiment with various different incisions that can be
made in our scholarship as part of the informal and formal publishing and
communication of our research that goes beyond the final research commodity.
In this sense I am interested in how these specific digital archiving,
research and publishing practices offer us the possibility to imagine a
different, perhaps more ethical humanities, a humanities that is processual,
contingent, unbound and unfinished. How can these practices aid us in how to
cut well in the ongoing unfolding of our research, how can they help us
explore how to make potentially better interventions? How can we take
responsibility as scholars for our entangled becoming with our research and
publications? (Barad 2007, Kember and Zylinska 2012)

Examples that I find interesting in the realm of the humanities in this
respect include projects that experiment with such a critique of our fixed,
print-based practices and institutions in an affirmative way: for example Mark
Amerika’s [remixthebook](http://www.remixthebook.com/) project; Open
Humanities’ [Living Books about Life](http://www.livingbooksaboutlife.org/)
series; projects such as
[Vectors](http://vectors.usc.edu/issues/index.php?issue=7) and
[Scalar](http://scalar.usc.edu/); and collaborative knowledge production,
archiving and creation projects, from wiki-based research projects to AAAARG.

**DG, VN: In which way does a digital container influence its content? Does
the same book — if archived on different platforms, such as _Internet Archive_
, _The Pirate Bay_ , _Monoskop Log_ — still remain the same cultural item?**

JA: In short my answer to this question would be ‘no’. Books are embodied
entities, which are materially established through their specific affordances
in relationship to their production, dissemination, reception and
preservation. This means that the specific materiality of the (digital) book
is partly an outcome of these ongoing processes. Katherine Hayles has argued
in this respect that materiality is an emergent property:

> In this view of materiality, it is not merely an inert collection of
physical properties but a dynamic quality that emerges from the interplay
between the text as a physical artifact, its conceptual content, and the
interpretive activities of readers and writers. Materiality thus cannot be
specified in advance; rather, it occupies a borderland— or better, performs as
connective tissue—joining the physical and mental, the artifact and the user.
(2004: 72)

Similarly, Matthew Kirschenbaum points out that the preservation of digital
objects is:

> _logically inseparable_ from the act of their creation (…) The lag between
creation and preservation collapses completely, since a digital object may
only ever be said to be preserved _if_ it is accessible, and each individual
access creates the object anew. One can, in a very literal sense, _never_
access the “same” electronic file twice, since each and every access
constitutes a distinct instance of the file that will be addressed and stored
in a unique location in computer memory. (Kirschenbaum 2013)

Every time we access a digital object, we thus duplicate it, we copy it and we
instantiate it. And this is exactly why, in our strategies of conservation,
every time we access a file we also (re)create these objects anew over and
over again. The agency of the archive, of the software and hardware, are also
apparent here, where archives are themselves ‘active ‘‘archaeologists’’ of
knowledge’ (Ernst 2011: 239) and, as Kirschenbaum puts it, ‘the archive writes
itself’ (2013).

In this sense a book can be seen as an apparatus, consisting of an
entanglement of relationships between, among other things, authors, books, the
outside world, readers, the material production and political economy of book
publishing, its preservation and material instantiations, and the discursive
formation of scholarship. Books as apparatuses are thus reality shaping, they
are performative. This relates to Johanna Drucker’s notion of ‘performative
materiality’, where Drucker argues for an extension of what a book _is_ (i.e.
from a focus on its specific properties and affordances), to what a book
_does_ : ‘Performative materiality suggests that what something _is_ has to be
understood in terms of what it _does_ , how it works within machinic,
systemic, and cultural domains.’ For, as Drucker argues, ‘no matter how
detailed a description of material substrates or systems we have, their use is
performative whether this is a reading by an individual, the processing of
code, the transmission of signals through a system, the viewing of a film,
performance of a play, or a musical work and so on. Material conditions
provide an inscriptional base, a score, a point of departure, a provocation,
from which a work is produced as an event’ (Drucker 2013).

So, to come back to your question, these specific digital platforms (Monoskop,
The Pirate Bay etc.) become integral aspects of the apparatus of the book and
each in their own different way participates in the performance and
instantiation of the books in their archives. Not only does a digital book
therefore differ as a material or cultural object from a printed book, a
digital object also has materially distinct properties related to the platform
on which it is made available. Indeed, building further on the theories
described above, a book is a different object every time it is instantiated or
read, be it by a human or machinic entity; they become part of the apparatus
of the book, a performative apparatus. Therefore, as Silvio Lorusso has
stated:

![Slide from Silvio Lorusso, ‘The Post-Digital Publishing Archive: An
Inventory of Speculative Strategies’, Coventry University, 11 June
2014](http://p-dpa.net/wp-content/uploads/2015/06/The-Post-Digital-
Publishing-Archive-An-Inventory-of-Speculative-Strategies-Coventry-
University-June-11th-2014-21.png)

**DG, VN: In your opinion, can scholarly publishing, in particular self-
archiving practices, constitute a bridge across the gap between authors and
users in terms of access to knowledge? Could we hope that these practices will
find a broader use, moving from very specific fields (academic papers) to book
publishing in general?**

JA: On the one hand, yes. Self-archiving, or the ‘green road’ to open access,
offers a way for academics to make their research available in a preprint form
via open access repositories in a relatively simple and straightforward way,
making it easily accessible to other academics and more general audiences.
However, it can be argued that as a strategy, the green road doesn’t seem to
be very subversive, where it doesn’t actively rethink, re-imagine, or
experiment with the system of scholarly knowledge production in a more
substantial way, including peer-review and the print-based publication forms
this system continues to promote. With its emphasis on achieving universal,
free, online access to research, a rigorous critical exploration of the form
of the book itself doesn’t seem to be a main priority of green open access
activists. Stevan Harnad, one of the main proponents of green open access and
self-archiving has for instance stated that ‘it’s time to stop letting the
best get in the way of the better: Let’s forget about Libre and Gold OA until
we have managed to mandate Green Gratis OA universally’ (Harnad 2012). This is
where the self-archiving strategy in its current implementation falls short,
I think, with respect to the ‘breaking-down’ of barriers between authors and
users: it isn’t necessarily committed to following a libre open access
strategy which, one could argue, would be more open to adopting and promoting
forms of open access that are designed to make material available for others
to (re)use, copy, reproduce, distribute, transmit, translate, modify, remix
and build upon. Surely that would be a more substantial strategy to bridge the
gap between authors and users with respect to the production, dissemination
and consumption of knowledge?

With respect to the second part of your question, could these practices find a
broader use? I am not sure, mainly because of the specific characteristics of
academia and scholarly publishing, where scholars are directly employed and
paid by their institutions for the research work they do. Hence, self-
archiving this work would not directly lead to any or much loss of income for
academics. In other fields, such as literary publishing for example, this
issue of remuneration can become quite urgent however, even though many [free
culture](https://en.wikipedia.org/wiki/Free_culture_movement) activists (such
as Lawrence Lessig and Cory Doctorow) have argued that freely sharing cultural
goods online, or even self-publishing, doesn’t necessarily need to lead to any
loss of income for cultural producers. So in this respect I don’t think we can
lift something like open access self-archiving out of its specific context and
apply it to other contexts all that easily, although we should certainly
experiment with this of course in different domains of digital culture.

**DG, VN: Following on from your answers, we would also like to ask for your
suggestions. Do you notice any unresolved or emerging questions in the
contemporary context of digital archiving practices and their relation to the
publishing realm?**

JA: So many :). Just to name a few: the politics of search and filtering
related to information overload; the ethics and politics of publishing in
relationship to when, where, how and why we decide to publish our research,
for what reasons and with what underlying motivations; the continued text- and
object-based focus of our archiving and publishing practices and platforms,
where there is a lack of space to publish and develop more multimodal,
iterative, diagrammatic and speculative forms of scholarship; issues of free
labor and the problem of remuneration of intellectual labor in sharing
economies etc.

**Bibliography**

* Adema, J. (2014) ‘Embracing Messiness’. [online] available from <http://blogs.lse.ac.uk/impactofsocialsciences/2014/11/18/embracing-messiness-adema-pdsc14/> [17 November 2014]
* Adema, J. and Hall, G. (2013) ‘The Political Nature of the Book: On Artists’ Books and Radical Open Access’. _New Formations_ 78 (1), 138–156
* Barad, K. (2007) _Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning_. Duke University Press
* Cramer, F. (2013) _Anti-Media: Ephemera on Speculative Arts_. Rotterdam : New York, NY: nai010 publishers
* Drucker, J. (2013) ‘Performative Materiality and Theoretical Approaches to Interface’. _DHQ: Digital Humanities Quarterly_ [online] 7 (1). available from [4 April 2014]
* Ernst, W. (2011) ‘Media Archaeography: Method and Machine versus History and Narrative of Media’. in _Media Archaeology: Approaches, Applications, and Implications_. ed. by Huhtamo, E. and Parikka, J. University of California Press
* Hall, G. (2014) ‘Copyfight’. in _Critical Keywords for the Digital Humanities_ , [online] Lueneburg: Centre for Digital Cultures (CDC). available from [5 December 2014]
* Harnad, S. (2012) ‘Open Access: Gratis and Libre’. [3 May 2012] available from [4 March 2014]
* Hayles, N.K. (2004) ‘Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis’. _Poetics Today_ 25 (1), 67–90
* Kember, S. and Zylinska, J. (2012) _Life After New Media: Mediation as a Vital Process_. MIT Press
* Kirschenbaum, M. (2013) ‘The .txtual Condition: Digital Humanities, Born-Digital Archives, and the Future Literary’. _DHQ: Digital Humanities Quarterly_ [online] 7 (1). available from [20 July 2014]
* Lorusso, S. (2014) _The Post-Digital Publishing Archive: An Inventory of Speculative Strategies_. in ‘The Aesthetics of the Humanities: Towards a Poetic Knowledge Production’ [online] held 11 June 2014 at Coventry University. available from [31 May 2015]
* Snijder, R. (2010) ‘The Profits of Free Books: An Experiment to Measure the Impact of Open Access Publishing’. _Learned Publishing_ 23 (4), 293–301
* Swartz, A. (2008) _Guerilla Open Access Manifesto_ [online] available from [31 May 2015]


Goldsmith
If We Had To Ask for Permission We Wouldn't Exist: An Open Letter to the Frameworks Community
2010


To the Frameworks Community,

I have been reading your thread on UbuWeb's hacking on the list with great
interest. It seems that with a few exceptions, the list is generally positive
(with reservations) about Ubu, something that makes me happy. Ubu is a friend,
not a foe.

A few things: first of all, Ubu doesn't touch money. We don't make a cent. We
don't accept grants or donations. Nor do we -- or shall we ever -- sell
anything on the site. No one makes a salary here and the work is all done
voluntarily (more love hours than can ever be repaid). Our bandwidth and
server space is donated by universities.

We know that UbuWeb is not very good. In terms of films, the selection is
random and the quality is often poor. The accompanying text to the films can
be crummy, mostly poached from whatever is available around the net. So are
the films: they are mostly grabbed from private closed file-sharing
communities and made available for the public, hence the often lousy quality
of the films. It could be done much better.

Yet, in terms of how we've gone about building the archive, if we had to ask
for permission, we wouldn't exist. Because we have no money, we don't ask
permission. Asking permission always involves paperwork and negotiations,
lawyers, and bank accounts. Yuk. But by doing things the wrong way, we've been
able to pretty much overnight build an archive that's made publicly
accessible free of charge to anyone. And that in turn has attracted a
great number of film and video makers to want to contribute their works to the
archive legitimately. The fastest growing part of Ubu's film section is by
younger and living artists who want to be a part of Ubu. But if you want your
works off Ubu, we never question it and remove it immediately; it's your work
after all. We will try to convince you otherwise, but we will never leave
anything there that an artist or copyright holder wants removed.

Ubu presents orphaned and out-of-print works. Sometimes we have inadvertently
hosted works that are in print and commercially available for a reasonable
price. While this is strictly against our policy, it happens. (With an army of
interns and students and myself the only one in charge, it's sometimes hard to
keep the whole thing together.) Then someone tells us that we're doing it and
we take it down immediately and apologize. Ouch. The last thing Ubu wants to
do is to harm those who are trying to legitimately sell works. For this
reason, we don't host, for example, any films by Brakhage: they're in print
and affordable for anyone who wants them on DVD or through Netflix. Fantastic.
[The "wall of shame" was a stupid, juvenile move and we removed a few years
ago it when we heard from Joel Bachar that it was hurtful to the community.]

Some of the list members suggested that we work with distributors. That's
exactly what's starting to happen. Last winter, Ubu had a meeting with EAI and
VDB to explore ways that we could move forward together. We need each other.
EAI sent a list of artists who were uncomfortable with their films being
represented on Ubu. We responded by removing them. But others, such as Leslie
Thornton and Peggy Ahwesh, insisted that their oeuvres be on Ubu as well as on
EAI. [You can see Leslie Thornton's Ubu page
here](http://ubu.com/film/thornton.html) (all permissioned).

Likewise, a younger generation is starting to see that works must take a
variety of forms and distributive methods, which happen at the same time
without cancelling each other out. The young, prominent video artist Ryan
Trecartin has all his work on Ubu, hi-res copies are distributed by EAI, and
The Elizabeth Dee Gallery represents his work (and sells his videos there),
while it shows in museums around the world. Clearly Ryan's career hasn't been
hurt by this approach. [You can see Ryan Trecartin's Ubu page
here](http://ubu.com/film/trecartin.html) (all permissioned).

Older filmmakers and their estates have taken a variety of approaches.
[Michael Snow](http://ubu.com/film/snow.html) contacted Ubu to say that he was
pleased to have some of his films on Ubu, while he felt that others should be
removed. Of course we accommodated him. Having two permissioned films from
Michael Snow beats hosting ten without his blessing. We considered it a
victory. In another case, the children of [Stan
VanDerBeek](http://ubu.com/film/vanderbeek.html) contacted Ubu requesting that
we host their father's films. Re:Voir was upset by this, saying that we were
robbing his children of their royalties when they in fact had given the films
to us. We put a link to purchase DVDs from Re:Voir, regardless. We think
Re:Voir serves a crucial function: Many people prefer their beautiful physical
objects and hi-res DVDs to our pile of pixels. The point is that there is much
(understandable) suspicion and miscommunication. And I'll be the first to
admit that, on a community level, I've remained aloof and distant, and been
the cause of much of that alienation. For this, I apologize.

In terms of sales and rentals ("Ubu is bad for business"), you'd know better
than me. But when [Peter Gidal](http://ubu.com/film/gidal.html) approached Ubu
and requested that his films be included in our archive, we were thrilled to
host a number of them. I met Peter in NYC a few months ago and asked him what
the effect of having his films on Ubu had been. He said, in terms of sales and
rentals, it was exactly the same, but in terms of interest, he felt there was
a big uptick from students and scholars by virtue of being able to see and
study that which was unavailable before. Ubu is used mostly by students and in
the classroom. Sadly, as many of you have noted, academic budgets don't
generally provide for adequate rental or projection money. I know this
firsthand: my wife, the video artist [Cheryl
Donegan](http://ubu.com/film/donegan.html) \-- who teaches video at two
prominent East Coast institutions -- is given approximately $200 per semester
(if that) for rentals. Good luck.

This summer, Ubu did a [show at the Walter Reade
Theater](http://www.filmlinc.com/wrt/onsale/fcssummer/ubuweb.html) at Lincoln
Center in NYC. I insisted that we show AVIs and MP4s from the site on their
giant screen. They looked horrible. But that was the point. I wanted to prove
the value of high-resolution DVDs and real film prints. I wanted to validate
the existence of distributors who make these types of copies available. Ubu's
crummy files are a substitute, a thumbnail for the real thing: sitting in a
dark room with like-minded, warm bodies watching an enormous projection in a
room with a great sound system. Cinema, as you know too well, is a social
experience; Ubu pales by comparison. It will never be a substitute. But sadly,
for many -- unable to live near the urban centers where such fare is shown,
trapped by economics, geography, career, circumstance, health, family, etc. --
Ubu is the only lifeline to this kind of work. As such, we believe that we do
more good in the world than harm.

An ideal situation happened when UbuWeb was asked to participate in a
[show](http://www.cca.qc.ca/en/intermission) at the CCA in Montreal. The CCA
insisted on showing hi-res films, which they rented from distributors of
materials that Ubu hosts. We were thrilled. By having these materials
available to be seen on Ubu, it led to rental fees for the artists and income
for the distributors. It was a win-win situation. This is Ubu working at its
best.

Finally, I don't really think it's good for me to join the list. I'm not well
enough versed in your world to keep up with the high level of conversation
going on there. Nor do I wish to get into a pissing match. However, I can be
contacted [here](http://ubu.com/contact) and am happy to respond.

I think that, in the end, Ubu is a provocation to your community to go ahead
and do it right, do it better, to render Ubu obsolete. Why should there only
be one UbuWeb? You have the tools, the resources, the artwork and the
knowledge base to do it so much better than I'm doing it. I fell into this as
Ubu has grown organically (we do it because we can) and am clearly not the
best person to be representing experimental cinema. Ubu would love you to step
in and help make it better. Or, better yet, put us out of business by doing it
correctly, the way it should have been done in the first place.

Kenneth Goldsmith
UbuWeb



Goldsmith
UbuWeb at 15 Years: An Overview
2011


# UbuWeb at 15 Years: An Overview

By [Kenneth Goldsmith](https://www.poetryfoundation.org/poets/kenneth-
goldsmith)

It's amazing to me that [UbuWeb](http://ubu.com), after fifteen years, is
still going. Run with no money and put together pretty much without
permission, Ubu has succeeded by breaking all the rules, by going about things
the wrong way. UbuWeb can be construed as the Robin Hood of the avant-garde,
but instead of taking from one and giving to the other, we feel that in the
end, we're giving to all. UbuWeb is as much about the legal and social
ramifications of its self-created distribution and
[archiving](http://www.poetryfoundation.org/harriet/2011/04/archiving-is-the-
new-folk-art/) system as it is about the content hosted on the site. In a
sense, the content takes care of itself; but keeping it up there has proved to
be a trickier proposition. The socio-political maintenance of keeping free
server space with unlimited bandwidth is a complicated dance, often interfered
with by darts thrown at us by individuals calling foul-play on copyright
infringement. Undeterred, we keep on: after fifteen years, we're still going
strong. We're lab rats under a microscope: in exchange for the big-ticket
bandwidth, we've consented to be objects of university research in the
ideology and practice of radical distribution.

But by the time you read this, UbuWeb may be gone. Cobbled together, operating
on no money and an all-volunteer staff, UbuWeb has become the unlikely
definitive source for all things avant-garde on the internet. Never meant to
be a permanent archive, Ubu could vanish for any number of reasons: our ISP
pulls the plug, our university support dries up, or we simply grow tired of
it. Acquisition by a larger entity is impossible: nothing is for sale. We
don't touch money. In fact, what we host has never made money. Instead, the
site is filled with the detritus and ephemera of great artists—[the music of
Jean Dubuffet](http://www.ubu.com/sound/dubuffet.html), [the poetry of Dan
Graham](http://www.ubu.com/aspen/aspen5and6/poem.html),[ Julian Schnabel’s
country music](http://ubu.com/sound/schnabel.html), [the punk rock of Martin
Kippenberger](http://ubu.com/sound/kippenberger.html), [the diaries of John
Lennon](http://www.ubu.com/aspen/aspen7/diary.html), [the rants of Karen
Finley](http://www.ubu.com/sound/uproar.html), and [pop songs by Joseph
Beuys](http://www.ubu.com/film/beuys_sonne.html)—all of which was originally
put out in tiny editions and vanished quickly.

However the web provides the perfect place to restage these works. With video,
sound, and text remaining more faithful to the original experience than, say,
painting or sculpture, Ubu proposes a different sort of revisionist art
history, one based on the peripheries of artistic production rather than on
the perceived, or market-based, center. Few people, for example, know that
Richard Serra makes videos. At his recent retrospective at The
Museum of Modern Art in New York, there was no sign of [TELEVISION DELIVERS
PEOPLE](http://www.ubu.com/film/serra_television.html) (1973) or
[BOOMERANG](http://www.ubu.com/film/serra_boomerang.html) (1974), both being
well-visited resources on UbuWeb. Similarly, Salvador Dali’s obscure video,
[IMPRESSIONS DE LA HAUTE MONGOLIE—HOMMAGE À RAYMOND
ROUSSEL](http://www.ubu.com/film/dali_impressions.html) from the mid-70s can
be viewed. Outside of UN CHIEN ANDALOU (1929), it’s the only other film he
completed in his lifetime. While you won’t find reproductions of Dali’s
paintings on UbuWeb, you will find [a 1967 recording of an advertisement he
made for a bank.](http://ubumexico.centro.org.mx/sound/dali_salvador/Dali-
Salvador_Apoth-du-dollar_1967.mp3)

It’s not all off-beat: there are, in all fairness, lots of primary expressions
of artists’ works which port to the web perfectly: [the films of Hollis
Frampton](http://ubu.com/film/frampton.html), [readings by Alain Robbe-
Grillet](http://www.ubu.com/aspen/aspen5and6/audio5B.html#jealousy), [Samuel
Beckett radio plays](http://www.ubu.com/sound/beckett.html), [the concrete
poems of Mary Ellen Solt](http://ubu.com/historical/solt/index.html), [the
writings of Maurice Blanchot](http://ubu.com/ubu/blanchot_last_man.html) and
the [music of Meredith Monk](http://www.ubu.com/sound/monk.html), to name a
few.

UbuWeb began in 1996 as a site focusing on visual and concrete poetry. With
the advent of the graphical web browser, we began scanning old concrete poems,
astonished by how fresh they looked backlit by the computer screen. Shortly
thereafter, when streaming audio became available, it made sense to extend our
scope to sound poetry, and as bandwidth increased we later added MP3s as well
as video. Sound poetry opened up a whole new terrain: certain of [John Cage’s
readings](http://www.ubu.com/sound/cage.html) of his mesostic texts could be
termed “sound poetry,” hence we included them. Just as often, though, Cage combined
his readings with an orchestral piece; we included those as well. But soon, we
found ourselves unable to distinguish between “sound poetry”
and “music.” We encountered this dilemma time and again, whether it was with
the compositions of [Mauricio Kagel](http://www.ubu.com/sound/kagel.html),
[Joan La Barbara](http://www.ubu.com/sound/lab.html), or [Henri
Chopin](http://www.ubu.com/sound/chopin.html), all of whom are as well-known
as composers as they are sound artists. After a while, we gave up trying to
name things; we dropped the term “sound poetry” and referred to it thenceforth
simply as “[Sound](http://www.ubu.com/sound/index.html).”

When we began posting [found street
poems](http://www.ubu.com/outsiders/ass.html) that used letter forms in
fantastically innovative ways, we had to reconsider what “concrete poetry”
was. As time went on, we seemed to be outgrowing our original taxonomies until
we simply became a repository for the “avant-garde” (whatever that means—our
idea of what is “avant-garde” seems to be changing all the time). UbuWeb
adheres to no one historical narrative, rather we’re more interested in
putting several disciplines into the same space and seeing how they interact:
poetry, music, film, and literature from all periods encounter and bounce off
of each other in unexpected ways.

In 2005, we acquired a collection called [The 365 Days
Project](http://www.ubu.com/outsiders/365/index.shtml), a year’s worth of
outrageous MP3s that can best be described as celebrity gaffes, recordings of
children screeching, how-to records, song-poems, propagandistic religious
ditties, spoken word pieces, even ventriloquist acts. However, buried deep
within The 365 Days Project were rare tracks by the legendary avant-gardist
[Nicolas Slonimsky](http://www.ubu.com/outsiders/365/2003/070.shtml), an
early-to-mid-twentieth century conductor, performer, and composer belting out
advertisements and children’s ditties on the piano in an off-key voice. UbuWeb
had already been hosting historical recordings from the 1920s he
[conducted](http://www.ubu.com/sound/slonimsky.html) of [Charles
Ives](http://ubumexico.centro.org.mx/sound/slonimsky_nicolas/Slonimsky-
Nicolas_02_Ives-Barn-Dance.mp3), [Carl
Ruggles](http://www.ubu.com/sound/agp/AGP167.html), and [Edgard
Varèse](http://ubumexico.centro.org.mx/sound/slonimsky_nicolas/Slonimsky-
Nicolas_01_Varese-Ionisation.mp3) in our Sound section, yet nestled in amongst
oddballs like [Louis Farrakhan singing
calypso](http://www.ubu.com/outsiders/365/2003/091.shtml) or a high school
choir’s rendition of “[Fox On The
Run](http://blogfiles.wfmu.org/DP/2003/01/365-Days-Project-01-04-dondero-high-
school-a-capella-choir-fox-on-the-run-1996.mp3),” Slonimsky fit into both
categories—high and low—equally well.

A few years back, Jerome Rothenberg, the leading scholar of
[Ethnopoetics](http://ubu.com/ethno/), approached us with an idea to include a
wing which would feature Ethnopoetic sound, visual art, poetry, and essays.
Rothenberg’s interest was specific to UbuWeb: how the avant-garde dovetailed
with the world’s deep cultures—those surviving in situ as well as those that
had vanished except for transcriptions in books or recordings from earlier
decades. Sound offerings include everything from [Slim
Gaillard](http://ubu.com/ethno/soundings/gaillard.html) to [Inuit throat
singing](http://ubu.com/ethno/soundings/inuit.html), each making formal
connections to modernist strains of [Dada](http://www.ubu.com/sound/dada.html)
or [sound poetry](http://ubu.com/sound/poesia_sonora.html). Likewise, the
Ethnopoetic visual poetry section ranges from [Chippewa song
pictures](http://ubu.com/ethno/visuals/chip.html) to [Paleolithic
palimpsests](http://ubu.com/ethno/visuals/paleo.html) to [Apollinaire’s
Calligrammes](http://ubu.com/historical/app/index.html) (1912–18). There are
dozens of papers on topics ranging from “[Louis Armstrong and the Syntax of
Scat](http://ubu.com/ethno/discourses/syntax_of_scat.doc)” to [Kenneth
Rexroth’s writings on American Indian
song](http://ubu.com/ethno/discourses/rexroth_indian.html).

There are over 2500 full-length avant-garde films and videos, both streaming
and downloadable, including the videos of [Vito
Acconci](http://www.ubu.com/film/acconci.html) and the filmic oeuvre of [Jack
Smith](http://www.ubu.com/film/smith_jack.html). You can also find several
biographies and interviews with authors such as [Jorge Luis
Borges](http://www.ubu.com/film/borges.html),[ J. G.
Ballard](http://www.ubu.com/film/ballard.html), [Allen
Ginsberg](http://www.ubu.com/film/ginsberg.html), and [Louis-Ferdinand
Céline](http://www.ubu.com/film/celine.html). And there are a number of films
about avant-garde music, most notably [Robert
Ashley](http://www.ubu.com/sound/ashley.html)’s epic 14-hour [Music with Roots
in the Aether](http://www.ubu.com/film/aether.html), a series of composer
portraits made in the mid-70s featuring artists such as [Pauline
Oliveros](http://www.ubu.com/film/oliveros.html), [Philip
Glass](http://www.ubu.com/film/glass_aether.html), and [Alvin
Lucier](http://www.ubu.com/film/aether.html). A dozen of the rarely screened
films by [Mauricio Kagel](http://www.ubu.com/film/kagel.html) can be viewed, as
can [Her Noise](http://www.ubu.com/film/her_noise.html), a documentary about
women and experimental music from 2005. There are also hours of performance
documentation, notably the entire [Cinema of
Transgression](http://www.ubu.com/film/transgression.html) series with films
by [Beth B](http://www.ubu.com/film/b.html) and [Richard
Kern](http://www.ubu.com/film/kern.html), a lecture by [Chris
Burden](http://www.ubu.com/film/burden.html), a bootleg version of [Robert
Smithson’s HOTEL PALENQUE](http://www.ubu.com/film/smithson.html) (1969), and
an astonishing [21-minute clip of Abbie Hoffman making gefilte
fish](http://www.ubu.com/film/hoffman.html) on Christmas Eve of 1973.

Other portions of the site include a vast repository of papers about audio,
performance, conceptual art, and poetry. There are large sections of artists
simply placed together under categories of Historical and Contemporary. And
then there is [/ubu Editions](http://www.ubu.com/ubu/), which offers full-
length PDFs of literature and poetry. Among the 73 titles, authors include Tim
Davis, Ron Silliman, Maurice Blanchot, Caroline Bergvall, Claude Simon, Jeremy
Sigler, Severo Sarduy, and Juliana Spahr. And finally there is a [Conceptual
Writing](http://ubu.com/concept/index.html) wing which highlights contemporary
trends in poetry as well as its historical precedents.

How does it all work? Most importantly, UbuWeb functions on no money: all work
is done by volunteers. Our server space and bandwidth are donated by several
universities, which use UbuWeb as an object of study for ideas related to
radical distribution and gift economies on the web. In terms of content, each
section has an editor who brings to the site their area of expertise. Ubu is
constantly being updated, but the mission is different from the flotsam and
jetsam of a blog; rather, we liken it to a library which is ever-expanding in
uncanny—and often uncategorizable—directions. Fifteen years into it, UbuWeb
hosts over 7,500 artists and several thousand works of art. You’ll never find
an advertisement, a logo, or a donation box. UbuWeb has always been and will
always be free and open to all.

The future is eminently scalable: as long as we have the bandwidth and server
space, there is no limit as to how big the site can grow. For the moment, we
have no competition, a fact we’re not happy about. We’re distressed that there
is only one UbuWeb: why aren’t there dozens like it? Looking at the art world,
the problem appears to be a combination of an adherence to an old economy (one
that is working very well with a booming market) and a sense of trepidation,
particularly in academic circles, where work on the internet is often not
considered valid for academic credit. As long as the art world continues to
prize economies of scarcity over those based on plenitude, the change will be
a long time coming. But UbuWeb seeks to offer an alternative by invoking a
gift economy of plenitude with a strong emphasis on global education. We’re
on numerous syllabi, ranging from kindergarteners studying pattern poetry to
postgraduates listening to hours of Jacques Lacan’s
[Séminaires](http://www.ubu.com/sound/lacan.html).

And yet . . . it could vanish any day. Beggars can’t be choosers and we gladly
take whatever is offered to us. We don’t run on the most stable of servers or
on the swiftest of machines; hacks and crashes eat into the archive on a
periodic basis; sometimes the site as a whole goes down for days; occasionally
the army of volunteers dwindles to a team of one. But that’s the beauty of it:
UbuWeb is vociferously anti-institutional, eminently fluid, refusing to bow to
demands other than what we happen to be moved by at a specific moment,
allowing us flexibility and the ability to continually surprise our audience .
. . and even ourselves.

Originally Published: April 26th, 2011

Kenneth Goldsmith's writing has been called some of the most "exhaustive and
beautiful collage work yet produced in poetry" by _Publishers Weekly._
Goldsmith is the author of eight books of poetry, founding editor of the
online archive UbuWeb (http://ubu.com), and the editor of _I'll Be Your Mirror:
The Selected Andy Warhol..._



Graziano
Pirate Care: How do we imagine the health care for the future we want?
2018


Pirate Care - How do we imagine the health care for the future we want?

Oct 5, 2018

by Valeria Graziano

A recent trend to reimagine the systems of care for the future is based on many of the principles of self-organization. From the passive figure of the patient — an aptly named subject, patiently awaiting aid from medical staff and carers — researchers and policymakers are moving towards a model defined as people-powered health, where care is discussed as transforming from a top-down service into a network of coordinated actors.

At the same time, for large numbers of people, to self-organize around their own healthcare needs is not a matter of predilection, but increasingly one of necessity. In Greece, where the measures imposed by the Troika decimated public services, a growing number of grassroots clinics set up by the Solidarity Movement have been providing medical attention to those without private insurance. In Italy, initiatives such as the Ambulatorio Medico Popolare in Milan offer free consultations to migrants and other vulnerable citizens.

The new characteristic in all of these cases is the fact that they frame what they do in clearly political terms, rejecting or sidestepping the more neutral ways in which the third sector and the NGOs have long presented care practices as apolitical, as ways to help out that should never ask questions bigger than the problems they set out to confront, and as standing beyond left and right (often for the sake of not alienating potential donors and funders).

Rather, the current trends towards self-organization in health care are very vocal and clear in their messages: the care system is in crisis, and we need to learn from what we know already. One thing we know is that the market or the financialization of assets cannot be the solution (do you remember when just a few years ago Occupy was buying back healthcare debts from financial speculators, thus saving thousands of Americans from dire economic circumstances? Or that scene from Michael Moore’s Sicko, the documentary where a guy has to choose which finger to have amputated because he does not have enough cash to save both?).

Another thing we also know is that we cannot simply hold on to past models of managing the public sector, as most national healthcare systems were built for the needs of the last century. Administrations have been struggling to adapt to the changing nature of health conditions (moving from a predominance of epidemic diseases to chronic ones) and the different needs of today’s populations. And finally, we most definitely know that to go back to even more conservative ideas that frame care as a private issue that should fall on the shoulders of family members (and most often, of female relatives) or hired servants (also gendered and racialised) is not the best we can come up with.

Among the many initiatives that are rethinking how we organize the provision of health and care in ways that are accessible, fair, and efficient, there are a number of actors — mostly small organizations — who are experimenting with the opportunities introduced by digital technologies. While many charities and NGOs remain largely ignorant of the opportunities offered by technology, these new actors are developing DIY devices, wearables, 3D-printed bespoke components, apps and smart objects to intervene in areas otherwise neglected by the bigger players in the care system. These practices are presenting a new mode of operating that I want to call ‘pirate care’.
Pirate Care

Piracy and Care are not always immediately relatable notions. The figure of the pirate in popular and media cultures is often associated with cunning intelligence and masculine modes of action, of people running servers that allow others to illegally download music or movie files. One of the very first organizations that articulated the stakes of sharing knowledge was actually named Piratbyrån. “When you pirate mp3s, you are downloading communism” was a popular motto at the time. And yet, bringing the idea of a pirate ethics into resonance with contemporary modes of care invites a different consideration for practices that propose a paradigm change and therefore inevitably find themselves in tricky positions vis-à-vis the law and the status quo. I have been noticing for a while now that another kind of contemporary pirate is coming to the fore in our messy society in the midst of many crises. This new kind of pirate could be best captured by another image: this time it is a woman, standing on the deck of a boat sailing through the Caribbean Sea towards the Mexican Gulf, about to deliver abortion pills to other women for whom this option is illegal in their country.

Women on Waves, founded in 1999, engages in its abortion-on-boat missions every couple of years. These are mostly symbolic actions, as they are rather expensive operations, and yet they are potent means of stirring public debate and have often been met with hostility — even from military fleets. They have visited seven countries so far, including Mexico, Guatemala and, more recently, Ireland and Poland, where feminist movements have been mobilizing in huge numbers to reclaim reproductive rights.

According to official statistics, more than 47,000 women die every year from complications resulting from illegal, unsafe abortion procedures, a service used by over 21 million women who do not have another choice. As Leticia Zenevich, spokesperson of Women on Waves, told HuffPost: “The fact that women need to leave the state sovereignty to retain their own sovereignty ― it makes clear states are deliberately stopping women from accessing their human right to health.” Besides the boat campaigns, the organization also runs Women on Web, an online medical abortion service active since 2005. The service is available in 17 languages, and it is helping more than 100,000 women per year to get information and access abortion pills. More recently, Women on Waves also began experimenting with the use of drones to deliver the pills in countries impacted by restrictive laws (such as Poland in 2015 and Northern Ireland in 2016).

Women on Waves are the perfect figure to begin to illustrate my idea of ‘pirate care’. By this term I want to bring attention to an emergent phenomenon in the contemporary world, where more and more often initiatives that want to bring support and care to the most vulnerable subjects in the most unstable situations have to do so by operating in the grey zone that exists between the gaps left open by various rules, laws and technologies. Some thrive in this shadow area, carefully avoiding calling attention to themselves for fear of attracting ferocious polemics and the trolling that inevitably accompanies them. In other cases, care practices that were previously considered the norm have now been pushed towards illegality.

Consider for instance the situation highlighted by the Docs Not Cops campaign that started in the UK four years ago, when the government had just introduced its ‘hostile environment’ policy with the aim of making everyday life as hard as possible for migrants with an irregular status. Suddenly, medical staff in hospitals and other care facilities were supposed to carry out document checks before being allowed to offer any assistance. Their mobilization denounced the policy as an abuse of mandate on the part of the Home Office and a threat to public health, given that it effectively discouraged patients from seeking help for fear of retaliation. Another sadly famous example of this trend of pushing many acts of care towards illegality would be the straitjacketing and criminalization of migrant-rescuing NGOs in the Mediterranean on the part of various European countries, a policy led by the Italian government. Yet another example would be the increasing number of municipal decrees that make it a crime to offer food, money or shelter to the homeless in many cities in North America and Europe.
Hacker Ethics

This scenario reminds us of the tragic story of Antigone and the age-old question of what to do when the relationship between what the law says and what one feels is just becomes fraught with tensions and contradictions. Here, the second meaning of ‘pirate care’ becomes apparent as it points to the way in which a number of initiatives have been responding to the current crisis by mobilizing tactics and ethics as first developed within the hacker movement.

As described by Steven Levy in Hackers, the general principles of a hacker ethic include sharing, openness, decentralization, free access to knowledge and tools, and an effort to contribute to society’s democratic wellbeing. To which we could add, following Richard Stallman, founder of the free software movement, that “bureaucracy should not be allowed to get in the way of doing anything useful.” While here Stallman was reflecting on the experience of the M.I.T. AI Lab in 1971, his critique of bureaucracy captures well a specific trait of the techno-political nexus that is also shaping the present moment: as more technologies come to mediate everyday interactions, they are also reshaping the very structure of the institutions and organizations we inhabit, so that our lives are increasingly formatted to meet the requirements of an unprecedented number of standardised procedures, compulsory protocols, and legal obligations.

According to anthropologist David Graeber, we are living in an era of “total bureaucratization”. But while contemporary populism often presents bureaucracy as a problem of the public sector, implicitly suggesting “the market” to be the solution, Graeber’s study highlights how historically all so-called “free markets” have actually been made possible through the strict enforcement of state regulations. Since the birth of the modern corporation in 19th century America, “bureaucratic techniques (performance reviews, focus groups, time allocation surveys …) developed in financial and corporate circles came to invade the rest of society — education, science, government — and eventually, to pervade almost every aspect of everyday life.”
The forceps and the speculum

And thus, in resonance with the tradition of hacker ethics, a number of ‘pirate care’ practices are intervening in reshaping what looking after our collective health will look like in the future. CADUS, for example, is a Berlin-based NGO which has recently set up a Crisis Response Makerspace to build open and affordable medical equipment specifically designed to bring assistance in extreme crisis zones where not many other organizations would venture, such as Syria and Northern Iraq. After donating their first mobile hospital to the Kurdish Red Crescent last year, CADUS is now working to develop a second version, in a container this time, able to be deployed in conflict zones deprived of any infrastructure, and a civil airdrop system to deliver food and medical equipment as fast as possible. The fact that CADUS adopted the formula of the makerspace to invent open emergency solutions that no private company would be interested in developing is not a coincidence, but emerges from a precise vision of how healthcare innovations should be produced and disseminated, and not only for extreme situations.

“Open source is the only way for medicine” — says Marcus Baw of Open Health Hub — as “medical software now is medicine”. Baw has been involved in another example of ‘pirate care’ in the UK, founding a number of initiatives to promote the adoption of open standards, open source code, and open governance in Health IT. The NHS spends about £500 million each time it refreshes Windows licenses, and aside from avoiding the high costs, an open source GP clinical system would be the only way to address the pressing ethical issue facing contemporary medicine: as software and technology become more and more part of the practice of medicine itself, they need to be subject to peer-review and scrutiny to assess their clinical safety. Moreover, if such solutions are found to be effective and to save lives, it is the duty of all healthcare practitioners to share their knowledge with the rest of humanity, as per the Hippocratic Oath. To illustrate what happens when medical innovations are kept secret, Baw shares the story of the Chamberlen family of obstetricians, who kept the invention of the obstetric forceps a family trade secret for over 150 years, using the tool only to treat their elite clientele of royals and aristocracy. As a result, thousands of mothers and babies likely died in preventable circumstances.

It is perhaps significant that such a sad historical example of the consequences of closed medicine must come from the field of gynaecology, one of the most politically charged areas of medical specialization to this day. So much so that last year another collective of ‘pirate carers’ named GynePunk developed a biolab toolkit for emergency gynaecological care, to allow those excluded from reproductive healthcare — undocumented migrants, trans and queer women, drug users and sex workers — to perform basic checks on their own bodily fluids. Their prototypes include a centrifuge, a microscope and an incubator that can be cheaply realised by repurposing components of everyday items such as DVD players and computer fans, or by digital fabrication. In 2015, GynePunk also developed a 3D-printable speculum and — who knows? — perhaps their next project might include a pair of forceps…

As the ‘pirate care’ approach keeps proliferating, its tools and modes of organizing keep alive a horizon in which healthcare is not de facto reduced to a privilege.

PS. This article was written before the announcement of the launch of Mediterranea, which we believe to be another important example of pirate care. #piratecare #abbiamounanave

Graziano, Mars & Medak
Learning from #Syllabus
2019


LEARNING FROM #SYLLABUS
VALERIA GRAZIANO, MARCELL MARS, TOMISLAV MEDAK
The syllabus is the manifesto of the 21st century.
—Sean Dockray and Benjamin Forster [1]
#Syllabus Struggles
In August 2014, Michael Brown, an 18-year-old boy living in Ferguson, Missouri,
was fatally shot by police officer Darren Wilson. Soon after, as the civil protests denouncing police brutality and institutional racism began to mount across the United
States, Dr. Marcia Chatelain, Associate Professor of History and African American
Studies at Georgetown University, launched an online call urging other academics
and teachers ‘to devote the first day of classes to a conversation about Ferguson’ and ‘to recommend texts, collaborate on conversation starters, and inspire
dialogue about some aspect of the Ferguson crisis.’ [2] Chatelain did so using the
hashtag #FergusonSyllabus.
Also in August 2014, using the hashtag #gamergate, groups of users on 4Chan,
8Chan, Twitter, and Reddit instigated a misogynistic harassment campaign against
game developers Zoë Quinn and Brianna Wu, media critic Anita Sarkeesian, as well as
a number of other female and feminist game producers, journalists, and critics. In the
following weeks, The New Inquiry editors and contributors compiled a reading list and
issued a call for suggestions for their ‘TNI Syllabus: Gaming and Feminism’. [3]
In June 2015, Donald Trump announced his candidacy for President of the United
States. In the weeks that followed, he became the presumptive Republican nominee,
and The Chronicle of Higher Education introduced the syllabus ‘Trump 101’. [4] Historians N.D.B. Connolly and Keisha N. Blain found ‘Trump 101’ inadequate, ‘a mock college syllabus […] suffer[ing] from a number of egregious omissions and inaccuracies’,
failing to include ‘contributions of scholars of color and address the critical subjects
of Trump’s racism, sexism, and xenophobia’. They assembled ‘Trump Syllabus 2.0’. [5]
Soon after, in response to a video in which Trump engaged in ‘an extremely lewd
conversation about women’ with TV host Billy Bush, Laura Ciolkowski put together a
‘Rape Culture Syllabus’. [6]

1. Sean Dockray, Benjamin Forster, and Public Office, ‘README.md’, Hyperreadings, 15 February 2018, https://samiz-dat.github.io/hyperreadings/.
2. Marcia Chatelain, ‘Teaching the #FergusonSyllabus’, Dissent Magazine, 28 November 2014, https://www.dissentmagazine.org/blog/teaching-ferguson-syllabus/.
3. ‘TNI Syllabus: Gaming and Feminism’, The New Inquiry, 2 September 2014, https://thenewinquiry.com/tni-syllabus-gaming-and-feminism/.
4. ‘Trump 101’, The Chronicle of Higher Education, 19 June 2016, https://www.chronicle.com/article/Trump-Syllabus/236824/.
5. N.D.B. Connolly and Keisha N. Blain, ‘Trump Syllabus 2.0’, Public Books, 28 June 2016, https://www.publicbooks.org/trump-syllabus-2-0/.
6. Laura Ciolkowski, ‘Rape Culture Syllabus’, Public Books, 15 October 2016, https://www.publicbooks.org/rape-culture-syllabus/.


In April 2016, members of the Standing Rock Sioux tribe established the Sacred Stone
Camp and started the protest against the Dakota Access Pipeline, the construction of
which threatened the only water supply at the Standing Rock Reservation. The protest at the site of the pipeline became the largest gathering of Native Americans in
the last 100 years and they earned significant international support for their ReZpect
Our Water campaign. As the struggle between protestors and the armed forces unfolded, a group of Indigenous scholars, activists, and supporters of the struggles of
First Nations people and persons of color, gathered under the name the NYC Stands
for Standing Rock Committee, put together #StandingRockSyllabus. [7]
The list of online syllabi created in response to political struggles has continued to
grow, and at present includes many more examples:
All Monuments Must Fall Syllabus
#Blkwomensyllabus
#BLMSyllabus
#BlackIslamSyllabus
#CharlestonSyllabus
#ColinKaepernickSyllabus
#ImmigrationSyllabus
Puerto Rico Syllabus (#PRSyllabus)
#SayHerNameSyllabus
Syllabus for White People to Educate Themselves
Syllabus: Women and Gender Non-Conforming People Writing about Tech
#WakandaSyllabus
What To Do Instead of Calling the Police: A Guide, A Syllabus, A Conversation, A
Process
#YourBaltimoreSyllabus
It would be hard to compile a comprehensive list of all the online syllabi that have
been created by social justice movements in the last five years, especially, but not
exclusively, those initiated in North America in the context of feminist and anti-racist
activism. In what is now a widely spread phenomenon, these political struggles use
social networks and resort to the hashtag template ‘#___Syllabus’ to issue calls for
the bottom-up aggregation of resources necessary for political analysis and pedagogy
centering on their concerns. For this reason, we’ll call this phenomenon ‘#Syllabus’.
During the same years that saw the spread of the #Syllabus phenomenon, university
course syllabi have also been transitioning online, often in a top-down process initiated
by academic institutions, which has seen the syllabus become a contested document
in the midst of increasing casualization of teaching labor, expansion of copyright protections, and technology-driven marketization of education.
In what follows, we retrace the development of the online syllabus in both of these
contexts, to investigate the politics enmeshed in this new media object. Our argument

7. ‘#StandingRockSyllabus’, NYC Stands with Standing Rock, 11 October 2016, https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/.


is that, on the one hand, #Syllabus names the problem of contemporary political culture as pedagogical in nature, while, on the other hand, it also exposes academicized
critical pedagogy and intellectuality as insufficiently political in their relation to lived
social reality. Situating our own stakes as both activists and academics in the present
debate, we explore some ways in which the radical politics of #Syllabus could be supported to grow and develop as an articulation of solidarity between amateur librarians
and radical educators.
#Syllabus in Historical Context: Social Movements and Self-Education
When Professor Chatelain launched her call for #FergusonSyllabus, she was mainly
addressing a community of fellow educators:
I knew Ferguson would be a challenge for teachers: When schools opened across
the country, how were they going to talk about what happened? My idea was simple, but has resonated across the country: Reach out to the educators who use
Twitter. Ask them to commit to talking about Ferguson on the first day of classes.
Suggest a book, an article, a film, a song, a piece of artwork, or an assignment that
speaks to some aspect of Ferguson. Use the hashtag: #FergusonSyllabus. [8]
Her call had a much greater resonance than she had originally anticipated as it reached
beyond the limits of the academic community. #FergusonSyllabus had both a significant impact in shaping the analysis and the response to the shooting of Michael
Brown, and in inspiring the many other #Syllabus calls that soon followed.
The #Syllabus phenomenon comprises different approaches and modes of operating. In some cases, the material is clearly claimed as the creation of a single individual, as in the case of #BlackLivesMatterSyllabus, which is prefaced on the project’s
landing page by a warning to readers that ‘material compiled in this syllabus should
not be duplicated without proper citation and attribution.’ [9] A very different position on
intellectual property has been embraced by other #Syllabus interventions that have
chosen a more commoning stance. #StandingRockSyllabus, for instance, is introduced as a crowd-sourced process and as a useful ‘tool to access research usually
kept behind paywalls.’ [10]
The different workflows, modes of engagement, and positioning in relation to
intellectual property make #Syllabus readable as symptomatic of the multiplicity
that composes social justice movements. There is something old school—quite
literally—about the idea of calling a list of online resources a ‘syllabus’; a certain
quaintness, evoking thoughts of teachers and homework. This is worthy of investigation especially if contrasted with the attention dedicated to other online cultural
phenomena such as memes or fake news. Could it be that the online syllabus offers

8. Marcia Chatelain, ‘How to Teach Kids About What’s Happening in Ferguson’, The Atlantic, 25 August 2014, https://www.theatlantic.com/education/archive/2014/08/how-to-teach-kids-about-whats-happening-in-ferguson/379049/.
9. Frank Leon Roberts, ‘Black Lives Matter: Race, Resistance, and Populist Protest’, 2016, http://www.blacklivesmattersyllabus.com/fall2016/.
10. ‘#StandingRockSyllabus’, NYC Stands with Standing Rock, 11 October 2016, https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/.


a useful, fresh format precisely for the characteristics that foreground its connections to older pedagogical traditions and techniques, predating digital cultures?
#Syllabus can indeed be analyzed as falling within a long lineage of pedagogical tools
created by social movements to support processes of political subjectivation and the
building of collective consciousness. Activists and militant organizers have time and
again created and used various textual media objects—such as handouts, pamphlets,
cookbooks, readers, or manifestos—to facilitate a shared political analysis and foment
mass political mobilization.
In the context of the US, anti-racist movements have historically placed great emphasis on critical pedagogy and self-education. In 1964, the Council of Federated Organizations (an alliance of civil rights initiatives) and the Student Nonviolent
Coordinating Committee (SNCC) created a network of 41 temporary alternative
schools in Mississippi. Recently, the Freedom Library Project, a campaign born out
of #FergusonSyllabus to finance under-resourced pedagogical initiatives, openly
referenced this as a source of inspiration. The Freedom Summer Project of 1964
brought hundreds of activists, students, and scholars (many of whom were white)
from the north of the country to teach topics and issues that the discriminatory
state schools would not offer to black students. In the words of an SNCC report,
Freedom Schools were established following the belief that ‘education—facts to
use and freedom to use them—is the basis of democracy’, [11] a conviction echoed
by the ethos of contemporary #Syllabus initiatives.
Bob Moses, a civil rights movement leader who was the head of the literacy skills initiative in Mississippi, recalls the movement’s interest, at the time, in teaching methods
that used the very production of teaching materials as a pedagogical tool:
I had gotten hold of a text and was using it with some adults […] and noticed that
they couldn’t handle it because the pictures weren’t suited to what they knew […]
That got me into thinking about developing something closer to what people were
doing. What I was interested in was the idea of training SNCC workers to develop
material with the people we were working with. [12]
It is significant that for him the actual use of the materials the group created was much
less important than the process of producing the teaching materials together. This focus
on what could be named a ‘pedagogy of teaching’, or perhaps more accurately ‘the
pedagogy of preparing teaching materials’, is also a relevant mechanism at play in the
current #Syllabus initiatives, as their crowdsourcing encourages different kinds of people
to contribute what they feel might be relevant resources for the broader movement.
Alongside the crucial import of radical black organizing, another relevant genealogy in
which to place #Syllabus would be the international feminist movement and, in particular, the strategies developed in the 70s campaign Wages for Housework, spearheaded

11. Daniel Perlstein, ‘Teaching Freedom: SNCC and the Creation of the Mississippi Freedom Schools’, History of Education Quarterly 30.3 (Autumn 1990): 302.
12. Perlstein, ‘Teaching Freedom’: 306.


by Selma James and Silvia Federici. The Wages for Housework campaign drove home
the point that unwaged reproductive labor provides a foundation for capitalist exploitation. They wanted to encourage women to denaturalize and question the accepted
division of labor into remunerated work outside the house and labor of love within
the confines of domesticity, discussing taboo topics such as ‘prostitution as socialized housework’ and ‘forced sterilization’ as issues impacting poor, often racialized,
women. The organizing efforts of Wages for Housework held political pedagogy at their
core. They understood that this pedagogy required:
having literature and other materials available to explain our goals, all written in a
language that women can understand. We also need different types of documents,
some more theoretical, others circulating information about struggles. It is important
that we have documents for women who have never had any political experience.
This is why our priority is to write a popular pamphlet that we can distribute massively and for free—because women have no money. [13]
The obstacles faced by the Wages for Housework campaign were many, beginning
with the issue of how to reach a dispersed constituency of isolated housewives
and how to keep the revolutionary message at the core of their claims accessible
to different groups. In order to tackle these challenges, the organizers developed
a number of innovative communication tactics and pedagogical tools, including
strategies to gain mainstream media coverage, pamphlets and leaflets translated
into different languages, [14] a storefront shop in Brooklyn, and promotional tables at
local events.
Freedom Schools and the Wages for Housework campaign are only two amongst
the many examples of the critical pedagogies developed within social movements.
The #Syllabus phenomenon clearly stands in the lineage of this history, yet we should
also highlight its specificity in relation to the contemporary political context in which it
emerged. The #Syllabus acknowledges that since the 70s—and also due to students’
participation in protests and their display of solidarity with other political movements—
subjects such as Marxist critical theory, women’s studies, gender studies, and African
American studies, together with some of the principles first developed in critical pedagogy, have become integrated into the educational system. The fact that many initiators of #Syllabus initiatives are women and Black academics speaks to this historical
shift as an achievement of that period of struggles. However, the very necessity felt by
these educators to kick-start their #Syllabus campaigns outside the confines of academia simultaneously reveals the difficulties they encounter within the current privatized and exclusionary educational complex.

13. Silvia Federici and Arlen Austin (eds), The New York Wages for Housework Committee 1972-1977: History, Theory and Documents, New York: Autonomedia, 2017: 37.
14. Some of the flyers and pamphlets were digitized by MayDay Rooms, ‘a safe haven for historical material linked to social movements, experimental culture and the radical expression of marginalised figures and groups’ in London, and can be found in their online archive: ‘Wages for Housework: Pamphlets – Flyers – Photographs’, MayDay Rooms, http://maydayrooms.org/archives/wages-for-housework/wfhw-pamphlets-flyers-photographs/.


#Syllabus as a Media Object
Besides its contextualization within the historical legacy of previous grassroots mobilizations, it is also necessary to discuss #Syllabus as a new media object in its own
right, in order to fully grasp its relevance for the future politics of knowledge production and transmission.
If we were to describe this object, a #Syllabus would be an ordered list of links to
scholarly texts, news reports, and audiovisual media, mostly aggregated through a
participatory and iterative process, and created in response to political events indicative of larger conditions of structural oppression. Still, as we have seen, #Syllabus
as a media object doesn’t follow a strict format. It varies based on the initial vision
of its initiators, political causes, and the social composition of the relevant struggle.
Nor does it follow the format of traditional academic syllabi. While a list of learning
resources is at the heart of any syllabus, a boilerplate university syllabus typically
also includes objectives, a timetable, attendance, coursework, examination, and an
outline of the grading system used for the given course. Relieved of these institutional
requirements, the #Syllabus typically includes only a reading list and a hashtag. The
reading list provides resources for understanding what is relevant to the here and
now, while the hashtag provides a way to disseminate across social networks the call
to both collectively edit and teach what is relevant to the here and now. Both the list
and the hashtag are specificities and formal features of the contemporary (internet)
culture and therefore merit further exploration in relation to the social dynamics at
play in #Syllabus initiatives.
The different phases of the internet’s development approached the problem of the
discoverability of relevant information in different ways. In the early days, the Gopher
protocol organized information into a hierarchical file tree. With the rise of the World Wide
Web (WWW), Yahoo tried to employ experts to classify and catalog the internet into
a directory of links. That seemed to be a successful approach for a while, but then
Google (founded in 1998) came along and started to use a webgraph of links to rank
the importance of web pages relative to a given search query.
In 2005, Clay Shirky wrote the essay ‘Ontology is Overrated: Categories, Links and
Tags’, [15] developed from his earlier talk ‘Folksonomies and Tags: The Rise of User-Developed Classification’. Shirky used Yahoo’s attempt to categorize the WWW to argue
against any attempt to classify a vast heterogeneous body of information into a single
hierarchical categorical system. In his words: ‘[Yahoo] missed [...] that, if you’ve got
enough links, you don’t need the hierarchy anymore. There is no shelf. There is no file
system. The links alone are enough.’ Those words resonated with many. By following
simple formatting rules, we, the internet users, whom Time magazine named Person of
the Year in 2006, proved that it is possible to collectively write the largest encyclopedia
ever. But, even beyond that, and as per Shirky’s argument, if enough of us organized
our own snippets of the vast body of the internet, we could replace old canons, hierarchies, and ontologies with folksonomies, social bookmarks, and (hash)tags.

15. Clay Shirky, ‘Ontology Is Overrated: Categories, Links, and Tags’, 2005, http://shirky.com/writings/herecomeseverybody/ontology_overrated.html.


Very few who lived through those times would have thought that only a few years later
most user-driven services would be acquired by a small number of successful companies and then be shut down. Or, that Google would decide not to include the biggest
hashtag-driven platform, Twitter, into its search index and that the search results on
its first page would only come from a handful of usual suspects: media conglomerates, Wikipedia, Facebook, LinkedIn, Amazon, Reddit, Quora. Or, that Twitter would
become the main channel for the racist, misogynist, fascist escapades of the President
of the United States.
This internet folk naivety—stoked by an equally enthusiastic, venture-capital-backed
startup culture—was not just naivety. This was also a period of massive experimental
use of these emerging platforms. Therefore, this history would merit being properly
revisited and researched. In this text, however, we can only hint at this history: to contextualize how the hashtag as a formalization initially emerged, and how with time the
user-driven web lost some of its potential. Nonetheless, hashtags today still succeed in
propagating political mobilizations in the network environment. Some will say that this
propagation is nothing but a reflection of the internet as a propaganda machine, and
there’s no denying that hashtags do serve a propaganda function. However, it equally
matters that hashtags retain the capacity to shape coordination and self-organization,
and they are therefore a reflection of the internet as an organization machine.
As mentioned, #Syllabus as a media object is an ordered list of links to resources.
In the long history of knowledge retrieval systems and attempts to help users find
relevant information from big archives, the list on the internet continues in the tradition of the index card catalog in libraries, of charts in the music industry, or mixtapes
and playlists in popular culture, helping people tell their stories of what is relevant and
what isn’t through an ordered sequence of items. The list (as a format) together with
the hashtag find themselves in the list (pun intended) of the most iconic media objects
of the internet. In the network media environment, being smart in creating new lists
became the way to displace old lists of relevance, the way to dismantle canons, the
way to unlearn. The way to become relevant.
The Academic Syllabus Migrates Online
#Syllabus interventions are a challenge issued by political struggles to educators as
they expose a fundamental contradiction in the operations of academia. While critical pedagogies of yesteryear’s social movements have become integrated into the
education system, the radical lessons that these pedagogies teach students don’t
easily reconcile with their experience: professional practice courses, the rhetoric of
employability, and compulsory internships, where what they learn is merely instrumental, leave them wondering how on earth they are to apply their Marxism or feminism
to their everyday lives.
Cognitive dissonance is at the basis of degrees in the liberal arts. And to make things
worse, the marketization of higher education, the growing fees and the privatization
of research have placed universities in a position where they increasingly struggle to
provide institutional space for critical interventions in social reality. As universities become more dependent on the ‘customer satisfaction’ of their students for survival, they
steer away from heated political topics or from supporting faculty members who might
decide to engage with them. Borrowing the words of Stefano Harney and Fred Moten,


‘policy posits curriculum against study’, [16] creating the paradoxical situation wherein
today’s universities are places in which it is possible to do almost everything except
study. What Harney and Moten propose instead is the re-appropriation of the diffuse
capacity of knowledge generation that stems from the collective processes of self-organization and commoning. As Moten puts it: ‘When I think about the way we use the
term ‘study,’ I think we are committed to the idea that study is what you do with other
people.’ [17] And it is this practice of sharing a common repertoire—what Moten and
Harney call ‘rehearsal’ [18]—that is crucially constitutive of a crowdsourced #Syllabus.
This contradiction and the tensions it brings to contemporary neoliberal academia can
be symptomatically observed in the recent evolution of the traditional academic syllabus. As a double consequence of (some) critical pedagogies becoming incorporated
into the teaching process and universities striving to reduce their liability risks, academic syllabi have become increasingly complex and extensive documents. They are
now understood as both a ‘social contract’ between the teachers and their students,
and ‘terms of service’ [19] between the institution providing educational services and the
students increasingly framed as sovereign consumers making choices in the market of
educational services. The growing official import of the syllabus has had the effect that
educators have started to reflect on how the syllabus translates power dynamics
into their classrooms. For instance, the critical pedagogue Adam Heidebrink-Bruno has
demanded that the syllabus be re-conceived as a manifesto [20]—a document making
these concerns explicit. And indeed, many academics have started to experiment with
the form and purpose of the syllabus, opening it up to a process of co-conceptualization with their students, or proposing ‘the other syllabus’ [21] to disrupt asymmetries.
At the same time, universities are unsurprisingly moving their syllabi online, a migration
that can be read as indicative of three larger structural shifts in academia.
First, the push to make syllabi available online, initiated in the US, reinforces the differential effects of the reputation economy. It is the Ivy League universities and their professorial star system that can harness the syllabus to advertise the originality of their
scholarship, while the underfunded public universities and junior academics are burdened with teaching the required essentials. This practice is tied up with the replication
in academia of the different valorization between what is considered to be the labor of
production (research) and that of social reproduction (teaching). The low esteem (and
corresponding lower rewards and remuneration) for the kinds of intellectual labors that
can be considered labors of care—editing journals, reviewing papers or marking, for
instance—fits perfectly well with the gendered legacies of the academic institution.

16. Stefano Harney and Fred Moten, The Undercommons: Fugitive Planning & Black Study, New York: Autonomedia, 2013, p. 81.
17. Harney and Moten, The Undercommons, p. 110.
18. Harney and Moten, The Undercommons, p. 110.
19. Angela Jenks, ‘It’s In The Syllabus’, Teaching Tools, Cultural Anthropology website, 30 June 2016, https://culanth.org/fieldsights/910-it-s-in-the-syllabu/.
20. Adam Heidebrink-Bruno, ‘Syllabus as Manifesto: A Critical Approach to Classroom Culture’, Hybrid Pedagogy, 28 August 2014, http://hybridpedagogy.org/syllabus-manifesto-critical-approach-classroom-culture/.
21. Lucy E. Bailey, ‘The “Other” Syllabus: Rendering Teaching Politics Visible in the Graduate Pedagogy Seminar’, Feminist Teacher 20.2 (2010): 139–56.


Second, with the withdrawal of resources to pay precarious and casualized academics during their ‘prep’ time (that is, the time in which they can develop new
course material, including assembling new lists of references, updating their courses as well as the methodologies through which they might deliver these), syllabi
now assume an ambivalent role between the tendencies for collectivization and
individualization of insecurity. The reading lists contained in syllabi are not covered
by copyright; they are like playlists or recipes, which historically had the effect of
encouraging educators to exchange lesson plans and make their course outlines
freely available as a valuable knowledge common. Yet, in the current climate where
universities compete against each other, the authorial function is being extended
to these materials too. Recently, US universities have been leading a trend towards
the interpretation of the syllabus as copyrightable material, an interpretation that
opened up, as would be expected, a number of debates over who is a syllabus’s
rightful owner, whether the academics themselves or their employers. If the latter interpretation were to prevail, this would enable universities to easily replace
academics while retaining their contributions to the pedagogical offer. The fruits of
a teacher’s labor could thus be turned into instruments of their own deskilling and
casualization: why would universities pay someone to write a course when they can
recycle someone else’s syllabus and get a PhD student or a precarious postdoc to
teach the same class at a fraction of the price?
This tendency to introduce a logic of property therefore spurs competitive individualism and erasure of contributions from others. Thus, crowdsourcing the syllabus
in the context of growing precarization of labor risks remaining a partial process,
as it might heighten the anxieties of those educators who do not enjoy the security
of a stable job and who are therefore the most susceptible to the false promises of
copyright enforcement and authorship understood as a competitive, small entrepreneurial activity. However, when inserted in the context of live, broader political
struggles, the opening up of the syllabus could and should be an encouragement
to go in the opposite direction, providing a ground to legitimize the collective nature
of the educational process and to make all academic resources available without
copyright restrictions, while devising ways to secure the proper attribution and the
just remuneration of everyone’s labor.
The introduction of the logic of property is hard to challenge as it is furthered by commercial academic publishers. Oligopolists, such as Elsevier, are not only notorious for
using copyright protections to extract usurious profits from the mostly free labor of
those who write, peer review, and edit academic journals, [22] but they are now developing all sorts of metadata, metrics, and workflow systems that are increasingly becoming central for teaching and research. In addition to their publishing business, Elsevier
has expanded its ‘research intelligence’ offering, which now encompasses a whole
range of digital services, including the Scopus citation database; Mendeley reference
manager; the research performance analytics tools SciVal and Research Metrics; the
centralized research management system Pure; the institutional repository and publishing platform Bepress; and, last but not least, grant discovery and funding flow tools
Funding Institutional and Elsevier Funding Solutions. Given how central digital services
are becoming in today’s universities, whoever owns these platforms is the university.

22. Vincent Larivière, Stefanie Haustein, and Philippe Mongeon, ‘The Oligopoly of Academic Publishers in the Digital Era’, PLoS ONE 10.6 (10 June 2015), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502/.
Third, the migration online of the academic syllabus falls into larger efforts by universities to ‘disrupt’ the educational system through digital technologies. The introduction
of virtual learning environments has led to lesson plans, slides, notes, and syllabi becoming items to be deposited with the institution. The doors of public higher education are being opened to commercial qualification providers by means of the rise in
metrics-based management, digital platforming of university services, and transformation of students into consumers empowered to make ‘real-time’ decisions on how to
spend their student debt. [23] Such neoliberalization masquerading behind digitization
is nowhere more evident than in the hype that was generated around Massive Open
Online Courses (MOOCs), exactly at the height of the last economic crisis.
MOOCs developed gradually from the Massachusetts Institute of Technology’s (MIT) initial experiments with opening up its teaching materials to the public through the OpenCourseWare project in 2001. By 2011, MOOCs were saluted as a full-on democratization of access to ‘Ivy-League-caliber education [for] the world’s poor.’ [24] And yet, their
promise quickly deflated following extremely low completion rates (as low as 5%). [25]
Despite believing that in fifty years there would be no more than 10 institutions globally delivering
higher education, [26] by the end of 2013 Sebastian Thrun (Google’s celebrated roboticist
who in 2012 founded the for-profit MOOC platform Udacity) had to admit that Udacity
offered a ‘lousy product’ that proved to be a total failure with ‘students from difficult
neighborhoods, without good access to computers, and with all kinds of challenges in
their lives.’ [27] Critic Aaron Bady has thus rightfully argued that:
[MOOCs] demonstrate what the technology is not good at: accreditation and mass
education. The MOOC rewards self-directed learners who have the resources and
privilege that allow them to pursue learning for its own sake [...] MOOCs are also a
really poor way to make educational resources available to underserved and underprivileged communities, which has been the historical mission of public education.28
Indeed, the ‘historical mission of public education’ was always and remains to this
day highly contested terrain—the very idea of a public good being under attack by
dominant managerial techniques that try to redefine it, driving what Randy Martin
aptly called the ‘financialization of daily life.’29 The failure of MOOCs finally points to a
broader question, also impacting the vicissitudes of #Syllabus: Where will actual study
practices find refuge in the social, once the social is made directly productive for capital at all times? Where will study actually ‘take place’, in the literal sense of the phrase,
claiming the resources that it needs for co-creation in terms of time, labor, and love?

22 Vincent Larivière, Stefanie Haustein, and Philippe Mongeon, ‘The Oligopoly of Academic Publishers in the Digital Era’, PLoS ONE 10.6 (10 June 2015), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502.
23 Ben Williamson, ‘Number Crunching: Transforming Higher Education into “Performance Data”’, Medium, 16 August 2018, https://medium.com/ussbriefs/number-crunching-transforming-higher-education-into-performance-data-9c23debc4cf7.
24 Max Chafkin, ‘Udacity’s Sebastian Thrun, Godfather Of Free Online Education, Changes Course’, FastCompany, 14 November 2013, https://www.fastcompany.com/3021473/udacity-sebastian-thrun-uphill-climb/.
25 ‘The Rise (and Fall?) Of the MOOC’, Oxbridge Essays, 14 November 2017, https://www.oxbridgeessays.com/blog/rise-fall-mooc/.
26 Steven Leckart, ‘The Stanford Education Experiment Could Change Higher Learning Forever’, Wired, 20 March 2012, https://www.wired.com/2012/03/ff_aiclass/.
27 Chafkin, ‘Udacity’s Sebastian Thrun’.
28 Aaron Bady, ‘The MOOC Moment and the End of Reform’, Liberal Education 99.4 (Fall 2013), https://www.aacu.org/publications-research/periodicals/mooc-moment-and-end-reform.
29 Randy Martin, Financialization Of Daily Life, Philadelphia: Temple University Press, 2002.
Learning from #Syllabus
What have we learned from the #Syllabus phenomenon?
The syllabus is the manifesto of the 21st century.
Political struggles against structural discrimination, oppression, and violence in the
present are continuing the legacy of critical pedagogies of earlier social movements
that coupled the process of political subjectivation with that of collective education.
By creating effective pedagogical tools, movements have brought educators and students into the fold of their struggles. In the context of our new network environment,
political struggles have produced a new media object: #Syllabus, a crowdsourced list
of resources—historic and present—relevant to a cause. By doing so, these struggles
adapt, resist, and live in and against the networks dominated by techno-capital, with
all of the difficulties and contradictions that entails.
What have we learned from the academic syllabus migrating online?
In the contemporary university, critical pedagogy is clashing head-on with the digitization of higher education. Education that should empower and research that should
emancipate are increasingly left out in the cold due to the data-driven marketization
of academia, short-cutting the goals of teaching and research to satisfy the fluctuating demands of the labor market and financial speculation. Resistance against the capture of data, research workflows, and scholarship by means of digitization is a key
struggle for the future of mass intellectuality beyond exclusions of class, disability,
gender, and race.
What have we learned from #Syllabus as a media object?
As old formats transform into new media objects, the digital network environment defines the conditions in which these new media objects try to adjust, resist, and live. The
right intuition can intervene and change the landscape—not necessarily for the good,
particularly if the imperatives of capital accumulation and social control prevail. We
thus need to re-appropriate the process of production and distribution of #Syllabus
as a media object in its totality. We need to build tools to collectively control the workflows that are becoming the infrastructures on top of which we collaboratively produce
knowledge that is vital for us to adjust, resist, and live. In order to successfully intervene in the world, every aspect of production and distribution of these new media objects becomes relevant. Every single aspect counts. The order of items in a list counts.
The timestamp of every version of the list counts. The name of every contributor to

29 Randy Martin, Financialization Of Daily Life, Philadelphia: Temple University Press, 2002.

ACTIONS

127

every version of the list counts. Furthermore, the workflow to keep track of all of these
aspects is another complex media object—a software tool of its own—with its own order and its own versions. It is a recursive process of creating an autonomous ecology.
#Syllabus can be conceived as a recursive process of versioning lists, pointing to textual, audiovisual, or other resources. With all of the linked resources publicly accessible to all; with all versions of the lists editable by all; with all of the edits attributable to
their contributors; with all versions, all linked resources, all attributions preservable by
all, just such an autonomous ecology can be made for #Syllabus. In fact, Sean Dockray, Benjamin Forster, and Public Office have already proposed such a methodology in
their HyperReadings, a forkable readme.md plaintext document on GitHub. They write:
A text that by its nature points to other texts, the syllabus is already a relational
document acknowledging its own position within a living field of knowledge. It is
decidedly not self-contained, however it often circulates as if it were.
If a syllabus circulated as a HyperReadings document, then it could point directly to the texts and other media that it aggregates. But just as easily as it circulates, a HyperReadings syllabus could be forked into new versions: the syllabus
is changed because there is a new essay out, or because of a political disagreement, or because following the syllabus produced new suggestions. These forks
become a family tree where one can follow branches and trace epistemological
mutations.30
It is in line with this vision, which we share with the HyperReadings crew, and in line
with our analysis, that we, as amateur librarians, activists, and educators, make our
promise beyond the limits of this text.
The workflow that we are bootstrapping here will keep in mind every aspect of the media object syllabus (order, timestamp, contributor, version changes), allowing diversity
via forking and branching, and making sure that every reference listed in a syllabus
resolves to an entry in a catalog that leads to the actual material, in digital form,
needed for the syllabus.
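To make this concrete, here is a minimal sketch (in Python, and purely illustrative: the names SyllabusVersion, revise, and history are our own hypothetical shorthand, not an existing tool) of a syllabus as a recursive process of versioning lists, in which the order of items, the timestamp of every version, the name of every contributor, and the family tree of forks are all preserved:

```python
# A hypothetical sketch, not the actual workflow: #Syllabus as a recursive
# process of versioning lists. Every version records its ordered items, its
# contributor, its timestamp, and the version it was forked from.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List, Optional, Sequence, Tuple

@dataclass(frozen=True)
class SyllabusVersion:
    items: Tuple[str, ...]                      # ordered references; the order of items counts
    contributor: str                            # the name of every contributor counts
    timestamp: str                              # the timestamp of every version counts
    parent: Optional["SyllabusVersion"] = None  # the version this one was forked from

def revise(parent: Optional[SyllabusVersion], items: Sequence[str],
           contributor: str) -> SyllabusVersion:
    """Create a new version (or fork) that remembers who changed the list, and when."""
    return SyllabusVersion(
        items=tuple(items),
        contributor=contributor,
        timestamp=datetime.now(timezone.utc).isoformat(),
        parent=parent,
    )

def history(version: SyllabusVersion) -> List[SyllabusVersion]:
    """Walk back through the family tree of forks, tracing its mutations."""
    trail: List[SyllabusVersion] = []
    current: Optional[SyllabusVersion] = version
    while current is not None:
        trail.append(current)
        current = current.parent
    return trail

# Two forks branching off the same parent version:
v1 = revise(None, ["Harney & Moten, The Undercommons"], "amateur librarian")
v2a = revise(v1, v1.items + ("Federici & Austin, Wages for Housework",), "contributor A")
v2b = revise(v1, ("Martin, Financialization of Daily Life",) + v1.items, "contributor B")
assert len(history(v2a)) == len(history(v2b)) == 2
```

In a real workflow the same properties fall out of a plaintext list kept under a version control system such as git, which is what makes a forkable readme.md like HyperReadings a plausible carrier for them.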
Against the enclosures of copyright, we will continue building shadow libraries and
archives of struggles, providing access to resources needed for the collective processes of education.
Against the corporate platforming of workflows and metadata, we will work with social
movements, political initiatives, educators, and researchers to aggregate, annotate,
version, and preserve lists of resources.
Against the extractivism of academia, we will take care of the material conditions that
are needed for such collective thinking to take place, both on- and offline.

30 Sean Dockray, Benjamin Forster, and Public Office, ‘README.md’, HyperReadings, 15 February 2018, https://samiz-dat.github.io/hyperreadings/.


Bibliography
Bady, Aaron. ‘The MOOC Moment and the End of Reform’, Liberal Education 99.4 (Fall 2013), https://
www.aacu.org/publications-research/periodicals/mooc-moment-and-end-reform/.
Bailey, Lucy E. ‘The “Other” Syllabus: Rendering Teaching Politics Visible in the Graduate Pedagogy
Seminar’, Feminist Teacher 20.2 (2010): 139–56.
Chafkin, Max. ‘Udacity’s Sebastian Thrun, Godfather Of Free Online Education, Changes Course’,
FastCompany, 14 November 2013, https://www.fastcompany.com/3021473/udacity-sebastian-thrun-uphill-climb/.
Chatelain, Marcia. ‘How to Teach Kids About What’s Happening in Ferguson’, The Atlantic, 25 August
2014, https://www.theatlantic.com/education/archive/2014/08/how-to-teach-kids-about-whats-happening-in-ferguson/379049/.
_____. ‘Teaching the #FergusonSyllabus’, Dissent Magazine, 28 November 2014, https://www.dissentmagazine.org/blog/teaching-ferguson-syllabus/.
Ciolkowski, Laura. ‘Rape Culture Syllabus’, Public Books, 15 October 2016, https://www.publicbooks.
org/rape-culture-syllabus/.
Connolly, N.D.B. and Keisha N. Blain. ‘Trump Syllabus 2.0’, Public Books, 28 June 2016, https://www.
publicbooks.org/trump-syllabus-2-0/.
Dockray, Sean, Benjamin Forster, and Public Office. ‘README.md’, HyperReadings, 15 February 2018,
https://samiz-dat.github.io/hyperreadings/.
Federici, Silvia, and Arlen Austin (eds). The New York Wages for Housework Committee 1972-1977: History, Theory, Documents, New York: Autonomedia, 2017.
Harney, Stefano, and Fred Moten. The Undercommons: Fugitive Planning & Black Study, New York: Autonomedia, 2013.
Heidebrink-Bruno, Adam. ‘Syllabus as Manifesto: A Critical Approach to Classroom Culture’, Hybrid
Pedagogy, 28 August 2014, http://hybridpedagogy.org/syllabus-manifesto-critical-approach-classroom-culture/.
Jenks, Angela. ‘It’s In The Syllabus’, Teaching Tools, Cultural Anthropology website, 30 June 2016,
https://culanth.org/fieldsights/910-it-s-in-the-syllabus/.
Larivière, Vincent, Stefanie Haustein, and Philippe Mongeon. ‘The Oligopoly of Academic Publishers in the Digital Era’, PLoS ONE 10.6 (10 June 2015), https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0127502.
Leckart, Steven. ‘The Stanford Education Experiment Could Change Higher Learning Forever’, Wired,
20 March 2012, https://www.wired.com/2012/03/ff_aiclass/.
Martin, Randy. Financialization Of Daily Life, Philadelphia: Temple University Press, 2002.
Perlstein, Daniel. ‘Teaching Freedom: SNCC and the Creation of the Mississippi Freedom Schools’,
History of Education Quarterly 30.3 (Autumn 1990).
Roberts, Frank Leon. ‘Black Lives Matter: Race, Resistance, and Populist Protest’, 2016, http://www.
blacklivesmattersyllabus.com/fall2016/.
‘#StandingRockSyllabus’, NYC Stands with Standing Rock, 11 October 2016, https://nycstandswithstandingrock.wordpress.com/standingrocksyllabus/.
Shirky, Clay. ‘Ontology Is Overrated: Categories, Links, and Tags’, 2005, http://shirky.com/writings/
herecomeseverybody/ontology_overrated.html.
‘The Rise (and Fall?) Of the MOOC’, Oxbridge Essays, 14 November 2017, https://www.oxbridgeessays.com/blog/rise-fall-mooc/.
‘TNI Syllabus: Gaming and Feminism’, The New Inquiry, 2 September 2014, https://thenewinquiry.com/
tni-syllabus-gaming-and-feminism/.
‘Trump 101’, The Chronicle of Higher Education, 19 June 2016, https://www.chronicle.com/article/
Trump-Syllabus/236824/.
‘Wages for Housework: Pamphlets – Flyers – Photographs,’ MayDay Rooms, http://maydayrooms.org/
archives/wages-for-housework/wfhw-pamphlets-flyers-photographs/.
Williamson, Ben. ‘Number Crunching: Transforming Higher Education into “Performance Data”’,
Medium, 16 August 2018, https://medium.com/ussbriefs/number-crunching-transforming-higher-education-into-performance-data-9c23debc4cf7.


Hamerman
Pirate Libraries and the Fight for Open Information
2015


SEPTEMBER 11TH, 2015 | A BI-WEEKLY WEBPAPER | ISSUE 61

PIRATE LIBRARIES and the fight for open information
by _Sarah Hamerman_

In a digital era that destabilizes traditional notions of intellectual
property, cultural producers must rethink information access.

Over the last several years, a number of _pirate libraries_ have done just
that. Collaboratively run digital libraries such as
[_Aaaaaarg_](http://aaaaarg.fail/),
_[Monoskop](http://www.monoskop.org/Monoskop)_, _[Public
Library](https://www.memoryoftheworld.org/)_, and
_[UbuWeb](http://www.ubuweb.tv/)_ have emerged, offering access to humanities
texts and audiovisual resources that are technically ‘pirated’ and often hard
to find elsewhere.

Though these sites differ somewhat in content, architecture, and ideological
bent, all of them disavow intellectual property law to varying degrees,
offering up pirated books and media with the aim of advancing information
access.

“Information wants to be free” has served as a catchphrase of recent internet
activism, a call for information democracy led by media, library, and
information advocates.

As online information access is increasingly embedded within the networks of
capital, the digital text-sharing underground actualizes the Internet’s
potential to build a true information commons.

With such projects, the archive becomes a record of collective power, not
corporate or state power; the digital book becomes unlocked, linkable, and
shareable.

Still, these sites comprise but a small subset of the networks of peer-to-peer
file sharing. Many legal battles have been waged over the explosion of audiovisual file
sharing through p2p services such as Napster, BitTorrent and MediaFire. At its
peak, Napster boasted over 80 million users; the p2p music-sharing service was
shut down after a high-profile lawsuit by the RIAA in 2001.

The US Department of Justice brought charges against open access activist
_[Aaron Swartz](http://www.fvckthemedia.com/issue51/editorial)_ in 2011 for
his large-scale unauthorized downloading of files from the JSTOR academic
database. Swartz, who sadly committed suicide before his trial, was an
organizer for Demand Progress, which campaigned against the Stop Online Piracy
Act (SOPA), defeated in 2012. Swartz’s actions and the fight around SOPA
represent a milestone in the struggle for open-access and anti-copyright
practices surrounding the digital book.

Aaaaaarg, Monoskop, UbuWeb and Public Library are representative cases of the
pirate library because of their explicit engagement with archival form, their
embrace of ideas of the _[digital commons](https://en.wikipedia.org/wiki/Digital_Commons)_ within current left-leaning thought, and their like-minded focus on critical theory and the arts.

All of these projects lend themselves to being considered _as libraries_,
retooled for open digital networks.

_Aaaaaarg.org_, started by Los Angeles-based artist Sean Dockray, hosts
full-text PDFs of over 50,000 books and articles. The library is connected to an
alternative education project called the Public School, which serves as a
platform for self-organizing lectures, workshops and projects in cities across
the globe. _Aaaaaarg_ ’s catalog is viewable by the public, but
upload/download privileges are restricted through an invite system, thus
circumventing copyright law.

![](http://i.imgur.com/rbdvPIG.png)

The site is divided into a “Library,” in which users can search for texts by
author; “Collections,” or user-generated groupings of texts designed for
reading groups or research interests; and “Discussions,” a message board where
participants can request texts and volunteer for working groups. Most
recently, _Aaaaaarg_ has introduced a “compiler” tool that allows readers to
select excerpts from longer texts and assemble them into new PDFs, and a
reading tool that allows readers to save reference points and insert comments
into texts. Though the library is easily searchable, it doesn’t maintain
high-quality _[metadata](https://en.wikipedia.org/wiki/Metadata)_. Dockray and
other organizers intend to preserve a certain subjective and informal quality,
focusing more on discussion and collaboration than correct preservation and
classification practice.

_Aaaaaarg_ has been threatened with takedowns a few times, but has survived by
creating mirrored sites and reconstituting itself by varying the number of A’s
in the URL. Its shifts in location, organization, and capabilities reflect
both the decentralized, ad-hoc nature of its maintenance and the organizers’
attempts to elude copyright regulations. Text-sharing sites such as _Aaaaaarg_
have also been referred to as _[shadow
libraries](http://supercommunity.e-flux.com/texts/sharing-instinct/)_,
reflecting their quasi-covert status and their efforts to evade shutdown.

Monoskop.org, a project founded by media artist _[Dušan
Barok](http://monoskop.org/Du%C5%A1an_Barok)_ , is a wiki for collaborative
studies of art, media and the humanities that was born in 2004 out of Barok’s
study of media art and related cultural practices. Its significant holdings -
about 3,000 full-length texts and many more excerpts, links and citations -
include avant-garde and modernist magazines, writings on sound art, scanned
illustrations, and media theory texts.

As a wiki, any user can edit any article or upload content, and see their
changes reflected immediately. Monoskop comprises two sister sites: the
Monoskop wiki and Monoskop Log, the accompanying text repository. Monoskop Log
is structured as a WordPress site with links hosted on third-party sites, much
like the rare-music download blogs that became popular in the mid-2000s.
Though this architecture is relatively unstable, links are fixed on-demand and
site mirroring and redundancy balance out some of the instability.

Monoskop makes clear that it is offering content under the fair-use doctrine
and that this content is for personal and scholarly use, not commercial use.
Barok notes that though there have been a small number of takedowns, people
generally appreciate unrestricted access to the types of materials in Monoskop
Log, whether they are authors or publishers.

_Public Library_ , a somewhat newer pirate library founded by Croatian
Internet activist and researcher Marcell Mars and his collaborators, currently
offers a collection of about 6,300 texts. The project frames itself through a
utopian philosophy of building a truly universal library, radically extending
enlightenment-era conceptions of democracy. Through democratizing the _tools
of librarianship_ – book scanning, classification systems, cataloging, information
management – it promises a broader, de-institutionalized public library.

In _[Public Library: An
Essay](https://www.memoryoftheworld.org/blog/2014/10/27/public-library-an-essay/#sdendnote19sym)_, Public Library’s organizers frame p2p libraries as
“fragile knowledge infrastructures built and maintained by brave librarians
practicing civil disobedience which the world of researchers in the humanities
rely on.” This civil disobedience is a politically motivated refutation of
intellectual property law and the orientation of information networks toward
venture capital and advertising. While the pirate libraries fulfill this
dissident function as a kind of experimental provocation, their content is
audience-specific rather than universal.

_[UbuWeb](http://www.ubuweb.com/resources/index.html)_, founded in 1996 by
conceptual artist/writer Kenneth Goldsmith, is the largest online archive of
avant-garde art resources. Its holdings include sound, video and text-based
works dating from the historical avant-garde era to today. While many of the
sites in the “pirate library” continuum source their content through
community-based or peer-to-peer models, UbuWeb focuses on making available
out-of-print, obscure, or difficult-to-access artistic media, stating that
uploading such historical artifacts doesn’t detract from the physical value of
the work; rather, it enhances it. The website’s philosophy blends the utopian
ideals of avant-garde concrete poetry with the ideals of the digital gift
economy, and it has specifically refused to accept corporate or foundation
funding or adopt a more market-oriented business model.

![](http://i.imgur.com/pHdiL9S.png)

**Pirate Libraries vs. “The Sharing Economy”**

In pirate libraries, information users become archive builders by uploading
often-copyrighted content to shared networks.

Within the so-called “ _[sharing
economy](https://en.wikipedia.org/wiki/Sharing_economy)_ ,” users essentially
lease e-book content from information corporations such as Amazon, which
markets the Kindle as both device and platform. This centralization of intellectual
property has dire impacts on the openness of the digital book as a
collaborative knowledge-sharing device.

In contrast, the pirate library actualizes a gift economy based on qualitative
and communal rather than monetized exchange. As McKenzie Wark writes in _A
Hacker Manifesto_ (2004), “The gift is marginal, but nevertheless plays a
vital role in cementing reciprocal and communal relations among people who
otherwise can only confront each other as buyers and sellers of commodities.”

From theorizing new media art to building solidarity against repressive
regimes, such communal information networks can crucially articulate shared
bodies of political and aesthetic desire and meaning. According to author
Matthew Stadler, literature is by nature communal. “Literature is not owned,”
he writes. “It is, by definition, a space of mutually negotiated meanings that
never closes or concludes, a space that thrives on — indeed requires — open
access and sharing.”

In a roundtable discussion published in _New Formations_ , _Aaaaaarg_ founder
Sean Dockray remarks that the site “actively explored and exploited the
affordances of asynchronous, networked communication,” functioning upon the
logic of the hack. Dockray continues: “But all of this is rather commonplace
for what’s called ‘piracy,’ isn’t it?” Pirate librarianship can be thought of
as a practice of civil disobedience within the stringent information
environment of today.

These projects promise both the realization and destruction of the public
library. They promote information democracy while calling the _professional_
institution of the Library into question, allowing amateurs to upload,
catalog, lend and maintain collections. In _Public Library: An Essay_ , Public
Library’s organizers _[write](https://www.memoryoftheworld.org/blog/2014/10/27
/public-library-an-essay/)_ : “With the emergence of the internet…
librarianship has been given an opportunity… to include thousands of amateur
librarians who will, together with the experts, build a distributed peer-to-peer network to care for the catalog of available knowledge.”

Public Library frames amateur librarianship as a free, collaboratively
maintained and democratic activity, drawing upon the language of the French
Revolution and extending it for the 21st century. While these practices are
democratic in form, they are not necessarily democratic in the populist sense;
rather, they focus on bringing high theoretical discourses to people outside
the academy. Accordingly, they attract a modest but engaged audience of
critics, artists, designers, activists, and scholars.

The activities of Aaaaaarg and Public Library may fall closer to ‘ _[peer
preservation](http://computationalculture.net/article/book-piracy-as-peer-preservation)_ ’
than ‘peer production,’ as the desires to share information
widely and to preserve these collections against shutdown often come into
conflict. In a _[recent piece](http://supercommunity.e-flux.com/texts/sharing-instinct/)_ for e-flux coauthored with Lawrence Liang, Dockray accordingly
laments “the unfortunate fact that digital shadow libraries have to operate
somewhat below the radar: it introduces a precariousness that doesn’t allow
imagination to really expand, as it becomes stuck on techniques of evasion,
distribution, and redundancy.”

![](http://i.imgur.com/KFe3chu.png)

UbuWeb and Monoskop, which digitize rare, out-of-print art texts and media
rather than in-print titles, can be said to fulfill the aims of preservation
and access. UbuWeb and Monoskop are openly used and discussed as classroom
resources and in online arts journalism more frequently than the more
aggressively anti-copyright sources; more on-the-record and mainstream
visibility likely -- but doesn’t necessarily -- equate to wider usage.

**From Alternative Space to Alternative Media**

Aaaaaarg _[locates itself as a
‘scaffolding’](http://chtodelat.org/b9-texts-2/vilensky/materialities-of-independent-publishing-a-conversation-with-aaaaarg-chto-delat-i-cite-mute-and-neural/)_ between institutions, a platform that unfolds between institutional
gaps and fills them in, rather than directly opposing them. Over ten years
after it was founded, it continues to provide a community for “niche”
varieties of political critique.

Drawing upon different strains of ‘alternative networking,’ the digital
text-sharing underground gives a voice to those quieted by the mechanisms of
institutional archives, publishing, and galleries. On the one hand, pirate
libraries extend the logic of alternative art spaces/artist-run spaces that
challenge the “white cube” and the art market; instead, they showcase ways of
making that are often ephemeral, performative, and anti-commercial.

Lawrence Liang refers to projects such as Aaaaaarg as “ _[ludic
libraries](http://supercommunity.e-flux.com/texts/sharing-instinct/)_ ,” as
they encourage a sense of intellectual play that deviates from well-
established norms of utility, seriousness, purpose, and property.

Just as alternative, community-oriented art spaces promote “fringe” art forms,
the pirate libraries build upon open digital architectures to promote “fringe”
scholarship, art, technological and archival practices. Though the comparison
between physical architecture and virtual architecture is a metaphor here, the
impact upon creative communities runs parallel.

At the same time, the digital text-sharing underground builds upon Robert W.
McChesney’s calls in _Digital Disconnect_ for a democratic media system that
promotes the expansion of public, student and community journalism. A truly
heterogeneous media system, for McChesney, would promote a multiplicity of
opinions, supplementing for-profit mass media with a substantial and varied
portion of nonprofit and independent media.

In order to create a political system – and a media system – that reflects
multiple interests, rather than the supposedly neutral status quo, we must
support truly free, not-for-profit alternatives to corporate journalism and
“clickbait” media designed to lure traffic for advertisers. We must support
creative platforms that encourage blending high-academic language with pop
culture; quantitative analysis with art-making; appropriation with
authenticity: the pirate libraries serve just these purposes.

Pirate libraries help bring about what Gary Hall calls the “unbound book” as
text-form; as he writes, we can perceive such a digital book “as liquid and
living, open to being continually updated and collaboratively written, edited,
annotated, critiqued, updated, shared, supplemented, revised, re-ordered,
reiterated and reimagined.” These projects allow us to re-imagine both
archival practices and the digital book for social networks based on the gift.

Aaaaaarg, Monoskop, UbuWeb, and Public Library build a record of critical and
artistic discourse that is held in common, user-responsive and networkable.
Amateur librarians sustain these projects through technological ‘hacks’ that
innovate upon present archival tools and push digital preservation practices
forward.

Pirate libraries critique the ivory tower’s monopoly over the digital book.
They posit a space where alternative communities can flourish.

Between the cracks of the new information capital, the digital text-sharing
underground fosters the coming-into-being of another kind of information
society, one in which the historical record is the democratically-shared basis
for new forms of knowledge.

From this we should take away the understanding that _piracy is normal_ and
the public domain it builds is abundant. While these practices will continue
just beneath the official surface of the information economy, it is high time
for us to demand that our legal structures catch up.


Kelty, Bodo & Allen
Guerrilla Open Access
2018


Guerrilla Open Access

Edited by Memory of the World

Christopher Kelty
Balazs Bodo
Laurie Allen

Published by Post Office Press,
Rope Press and Memory of the
World. Coventry, 2018.
© Memory of the World, papers by
respective Authors.
Freely available at:
http://radicaloa.co.uk/
conferences/ROA2
This is an open access pamphlet,
licensed under a Creative
Commons Attribution-ShareAlike
4.0 International (CC BY-SA 4.0)
license.
Read more about the license at:
https://creativecommons.org/
licenses/by-sa/4.0/
Figures and other media included
with this pamphlet may be under
different copyright restrictions.
Design by: Mihai Toma, Nick White
and Sean Worley
Printed by: Rope Press,
Birmingham

This pamphlet is published in a series
of 7 as part of the Radical Open
Access II – The Ethics of Care
conference, which took place June
26-27 at Coventry University. More
information about this conference
and about the contributors to this
pamphlet can be found at:
http://radicaloa.co.uk/conferences/
ROA2
This pamphlet was made possible due
to generous funding from the arts
and humanities research studio, The
Post Office, a project of Coventry
University’s Centre for Postdigital
Cultures and due to the combined
efforts of authors, editors, designers
and printers.

Table of Contents

Guerrilla Open Access: Terms Of Struggle
Memory of the World

Recursive Publics and Open Access
Christopher Kelty

Own Nothing
Balazs Bodo

What if We Aren't the Only Guerrillas Out There?
Laurie Allen

Guerrilla Open Access: Terms Of Struggle

In the 1990s, the Internet offered a horizon from which to imagine what society
could become, promising autonomy and self-organization alongside redistribution of
wealth and collectivized means of production. While the former was in line with the
dominant ideology of freedom, the latter ran contrary to the expanding enclosures
in capitalist globalization. This antagonism has led to epochal copyfights, where free
software and piracy kept the promise of radical commoning alive.
Free software, as Christopher Kelty writes in this pamphlet, provided a model ‘of a
shared, collective, process of making software, hardware and infrastructures that
cannot be appropriated by others’. Well into the 2000s, it served as an inspiration
for global free culture and open access movements who were speculating that
distributed infrastructures of knowledge production could be built, as the Internet
was, on top of free software.
For a moment, the hybrid world of ad-financed Internet giants—sharing code,
advocating open standards and interoperability—and users empowered by these
services convinced almost everyone that a new reading/writing culture was
possible. Not long after the crash of 2008, these disruptors, now wary monopolists,
began to ingest smaller disruptors and close off their platforms. There was still
free software somewhere underneath, but without the ‘original sense of shared,
collective, process’. So, as Kelty suggests, it was hard to imagine that for-profit
academic publishers wouldn't try the same with open access.
Heeding Aaron Swartz’s call to civil disobedience, Guerrilla Open Access has
emerged out of the outrage over digitally-enabled enclosure of knowledge that
has allowed these for-profit academic publishers to appropriate extreme profits
that stand in stark contrast to the cuts, precarity, student debt and asymmetries
of access in education. Shadow libraries stood in for the access denied to public
libraries, drastically reducing global asymmetries in the process.


This radicalization of access has changed how publications
travel across time and space. Digital archiving, cataloging and
sharing is transforming what we once considered as private
libraries. Amateur librarianship is becoming public shadow
librarianship. Hybrid use, as poetically unpacked in Balazs
Bodo's reflection on his own personal library, is now entangling
print and digital in novel ways. And, as he warns, the terrain
of antagonism is shifting. While for-profit publishers are
seemingly conceding to Guerrilla Open Access, they are
opening new territories: platforms centralizing data, metrics
and workflows, subsuming academic autonomy into new
processes of value extraction.
The 2010s brought us hope and then the realization of how little
digital networks could help revolutionary movements. The
redistribution toward the wealthy, assisted by digitization, has
eroded institutions of solidarity. The embrace of privilege—
marked by misogyny, racism and xenophobia—that this has catalyzed
is nowhere more evident than in the climate denialism of the
Trump administration. Guerrilla archiving of US government
climate change datasets, as recounted by Laurie Allen,
indicates that more technological innovation simply won't do
away with the 'post-truth' and that our institutions might be in
need of revision, replacement and repair.
As the contributions to this pamphlet indicate, the terms
of struggle have shifted: not only do we have to continue
defending our shadow libraries, but we need to take back the
autonomy of knowledge production and rebuild institutional
grounds of solidarity.

Memory of the World
http://memoryoftheworld.org


Recursive Publics and Open Access

Christopher Kelty

Ten years ago, I published a book called Two Bits: The Cultural Significance of Free
Software (Kelty 2008).1 Duke University Press and my editor Ken Wissoker were
enthusiastically accommodating of my demands to make the book freely and openly
available. They also played along with my desire to release the 'source code' of the
book (i.e. HTML files of the chapters), and to compare the data on readers of the
open version to print customers. It was a moment of exploration for both scholarly
presses and for me. At the time, few authors were doing this other than Yochai Benkler
(2007) and Cory Doctorow,2 both activists and advocates for free software and open
access (OA), much as I have been. We all shared, I think, a certain fanaticism of the
convert that came from recognizing free software as an historically new and radically
different mode of organizing economic and political activity. Two Bits gave me a way
to talk not only about free software, but about OA and the politics of the university
(Kelty et al. 2008; Kelty 2014). Ten years later, I admit to a certain pessimism at the
way things have turned out. The promise of free software has foundered, though not
disappeared, and the question of what it means to achieve the goals of OA has been
swamped by concerns about costs, arcane details of repositories and versioning, and
ritual offerings to the metrics God.
When I wrote Two Bits, it was obvious to me that the collectives who built free
software were essential to the very structure and operation of a standardized
Internet. Today, free software and 'open source' refer to dramatically different
constellations of practice and people. Free software gathers around itself those
committed to the original sense of a shared, collective, process of making software,
hardware and infrastructures that cannot be appropriated by others. In political
terms, I have always identified free software with a very specific, updated version
of classical Millian liberalism. It sustains a belief in the capacity for collective action
and rational thought as aids to establishing a flourishing human livelihood. Yet it
also preserves an outdated blind faith in the automatic functioning of meritorious
speech, that the best ideas will inevitably rise to the top. It is an updated classical
liberalism that saw in software and networks a new place to resist the tyranny of the
conventional and the taken for granted.


By contrast, open source has come to mean something quite different: an ecosystem
controlled by an oligopoly of firms which maintains a shared pool of components and
frameworks that lower the costs of education, training, and software creation in the
service of establishing winner-take-all platforms. These are built on open source, but
they do not carry the principles of freedom or openness all the way through to the
platforms themselves.3 What open source has become is now almost the opposite of
free software—it is authoritarian, plutocratic, and nepotistic, everything liberalism
wanted to resist. For example, precarious labor and platforms such as Uber or Task
Rabbit are built upon and rely on the fruits of the labor of 'open source', but the
platforms that result do not follow the same principles—they are not open or free
in any meaningful sense—to say nothing of the Uber drivers or task rabbits who live
by the platforms.
Does OA face the same problem? In part, my desire to 'free the source' of my book
grew out of the unfinished business of digitizing the scholarly record. It is an irony
that much of the work that went into designing the Internet at its outset in the
1980s, such as gopher, WAIS, and the HTML of CERN, was conducted in the name
of the digital transformation of the library. But by 2007, these aims were swamped
by attempts to transform the Internet into a giant factory of data extraction. Even
in 2006-7 it was clear that this unfinished business of digitizing the scholarly record
was going to become a problem—both because it was being overshadowed by other
concerns, and because of the danger it would eventually be subjected to the very
platformization underway in other realms.
Because if the platform capitalism of today has ended up being parasitic on the
free software that enabled it, then why would this not also be true of scholarship
more generally? Are we not witnessing a transition to a world where scholarship
is directed—in its very content and organization—towards the profitability of the
platforms that ostensibly serve it?4 Is it not possible that the platforms created to
'serve science'—Elsevier's increasing acquisition of tools to control the entire lifecycle of research, or ResearchGate's ambition to become the single source for all
academics to network and share research—that these platforms might actually end up
warping the very content of scholarly production in the service of their profitability?
To put this even more clearly: OA has come to exist and scholarship is more available
and more widely distributed than ever before. But, scholars now have less control,
and have taken less responsibility for the means of production of scientific research,
its circulation, and perhaps even the content of that science.


The Method of Modulation
When I wrote Two Bits I organized the argument around the idea of modulation:
free software is simply one assemblage of technologies, practices, and people
aimed at resolving certain problems regarding the relationship between knowledge
(or software tools related to knowledge) and power (Hacking 2004; Rabinow
2003). Free software as such was and still is changing as each of its elements
evolve or are recombined. Because OA derives some of its practices directly from
free software, it is possible to observe how these different elements have been
worked over in the recent past, as well as how new and surprising elements are
combined with OA to transform it. Looking back on the elements I identified as
central to free software, one can ask: how is OA different, and what new elements
are modulating it into something possibly unrecognizable?

Sharing source code
Shareable source code was a concrete and necessary achievement for free
software to be possible. Similarly, the necessary ability to circulate digital texts
is a significant achievement—but such texts are shareable in a much different way.
For source code, computable streams of text are everything—anything else is a
'blob' like an image, a video or any binary file. But scholarly texts are blobs: Word or
Portable Document Format (PDF) files. What's more, while software programmers
may love 'source code', academics generally hate it—anything less than the final,
typeset version is considered unfinished (see e.g. the endless disputes over
'author's final versions' plaguing OA).5 Finality is important. Modifiability of a text,
especially in the humanities and social sciences, is acceptable only when it is an
experiment of some kind.
In a sense, the source code of science is not a code at all, but a more abstract set
of relations between concepts, theories, tools, methods, and the disciplines and
networks of people who operate with them, critique them, extend them and try to
maintain control over them even as they are shared within these communities.

Defining openness
For free software to make sense as a solution, those involved first had to
characterize the problem it solved—and they did so by identifying a pathology in
the worlds of corporate capitalism and engineering in the 1980s: that computer
corporations were closed organizations who re-invented basic tools and
infrastructures in a race to dominate a market. An 'open system,' by contrast, would
avoid the waste of 'reinventing the wheel' and of pathological
competition, allowing instead modular, reusable parts that
could be modified and recombined to build better things in an
upward spiral of innovation. The 1980s ideas of modularity,
modifiability, abstraction barriers, interchangeable units
have been essential to the creation of digital infrastructures.
To propose an 'open science' thus modulates this definition—
and the idea works in some sciences better than others.
Aside from the obviously different commercial contexts,
philosophers and literary theorists just don't think about
openness this way—theories and arguments may be used
as building blocks, but they are not modular in quite the
same way. Only the free circulation of the work, whether
for recombination or for reference and critique, remains a
sine qua non of the theory of openness proposed there. It
is opposed to a system where it is explicit that only certain
people have access to the texts (whether that be through
limitations of secrecy, or limitations on intellectual property,
or an implicit elitism).

Writing and using copyright licenses
Of all the components of free software that I analyzed, this
is the one practice that remains the least transformed—OA
texts use the same CC licenses pioneered in 2001, which
were a direct descendant of free software licenses.
A novel modulation of these licenses is the OA policies (the
embrace of OA in Brazil for instance, or the spread of OA
Policies starting with Harvard and the University of California,
and extending to the EU Mandate from 2008 forward). Today
the ability to control the circulation of a text with IP rights is
far less economically central to the strategies of publishers
than it was in 2007, even if they persist in attempting to do
so. At the same time, funders, states, and universities have all
adopted patchwork policies intended to both sustain green
OA, and push publishers to innovate their own business
models in gold and hybrid OA. While green OA is a significant
success on paper, the actual use of it to circulate work pales
in comparison to the commercial control of circulation on the
one hand, and the increasing success of shadow libraries on
the other. Repositories have sprung up in every shape and
form, but they remain largely ad hoc, poorly coordinated, and
underfunded solutions to the problem of OA.

Coordinating collaborations
The collective activity of free software is ultimately the
most significant of its achievements—marrying a form of
intensive small-scale interaction amongst programmers,
with sophisticated software for managing complex objects
(version control and GitHub-like sites). There has been
constant innovation in these tools for controlling, measuring,
testing, and maintaining software.
By contrast, the collective activity of scholarship is still
largely a pre-modern affair. It is coordinated largely by the
idea of 'writing an article together' and not by working
to maintain some larger map of what a research topic,
community, or discipline has explored—what has worked and
what has not.
This focus on the coordination of collaboration seemed to
me to be one of the key advantages of free software, but it
has turned out to be almost totally absent from the practice
or discussion of OA. Collaboration and the recombination of
elements of scholarly practice obviously happens, but it does
not depend on OA in any systematic way: there is only the
counterfactual that without it, many different kinds of people
are excluded from collaboration or even simple participation
in scholarship, something that most active scholars are
willfully ignorant of.

Fomenting a movement
I demoted the idea of a social movement to merely one
component of the success of free software, rather than let
it be—as most social scientists would have it—the principal
container for free software. They are not the whole story.


Is there an OA movement? Yes and no. Librarians remain
the most activist and organized. The handful of academics
who care about it have shifted to caring about it in primarily
a bureaucratic sense, forsaking the cross-organizational
aspects of a movement in favor of activism within universities
(to which I plead guilty). But this transformation forsakes
the need for addressing the collective, collaborative
responsibility for scholarship in favor of letting individual
academics, departments, and disciplines be the focus for
such debates.
By contrast, the publishing industry works with a
phantasmatic idea of both an OA 'movement' and of the actual
practices of scholarship—they too defer, in speech if not in
practice, to the academics themselves, but at the same time
must create tools, innovate processes, establish procedures,
acquire tools and companies and so on in an effort to capture
these phantasms and to prevent academics from collectively
doing so on their own.
And what new components? The five above were central to
free software, but OA has other components that are arguably
more important to its organization and transformation.

Money, i.e. library budgets
Central to almost all of the politics and debates about OA
is the political economy of publication. From the 'bundles'
debates of the 1990s to the gold/green debates of the 2010s,
the sole source of money for publication long ago shifted into
the library budget. The relationship that library budgets
have to other parts of the political economy of research
(funding for research itself, debates about tenured/non-tenured, adjunct and other temporary salary structures) has
shifted as a result of the demand for OA, leading libraries
to re-conceptualize themselves as potential publishers, and
publishers to re-conceptualize themselves as serving 'life
cycles' or 'pipeline' of research, not just its dissemination.


Metrics
More than anything, OA is promoted as a way to continue
to feed the metrics God. OA means more citations, more
easily computable data, and more visible uses and re-uses of
publications (as well as 'open data' itself, when conceived of
as product and not measure). The innovations in the world
of metrics—from the quiet expansion of the platforms of the
publishers, to the invention of 'alt metrics', to the enthusiasm
of 'open science' for metrics-driven scientific methods—forms
a core feature of what 'OA' is today, in a way that was not true
of free software before it, where metrics concerning users,
downloads, commits, or lines of code were always after-the-fact measures of quality, and not constitutive ones.
Other components of this sort might be proposed, but the
main point is to resist clutching OA as if it were the beating
heart of a social transformation in science, as if it were a
thing that must exist, rather than a configuration of elements
at a moment in time. OA was a solution—but it is too easy to
lose sight of the problem.
Open Access without Recursive Publics
When we no longer have any commons, but only platforms,
will we still have knowledge as we know it? This is a question
at the heart of research in the philosophy and sociology
of knowledge—not just a concern for activism or social
movements. If knowledge is socially produced and maintained,
then the nature of the social bond surely matters to the
nature of that knowledge. This is not so different from asking
whether we will still have labor or work, as we have long known
it, in an age of precarity. What is the knowledge equivalent of
precarity (i.e. not just the existence of precarious knowledge
workers, but a kind of precarious knowledge as such)?

Do we not already see the evidence of this in the 'post-truth'
of fake news, or the deliberate refusal by those in
power to countenance evidence, truth, or established
systems of argument and debate? The relationship between
knowledge and power is shifting dramatically, because the costs—and the stakes—
of producing high quality, authoritative knowledge have also shifted. It is not so
powerful any longer; science does not speak truth to power because truth is no
longer so obviously important to power.
Although this is a pessimistic portrait, it may also be a sign of something yet to
come. Free software as a community has been, and still sometimes is, critiqued as
being an exclusionary space of white male sociality (Nafus 2012; Massanari 2016;
Ford and Wajcman 2017; Reagle 2013). I think this critique is true, but it is less a
problem of identity than it is a pathology of a certain form of liberalism: a form that
demands that merit consists only in the content of the things we say (whether in
a political argument, a scientific paper, or a piece of code), and not in the ways we
say them, or who is encouraged to say them and who is encouraged to remain silent
(Dunbar-Hester 2014).
One might, as a result, choose to throw out liberalism altogether as a broken
philosophy of governance and liberation. But it might also be an opportunity to
focus much more specifically on a particular problem of liberalism, one that the
discourse of OA also relies on to a large extent. Perhaps it is not the case that
merit derives solely from the content of utterances freely and openly circulated,
but also from the ways in which they are uttered, and the dignity of the people
who utter them. An OA (or a free software) that embraced that principle would
demand that we pay attention to different problems: how are our platforms,
infrastructures, tools organized and built to support not just the circulation of
putatively true statements, but the ability to say them in situated and particular
ways, with respect for the dignity of who is saying them, and with the freedom to
explore the limits of that kind of liberalism, should we be so lucky to achieve it.

References

1 https://twobits.net/download/index.html
2 https://craphound.com/
3 For example, Platform Cooperativism: https://platform.coop/directory
4 See for example the figure from 'Rent Seeking by Elsevier,' by Alejandro Posada and George Chen (http://knowledgegap.org/index.php/sub-projects/rent-seeking-and-financialization-of-the-academic-publishing-industry-preliminary-findings/)
5 See Sherpa/Romeo: http://www.sherpa.ac.uk/romeo/index.php

Benkler, Yochai. 2007. The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press.
Dunbar-Hester, Christina. 2014. Low Power to the People: Pirates, Protest, and Politics in FM Radio Activism. MIT Press.
Ford, Heather, and Judy Wajcman. 2017. “‘Anyone Can Edit’, Not Everyone Does: Wikipedia’s Infrastructure and the Gender Gap”. Social Studies of Science 47 (4): 511–527. doi:10.1177/0306312717692172.
Hacking, I. 2004. Historical Ontology. Harvard University Press.
Kelty, Christopher M. 2014. “Beyond Copyright and Technology: What Open Access Can Tell Us About Precarity, Authority, Innovation, and Automation in the University Today”. Cultural Anthropology 29 (2): 203–215. doi:10.14506/ca29.2.02.
———. 2008. Two Bits: The Cultural Significance of Free Software. Durham, N.C.: Duke University Press.
Kelty, Christopher M., et al. 2008. “Anthropology In/of Circulation: a Discussion”. Cultural Anthropology 23 (3).
Massanari, Adrienne. 2016. “#gamergate and the Fappening: How Reddit’s Algorithm, Governance, and Culture Support Toxic Technocultures”. New Media & Society 19 (3): 329–346. doi:10.1177/1461444815608807.
Nafus, Dawn. 2012. “‘Patches don’t have gender’: What is not open in open source software”. New Media & Society 14 (4): 669–683. doi:10.1177/1461444811422887.
Rabinow, Paul. 2003. Anthropos Today: Reflections on Modern Equipment. Princeton University Press.
Reagle, Joseph. 2013. “‘Free As in Sexist?’ Free Culture and the Gender Gap”. First Monday 18 (1). doi:10.5210/fm.v18i1.4291.


Own Nothing

Balazs Bodo

Flow My Tears
My tears cut deep grooves into the dust on my face. Drip, drip,
drop, they hit the floor and disappear among the torn pages
scattered on the floor.
This year it dawned on us that we cannot postpone it any longer:
our personal library has to go. Our family moved countries
more than half a decade ago; we switched cultures, languages,
and chose another future. But the past, in the form of a few
thousand books in our personal library, was still neatly stacked
in our old apartment, patiently waiting, books that we bought
and enjoyed — and forgot; books that we bought and never
opened; books that we inherited from long-dead parents and
half-forgotten friends. Some of them were important. Others
were relevant at one point but no longer, yet they still reminded
us who we once were.
When we moved, we took no more than two suitcases of personal
belongings. The books were left behind. The library was like
a sick child or an ailing parent: it hung over our heads like an
unspoken threat, a curse. It was clear that sooner or later
something had to be done about it, but none of the options
available offered any consolation. It made no sense to move
three thousand books to the other side of this continent. We
decided to emigrate, and not to take our past with us, to abandon
the contexts we were fleeing from. We made a choice to leave
behind the history, the discourses, the problems and the pain
that accumulated in the books of our library. I knew exactly
what it was I didn’t want to teach to my children once we moved.
So we did not move the books. We pretended that we would
never have to think about what this decision really meant. Up
until today. This year we needed to empty the study with the
shelves. So I’m standing in our library now, the dust covering
my face, my hands, my clothes. In the middle of the floor there
are three big crates and one small box. The small box swallows
what we’ll ultimately take with us, the books I want to show to
my son when he gets older, in case he still wants to read. One of
the big crates will be taken away by the antiquarian. The other
will be given to the school library next door. The third is the
wastebasket, where everything else will ultimately go.

Drip, drip, drip, my tears flow as I throw the books into this
last crate, drip, drip, drop. Sometimes I look at my partner,
working next to me, and I can see on her face that she is going
through the same emotions. I sometimes catch the sight of
her trembling hand, hesitating for a split second where a book
should ultimately go, whether we could, whether we should
save that particular one, because… But we either save them all
or we are as ruthless as all those millions of people throughout
history, who had an hour to pack their two suitcases before they
needed to leave. Do we truly need this book? Is this a book we’ll
want to read? Is this book an inseparable part of our identity?
Did we miss this book at all in the last five years? Is this a text
I want to preserve for the future, for potential grandchildren
who may not speak my mother tongue at all? What is the function
of the book? What is the function of this particular book in my
life? Why am I hesitating throwing it out? Why should I hesitate
at all? Drop, drop, drop, a decision has been made. Drop, drop,
drop, books are falling to the bottom of the crates.
We are killers, gutting our library. We are like the half-drowned
sailor who got entangled in the ropes and went down with the
ship, and who now frantically tries to cut himself free from the
detritus that prevents him from reaching the freedom of the surface,
the sunlight and the air.


Own Nothing, Have Everything
Do you remember Napster’s slogan after it went legit, trying to transform itself into
a legal music service around 2005? ‘Own nothing, have everything’ – that was the
headline that was supposed to sell legal streaming music. How stupid, I thought. How
could you possibly think that lack of ownership would be a good selling point? What
does it even mean to ‘have everything’ without ownership? And why on earth would
not everyone want to own the most important constituents of their own self, their
own identity? The things I read, the things I sing, make me who I am. Why wouldn’t I
want to own these things?
How revolutionary this idea had been, I reflected, as I watched the local homeless folks
filling up their sacks with the remains of my library. How happy I would be if I could
have all this stuff I had just thrown away without actually having to own any of it. The
proliferation of digital texts led me to believe that we won’t be needing dead wood
libraries at all, at least no more than we need vinyl to listen to, or collect music. There
might be geeks, collectors, specialists, who for one reason or another still prefer the
physical form to the digital, but for the rest of us convenience, price, searchability, and
all the other digital goodies give enough reason not to collect stuff that collects dust.
For a long time I could see only the advantages of a fully digital book future. What I see
now is the emergence of a strange and shape-shifting hybrid of diverse physical and
electronic objects and practices, where the relative strengths and weaknesses of these
different formats nicely complement each other.
This dawned on me after we had moved into an apartment without a bookshelf. I grew
up in a flat that housed my parents’ extensive book collection. I knew the books by their
cover and from time to time something made me want to take it from the shelf, open
it and read it. This is how I discovered many of my favorite books and writers. With
the e-reader, and some of the best shadow libraries at hand, I felt the same at first. I
felt liberated. I could experiment without cost or risk, I could start—or stop—a book,
I didn’t have to consider the cost of buying and storing a book that was ultimately
not meant for me. I could enjoy the books without having to carry the burden and
responsibility of ownership.

Did you notice how deleting an epub file gives you a different feeling than throwing
out a book? You don’t have to feel guilty, you don’t have to feel anything at all.
So I was reading, reading, reading like never before. But at that time my son was too
young to read, so I didn’t have to think about him, or anyone else besides myself. But
as he was growing, it slowly dawned on me: without these physical books how will I be
able to give him the same chance of serendipity, and of discovery, enchantment, and
immersion that I got in my father’s library? And even later, what will I give him as his
heritage? Son, look into this folder of PDFs: this is my legacy, your heritage, explore,
enjoy, take pride in it?
Collections of anything, whether they are art, books, objects, people, are inseparable
from the person who assembled that collection, and when that person is gone, the
collection dies, as does the most important inroad to it: the will that created this
particular order of things has passed away. But the heavy and unavoidable physicality
of a book collection forces all those left behind to make an effort to approach, to
force their way into, and try to navigate that garden of forking paths that is someone
else’s library. Even if you ultimately get rid of everything, you have to introduce
yourself to every book, and let every book introduce itself to you, so you know what
you’re throwing out. Even if you’ll ultimately kill, you will need to look into the eyes of
all your victims.
With a digital collection that’s, of course, not the case.

I was wrong to think that. I now realize that the future is not fully digital, it is more
a physical-digital hybrid, in which the printed book is not simply an endangered
species protected by a few devoted eccentrics who refuse to embrace the obvious

The e-book is ephemeral. It has little past and even less chance to preserve the
fingerprints of its owners over time. It is impersonal, efficient, fast, abundant, like

18

Own Nothing

Balazs Bodo

19

fast food or plastic, it flows through the hand like sand. It lacks the embodiment, the
materiality which would give it a life in a temporal dimension. If you want to network the
dead and the unborn, as is the ambition of every book, then you need to print and bind,
and create heavy objects that are expensive, inefficient and a burden. This burden,
residing in the object, is the bridge that creates the intergenerational dimension,
that forces you to think of the value of a book.
Own nothing, have nothing. Own everything, and your children will hate you when
you die.
I have to say, I’m struggling to find a new balance here. I started to buy books again,
usually books that I’d already read from a stolen copy on-screen. I know what I want
to buy, I know what is worth preserving. I know what I want to show to my son, what
I want to pass on, what I would like to take care of over time. Before, book buying for
me was an investment in a stranger. Now that thrill is gone forever. I measure up
the merchandise well beforehand, I build an intimate relationship, we make love again
and again, before moving in together.
It is certainly a new kind of relationship with the books I bought since I got my e-reader.
I still have to come to terms with the fact that the books I bought this way are rarely
opened, as I already know them, and their role is not to be read, but to be together.
What do I buy, and what do I get? Temporal, existential security? The chance of
serendipity, if not for me, then for the people around me? The reassuring materiality
of the intimacy I built with these texts through another medium?
All of these and maybe more. But in any case, I sense that this library, the physical
embodiment of a physical-electronic hybrid collection with its unopened books and
overflowing e-reader memory cards, is very different from the library I had, and the
library I’m getting rid of at this very moment. The library that I inherited, the library
that grew organically from the detritus of the everyday, the library that accumulated
books similar to how the books accumulated dust, as is the natural way of things, this
library was full of unknowns, it was a library of potentiality, of opportunities, of trips
waiting to happen. This new, hybrid library is a collection of things that I’m familiar with.
I intimately know every piece, they hold little surprise, they offer few discoveries — at
least for me. The exploration, the discovery, the serendipity, the pre-screening takes
place on the e-reader, among the ephemeral, disposable PDFs and epubs.

We Won
This new hybrid model is based on the cheap availability of digital books. In my case, the
free availability of pirated copies available through shadow libraries. These libraries
don’t have everything on offer, but they have books in numbers an order of magnitude
larger than I’ll ever have the time or chance to read, so they offer enough, enough for me
to fill up hard drives with books I want to read, or at least skim, to try, to taste. As if I
moved into an infinite bookstore or library, where I can be as promiscuous, explorative,
nomadic as I always wanted to be. I can flirt with books, I can have a quickie, or I can
leave them behind without shedding a single tear.
I don’t know how this hybrid library, and this analogue-digital hybrid practice of reading
and collecting would work without the shadow libraries which make everything freely
accessible. I rely on their supply to test texts, and feed and grow my print library.
E-books are cheaper than their print versions, but they still cost money, carry a
risk, a cost of experimentation. Book-streaming, the flat-rate, all-you-can-eat
format of accessing books, is at the moment available for audiobooks, but rarely
for e-books. I wonder why.
Did you notice that there are no major book piracy lawsuits?


Of course there is the lawsuit against Sci-Hub and Library Genesis in New York, and
there is another one in Canada against aaaaarg, causing a major nuisance to those
who have been named in these cases. But this is almost negligible compared to the
high-profile wars the music and audiovisual industries waged against Napster,
Grokster, Kazaa, Megaupload and their likes. It is as if book publishers have
completely given up on
trying to fight piracy in the courts, and have launched a few lawsuits only to maintain
the appearance that they still care about their digital copyrights. I wonder why.
I know the academic publishing industry slightly better than the mainstream popular
fiction market, and I have the feeling that in the former copyright-based business
models are slowly being replaced by something else. We see no major anti-piracy
efforts from publishers, not because piracy is non-existent — on the contrary, it is
global, and it is big — but because the publishers most probably realized that in the
long run the copyright-based exclusivity model is unsustainable. The copyright wars
of the last two decades taught them that law cannot put an end to piracy. As the
Sci-Hub case demonstrates, you can win all you want in a New York court, but this
has little real-world effect as long as the conditions that attract the users to the
shadow libraries remain.
Exclusivity-based publishing business models are under assault from other sides as
well. Mandated open access in the US and in the EU means that there is a quickly
growing body of new research for access to which publishers cannot charge
money anymore. LibGen and Sci-Hub make it harder to charge for the back catalogue.
Their sheer existence teaches millions what uncurtailed open access really is, and
makes it easier for university libraries to negotiate with publishers, as they don’t have
to worry about their patrons being left without any access at all.
The good news is that radical open access may well be happening. It is a less and less
radical idea to have things freely accessible. One has to be less and less radical to
achieve the openness that has been long overdue. Maybe it is not yet obvious today
and the victory is not yet universal, maybe it’ll take some extra years, maybe it won’t
ever be evenly distributed, but it is obvious that this genie, these millions of books on
everything from malaria treatments to critical theory, cannot be erased, and open
access will not be undone, and the future will be free of access barriers.

Who is downloading books and articles? Everyone. Radical open access? We won,
if you like. Have everything, and own a few.

We Are Not Winning at All
But did we really win? If publishers are happy to let go of access control and copyright,
it means that they’ve found something that is even more profitable than selling
back to us academics the content that we have produced. And this more profitable
something is of course data. Did you notice where all the investment in academic
publishing went in the last decade? Did you notice SSRN, Mendeley, Academia.edu,
ScienceDirect, research platforms, citation software, manuscript repositories, library
systems being bought up by the academic publishing industry? All these platforms
and technologies operate on and support open access content, while they generate
data on the creation, distribution, and use of knowledge; on individuals, researchers,
students, and faculty; on institutions, departments, and programs. They produce data
on the performance, on the success and the failure of the whole domain of research
and education. This is the data that is being privatized, enclosed, packaged, and sold
back to us.

Drip, drip, drop, it’s only nostalgia. My heart is light, as I don’t have to worry about
gutting the library. Soon it won’t matter at all.

Taylorism has reached academia. In the name of efficiency, austerity, and transparency,
our daily activities are measured, profiled, packaged, and sold to the highest bidder.
But in this process of quantification, knowledge about ourselves is lost to us, unless we
pay. We still have some patchy datasets on what we do, on who we are, we still have
this blurred reflection in the data-mirrors that we still do control. But this path of
self-enlightenment is quickly waning as fewer and fewer data sources about us are
freely available to us.




I strongly believe that information on the self is the foundation
of self-determination. We need to have data on how we operate,
on what we do in order to know who we are. This is what is being
privatized away from the academic community, this is being
taken away from us.
Radical open access. Not of content, but of the data about
ourselves. This is the next challenge. We will digitize every page,
by hand if we must, that process cannot be stopped anymore.
No outside power can stop it and take that from us. Drip, drip,
drop, this is what I console myself with, as another handful of
books lands among the waste.
But the data we lose now will not be so easy to reclaim.


What if We Aren't the Only Guerrillas Out There?

Laurie Allen

My goal in this paper is to tell the story
of a grass-roots project called Data
Refuge (http://www.datarefuge.org)
that I helped to co-found shortly after,
and in response to, the Trump election
in the USA. Trump’s reputation as
anti-science, and the promise that his
administration would elevate people into
positions of power with a track record
of distorting, hiding, or obscuring the
scientific evidence of climate change,
caused widespread concern that
valuable federal data was now in danger.
The Data Refuge project grew from the
work of Professor Bethany Wiggin and
the graduate students within the Penn
Program in Environmental Humanities
(PPEH), notably Patricia Kim, and was
formed in collaboration with the Penn
Libraries, where I work. In this paper, I
will discuss the Data Refuge project, and
call attention to a few of the challenges
inherent in the effort, especially as
they overlap with the goals of this
collective. I am not a scholar. Instead,
I am a librarian, and my perspective as
a practicing information professional
informs the way I approach this paper,
which weaves together the practical
and technical work of ‘saving data’ with
the theoretical, systemic, and ethical
issues that frame and inform what we
have done.

I work as the head of a relatively small and new department within the libraries
of the University of Pennsylvania, in the city of Philadelphia, Pennsylvania, in the
US. I was hired to lead the Digital Scholarship department in the spring of 2016,
and most of the seven (soon to be eight) people within Digital Scholarship have joined
the library since then in newly created positions. Our group includes a mapping
and spatial data librarian and three people focused explicitly on supporting the
creation of new Digital Humanities scholarship. There are also two people in the
department who provide services connected with digital scholarly open access
publishing, including the maintenance of the Penn Libraries’ repository of open
access scholarship, and one Data Curation and Management Librarian. This
Data Librarian, Margaret Janz, started working with us in September 2016, and
features heavily in the story I’m about to tell about our work helping to build Data
Refuge. While Margaret and I were the main people in our department involved in
the project, it is useful to understand the work we did as connected more broadly
to the intersection of activities—from multimodal, digital, humanities creation to
open access publishing across disciplines—represented in our department in Penn.
At the start of Data Refuge, Professor Wiggin and her students had already been
exploring the ways that data about the environment can empower communities
through their art, activism, and research, especially along the lower Schuylkill
River in Philadelphia. They were especially attuned to the ways that missing data,
or data that is not collected or communicated, can be a source of disempowerment.
After the Trump election, PPEH graduate students raised the concern that the
political commitments of the new administration would result in the disappearance
of environmental and climate data that is vital to work in cities and communities
around the world. When they raised this concern with the library, together we
co-founded Data Refuge. It is worth pointing out that, while the Penn Libraries is a
large and relatively well-resourced research library in the United States, it did not
have any automatic way to ingest and steward the data that Professor Wiggin and
her students were concerned about. Our system of acquiring, storing, describing
and sharing publications did not account for, and could not easily handle, the
evident need to take in large quantities of public data from the open web and make
them available and citable by future scholars. Indeed, no large research library
was positioned to respond to this problem in a systematic way, though there was
general agreement that the community would like to help.
The collaborative, grass-roots movement that formed Data Refuge included many
librarians, archivists, and information professionals, but it was clear from the
beginning that my own profession did not have in place a system for stewarding
these vital information resources, or for treating them as ‘publications’ of the

26

Laurie Allen

What if We Aren't the Only Guerrillas Out There?

27

federal government. This fact was widely understood by various members of our
profession, notably by government document librarians, who had been calling
attention to this lack of infrastructure for years. As Government Information
Librarian Shari Laster described in a blog post in November of 2016, government
documents librarians have often felt like they are ‘under siege’ not from political
forces, but from the inattention to government documents afforded by our systems
and infrastructure. Describing the challenges facing the profession in light of the
2016 election, she commented: “Government documents collections in print are
being discarded, while few institutions are putting strategies in place for collecting
government information in digital formats. These strategies are not expanding in
tandem with the explosive proliferation of these sources, and certainly not in pace
with the changing demands for access from public users, researchers, students,
and more.” (Laster 2016) Beyond government documents librarians, our project
joined efforts that were ongoing in a huge range of communities, including: open
data and open science activists; archival experts working on methods of preserving
born-digital content; cultural historians; federal data producers and the archivists
and data scientists they work with; and, of course, scientists.

Born from the collaboration between Environmental Humanists and Librarians,
Data Refuge was always an effort both at storytelling and at storing data. During
the first six months of 2017, volunteers across the US (and elsewhere) organized
more than 50 Data Rescue events, with participants numbering in the thousands.
At each event, a group of volunteers used tools created by our collaborators at
the Environmental and Data Governance Initiative (EDGI) (https://envirodatagov.
org/) to support the End of Term Harvest (http://eotarchive.cdlib.org/) project
by identifying seeds from federal websites for web archiving in the Internet
Archive. Simultaneously, more technically advanced volunteers wrote scripts to
pull data out of complex data systems, and packaged that data for longer term
storage in a repository we maintained at datarefuge.org. Still other volunteers
held teach-ins, built profiles of data storytellers, and otherwise engaged in
safeguarding environmental and climate data through community action (see
http://www.ppehlab.org/datarefugepaths). The repository at datarefuge.org that
houses the more difficult data sources has been stewarded by myself and Margaret
Janz through our work at Penn Libraries, but it exists outside the library’s main
technical infrastructure.1

This distributed approach to the work of downloading and saving the data
encouraged people to see how they were invested in environmental and scientific
data, and to consider how our government records should be considered the
property of all of us. Attending Data Rescue events was a way for people who value
the scientific record to fight back, in a concrete way, against
an anti-fact establishment. By downloading data and moving
it into the Internet Archive and the Data Refuge repository,
volunteers were actively claiming the importance of accurate
records in maintaining or creating a just society.

Of course, access to data need not rely on its inclusion in
a particular repository. As is demonstrated so well in other
contexts, technological methods of sharing files can make
the digital repositories of libraries and archives seem like a
redundant holdover from the past. However, as I will argue
further in this paper, the data that was at risk in Data Refuge
differed in important ways from the contents of what Bodó
refers to as ‘shadow libraries’ (Bodó 2015). For opening
access to copies of journal articles, shadow libraries work
perfectly. However, the value of these shadow libraries relies
on the existence of widely agreed upon trusted versions.
If in doubt about whether a copy is trustworthy, scholars
can turn to more mainstream copies, if necessary. This was
not the situation we faced building Data Refuge. Instead, we
were often dealing with the sole public, authoritative copy
of a federal dataset and had to assume that, if it were taken
down, there would be no way to check the authenticity of
other copies. The data was not easily pulled out of systems,
as the data and the software that contained them were often
inextricably linked. We were dealing with unique, tremendously
valuable, but often difficult-to-untangle datasets rather than
neatly packaged publications. The workflow we established
was designed to privilege authenticity and trustworthiness
over either the speed of the copying or the easy usability of
the resulting data.2 This extra care around authenticity was
necessary because of the politicized nature of environmental
data that made many people so worried about its removal
after the election. It was important that our project
supported the strongest possible scientific arguments that
could be made with the data we were ‘saving’. That meant
that our copies of the data needed to be citable in scientific
scholarly papers, and that those citations needed to be
able to withstand hostile political forces who claim that the
science of human-caused climate change is ‘uncertain’. It
was easy to imagine in the Autumn of 2016, and even easier
to imagine now, that hostile actors might wish to muddy the
science of climate change by releasing fake data designed
to cast doubt on the science of climate change. For that
reason, I believe that the unique facts we were seeking
to safeguard in the Data Refuge bear less similarity to the
contents of shadow libraries than they do to news reports
in our current distributed and destabilized mass media
environment. Referring to the ease of publishing ideas on the
open web, Zeynep Tufekci wrote in a recent column, “And
sure, it is a golden age of free speech—if you can believe your
lying eyes. Is that footage you’re watching real? Was it really
filmed where and when it says it was? Is it being shared by
alt-right trolls or a swarm of Russian bots? Was it maybe even
generated with the help of artificial intelligence? (Yes, there
are systems that can create increasingly convincing fake
videos.)” (Tufekci 2018). This was the state we were trying to
avoid when it comes to scientific data, fearing that we might
have the only copy of a given dataset without solid proof that
our copy matched the original.
If US federal websites cease functioning as reliable stewards
of trustworthy scientific data, reproducing their data
without a new model of quality control risks producing the
very censorship that our efforts are supposed to avoid,
and further undermining faith in science. Said another way,
if volunteers duplicated federal data all over the Internet
without a trusted system for ensuring the authenticity of
that data, then as soon as the originals were removed, a sea of
fake copies could easily render the original invisible, and they
would be just as effectively censored. “The most effective
forms of censorship today involve meddling with trust and
attention, not muzzling speech itself.” (Tufekci 2018).
These concerns about the risks of open access to data should
not be understood as capitulation to the current market-driven
approach to scholarly publishing, nor as a call for
continuation of the status quo. Instead, I hope to encourage
continuation of the creative approaches to scholarship
represented in this collective. I also hope the issues raised in


Data Refuge will serve as a call to take greater responsibility for the systems into
which scholarship flows and the structures of power and assumptions of trust (by
whom, of whom) that scholarship relies on.
While plenty of participants in the Data Refuge community posited scalable
technological approaches to help people trust data, none emerged that was
strong enough to justify the risk of further undermining faith in science that
a malicious attack might cause. Instead of focusing on technical solutions that rely on the existing
systems staying roughly as they are, I would like to focus on developing networks
that explore different models of trust in institutions, and that honor the values
of marginalized and indigenous people. For example, in a recent paper, Stacie
Williams and Jarrett Drake describe the detailed decisions they made to establish
and become deserving of trust in supporting the creation of an Archive of Police
Violence in Cleveland (Williams and Drake 2017). The work of Michelle Caswell and
her collaborators on exploring post-custodial archives, and on engaging in radical
empathy in the archives provide great models of the kind of work that I believe is
necessary to establish new models of trust that might help inform new modes of
sharing and relying on community information (Caswell and Cifor 2016).
Beyond seeking new ways to build trust, it has become clear that new methods
are needed to help filter and contextualize publications. Our current reliance
on a few for-profit companies to filter and rank what we see of the information
landscape has proved to be tremendously harmful for the dissemination of facts,
and has been especially dangerous to marginalized communities (Noble 2018).
While the world of scholarly humanities publishing is doing somewhat better than
open data or mass media, there is still a risk that without new forms of filtering and
establishing quality and trustworthiness, good ideas and important scholarship will
be lost in the rankings of search engines and the algorithms of social media. We
need new, large scale systems to help people filter and rank the information on the
open web. In our current situation, according to media theorist danah boyd, “[t]he
onus is on the public to interpret what they see. To self-investigate. Since we live
in a neoliberal society that prioritizes individual agency, we double down on media
literacy as the ‘solution’ to misinformation. It’s up to each of us as individuals to
decide for ourselves whether or not what we’re getting is true.” (boyd 2018)
In closing, I’ll return to the notion of Guerrilla warfare that brought this panel
together. While some of our collaborators and some in the press did use the term
‘Guerrilla archiving’ to describe the data rescue efforts (Currie and Paris 2017),
I generally did not. The work we did was indeed designed to take advantage of
tactics that allow a small number of actors to resist giant state power. However,

What if We Aren't the Only Guerrillas Out There?

31

if anything, the most direct target of these guerrilla actions in my mind was not
the Trump administration. Instead, the action was designed to prompt responses
by the institutions where many of us work and by communities of scholars and
activists who make up these institutions. It was designed to get as many people as
possible working to address the complex issues raised by the two interconnected
challenges that the Data Refuge project threw into relief. The first challenge,
of course, is the need for new scientific, artistic, scholarly and narrative ways of
contending with the reality of global, human-made climate change. And the second
challenge, as I’ve argued in this paper, is that our systems of establishing and
signaling trustworthiness, quality, reliability and stability of information are in dire
need of creative intervention as well. It is not just publishing but all of our systems
for discovering, sharing, acquiring, describing and storing that scholarship that
need support, maintenance, repair, and perhaps in some cases, replacement. And
this work will rely on scholars, as well as expert information practitioners from a
range of fields (Caswell 2016).

¹ At the time of this writing, we are working on un-packing and repackaging
the data within Data Refuge for eventual inclusion in various Research Library
Repositories.

² Ideally, of course, all federally produced datasets would be published in
neatly packaged and more easily preservable containers, along with enough
technical checks to ensure their validity (hashes, checksums, etc.), and each
agency would create a periodically published inventory of datasets. But the
situation we encountered with Data Refuge did not start us in anything like
that situation, despite the hugely successful and important work of the
employees who created and maintained data.gov. For a fuller view of this
workflow, see my talk at CSVConf 2017 (Allen 2017).
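
Footnote 2 above points to hashes and checksums as the kind of technical check
that makes a dataset’s fixity verifiable. As a minimal sketch of what such
fixity information involves (an illustration only, not the actual Data Refuge
workflow; the dataset directory name is hypothetical), the following Python
fragment writes a BagIt-style manifest of SHA-256 checksums that a later
custodian could recompute to confirm that a copy still matches the original
harvest.

```python
# Illustrative sketch, not the actual Data Refuge tooling: compute SHA-256
# fixity values for every file in a harvested dataset directory and write a
# BagIt-style manifest, so that any later copy can be checked against this one.
# Standard library only.
import hashlib
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so large datasets need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def write_manifest(dataset_dir: str, manifest_name: str = "manifest-sha256.txt") -> None:
    """Write 'checksum  relative/path' lines for every file under dataset_dir."""
    root = Path(dataset_dir)
    lines = [
        f"{sha256_of(p)}  {p.relative_to(root)}"
        for p in sorted(root.rglob("*"))
        if p.is_file() and p.name != manifest_name
    ]
    (root / manifest_name).write_text("\n".join(lines) + "\n", encoding="utf-8")


if __name__ == "__main__":
    # "epa_dataset" is a hypothetical example directory, not a real Data Refuge path.
    write_manifest("epa_dataset")
```

Verification is the mirror image of this step: recompute the hashes over the
copied files and compare them line by line with the manifest; any mismatch
flags a file whose authenticity can no longer be vouched for.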

Closing note: The workflow established and used at Data Rescue events was
designed to tackle this set of difficult issues, but needed refinement, and was retired
in mid-2017. The Data Refuge project continues, led by Professor Wiggin and her
colleagues and students at PPEH, who are “building a storybank to document
how data lives in the world – and how it connects people, places, and non-human
species.” (“DataRefuge” n.d.) In addition, the set of issues raised by Data Refuge
continue to inform my work and the work of many of our collaborators.


References
Allen, Laurie. 2017. “Contexts and Institutions.” Paper presented at csv,conf,v3, Portland,
Oregon, May 3rd 2017. Accessed May 20, 2018. https://youtu.be/V2gwi0CRYto.
Bodo, Balazs. 2015. “Libraries in the Post-Scarcity Era.” In Copyrighting Creativity:
Creative Values, Cultural Heritage Institutions and Systems of Intellectual Property,
edited by Helle Porsdam. Routledge.
boyd, danah. 2018. “You Think You Want Media Literacy… Do You?” Data & Society: Points.
March 9, 2018. https://points.datasociety.net/you-think-you-want-media-literacy-do-you-7cad6af18ec2.
Caswell, Michelle. 2016. “‘The Archive’ Is Not an Archives: On Acknowledging the
Intellectual Contributions of Archival Studies.” Reconstruction: Studies in
Contemporary Culture 16:1 (2016) (special issue “Archives on Fire”),
http://reconstruction.eserver.org/Issues/161/Caswell.shtml.
Caswell, Michelle, and Marika Cifor. 2016. “From Human Rights to Feminist Ethics: Radical
Empathy in the Archives.” Archivaria 82 (0): 23–43.
Currie, Morgan, and Britt Paris. 2017. “How the ‘Guerrilla Archivists’ Saved History – and
Are Doing It Again under Trump.” The Conversation (blog). February 21, 2017.
https://theconversation.com/how-the-guerrilla-archivists-saved-history-and-are-doing-it-again-under-trump-72346.
“DataRefuge.” n.d. PPEH Lab. Accessed May 21, 2018.
http://www.ppehlab.org/datarefuge/.
“DataRescue Paths.” n.d. PPEH Lab. Accessed May 20, 2018.
http://www.ppehlab.org/datarefugepaths/.
“End of Term Web Archive: U.S. Government Websites.” n.d. Accessed May 20, 2018.
http://eotarchive.cdlib.org/.
“Environmental Data and Governance Initiative.” n.d. EDGI. Accessed May 19, 2018.
https://envirodatagov.org/.
Laster, Shari. 2016. “After the Election: Libraries, Librarians, and the Government - Free
Government Information (FGI).” Free Government Information (FGI). November 23,
2016. https://freegovinfo.info/node/11451.
Noble, Safiya Umoja. 2018. Algorithms of Oppression: How Search Engines Reinforce
Racism. New York: NYU Press.
Tufekci, Zeynep. 2018. “It’s the (Democracy-Poisoning) Golden Age of Free Speech.”
WIRED. Accessed May 20, 2018.
https://www.wired.com/story/free-speech-issue-tech-turmoil-new-censorship/.
“Welcome - Data Refuge.” n.d. Accessed May 20, 2018. https://www.datarefuge.org/.
Williams, Stacie M, and Jarrett Drake. 2017. “Power to the People: Documenting Police
Violence in Cleveland.” Journal of Critical Library and Information Studies 1 (2).
https://doi.org/10.24242/jclis.v1i2.33.


Guerrilla Open Access


Liang
Shadow Libraries
2012


Journal #37 - September 2012

# Shadow Libraries

Over the last few monsoons I lived with the dread that the rain would
eventually find its ways through my leaky terrace roof and destroy my books.
Last August my fears came true when I woke up in the middle of the night to
see my room flooded and water leaking from the roof and through the walls.
Much of the night was spent rescuing the books and shifting them to a dry
room. While timing and speed were essential to the task at hand, they were also
the key hazards: navigating a slippery floor with books perched up to one’s
neck. At the end of the rescue mission, I sat alone, exhausted, amongst a
mountain of books, assessing the damage that had been done, but also having
found books I had forgotten or had not seen in years; books which I had
thought had been permanently borrowed by others or misplaced found their way
back as I set many aside in a kind of ritual of renewed commitment.


Sorting the badly damaged from the mildly wet, I could not help but think
about the fragile histories of books from the library of Alexandria to the
great Florence flood of 1966. It may have seemed presumptuous to move from the
precarity of one’s small library and collection to these larger events, but is
there any other way in which one experiences earth-shattering events if not
via a microcosmic filtering through one’s own experiences? I sent a distressed
email to a friend, Sandeep, a committed bibliophile and book collector with a
fantastic personal library, who had also been responsible for many of my new
acquisitions. He wrote back on August 17, and I quote an extract of the email:

> Dear Lawrence

>

> I hope your books are fine. I feel for you very deeply, since my nightmares
about the future all contain as a key image my books rotting away under a
steady drip of grey water. Where was this leak, in the old house or in the
new? I spent some time looking at the books themselves: many of them I greeted
like old friends. I see you have Lewis Hyde’s _Trickster Makes the World_ and
Edward Rice’s _Captain Sir Richard Francis Burton_ in the pile: both top-class
books. (Burton is a bit of an obsession with me. The man did and saw
everything there was to do and see, and thought about it all, and wrote it all
down in a massive pile of notes and manuscripts. He squirrelled a fraction of
his scholarship into the tremendous footnotes to the Thousand and One Nights,
but most of it he could not publish without scandalising the Victorians, and
then he died, and his widow made a bonfire in the backyard, and burnt
everything because she disapproved of these products of a lifetime’s labors,
and of a lifetime such as few have ever had, and no one can ever have again. I
almost hope there is a special hell for Isabel Burton to burn in.)

Moving from one’s personal pile to the burning of the work of one of the
greatest autodidacts of the nineteenth century and back, it was strangely
comforting to be reminded that libraries—the greatest of time machines
invented—were testimonies to both the grandeur and the fragility of
civilizations. Whenever I enter huge libraries it is with a tingling sense of
excitement normally reserved for horror movies, but at the same time this same
sense of awe is often accompanied by an almost debilitating sense of what it
means to encounter finitude as it is dwarfed by centuries of words and
scholarship. Yet strangely when I think of libraries it is rarely the New York
public library that comes to mind even as I wish that we could have similar
institutions in India. I think instead of much smaller collections—sometimes
of institutions but often just those of friends and acquaintances. I enjoy
browsing through people’s bookshelves, not just to discern their reading
preferences or to discover for myself unknown treasures, but also to take
delight in the local logic of their library, their spatial preferences and to
understand the order of things not as a global knowledge project but as a
personal, often quirky rationale.


Machine room for book transportation at the Library of Congress, early 20th
century.

Like romantic love, bibliophilia is perhaps shaped by one’s first love. The
first library that I knew intimately was a little six by eight foot shop
hidden in a by-lane off one of the busiest roads in Bangalore, Commercial
street. From its name to what it contained, Mecca stores could well have been
transported out of an Arabian Nights tale. One side of the store was lined
with plastic ware and kitchen utensils of every shape and size while the other
wall was piled with books, comics, and magazines. From my eight-year-old
perspective it seemed large enough to contain all the knowledge of the world.
I earned a weekly stipend packing noodles for an hour every day after school
in the home shop that my parents ran, which I used to either borrow or buy
second hand books from the store. I was usually done with them by Sunday and
would have them reread by Wednesday. The real anguish came in waiting from
Wednesday to Friday for the next set. After finally acquiring a small
collection of books and comics myself I decided—spurred on by a fatal
combination of entrepreneurial enthusiasm and a pedantic desire to educate
others—to start a small library myself. Packing my books into a small aluminum
case and armed with a makeshift ledger, I went from house to house convincing
children in the neighborhood to forgo twenty-five paisa in exchange for a book
or comic with an additional caveat that they were not to share them with any
of their friends. While the enterprise got off to a reasonable start it soon
met its end when I realized that despite my instructions, my friends were
generously sharing the comics after they were done with them, which thereby
ended my biblioempire ambitions.

Over the past few years the explosion of ebook readers and consequent rise in
the availability of pirated books have opened new worlds to my booklust.
[Library.nu](library.nu), which began as Gigapedia, suddenly made the idea of
the universal library seem like reality. By the time it shut down in February
2012 the library had close to a million books and over half a million active
users. Bibliophiles across the world were distraught when the site was shut
down, and if it were ever possible to experience what the burning of the
library of Alexandria must have felt like, it was that collective ache at
seeing the closure of [library.nu](library.nu).

What brings together something as monumental as the New York public library, a
collective enterprise like [library.nu](library.nu) and Mecca stores if not
the word library? As spaces they may have little in common but as virtual
spaces they speak as equals even if the scale of their imagination may differ.
All of them partake of their share in the world of logotopias. In an
exhibition designed to celebrate the place of the library in art, architecture
and imagination the curator Sascha Hastings coined the term logotopia to
designate “word places”—a happy coincidence of architecture and language.

There is however a risk of flattening the differences between these spaces by
classifying them all under a single utopian ideal of the library. Imagination
after all has a geography and physiology and requires our alertness to these
distinctions. Let’s think instead of an entire pantheon (both of spaces as well
as practices) that we can designate as shadow libraries (or shadow logotopias
if you like) which exist in the shadows cast by the long history of monumental
libraries. While they are often dwarfed by the idea of the library, like the
shadows cast by our bodies, sometimes these shadows surge ahead of the body.


The London Library after the Blitz, c. 1940.

At the heart of all libraries lies a myth—that of the burning of the library
of Alexandria. No one knows what the library of Alexandria looked like or
possesses an accurate list of its contents. What we have long known though is
a sense of loss. But a loss of what? Of all the forms of knowledge in the
world in a particular time. Because that was precisely what the library of
Alexandria sought to collect under its roofs. It is believed that in order to
succeed in assembling a universal library, King Ptolemy I wrote “to all the
sovereigns and governors on earth” begging them to send to him every kind of
book by every kind of author, “poets and prose-writers, rhetoricians and
sophists, doctors and soothsayers, historians, and all others too.” The king’s
scholars had calculated that five hundred thousand scrolls would be required
if they were to collect in Alexandria “all the books of all the peoples of the
world.”1

What was special about the Library of Alexandria was the fact that until then
the libraries of the ancient world were either private collections of an
individual or government storehouses where legal and literary documents were
kept for official reference. By imagining a space where the public could have
access to all the knowledge of the world, the library also expressed a new
idea of the human itself. While the library of Alexandria is rightfully
celebrated, what is often forgotten in the mourning of its demise is another
library—one that existed in the shadows of the grand library but whose
whereabouts ensured that it survived Caesar’s papyrus-destroying flames.

According to the Sicilian historian Diodorus Siculus, writing in the first
century BC, Alexandria boasted a second library, the so-called daughter
library, intended for the use of scholars not affiliated with the Museion. It
was situated in the south-western neighborhood of Alexandria, close to the
temple of Serapis, and was stocked with duplicate copies of the Museion
library’s holdings. This shadow library survived the fire that destroyed the
primary library of Alexandria but has since been eclipsed by the latter’s
myth.

Alberto Manguel says that if the library of Alexandria stood tall as an
expression of universal ambitions, there is another structure that haunts our
imagination: the tower of Babel. If the library attempted to conquer time, the
tower sought to vanquish space. He says “The Tower of Babel in space and the
Library of Alexandria in time are the twin symbols of these ambitions. In
their shadow, my small library is a reminder of both impossible yearnings—the
desire to contain all the tongues of Babel and the longing to possess all the
volumes of Alexandria.”2 Writing about the two failed projects Manguel adds
that when seen within the limiting frame of the real, the one exists only as
nebulous reality and the other as an unsuccessful if ambitious real estate
enterprise. But seen as myths, and in the imagination at night, the solidity
of both buildings for him is unimpeachable.3

The utopian ideal of the universal library was more than a question of built
up form or space or even the possibility of storing all of the knowledge of
the world; its real aspiration was in the illusion of order that it could
impose on a chaotic world where the lines drawn by a fine hairbrush
distinguished the world of animals from men, fairies from ghosts, science from
magic, and Europe from Japan. In some cases even after the physical structure
that housed the books had crumbled and the books had been reduced to dust the
ideal remained in the form of the order imagined for the library. One such
residual evidence comes to us by way of the _Pandectae_ —a comprehensive
bibliography created by Conrad Gesner in 1545 when he feared that the Ottoman
conquerors would destroy all the books in Europe. He created a bibliography
from which the library could be built again—an all embracing index which
contained a systematic organization of twenty principal groups with a matrix
like structure that contained 30,000 concepts.4

It is not surprising that Alberto Manguel would attempt to write a literary,
historical and personal history of the library. As a seventeen-year-old in
Buenos Aires, Manguel read for the blind seer Jorge Luis Borges, who once
imagined, in his appropriately named story “The Library of Babel”, paradise as a
kind of library. Modifying his mentor’s statement in what can be understood as
a gesture to the inevitable demands of the real and yet acknowledging the
possible pleasures of living in shadows, Manguel asserts that sometimes
paradise must adapt itself to suit circumstantial requirements. Similarly,
Jacques Rancière, writing about the libraries of the working class in the
nineteenth century, tells us about Gauny, a joiner, a boy in love with
vagrancy and botany, who decides to build a library for himself. For the sons
of the poor proletarians living in Saint Marcel district, libraries were built
only a page at a time. He learnt to read by tracing the pages on which his
mother bought her lentils and would be disappointed whenever he came to the
end of a page and the next page was not available, even though he urged his
mother to buy her lentils from the same grocer. 5


Dominique Gonzalez-Foerster, _Chronotopes & Dioramas_, 2009. Diorama
installation at The Hispanic Society of America, New York.

Is the utopian ideal of the universal library as exemplified by the library of
Alexandria or modernist pedagogic institutions of the twentieth century
adequate to the task of describing the space of the shadow library, or do we
need a different account of these other spaces? In an era of the ebook reader
where the line between a book and a library is blurred, the very idea of a
library is up for grabs. It has taken me well over two decades to build a
collection of a few thousand books while around two hundred thousand books
exist as bits and bytes on my computer. Admittedly hard drives crash and data
is lost, but is that the same threat as those of rain or fire? Which then is
my library and which its shadow? Or in the spirit of logotopias would it be
more appropriate to ask the spatial question: where is the library?

If the possibility of having 200,000 books on one’s computer feels staggering,
here is an even more startling statistic. The Library of Congress, the largest
library in the world with holdings of approximately thirty million books, which
would, if they were piled on the floor, cover 364 kilometers, could potentially
fit onto a single SD card. It is estimated that by 2030 an ordinary SD card
will have the capacity to store up to 64 TB, and assuming each book were
digitized at an average size of 1 MB, it would technically be possible to fit
two Libraries of Congress in one’s pocket.
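
The back-of-the-envelope arithmetic behind that claim, using the essay’s own
figures of thirty million volumes at roughly 1 MB per digitized book:

$$
3 \times 10^{7}\ \text{books} \times 1\ \text{MB} \approx 30\ \text{TB},
\qquad 2 \times 30\ \text{TB} = 60\ \text{TB} \leq 64\ \text{TB}.
$$

Two full copies of the collection would thus still leave a few terabytes of the
projected 64 TB card to spare.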

It sounds like science fiction, but isn’t it the case that much of the science
fiction of a decade ago finds itself comfortably within the weaves of everyday
life? How do we make sense of the future of the library? While it may be
tempting to throw our hands up in boggled perplexity about what it means to be
able to have thirty million books, let’s face it: the point of libraries has
never been that you will finish what’s there. Anyone with even a modest book
collection will testify to the impossibility of ever finishing their library,
and if anything, the library stands precisely at the cusp of our
finitude and our infinity. Perhaps that is what Borges—the consummate mixer of
time and space—meant when he described paradise as a library, not as a spatial
idea but a temporal one: that it was only within the confines of infinity that
one could imagine finishing reading one’s library. It would therefore be more
interesting to think of the shadow library as a way of thinking about what it
means to dwell in knowledge. While all our aspirations for a habitat should
have a utopian element to them, let’s face it: utopias have always been
difficult spaces to live in.

In contrast to the idea of utopia is heterotopia—a term with its origins in
medicine (referring to an organ of the body that had been dislodged from its
usual space) and popularized by Michel Foucault both in terms of language as
well as a spatial metaphor. If utopia exists as a nowhere or imaginary space
with no connection to any existing social spaces, then heterotopias in
contrast are realities that exist and are even foundational, but in which all
other spaces are potentially inverted and contested. A mirror for instance is
simultaneously a utopia (placeless place) even as it exists in reality. But
from the standpoint of the mirror you discover your absence as well. Foucault
remarks, “The mirror functions as a heterotopia in this respect: it makes this
place that I occupy at the moment when I look at myself in the glass at once
absolutely real, connected with all the space that surrounds it, and
absolutely unreal, since in order to be perceived it has to pass through this
virtual point which is over there.”6

In _The Order of Things_ Foucault sought to investigate the conceptual space
which makes the order of knowledge possible; in his famed reading of Borges’s
Chinese encyclopedia he argues that the impossibility involved in the
encyclopedia consists less in the fantastical status of the animals and their
coexistence with real animals such as (d) sucking pigs and (e) sirens, but in
where they coexist and what “transgresses the boundaries of all imagination,
of all possible thought, is simply that alphabetical series (a, b, c, d) which
links each of those categories to all the others.” 7 Heterotopias destabilize
the ground from which we build order and in doing so reframe the very
epistemic basis of how we know.

Foucault later developed a greater spatial understanding of heterotopias in
which he uses specific examples such as the cemetery (at once the space of the
familiar since everyone has someone in the cemetery and at the heart of the
city but also over a period of time the other city, where each family
possesses its dark resting place).8 Indeed, the paradox of heterotopias is
that they are both separate from yet connected to all other spaces. This
connectedness is precisely what builds contestation into heterotopias.
Imaginary spaces such as utopias exist completely outside of order.
Heterotopias, by virtue of their connectedness, become sites in which epistemes
collide and overlap. They bring together heterogeneous collections of unusual
things without allowing them a unity or order established through resemblance.
Instead, their ordering is derived from a process of similitude that produces,
in an almost magical, uncertain space, monstrous combinations that unsettle
the flow of discourse.

If the utopian ideal of the library was to bring together everything that we
know of the world then the length of its bookshelves was coterminous with the
breadth of the world. But like its predecessors in Alexandria and Babel the
project is destined to be incomplete, haunted by what it necessarily leaves out
and misses. The library as heterotopia reveals itself only through the
interstices and lays bare the fiction of any possibility of a coherent ground
on which a knowledge project can be built. Finally there is the question of
where we stand once the ground we stand on has itself been dislodged.
The answer, from my first foray into the tiny six by eight foot Mecca store to
the innumerable hours spent on [library.nu](library.nu), remains the same:
the heterotopic pleasure of our finite selves in infinity.


This essay is a part of a work I am doing for an exhibition curated by Raqs
Media Collective, Sarai Reader 09. The show began on August 19, 2012, with a
deceptively empty space containing only the proposal, with ideas for the
artworks to come over a period of nine months.

**Lawrence Liang** is a researcher and writer based at the Alternative Law
Forum, Bangalore. His work lies at the intersection of law and cultural
politics, and in recent years has looked at questions of media piracy. He is
currently finishing a book on law and justice in Hindi cinema.

© 2012 e-flux and the author




Notes

1

Esther Shipman and Sascha Hastings, eds., _Logotopia: The Library in
Architecture, Art and the Imagination_ (Cambridge Galleries / ABC Art Books
Canada, 2008).

2

Alberto Manguel, “My Library,” in Shipman and Hastings, eds., _Logotopia: The
Library in Architecture, Art and the Imagination_ (Cambridge Galleries / ABC
Art Books Canada, 2008).

3

Alberto Manguel, _The Library at Night_ (Yale University Press, 2009).

4

Shipman and Hastings, eds., _Logotopia: The Library in Architecture, Art and
the Imagination_ (Cambridge Galleries / ABC Art Books Canada, 2008).

5

Jacques Rancière, _The Nights of Labour: The Workers’ Dream in Nineteenth
Century France_ (Philadelphia: Temple University Press, 1991).

6

Michel Foucault, “Different Spaces,” in _Aesthetics, Method, Epistemology_,
ed. James D. Faubion (New York: The New Press, 1998), 179. For Foucault on
language and heterotopias see _The Order of Things: An Archaeology of the
Human Sciences_ (New York: Pantheon, 1970).

7

Ibid., xv.

8

In Foucault, “Different Spaces,” which was presented as a lecture to the
_Architecture Studies Circle_ in 1967, a few years after the writing of _The
Order of Things_.



Ludovico
The Liquid Library
2013


# The liquid library

* [Alessandro Ludovico](https://www.eurozine.com/authors/alessandro-ludovico/)

26 August 2013

Traditional libraries are increasingly putting their holdings online, if not
in competition with Google Books then in partnership, in order to keep pace
with the mass digitization of content. Yet it isn't only the big institutional
actors that are driving this process forward: small-scale, independent
initiatives based on open source principles offer interesting approaches to
re-defining the role and meaning of the library, writes Alessandro Ludovico.

A deep conflict is brewing silently in libraries around the globe. Traditional
librarians - skilled, efficient and acknowledged - are being threatened by
managers who, struggling to cope with substantial funding cuts, tout the word
"digital" as a panacea for saving space and money. At the same time, in other
(less traditional) places, a massive digitization of books is underway, aimed
at establishing virtual libraries much bigger than any conventional one. These
phenomena call into question the library as a point of reference and as a
public repository of knowledge. Not only is its bulky physicality being
questioned, but also the core idea that, after the advent of truly ubiquitous
networks, we still need a central place to store, preserve, index, lend and
share knowledge.

![Books vs. tablet](http://www.eurozine.com/wp-content/uploads/2013/08/eurozine-tablet-book.jpg)

Tablet-PC on hardcover book. Photo: Anton Kudelin. Source: Shutterstock

It is important not to forget that traditional libraries (public and private)
still guarantee the preservation of and access to a huge number of digitally-
unavailable texts, and that a book's material condition sometimes tells part
of the story, not to mention the experience of reading it in a library. Still,
it is evident that we are facing the biggest digitization ever attempted, in a
process comparable to what Napster meant for music in the early 2000s. But
this time there are many more "institutional" efforts running simultaneously,
so that we are constantly hearing announcements that new historical material
has been made accessible online by libraries and institutions of all sizes.

The biggest digitizers are Google Books (private) and the Internet Archive
(non-profit). The former is officially aiming to create a privately owned
"universal library", which in April 2013 claimed to contain 30 million
digitized books.1 The latter is an effort to build a comparably huge public
library by using Creative Commons licenses and getting rid of Digital Rights
Management chains; it currently claims to hold almost 5 million digitized
books.

These monumental efforts are struggling with one specific element: the time it
takes to create digital content by converting it from another medium. This
process, of course, creates accidents. Krissy Wilson's blog/artwork _The Art
of Google Books_2 explores daily the non-digital elements (accidental or not)
emerging in scanned pages, which can be purely material - such as scribbled
notes, parts of the scanning person's hand, dried flowers - or typographical
or linguistic, or deleted or missing parts, all of them precisely annotated.
This small selection of illustrations of how physicality causes technology to
fail may be self-reflective, but it shows a particular aspect of a larger
development. In fact, industrial scanning is only one side of the coin. The
other is the private and personal digitization and sharing of books.

On the basis of brilliant open source tools like the DIY Bookscanner,3 there
are various technical and conceptual efforts to build specialist digital
libraries. _Monoskop_4 is exemplary: its creator Dusan Barok has transformed
his impressive personal collection of media (about contemporary art, culture
and politics, with a special focus on eastern Europe) into a common resource,
freely downloadable and regularly updated. It is a remarkably inspired
selection that can be shared regardless of possible copyright restrictions.
_Monoskop_ is an extreme and excellent example of a personal digital library
made public. But any small or big collection can easily be shared. Calibre5 is
open source software that enables one to manage a personal library efficiently
and to create temporary or stable autonomous zones in which entire libraries
can be shared among a few friends or entire communities.
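To make the mechanics concrete: at its simplest, a shared personal library is
a folder of files plus a machine-readable catalogue that others can browse.
The Python sketch below is purely illustrative - it is not Calibre's actual
API, and the folder name, file extensions and catalogue fields are
assumptions:

```python
# A minimal sketch (not Calibre's actual API): walk a folder of e-books and
# emit a JSON catalogue that friends or a small web server could consume.
# The folder name, file extensions and catalogue fields are assumptions.
import hashlib
import json
import os

def catalogue(root: str) -> list:
    entries = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            if not name.lower().endswith((".pdf", ".epub", ".txt")):
                continue  # index only common e-book/text formats
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                digest = hashlib.sha256(fh.read()).hexdigest()
            entries.append({
                "title": os.path.splitext(name)[0],
                "file": os.path.relpath(path, root),
                "sha256": digest,  # lets peers deduplicate copies
                "bytes": os.path.getsize(path),
            })
    return entries

if __name__ == "__main__":
    print(json.dumps(catalogue("my_library"), indent=2))
```

Serving such a catalogue over the web, or merging catalogues from friends, is
essentially what tools like Calibre automate at scale.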

Marcell Mars,6 a hacktivist and programmer, has worked intensively around this
subject. Together with Tomislav Medak and Vuk Cosic, he organized the HAIP
2012 festival in Ljubljana, where software developers worked collectively on a
complex interface for searching and downloading from major independent online
e-book collections, turning them into a sort of temporary commons. Mars'
observation that, "when everyone is a librarian, the library is everywhere,"
explains the infinite and recursive de-centralization of personal digital
collections and the role of the digital in granting much wider access to
published content.

This access, however, emphasizes the intrinsic fragility of the digital - its
complete dependence on electricity and networks, on the integrity of storage
media and on up-to-date hardware and software. Among the few artists to have
conceptually explored this fragility as it affects books is David Guez, whose
work _Humanpédia_7 can be defined as an extravagant type of "time-based art".
The work is clearly inspired by Ray Bradbury's _Fahrenheit 451_ , in which a
small secret community conspires against a total ban on books by memorizing
entire tomes, preserving and orally transmitting their contents. Guez applies
this strategy to Wikipedia, calling for people to memorize a Wikipedia
article, thereby implying that our brains can store information more reliably
than computers.

So what, in the end, will be the role of old-fashioned libraries?
Paradoxically enough, they could become the best place to learn how to
digitize books or how to print out and bind digitized books that have gone out
of print. But they must still be protected as a common good, where cultural
objects can be retrieved and enjoyed anytime in the future. A timely work in
this respect is La Société Anonyme's _The SKOR Codex_.8 The group (including
Dusan Barok, Danny van der Kleij, Aymeric Mansoux and Marloes de Valk) has
printed a book whose content (text, pictures and sounds) is binary encoded,
with enclosed visual instructions about how to decode it. A copy will be
indefinitely preserved at the Bibliothèque nationale de France by signed
agreement. This work is a time capsule, enclosing information meant to be
understood in the future. At any rate, we can rest assured that it will be
there (with its digital content), ready to be taken from the shelf, for many
years to come.
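As a brief aside on mechanics, the general idea of such a binary-encoded book
- digital content rendered as printable bits, with a rule for reading them
back - can be sketched in a few lines. The Python fragment below is a purely
illustrative stand-in using plain 8-bit encoding; it does not reproduce La
Société Anonyme's actual scheme:

```python
# Illustrative only: render digital text as printable rows of bits, then
# decode them back. The SKOR Codex's real encoding is not documented here;
# plain 8-bit UTF-8 is an assumed stand-in.

def to_bits(data: bytes) -> str:
    """One printed 'row' per byte: eight 0/1 glyphs."""
    return "\n".join(format(b, "08b") for b in data)

def from_bits(rows: str) -> bytes:
    """Read the printed rows back into the original bytes."""
    return bytes(int(row, 2) for row in rows.splitlines())

page = to_bits("a time capsule for future readers".encode("utf-8"))
assert from_bits(page).decode("utf-8") == "a time capsule for future readers"
```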

1 See: http://www.nybooks.com/articles/archives/2013/apr/25/national-digital-public-library-launched/

2

3

4

5

6

7

8

Published 26 August 2013
Original in English
First published by Springerin 3/2013 (German version); Eurozine (English
version)

Contributed by Springerin © Alessandro Ludovico / Springerin / Eurozine


Marczewska, Adema, McDonald & Trettien
The Poethics of Scholarship
2018


The Poethics of Scholarship

Kaja Marczewska
Janneke Adema
Frances McDonald
Whitney Trettien

Edited by Post Office Press

Published by Post Office Press and Rope Press. Coventry, 2018.
© Post Office Press, papers by respective Authors.

Freely available at: http://radicaloa.co.uk/conferences/ROA2

This is an open access pamphlet, licensed under a Creative Commons
Attribution 4.0 International (CC BY 4.0) license. Read more about the
license at: https://creativecommons.org/licenses/by/4.0/. Figures and other
media included with this pamphlet may be under different copyright
restrictions.

This pamphlet is published in a series of 7 as part of the Radical Open
Access II – The Ethics of Care conference, which took place June 26-27 at
Coventry University. More information about this conference and about the
contributors to this pamphlet can be found at:
http://radicaloa.co.uk/conferences/ROA2

This pamphlet was made possible due to generous funding from the arts and
humanities research studio, The Post Office, a project of Coventry
University’s Centre for Postdigital Cultures, and due to the combined efforts
of authors, editors, designers and printers.

Table of Contents

Introduction
Post Office Press

The Horizon of The Publishable in/as Open Access: From Poethics to Praxis
Kaja Marczewska

The Poethics of Openness
Janneke Adema

Diffractive Publishing
Frances McDonald & Whitney Trettien

Design by: Mihai Toma, Nick White and Sean Worley
Printed by: Rope Press, Birmingham

Introduction

This pamphlet explores ways in which to engage scholars to further elaborate
the poethics of their scholarship. Following Joan Retallack, who has written
extensively about the responsibility that comes with formulating and
performing a poetics, which she has captured in her concept of poethics (with
an added h), this pamphlet examines what connects the 'doing' of scholarship
with the ethical components of research. Here, in order to remain ethical we
are not able to determine in advance what being ethical would look like; yet,
at the same time, ethical decisions need to be made and are being made as
part of our publishing practices: where we publish and with whom, in an open
way or not, in what form and shape and in which formats. Should we then
consider the poethics of scholarship as a poetics of/as change, or, as
Retallack calls it, a poetics of the swerve (clinamen), which continuously
unsettles our familiar notions?

This pamphlet considers how, along with discussions about the contents of our
scholarship, and about the different methodologies, theories and politics
that we use to give meaning and structure to our research, we should have
similar deliberations about the way we do research. This involves paying more
attention to the crafting of our own aesthetics and poetics as scholars,
including a focus on the medial forms, the formats, and the graphic spaces in
and through which we communicate and perform scholarship (and the discourses
that surround these), as well as the structures and institutions that shape
and determine our scholarly practices.

Kaja Marczewska tracks in her contribution OA's development from a radical
and political project driven by an experimental impetus into a constrained
model, limiting publishing in the service of the neoliberal university.
Following Malik, she argues that OA in its dominant top-down implementation
is determining the horizon of the publishable. Yet a horizon also suggests
conditions of possibility for experimentation and innovation, which
Marczewska locates in a potential OA ethos of poethics and praxis, in a
fusion of attitude and form.

Janneke Adema explores in her paper the relationship between openness and
experimentation in scholarly publishing, outlining how open access
specifically has enabled a reimagining of its forms and practices. Whilst
Adema emphasises that this relationship is far from guaranteed, through the
concept of scholarly poethics she speculates on how we can forge a connection
between the doing of scholarship and its political, ethical and aesthetical
elements.

In the final contribution to this pamphlet Whitney Trettien and Frances
McDonald ask a pertinent question: 'how can we build scholarly
infrastructures that foster diffractive reading and writing?'. To address
this question, they reflect on their own experiences of editing an
experimental digital zine: thresholds, which brings the creative affordances
of the split screen, of the gutter, to scholarship. By transforming
materially how we publish, how we read and write together, McDonald and
Trettien explore the potential of thresholds as a model for digital
publishing more attuned to the ethics of entanglement.

Post Office Press

The Horizon of The Publishable in/as Open Access: From Poethics to Praxis

Kaja Marczewska

I am writing this piece having just uploaded a PDF of my recent book to
aaaarg; a book published by Bloomsbury as a hardback academic monograph
retailing at £86—and that is after the generous 10% discount offered on the
publisher’s website. The book focuses on copying and reproduction as perhaps
the most prominent forms of contemporary cultural production. Given this
focus, it seemed fitting to make the material available via this guerrilla
library, to enable its different circulation and less controlled iterations.
My decision to publish with Bloomsbury was a pragmatic one. As an early
career academic working within UK higher education, I had little choice but
to publish with an established press if I wanted to continue in the
privileged position I currently find myself in. As someone interested in
economies of cultural production, forms of publishing and self-organisation,
the decision to breach my contract with the publisher offered a welcome and
necessary respite from the discomfort I felt every time I saw my unaffordable
(and perhaps as a result, unreadable) book for sale. It served as a way of
acting (po)ethically within the system of which I am part. It was both a
gesture of sharing, of making my book more widely available to a community
that might otherwise be unable to access it, and a selfish act, enabling my
ongoing existence within a system I maintain by contributing to it for the
sake of career progression and a regular salary. This transgression is
unlikely to be noticed by my publisher (who probably does not care anyway).1
It is a small and safe act of resistance, but it gestures towards the
centrality of thinking about the poethics—the ethics and the aesthetics—of
any act of making work public that is so crucial to all discussions of open
access (OA) publishing.

I open with this personal reflection because I see my participation
inside-outside of academic publishing as pertinent to thinking
about the nature of OA today. Since its inception, OA publishing
has rapidly transformed from a radical, disruptive project of
sharing, making public, and community building, into one that
under the guise of ‘openness’ and ‘access’ maintains the system
that limits the possibilities of both. That is, OA has moved away
from the politically motivated initiative that it once was, opening
up spaces for publishing experimentation, to instead become a
constrained and constraining model of publishing in the service
of the neoliberal university. With this transformation of OA also
come limitations on the forms of publication. The introduction of
the OA requirement as one of the key criteria of REF-ability was
one of the factors contributing to the loss of the experimental
impetus that once informed the drive towards the OA model.
My home institution, for example, requires its staff to deposit
all our REF-able publications in a commercial, Elsevier-owned
repository, as PDFs—even if they have been published in OA
journals on custom-built platforms. The death-by-PDF that
such institutionalised forms of OA bring about inevitably limits
the potential for pushing the boundaries of form that working
in digital spaces makes possible.
While conventional academic publishers are driven by market
demands and the value of the academic book as a commodity in
their decisions as to what to publish, mainstream OA publishing
practices tend to be motivated by questions on how to publish
a REF-able output, i.e. for all the wrong reasons. This tension
between content and form, and a characteristic commitment
to the latter that publishing OA makes necessary, is the central
focus of my paper. As I will argue, this is perhaps the greatest
paradox of OA: that in its fixation on issues of openness, it is
increasingly open only to the kinds of publications that can be
effortlessly slotted into the next institutional REF submission.
But, by doing so, OA publishing as we have come to know it
introduces significant constraints on the forms of publication
possible in academic publishing. In this paper, I consider OA as
a limit to what can be published in academia today, or what I will
refer to here, after Rachel Malik, as a horizon of the publishable.
‘Publishing,’ writes Malik, ‘or rather the horizon of the
publishable, precedes and constitutes both what can be written
and read. […] the horizon of the publishable governs what is
thinkable to publish within a particular historical moment […]
the horizon denotes […] a boundary or limit’ (2008, 709, 720-21). Malik suggests that a number of distinct horizons can be
identified and argues that the limits of all writing are based on
generic conventions, i.e. crime fiction, biography, or children’s
picture books, for example, are all delimited by a different
set of categories and practices—by a different horizon. Her
understanding of publishing foregrounds the multiplicity of
processes and relations between them as well as the role
of institutions: commercial, legal, educational, political, and
cultural. It is the conjunction of practices and their contexts
that always constitutes, according to Malik, various horizons
of the publishable. For Malik, then, there is no singular concept
of publishing and no single horizon but rather a multiplicity of
practices and a diversity of horizons.
Open access could be added to Malik’s list as another practice
defined by its unique horizon. Following Malik, it would be
very easy to identify what the horizon of OA might be—what
processes, practices, and institutions define and confine what
can be published OA. But I would like to suggest here that
thinking about OA in the context of Malik’s argument does more
than offer tools for thinking about the limits of OA. I suggest
that it invites a rethinking of the place of OA in publishing today
and, more broadly, of the changing nature of publishing in HE.
That is, I propose that today OA assumes the role of a horizon
in its own right; that it defines and delimits the possibilities of
what can be made public in academia. If seen as such, OA is more
than just one of the practices of publishing; it has become the
horizon of the publishable in academic publishing in the UK today.
The new horizon in academic publishing seems increasingly to
only allow certain accepted forms of OA (such as the PDF or
the postprint) which under the guise of openness, sharing and
access, replicate the familiar and problematic models of our
knowledge economy. The promise of OA as a response to these
fixed forms of publishing seems to have given way to a peculiar
openness that favours metrics and monitoring. Where OA was
originally imagined to shift the perception of the established
horizon, it has now become that very horizon.
Here I want to posit that we should understand poethics as a
commitment to the kind of publishing that recognises the agency
of the forms in which we distribute and circulate published
material and acknowledges that these are always, inevitably
ideological. In her notion of poethics, Joan Retallack (2003)
gestures towards a writing that in form and content questions
what language does and how it works—to ‘the what’ and ‘the
how’ of writing. Similarly, the project of imagining OA as a
poethics is an attempt at thinking about publishing that forces a
reconsideration of both. However, I suggest that with an often
thoughtless and technodeterministic push towards ‘access’ and
‘openness’, ‘the what’ gets obscured at the cost of ‘the how.’ This
attitude manifests itself most prominently in the proliferation
of OA platforms, similar to Coventry University’s repository
mentioned earlier here, that fit the parameters of REF. But
platforms, as Nick Srnicek (2017) warns us, are problematic. In
their design and modes of operation, they hold out the promise
of freedom, openness, flexibility and entrepreneurial success,
while maintaining the proprietary regimes and modes of capital
accumulation that contribute to new forms of exploitation and
new monopolies. The kind of publishing that mainstream OA
has become (what Sarah Kember describes as a top-down,
policy-driven OA)2 is more akin to this platform capitalism than
a publishing model which evokes the philosophy of openness
and access. In a shift away from a diversity of forms of OA
towards standardised OA platforms, OA has become inherently
antithetical to the politics of OA publishing.


What follows, then, is that any work that takes advantage of its openness and circulation
in digital spaces to experiment with ‘the how’ of publishing, in the current knowledge
economy inevitably becomes the negative of publishable, i.e. the unpublishable. OA as
platform capitalism is openly hostile to OA’s poethical potential. In other words, the
REF-able version of OA takes little interest in openness and delimits what is at the
heart of the practice itself, i.e. what can be made open to the public (as a colleague
from one of the Russell Group universities tells me, this only includes three or four-star rated publications in their case, with other works deemed not good enough to
be made available via the University’s website). To imagine OA as a poethical mode of
publishing is to envisage a process of publishing that pushes beyond the horizon set
by OA itself. It invites reading and writing of texts that might be typically thought of
as unreadable, unwriteable, and unpublishable.
The concept of the ‘horizon’ also interests Joan Retallack, who in The Poethical Wager
(2003) explores the horizon as a way of thinking about the contemporary. Retallack
identifies two types of horizons: the pseudoserene horizon of time and the dynamic
coastline of historical poesis (14). Reading Retallack in the context of OA, I would
like to suggest that similarly two models of OA can be identified today: OA as a
pseudoserene horizon and OA as a cultural coastline. One is predictable, static, and
limiting, i.e. designed to satisfy the managerial class of the contemporary university;
the other works towards a poethics of OA, with all its unpredictability, complexity,
and openness. OA publishing which operates within the confines of the pseudoserene
horizon is representative of what happens when we become complacent in the way we
think about the work of publishing. Conversely, OA seen as a dynamic coastline–the
model that the Radical Open Access (ROA) collective works to advance–is a space where
publishing is always in process and makes possible a rethinking of the experience of
publishing. Seen as such, ROA is an exposition of the forms of publishing that we
increasingly take for granted, and in doing so mirrors the ethos of poethics. The role
of ROA, then, is to highlight the importance of searching for new models of OA, if
OA is to enact its function as a swerve in attitudes towards knowledge production
and consumption.
But anything new is ugly, Retallack suggests, via Picasso: ‘This is always a by-product
of a truly experimental aesthetics, to move into unaestheticized territory. Definitions
of the beautiful are tied to previous forms’ (Retallack 2003, 28). OA, as it has evolved
in recent years, has not allowed the messiness of the ugly. It has not been messy enough
because it has been co-opted, too quickly and unquestioningly, by the agendas of
the contemporary university. OA has become too ‘beautiful’ to enact its disruptive
potential.3 In its drive for legitimisation and recognition, the project of OA has been
motivated by the desire to make this form of publishing too
immediately familiar, and too willingly PDF-able. The consequences of this attitude are
significant. The constraints on the methods and forms of OA
publishing that the institutionalisation of OA have brought
about, inevitably limit the content that is published. As a result,
what is delivered openly to the public is the familiar and the
beautiful. The new, radical, and ugly remains out of sight; not
recognised as a formal REF-able publication, the new lies beyond
the horizon of the OA publication as we know it. In order to enact
a poethics of openness and access, OA requires a more complex
understanding of the notion of openness itself. To be truly ‘open’,
OA publishing need not make as its sole objective a commitment
to openness as a mode of making publications open for the
public, i.e. circulated without a paywall, but instead should also
be driven by an openness to ambiguity, experimentation, and ‘a
delight in complex possibility’ (Retallack 2003, 221) that the
dominant models of OA are unable to accommodate.
To accuse OA of fixing in place the horizon of academic
publishing is to suggest that ‘a certain poetics of responsibility’
(Retallack 2003, 3) seems to have been lost in the bigger
project of OA, responsibility to the community of writers and
readers, and responsibility to the project of publishing. OA as
a ‘poethical attitude’ (Retallack 2003, 3) rather than rampant
technodeterminism, need not be a project which we have to
conform to under the guidelines of the current REF, but can
rather be a practice we choose to engage and engage with,
under conditions that make the poethics of OA possible. What a
re-thinking of OA as a poethics offers, is a way of acknowledging
the need for publishing that models how we want to participate
in academia. Exploring OA as a horizon of academic publishing
is one possible way of addressing this challenge. Although by
nature limiting, the horizon is also, Malik suggests, ‘a condition
of possibility’ (721). The task of OA as poethics is predicated on
the potential of moving away from the horizon as a boundary or a
limit and towards the horizon as a possibility of experimentation
and innovation. I want to conclude with another proposition,
which gestures towards such rethinking of OA as a more open
iteration of the horizon.


I have referred to OA publishing as a practice a number of
times in this paper. A decision to use this term was a conscious
attempt at framing OA as praxis. A shift away from poiesis–or
making–and towards the discourse of praxis–action or doing–
has been shaping the debates in the visual arts for some time
now. Art seen as praxis emerges out of a desire for social life
shaped by collective, transformative action. Praxis is a means of
reformulating life and art into a new fusion of critical thought,
creative production, and political activity. This approach grows
out of Aristotle’s understanding of praxis as action which is
always valuable in itself, as opposed to poiesis, i.e. actions aimed
at making or creation. Aristotelean praxis is always implicitly
ethical–always informed by and informing decisions as to how to
live–and political, concerned with forms of living with others. My
understanding of OA as praxis here is informed by such thinking
about ethical action as absolutely necessary for OA to enact
its potential for experimentation and change.

To think about OA as praxis is to invite a conceptual shift away from making
publications OA and towards ‘doing OA’ as a complete project. OA seen as such
ceases to exist as yet another platform and emerges as an attitude that has
the potential to translate into forms of publishing best suited to
communicate it. This is not to suggest that OA should move away from its
preoccupation with the form and medium of publishing altogether–the emergence
of the so-called post-medium condition in the arts, the glorification of
generalised ‘doing’, and more recently, the popularity of related forms of
‘entrepreneurship’, all have their own problems. Rather, this move towards
praxis is an attempt at drawing attention to a necessary relationship between
making and doing, forms and attitudes, that seems to be lacking in a lot of
OA publishing. OA as praxis offers a way out of what seems to be the end game
of academic publishing today; it is an invitation to participate collectively
and ethically in the process of making public the work of scholarship.

Doing OA–open accessing–implies a way of thinking about what producing
various forms of knowledge should stand for. In other words, open accessing
does not suggest a continuous process of producing OA publications, a
never-ending flow of new PDFs and platforms. Instead, open accessing is a
mode of being in academia through the project of publishing as an ongoing
intervention. OA as platform capitalism gives little consideration to the
bigger project of OA as praxis, and as a result fails to acknowledge the
significance of the relationship between the form of OA, the content
published OA, and the political project that informs both. Approaching OA as
praxis, then, is a tool for reshaping what constitutes the work of
publishing. What a commitment to open accessing, as opposed to open access,
makes possible, is a collective work against OA as a tool of the neoliberal
university and for OA as a poethical form of publication: a fusion of making
and doing, of OA as an attitude and OA as form. But for poethical OA to
become a possibility, OA as praxis needs to emerge first.

Notes

1 For a discussion of the effects of similar practices of academic book
sharing on publishers, see Janneke Adema, “Scanners, Collectors and
Aggregators. On the ‘underground movement’ of (pirated) theory text sharing,”
Open Reflections, 20 September 2009, https://openreflections.wordpress.com/2009/09/20/scanners-collectors-and-aggregators-on-the-‘underground-movement’-of-pirated-theory-text-sharing/.

2 See: Sarah Kember, “Opening Out from Open Access: Writing and Publishing in
Response to Neoliberalism,” Ada: A Journal of Gender, New Media, and
Technology 4 (2014): doi:10.7264/N31C1V51.

3 See also: Janneke Adema, “Embracing Messiness: Open access offers the
chance to creatively experiment with scholarly publishing,” LSE Impact Blog,
18 November 2014, http://blogs.lse.ac.uk/impactofsocialsciences/2014/11/18/embracing-messiness-adema-pdsc14/.

References

Adema, Janneke. 2009. “Scanners, Collectors and Aggregators. On the
‘underground movement’ of (pirated) theory text sharing.” Open Reflections.
Accessed 15 May 2018. https://openreflections.wordpress.com/2009/09/20/scanners-collectors-and-aggregators-on-the-‘underground-movement’-of-pirated-theory-text-sharing/.

Adema, Janneke. 2014. “Embracing Messiness: Open access offers the chance to
creatively experiment with scholarly publishing.” LSE Impact Blog. Accessed
15 May 2018. http://blogs.lse.ac.uk/impactofsocialsciences/2014/11/18/embracing-messiness-adema-pdsc14/.

Kember, Sarah. 2014. “Opening Out from Open Access: Writing and Publishing in
Response to Neoliberalism.” Ada: A Journal of Gender, New Media, and
Technology 4. doi:10.7264/N31C1V51.

Malik, Rachel. 2008. “Horizons of the Publishable: Publishing in/as Literary
Studies.” ELH 75 (3): 707-735.

Retallack, Joan. 2003. The Poethical Wager. Berkeley, CA: University of
California Press.

Srnicek, Nick. 2017. Platform Capitalism. Cambridge: Polity Press.

The Poethics of Openness

Janneke Adema

Potential for Experimentation

Last year, from the 23rd until the 29th of October, the annual Open Access
Week took place, an international advocacy event focused on open access and
related topics. The theme of 2017’s Open Access Week was ‘open in order
to…’, prompting participants to explore the concrete, tangible benefits of
openness for scholarly communication and inviting them to reflect on how
openness can make things possible. Behind this prompt, however, lies a wider
discussion on whether openness is a value that is an end in itself, that is
intrinsically good, or whether it predominantly has instrumental value as a
means to achieve a certain end. I will focus on the latter and will start
from the presumption that openness has no intrinsic value: it functions as a
floating or empty signifier (Laclau 2005, 129–55; Adema 2014) with no ethics
or politics of its own, only in relation to how it is applied or
positioned.1 It is therefore in discussions on the instrumental value of
openness that our politics and ethics in relation to openness come to the
fore (for example, do we value open in order to… ‘grow the commons’ or
‘increase return on investments and contribute to economic growth’?). In
this paper I want to explore ways in which openness has contributed to and
advanced a specific ‘end’: how has it enabled experimentation with the
material forms and relations that underlie and structure scholarly
publishing? Here, I am thinking of both the formats (e.g. print, digital) we
use to communicate our research, and the systems, roles, models and
practices that have evolved around them (e.g. notions of authorship, the
book and publication, publishing models). How has open access facilitated an
exploration of new practices, structures and institutions, questioning the
system of academic publishing as currently set up?

I won’t imply here that openness is the sole or even main reason/motivator/
enabler behind any kind of reimagining in this context; openness has always
been part of a constellation of material-discursive factors—including most
importantly perhaps, the digital, in addition to various other
socio-cultural elements—which have together created (potential) conditions
for change in publishing. Yet, within this constellation I would like to
explore how open access, applied and valued in certain specific, e.g.
radical open access, ways—where in other implementations it has actually
inhibited experimentation, but I will return to that later—has been an
instrumental condition for ethico-aesthetic experimentation to take place.

What is clear foremost is that the open availability of research content has
been an important material condition for scholars and publishers to explore new
formats and new forms of interaction around publications. In order to remix and
re-use content, do large scale text and data-mining, experiment with open peer
review and emerging genres such as living books, wiki-publications, versionings and
multimodal adaptations, both the scholarly materials and platforms that lie at the
basis of these publishing gestures strongly benefit from being open. To enable new
forms of processual scholarship, communal authorship and public engagement with
texts online, open access is essential; it is no surprise therefore that many of the
ground-breaking experimental journals and projects in the HSS, such as Kairos,
Vectors and Inflexions, have been purposefully open access from the start.
Yet openness as a specific practice of publishing materials online has also influenced
how publishing itself is perceived. Making content openly available on blogs and
personal websites, or via institutional repositories and shadow libraries, has
enabled scholars to bypass legacy publishers, intermediaries and other traditional
gatekeepers, to publish their research and connect to other researchers in more
direct ways. This development has led to various reimaginings of the system of
scholarly publishing and the roles and structures that have traditionally buttressed
the publishing value chain in a print-based environment (which still predominantly
echoes Robert Darnton’s communication circuit, modelled on the 18th century
publishing history of Voltaire's Questions sur l'Encyclopédie (Darnton 1982)).
But next to this rethinking of the value chain, this more direct and open (self-)
publishing also enabled a proliferation of new publication forms, from blogposts to
podcasts and Twitter feeds.
Fuelled by the open access movement, scholars, libraries and universities are
increasingly making use of open source platforms and software such as OJS to
take the process of publishing itself back into their own hands, setting up their
own formal publication outlets, from journals to presses and repositories. The open
access movement has played an important role in making a case against the high
profits sustaining the commercial publishing industry. This situation has created
serious access issues (e.g. the monograph crisis) due to the toxic combination
of market-driven publication decisions and increasingly depleted library funds,
affecting the availability of specialised and niche content (Fitzpatrick 2011; Hall
2008). This frustration in particular, next to the lack of uptake of open access
and multimodal publishing by the legacy presses, has motivated the rise of not-for-profit scholar- and library-led presses (Adema and Stone 2017). To that effect,
open access has stimulated a new ecosystem of publishing models and communities
to emerge.
Additionally, the iterative publishing of research-in-process, disseminating content
and eliciting community feedback during and as part of a project’s development,
has strengthened a vision of publishing in which it is perceived as an integral part of
the research process. The open science and notebook movements have stimulated
this kind of processual publishing and helped imagine a different definition
of what publishing is and what purposes it fulfils. One of the more contentious
arguments I want to make here is that this potential to publish our research-in-progress has strengthened our agency as scholars with respect to how and when
we communicate our research. With that, our responsibility towards the specific
ways in which we produce it, from the formats (digital, multi-modal, processual), to
the material platforms and relations that support its production and dissemination,
is further extended. Yet, on the other hand, it has also highlighted the plurality of
material and discursive agencies involved in knowledge production, complicating
the centrality of liberal authorial agency. The closed and fixed codex-format, the
book as object, is what is being complicated and experimented with through pre- and post-publication feedback and interactions, from annotations in the margins
to open peer review and communal forms of knowledge production. The publication
as endpoint, as commodity, is what is being reconsidered here; but also our
author-function, when, through forms of open notebook science the roles of our
collaborators, of the communities involved in knowledge production, become even
more visible. I would like to end this section by highlighting the ways in which mainly
scholar-led projects within the open access landscape have played an important
role in carving out a different (ethical) framework for publishing too, one focused
on an ethics of care and communality, one in which publishing itself is perceived as
a form of care, acknowledging and supporting the various agencies involved in the
publishing process instead of being focused solely on its outcomes.


Impediment to Change
The above analysis of how openness and open access more
specifically has enabled experimentation focuses mainly
on how it has the potential to do so. Yet there are similarly
many ways in which it has been inhibiting experimentation,
further strengthening existing publishing models and
established print-based formats. Think for example of how
most openly available scholarly publications are either
made available as PDFs or through Google Books limited
preview, both mimicking closed print formats online; of how
many open licences don’t allow for re-use and adaptations;
of how the open access movement has strategically been
more committed to gratis than to libre openness; of how
commercial publishers are increasingly adopting open
access as just another profitable business model, retaining
and further exploiting existing relations instead of disrupting
them; of how new commercial intermediaries and gatekeepers
parasitical on open forms of communication are mining
and selling the data around our content to further their
own pockets—e.g. commercial SSRNs such as Academia.edu and ResearchGate. In addition to all this, open access
can do very little to further experimentation if it is met by
a strong conservatism from scholars, their communities
and institutions, involving fears about the integrity of
scholarly content, and historical preferences for established
institutions and brands, and for the printed monograph and
codex format in assessment exercises—these are just a few
examples of how openness does not necessarily warrant
progressive change and can even effect further closures.
Openness itself does not guarantee experimentation, but
openness has and can be instrumentalised in such a way as
to enable experimenting to take place. It is here that I would
like to introduce a new concept to think and speculate with,
the concept of poethics. I use poethics in Derridean terms, as
a ‘nonself-identical’ concept (Derrida 1973), one that is both
constituted by and alters and adapts itself in intra-action
with the concepts I am connecting it to here: openness and
experimentation. I will posit that as a term poethics can
function in a connecting role as a bridging concept, outlining
the speculative relationship between the two. I borrowed the
concept of poethics (with an added h) from the poet, essayist,
and scholar Joan Retallack; it has since been taken up
by the artist and critical racial and postcolonial studies
scholar Denise Ferreira da Silva; but in my exploration of
the term, I will also draw on the specific forms of feminist
poetics developed by literary theorist Terry Threadgold. I
will weave these concepts together and adapt them to start
speculating what a specific scholarly poethics might be. I
will argue in what follows that a scholarly poethics connects
the doing of scholarship, with both its political, ethical and
aesthetical elements. In this respect, I want to explore how
in our engagement as scholars with openness, a specific
scholarly poethics can arise, one that enables and creates
conditions for the continual reimagining and reperforming of
the forms and relations of knowledge production.
A Poethics of Scholarship
Poetics is commonly perceived as the theory of ready-made textual and literary forms—it presumes structure and
fixed literary objects. Threadgold juxtaposes this theory of
poetics with the more dynamic concept of poiesis, the act of
making or performing in language, which, she argues, better
reflects and accommodates cultural and semiotic processes
and with that the writing process itself (Threadgold 1997, 3).
For Threadgold, feminist writings in particular have examined
this concept of poiesis, rather than poetics, of textuality by
focusing on the process of text creation and the multiple
identities and positions from which meaning is derived. This
is especially visible in forms of feminist rewriting, e.g. of
patriarchal knowledges, theories and narratives, which ‘reveal
their gaps and fissures and the binary logic which structures
them’ (Threadgold 1997, 16). A poetics of rewriting then goes
beyond a passive analysis of texts as autonomous artefacts,
where the engagement with and appraisal of a text is
actively performed, becoming performative, becoming itself
a poiesis, a making; the ‘analyst’ is embodied, becoming part
of the complex socio-cultural context of meaning-making
(Threadgold 1997, 85). Yet Threadgold emphasises that both
terms complement and denote each other, they are two sides
of the same coin; poetics forms the necessary static counterpoint to the dynamism of poiesis.
Joan Retallack moves beyond any opposition of poetics and
poiesis in her work, bringing them together in her concept of
poethics, which captures the responsibility that comes with
the formulating and performing of a poetics. This, Retallack
points out, always involves a wager, a staking of something
that matters on an uncertain outcome—what Mouffe and
Laclau have described as taking a decision in an undecidable
terrain (Mouffe 2013, 15). For Retallack a poethical attitude
thus necessarily comes with the ‘courage of the swerve’,
where, ‘swerves (like antiromantic modernisms, the civil rights
movement, feminism, postcolonialist critiques) are necessary
to dislodge us from reactionary allegiances and nostalgias’
(Retallack 2004, 3). In other words, they allow change to
take place in already determined situations. A poetics of the
swerve, of change, thus continuously unsettles our familiar
routes and notions; it is a poetics of conscious risk, of letting
go of control, of placing our inherited conceptions of ethics
and politics at risk, and of questioning them, experimenting
with them. For Retallack taking such a wager as a writer or
an artist, is necessary to connect our aesthetic registers to
the ‘character of our time’, acknowledging the complexities
and changing qualities of life and the world. Retallack initially
coined the term poethics to characterise John Cage’s
aesthetic framework, seeing it as focused on ‘making art
that models how we want to live’ (Retallack 2004, 44). The
principle of poethics then implies a practice in which ethics
and aesthetics can come together to reflect upon and
perform life’s changing experiences, whilst insisting upon our
responsibility (in interaction with the world) to guide this
change the best way we can, and to keep it in motion.
Denise Ferreira da Silva takes the concept of poethics
further to consider a new kind of speculative thinking—a
black feminist poethics—which rejects the linear and rational,
one-dimensional thought that characterises Western European philosophy and
theory in favour of a fractal or four-dimensional thinking, which better
captures the complexity
of our world. Complicating linear conceptions of history and
memory as being reductive, Ferreira da Silva emphasises
how they are active elements, actively performing our past,
present and future. As such, she points out how slavery and
colonialism, often misconstrued in linear thinking as bygone
remnants of our past, are actively performed in and through
our present, grounded in that past, a past foundational to
our consciousness. Using fractal thinking as a poethical tool,
Ferreira da Silva hopes to break through the formalisations
of linear thought, by mapping blackness, and modes of
colonialism and racial violence not only on time, but on various
forms of space and place, exploring them explicitly from a
four-dimensional perspective (Bradley 2016). As such, she
explains, poethical thinking, ‘deployed as a creative (fractal)
imaging to address colonial and racial subjugation, aims to
interrupt the repetition characteristic of fractal patterns’
(Ferreira da Silva 2016) and refuses ‘to reduce what exists—
anyone and everything—to the register of the object, the
other, and the commodity’ (Ferreira da Silva 2014).

These three different but complementary perspectives from the point of view
of literary scholarship and practice, albeit themselves specific and
contextual, map well onto what I would perceive a ‘scholarly poethics’ to be:
a form of doing scholarship that pays specific attention to the relation
between context and content, ethics and aesthetics; between the methods and
theories informing our scholarship and the media formats and graphic spaces
we communicate through. It involves scholars taking responsibility for the
practices and systems they are part of and often uncritically repeat, but
also for the potential they have to perform them differently; to take risks,
to take a wager on exploring other communication forms and practices, or on
a thinking that breaks through formalisations of thought, especially if as
part of our intra-actions with the world and today’s society we can better
reflect and perform its complexities. A scholarly poethics, conceptualised
as such, would include forms of openness that do not simply repeat either
established forms (such as the closed print-based book, single authorship,
linear thought, copyright, exploitative publishing relationships) or succumb
to the closures that its own implementation (e.g. through commercial
adaptations) and institutionalisation (e.g. as part of top-down policy
mandates) of necessity also implies and brings with it. It involves an
awareness that publishing in an open way directly impacts on what research
is, what authorship is, and with that what publishing is. It asks us to take
responsibility for how we engage with open access, to take a position
towards it—towards publishing more broadly—and towards the goals we want it
to serve (which I and others have done through the concept and project of
radical open access, for example). Through open publishing we can take up a
critical position, and we can explore new formats, practices and
institutions; we just have to risk it.

Notes

1 This doesn’t mean that as part of discussions on openness and open access,
openness has not often been perceived as an intrinsic good, something we
want to achieve exactly because it is perceived as an a priori good in
itself, an ideal to strive for in opposition to closedness (Tkacz 2014). A
variant of this also exists, where openness is simply perceived as ‘good’
because it opens up access to information, without further exploring or
considering why this is necessarily a good thing, or simply assuming that
other benefits and change will derive from there, at the moment universal
access is achieved (Harnad 2012).

References

Adema, Janneke. 2014. “Open Access”. In Critical Keywords for the Digital
Humanities. Lueneburg: Centre for Digital Cultures (CDC).

Adema, Janneke, and Graham Stone. 2017. “Changing Publishing Ecologies: A
Landscape Study of New University Presses and Academic-Led Publishing”.
London: Jisc. http://repository.jisc.ac.uk/6666/.

Bradley, Rizvana. 2016. “Poethics of the Open Boat (In Response to Denise
Ferreira Da Silva)”. ACCeSsions, no. 2.

Darnton, Robert. 1982. “What Is the History of Books?” Daedalus 111 (3):
65–83.

Derrida, Jacques. 1973. Speech and Phenomena, and Other Essays on Husserl’s
Theory of Signs. Northwestern University Press.

Ferreira da Silva, Denise. 2014. “Toward a Black Feminist Poethics”. The
Black Scholar 44 (2): 81–97. https://doi.org/10.1080/00064246.2014.11413690.

———. 2016. “Fractal Thinking”. ACCeSsions, no. 2.

Fitzpatrick, Kathleen. 2011. Planned Obsolescence: Publishing, Technology,
and the Future of the Academy. NYU Press.

Hall, Gary. 2008. Digitize This Book! The Politics of New Media, or Why We
Need Open Access Now. Minneapolis: University of Minnesota Press.

Harnad, Stevan. 2012. “Open Access: Gratis and Libre”. Open Access
Archivangelism (blog). 3 May 2012. http://openaccess.eprints.org/index.php?/archives/885-Open-Access-Gratis-and-Libre.html.

Laclau, Ernesto. 2005. On Populist Reason. Verso.

McPherson, Tara. 2010. “Scaling Vectors: Thoughts on the Future of Scholarly
Communication”. Journal of Electronic Publishing 13 (2). http://dx.doi.org/10.3998/3336451.0013.208.

Mouffe, Chantal. 2013. Agonistics: Thinking the World Politically. London;
New York: Verso Books.

Retallack, Joan. 2004. The Poethical Wager. Berkeley: University of
California Press.

Threadgold, Terry. 1997. Feminist Poetics: Poiesis, Performance, Histories.
London; New York: Routledge.

Tkacz, Nathaniel. 2014. Wikipedia and the Politics of Openness. Chicago;
London: University of Chicago Press.

Diffractive Publishing
Frances McDonald & Whitney Trettien

Over a quarter century ago, Donna Haraway observed that the grounding metaphor for humanistic inquiry is reflection. We describe the process of interpretation as reflecting upon an object. To learn from a text, we ask students to write reflection pieces, which encourages them to paper their own experiences over a text’s dense weave. For Haraway, reflection is a troubling trope for critical study because it ‘displaces the same elsewhere’—that is, it conceives of reading and writing as exercises in self-actualisation, with the text serving as a mirrored surface upon which the scholar might see her own reflection cast back at her, mise en abyme. ‘Reflexivity has been much recommended as a critical practice,’ she writes, ‘but my suspicion is that reflexivity, like reflection, only displaces the same elsewhere, setting up the worries about copy and original and the search for the authentic and really real’ (Haraway 1997, 16).

Haraway’s ‘regenerative project’—which now extends far beyond her early work—has been to craft a critical consciousness based on a different optical metaphor: diffraction. In physics, a diffraction pattern is the bending of waves, especially light and sound waves, around obstacles and through apertures. It is, Haraway writes, ‘the production of difference patterns in the world, not just of the same reflected—displaced—elsewhere’ (268). If reflective reading forever inscribes the reader’s identity onto whatever text she touches, then diffractive reading sees the intimate touching of text and reader as a contingent, dynamic unfolding of mutually transformative affinities. To engage diffractively with an idea is to become entangled with it—a verb rooted in the Old Norse word for seaweed, thongull, that undulating biomass that ensnares and is ensnared by oars and fishing nets; by hydrophones and deep-sea internet cables; by coral and other forms of marine life. Adapting another fragment from Haraway, we ask: ‘What forms of life survive and flourish in these dense, imploded zones?’ (Haraway 1994, 62).

This question remains not only relevant but is today increasingly urgent. When Haraway began writing about diffraction in the late 80s and early 90s, the web was nascent; it would be several years before NCSA would launch its Mosaic browser, bringing the full throttle of connectivity to a broader public. Today, we wash in the wake of the changes brought by these new technologies, swirling in the morass of social media, email, Amazon, e-books, and pirated PDF libraries that constitute our current textual ecology. Much lies at stake in how we imagine and practise the work of swimming through these changing tides. For Karen Barad, a friend and colleague of Haraway’s and an advocate of diffractive scholarship, reading and writing are ‘ethical practices’ that must be reimagined according to an ‘ethics not of externality but rather entanglement’ (Barad 2012). To Barad’s list of reading and writing we here add publishing. If entanglement has an ethics, then it behooves us as scholars to not just describe and debate it but to transform materially the ways we see ourselves as reading and writing together. Adding our voices to a rising chorus that includes Janneke Adema (2015), Kathleen Fitzpatrick (2018), Eileen Joy (2017), Sarah Kember (2016), Tara McPherson (2018), Gary Hall (2016), Iris van der Tuin (2014), and others working at the intersection of digital humanities, scholarly publishing, and feminist methodologies, we ask: how can we build scholarly infrastructures that foster diffractive reading and writing? What kind of publishing model might be best suited to expressing and emboldening diffractive practices? These are big questions that must be collectively addressed; in this short piece, we offer our own experiences designing thresholds, an experimental digital zine, as one potential model for digital publishing that is attuned to the ethics of entanglement.

⁕ ⁕ ⁕

handwritten sticky notes, highlighted document pages, and
grainy photographs rub against one another, forming dense and shifting
thickets. the blank spaces between once-distinct districts become cluttered and
close. geographically distant realms ache to converge. the bookcase furiously
semaphores toward the far corner of the room. thin lines of coloured paper
arrive to splay across sections. the wall bursts at every seam.

Whether it be real or virtual, every research project has its own ‘wall’: a ‘dense,
imploded zone’ that is populated by the ideas, images, scenes, and sentences
that ‘stick’ to us, to use Lara Farina’s evocative phrase (2014, 33). They are the
‘encounters’ that Gilles Deleuze describes as the impetus toward work, the things
that ‘strike’ us, as Walter Benjamin puts it, like a hammer to unknown inner chords.
Although instrumental to every humanities project, this entangled web of texts and
ideas has a brutally short lifespan. The writer strives to reassert control by whittling
down its massy excesses; indeed, training to be a scholar in the humanities is in large
part learning to compress and contain the wall’s licentious sprawl. We shorten our
focus to a single period, place, or author, excise those fragments that fall outside
the increasingly narrow range of our expertise, and briskly sever any loose ends that
refuse to be tied. These regulatory measures help align our work with the temporal,
geographic, and aesthetic boundaries of our disciplinary arbiters: the journals and
university presses that publish our work, the departments that hire and tenure us.
In an increasingly tight academic marketplace, where the qualified scholars, articles,
and projects far outnumber the available positions, deviation from the standard
model can seem like risky business indeed.

The institutional imperatives of compression and containment not only dictate the structural parameters of a work—its scope and trajectory—but the very texture of our writing. In a bid to render academic texts more comprehensible to their readers, modern style guides advocate plain prose. Leanness, they remind us, is legibility. This aversion to ornament was part of a larger mutiny against the scourge of obfuscation that plagued the humanities in the latter half of the twentieth century. Between 1995 and 1998, the journal Philosophy and Literature ran a Bad Writing Contest that took this turgid academic prose as its target, and cheerfully skewered the work of such distinguished critics as Judith Butler, Homi Bhabha, and Fredric Jameson for their long-winded impenetrability. Unlike its prizewinning paragraphs, the Contest’s message was clear: the opaque abstractions that clogged the arteries of academic writing were no longer to be tolerated.

The academy’s stylistic strip-down has served to puncture the unseemly bloat that had disfigured its prose. But its sweeping injunction against incomprehensibility bears with it other casualties. As we slim and trim our texts, cutting any tangents that distract from the argument’s main thrust, we unwittingly excise writing’s other gaits—those twists, roils, and scintillating leaps that Eric Hayot, in his recent rejoinder to academic style guides, so beautifully describes as ‘gyrations in prose’ (2014, 58). For Hayot, these stylistic excesses occur when an author’s passion for her subject becomes so overwhelming that it can no longer be expressed plainly. The kinetic energy of these gyrations recalls the dynamism of the wall; one may glimpse its digressiveness in the meandering aside, its piecemeal architecture in the sentence fragment, or its vaulting span in the photo quote. These snags in intelligibility are not evidence of an elitist desire to exclude, but are precisely the moments in which the decorous surface of a text cracks open to offer a glimpse of the tangled expanses beneath. To experience them as such, the reader must sacrifice her grip on a text’s argument and allow herself to be swept up in the muddy momentum of its dance. Caught amidst a piece’s movements, the reader trades intellectual insight for precarious intimacy, the ungraspable streaming of one into another.

By polishing over these openings under the edict of legibility, plain prose breeds a restrictive form of plain reading, in which the reader’s role is to digest discrete parcels of information, rather than move and be moved along with the rollicking contours of a work. At stake in advocating for a plurality of readerly and writerly practices is an ethics of criticism. The institutional apparatuses that shape our critical practices instruct us to erase all traces of the serendipitous gyrations that constitute our writing and reading, and erect in their place
a set of boundaries that keep our work in check. Yet our habits
of critical inquiry are irrefutably subjective and collaborative.
In an effort to move toward such a methodology, we might ask:
What forms of scholarship and knowledge become possible
when we reconceive of the spaces between readers, writers,
and texts as thresholds rather than boundaries, that is, as
contiguous zones of entanglement? How would our critical
apparatus mutate if we ascribed value to the shifting sprawl
of the wall and made public the diffractive processes that
constitute our writing and reading practices?
To put these questions into action, we have created thresholds
(http://openthresholds.org). We solicit work that a traditional
academic journal may deem unfinished, unseemly, or otherwise
unbound, but which discovers precisely in its unboundedness
new and oblique perspectives on art, culture, history, and
philosophy. Along with her piece, the author also submits
the fragments that provoked and surreptitiously steered her
work. We the editors then collaborate closely with the author
to custom-design these pieces for the platform’s split screen
architecture. The result is a more open-ended, process-oriented webtext that blooms from, but never fully leaves, the
provocative juxtapositions of the author’s wall.
The split screen design aligns thresholds with a long history
of media that splits content and divides the gaze. In film, the
split screen has long been used to splice together scenes that
are temporally or spatially discontinuous. This divided frame
disrupts the illusion that the camera provides a direct feed of
information and so reveals film to be an authored and infinitely
interpretable object, each scene refracted through others.
The split screen developed under a different name in HTML:
the frame element. Now considered a contrivance due to its
overuse in the late 90s, Netscape Navigator’s development
of the frameset nonetheless marked a milestone in
the history of the web. For the first time, designers could load
multiple documents in a single visual field, each with their own
independent actions and scrolling.


Of course, both the cinematic split screen and the HTML
frameset gesture towards a much older material threshold:
the gutter that divides the pages of the codex. Since most of
its content is presented and read linearly, we rarely consider
the book as a split form. However, many writers and poets have
played with the gutter as a signifying space. In Un coup de dés,
a late nineteenth-century poem that inspired much continental
theory and philosophy in the latter half of the twentieth
century, Stéphane Mallarmé famously uses each two-page
spread to rhetorical effect, jumping and twirling the reader’s
eye around and across the gutter. Blaise Cendrars and Sonia
Delaunay in their self-published avant-garde artist’s book La
Prose du Transsibérien (1913) similarly create a ‘simultaneous’
aesthetic that pairs image and text through an accordion
fold. These early instances have more recent cousins in the
textile art of Eve Sedgwick, the extraordinary visual poetry
of Claudia Rankine’s Citizen, and the work of artists like Fred
Hagstrom and Heather Weston, whose multidimensional books
spur new ways of looking at and thinking about texts.
Drawing inspiration from these exemplars, thresholds brings
the creative affordances of the split screen to the web, and
to scholarship. Think of it as an artist’s browser that hearkens
back to the early web; or imagine in its recto/verso design a
speculative future for the post-digital book. Here, the eye
not only flows along (with) the split screen’s vertical scroll,
but also cuts distinctive lateral lines between each piece as
the reader bends left and right through an issue, one half-screen at a time. How the reader decides to characterize each
threshold—and how the writer and editors collaboratively
design it—determines the interpretive freight its traversal
can bear. In their poem ‘Extraneous,’ published in the first
issue, Charles Bernstein and Ted Greenwald treat it as a lens
through which their collaboratively authored text passes,
darkly. What emerges on the other side is an echo of the
original, where language, newly daubed in hot swaths of
colour, takes on the acoustic materiality of a riotous chorus. In
‘Gesture of Photographing,’ another collaboratively-authored
piece, Carla Nappi and Dominic Pettman use the threshold to
diffract the work of Vilém Flusser. Each sinks into his words on
photography and emerges having penned a short creative work
that responds to yet pushes away from his ideas.
As the reader navigates horizontally through an issue,
twisting and bumping from theory to fiction to image to sound,
thresholds invites her to engage with reading and writing as
a way of making waves of difference in the world. That is, the
platform does not divide each contribution taxonomically
but rather produces an entangled line of juxtapositions and
ripples, producing what Haraway calls ‘worldly interference
patterns’ (Haraway 1994, 60). There is a place, thresholds
implicitly argues, for the fragmentary in our collecting and
collective practices; for opacity and disorientation; for the
wall’s sprawl within the more regimented systems that order
our work.
To reach this place, criticism might begin at the threshold.
The threshold is the zone of entanglement that lies betwixt
and between writing and reading, text and reader, and
between texts themselves. It is restless and unruly, its
dimensions under perpetual renegotiation. To begin here
requires that we acknowledge that criticism does not rest on
solid ground; it too is a restless and unruly set of practices
given to proliferation and digression. To begin here is to enter
into a set of generative traversals that forge fragments into
new relations that in turn push against the given limits of our
inherited architectures of knowledge. To begin here is to
relinquish the fantasy that a text or texts may ever be fully,
finally known, and reconceive of our work as a series of partial
engagements and affective encounters that participate in
texts’ constant remaking.


References
Adema, Janneke. 2015. “Cutting Scholarship Together/Apart: Rethinking the Political Economy of Scholarly Book Publishing.” In The Routledge Companion to Remix Studies, ed. by Eduardo Navas, Owen Gallagher, and xtine burrough. London: Routledge.
Barad, Karen. 2012. “‘Matter Feels, Converses, Suffers, Desires, Yearns and Remembers’: Interview with Karen Barad.” In New Materialism: Interviews and Cartographies, ed. by Rick Dolphijn and Iris van der Tuin. Ann Arbor: Open Humanities Press.
Farina, Lara. 2014. “Sticking Together.” In Burn After Reading/The Future We Want, ed. by Jeffrey J. Cohen, Eileen A. Joy, and Myra Seaman. Brooklyn: Punctum Books: 31-8.
Fitzpatrick, Kathleen. 2018. Generous Thinking. In-progress manuscript posted online at: https://generousthinking.hcommons.org/.
Hall, Gary. 2016. Pirate Philosophy: For a Digital Posthumanities. Cambridge: MIT Press.
Haraway, Donna. 1994. “A Game of Cat’s Cradle: Science Studies, Feminist Theory, Cultural Studies.” Configurations 2.1: 59-71.
Haraway, Donna. 1997. Modest_Witness@Second_Millennium.FemaleMan_Meets_OncoMouse: Feminism and Technoscience. London: Routledge.
Hayot, Eric. 2014. “Academic Writing, I Love You. Really, I Do.” Critical Inquiry 41, no. 1: 53-77.
Joy, Eileen. 2017. “Here Be Monsters: A Punctum Publishing Primer.” Posted online: https://punctumbooks.com/blog/here-be-monsters-a-punctum-publishing-primer/.
Kember, Sarah. 2016. “At Risk? The Humanities and the Future of Academic Publishing.” Journal of Electronic Publishing 19.2 (Fall). Online: https://quod.lib.umich.edu/j/jep/3336451.0019.210?view=text;rgn=main.
McPherson, Tara. 2018. Feminist in a Software Lab: Difference + Design. Cambridge, MA: Harvard University Press.
van der Tuin, Iris. 2014. “Diffraction as a Methodology for Feminist Onto-Epistemology: On Encountering Chantal Chawaf and Posthuman Interpellation.” Parallax 20:3: 231-244.


The Poethics of Scholarship


Mars & Medak
Against Innovation
2019


Against Innovation: Compromised institutional agency and acts of custodianship
Marcell Mars and Tomislav Medak

abstract
In this essay we reflect on the historic crisis of the university and the public library as two
modern institutions tasked with providing universal access to knowledge and education.
This crisis, precipitated by pushes to marketization, technological innovation and
financialization in universities and libraries, has prompted the emergence of shadow
libraries as collective disobedient practices of maintenance and custodianship. In their
illegal acts of reversing property into commons, commodification into care, we detect a
radical gesture comparable to that of the historical avant-garde. To better understand how
the university and the public library ended up in this crisis, we re-trace their development
starting with the capitalist modernization around the turn of the 20th century, a period of
accelerated technological innovation that also birthed the historical avant-garde. Drawing on
Perry Anderson’s ‘Modernity and Revolution’, we interpret that uniquely creative period
as a period of ambivalence toward an ‘unpredictable political future’ that was open to
diverging routes of social development. We situate the later re-emergence of avant-garde
practices in the 1960s as an attempt to subvert the separations that a mature capitalism
imposes on social reality. In the present, we claim, the radicality equivalent to the avant-garde
is to divest from the disruptive dynamic of innovation and focus on the repair,
maintenance and care of the broken social world left in techno-capitalism’s wake.
Comparably, the university and the public library should be able to claim that radical
gesture of slowdown and custodianship too, against the imperative of innovation
imposed on them by policymakers and managers.

Custodians.online, the first letter
On 30 November, 2015 a number of us shadow librarians who advocate, build
and maintain ‘shadow libraries’, i.e. online infrastructures allowing users to
digitise, share and debate digital texts and collections, published a letter
(Custodians.online, 2015) in support of two of the largest user-created
repositories of pirated textbooks and articles on the Internet – Library Genesis
and Science Hub. Library Genesis and Science Hub’s web domain names were
taken down after a New York court issued an injunction following a copyright
infringement suit filed by the largest commercial academic publisher in the
world – Reed Elsevier. It is a familiar trajectory that a shared digital resource,
once it grows in relevance and size, gets taken down after a court decision.
Shadow libraries are no exception.
The world of higher education and science is structured by uneven development.
The world’s top-ranked universities are concentrated in a dozen rich countries
(Times Higher Education, 2017), commanding most of the global investment
into higher education and research. The oligopoly of commercial academic
publishers is headquartered in no more than half of those. The excessive rise of
subscription fees has made it prohibitively expensive even for the richest
university libraries of the Global North to provide access to all the journals they
would need to (Sample, 2012), drawing protest from academics all over the world
against the outrageously high price tag that Reed Elsevier puts on their work
(‘The Cost of Knowledge’, 2012). Against this concentration of economic might
and exclusivity to access, stands the fact that the rest of the world has little access
to the top-ranked research universities (Baty, 2017; Henning, 2017) and that the
poor universities are left with no option but to tacitly encourage their students to
use shadow libraries (Liang, 2012). The editorial director of global rankings at the
Times Higher Education, Phil Baty, minces no words when he bluntly states ‘that
money talks in global higher education seems … to be self-evident’ (Baty, 2017).
Uneven economic development reinforces global uneven development in higher
education and science – and vice versa. It is in the face of this combined
economic and educational unevenness, that Library Genesis and Science Hub,
two repositories for a decommodified access to otherwise paywalled resources,
attain a particular import for students, academics and researchers worldwide.
And it is in the face of combined economic and educational unevenness, that
Library Genesis and Science Hub continue to brave the court decisions,
continuously changing their domain names, securing ways of access beyond the
World Wide Web and ensuring robust redundancy of the materials in their
repositories.
The Custodians.online letter highlights two circumstances in this antagonism
that cut to the core of the contradictions of reproduction within academia in the
present. The first is the contrast between the extraction of extreme profits from
academia through inflated subscription prices and the increasingly precarious
conditions of studying, teaching and researching:


Consider Elsevier, the largest scholarly publisher, whose 37% profit margin stands
in sharp contrast to the rising fees, expanding student loan debt and poverty-level
wages for adjunct faculty. Elsevier owns some of the largest databases of academic
material, which are licensed at prices so scandalously high that even Harvard, the
richest university of the global north, has complained that it cannot afford them
any longer. (Custodians.online, 2015: n.p.)

The enormous profits accruing to an oligopoly of academic publishers are a
result of a business model premised on harvesting and enclosing the scholarly
writing, peer reviewing and editing that is done mostly for free by academics who
are oftentimes struggling to make ends meet in the higher education
environment (Larivière et al., 2015).
The second circumstance is that shadow libraries invert the property relation of
copyright that allows publishers to exclude all those students, teachers and
researchers who don’t have institutional access to scholarly writing and yet need
that access for their education and research, their work and their livelihood in
conditions of heightened precarity:
This is the other side of 37% profit margins: our knowledge commons grows in
the fault lines of a broken system. We are all custodians of knowledge, custodians
of the same infrastructures that we depend on for producing knowledge,
custodians of our fertile but fragile commons. To be a custodian is, de facto, to
download, to share, to read, to write, to review, to edit, to digitize, to archive, to
maintain libraries, to make them accessible. It is to be of use to, not to make
property of, our knowledge commons. (Custodians.online, 2015)

Shadow libraries thus perform an inversion that replaces the ability of ownership
to exclude, with the practice of custodianship (a notion implying both the labor
of preserving cultural artifacts and the most menial and invisible labor of daily
maintenance and cleaning of physical structures) that makes one useful to a
resource held in common and the infrastructures that sustain it.
These two circumstances – antagonism between value extraction and precarity
and antagonism between exclusive property and collective custodianship – signal
a deeper-running crisis of two institutions of higher education and research that
are caught in a joint predicament: the university and the library. This crisis is a
reflection of the impossible challenges placed on them by the capitalist
development, with its global division of labor and its looming threat of massive
technological unemployment, and the response of national policymakers to those
challenges: Are they able to create a labor force that will be able to position itself
in the global labor market with ever fewer jobs to go around? Can they do it with
less money? Can they shift the cost, risk and responsibility for social challenges
to individual students and patrons, who are now facing the prospect of their
investment in education never working out? Under these circumstances, the
imperative is that these institutions have to re-invent themselves, that they have
to innovate in order to keep up with the disruptive course and accelerated pace
of change.

Custodianship and repair
In what follows we will argue against submitting to this imperative of innovation.
Starting from the conditions from which shadow libraries emerge, as laid out in
the first Custodians.online letter, we claim that the historical trajectory of the
university and the library demands that they now embrace a position of
disobedience. They need to go back to their universalizing mission of providing
access to knowledge and education unconditionally to all members of society.
That universalism is a powerful political gesture, an infinite demand (Critchley,
2007) whereby they seek to abolish exclusions and affirm the legacy of the radical
equality they have built as part of the history of emancipatory struggles and
advances since the revolutions of 1789 and 1848. At the core of this legacy is a
promise that the capacity of members of society to collectively contest and claim
rights so as to become free, equal and solidaric is underwritten by a capacity to
have informed opinion, attain knowledge and produce a pedagogy of their own.
The library and the university stand in a historical trajectory of revolutions, a
series of historical discontinuities. The French Revolution seized the holdings of
the aristocracy and the Church, and brought a deluge of books to the Bibliothèque
Nationale and the municipal libraries across France (Harris, 1999). Chartism
might have failed in its political campaign in 1848, but it was successful in setting
up reading rooms and emancipating working-class education from the moral
inculcation imposed on it by the ruling classes (Johnson, 2014). The tension
between continuity and discontinuity that comes with disruptive changes was
written into their history long before the present imperative of innovation. And
yet, if these institutions are social infrastructures that have ever since sustained
the production of knowledge and pedagogy by re-producing the organizational
and material conditions of their production, they warn us against taking that
imperative of innovation at face value.
The entrepreneurial language of innovation is the vernacular of global techno-capitalism
in the present. Radical disruption is celebrated for its ability to depose
old monopolies and birth new ones, to create new markets whose first movers
replace the old ones (Bower and Christensen, 1996). It is a formalization reducing
the complexity of the world to the capital’s dynamic of creative destruction
(Schumpeter, 2013), a variant of an old and still hegemonic productivism that
understands social development as primarily a function of radical advances in
technological productivity (Mumford, 1967). According to this view, what counts
is that spurts of technological innovation are driven by cycles of financial capital
facing slumping profits in production (Perez, 2011).
However, once the effect of gains from new technologies starts to slump, once
the technologist’s dream of improving the world hits the hard place of venture
capital monetization and capitalist competition, once the fog of hyped-up
technological boom clears, that which is supposedly left behind comes to the fore.
There’s then the sunken fixed capital that is no longer productive enough.
There are then the technical infrastructures and social institutions that were there
before the innovation and still remain there once its effect tapers off, removed
from view in the productivist mindset, and yet invisibly sustaining that activity of
innovation and any other activity in the social world we inhabit (Hughes, 1993).
What remains then is the maintenance of stagnant infrastructures, the work of
repair to broken structures and of care for resources that we collectively depend
on.
As a number of scholars who have turned their attention to the matters of repair,
maintenance and care suggest, it is the sedimented material infrastructures of
the everyday and their breakdown that in fact condition and drive much of the
innovation process (Graham and Thrift, 2007; Jackson, 2014). As the renowned
historian of technology Thomas Hughes suggested (Hughes, 1993),
technological changes largely address the critical problems of existing
technologies. Earlier still, in the 1980s, David Noble convincingly argued that the
development of forces of production is a function of the class conflict (Noble,
2011). This turns the temporal logic of innovation on its head. Not the creative
destruction of a techno-optimist kind, but the malfunctioning of technological
infrastructures and the antagonisms of social structures are the elementary
pattern of learning and change in our increasingly technological world. As
Stephen Graham and Nigel Thrift argued (2007), once the smoothly running
production, consumption and communication patterns in the contemporary
capitalist technosphere start to collapse, the collective coping strategies have to
rise to the challenge. Industrial disasters, breakdowns of infrastructures and
natural catastrophes have taught us that much.
In an age where a global division of labor is producing a growing precarity for
ever larger segments of the world’s working population and the planetary
systems are about to tip into non-linear changes, a truly radical gesture is that
which takes as its focus the repair of the effects of productivism. Approaching the
library and the university through the optic of social infrastructure allows us to
glimpse a radicality that their supposed inertia, complexity and stability make




possible. This slowdown enables the processes of learning and the construction
of collective responses to the double crisis of growth and the environment.
In a social world in which precarity is differently experienced between different
groups, these institutions can accommodate that heterogeneity and diminish
their insecurities, helping society effectively support structural change. They
are a commons in the non-substantive sense that Lauren Berlant (2016)
proposes, a ‘transitional form’ that doesn’t elide social antagonisms and that lets
different social positions loosely converge, in order to become ‘a powerful vehicle
for troubling troubled times’ (Berlant, 2016: 394-395).
The trajectory of radical gestures, discontinuities by re-invention, and creative
destruction of the old has historically been a hallmark of the avant-gardes. In
what follows, we will revisit the history of the avant-gardes, claiming that,
throughout their periodic iterations, the avant-gardes returned and mutated
always in response to the dominant processes and crises of the capitalist
development of their time. While primarily an artistic and intellectual
phenomenon, the avant-gardes emerged from both an adversarial and a co-constitutive relation to the institutions of higher education and knowledge
production. By revisiting three epochal moments along the trajectory of the
avant-gardes – 1917, 1967 and 2017 – we now wish to establish how the
structural context for radical disruption and radical transformation were
historically changing, bringing us to the present conjuncture where the library
and the university can reclaim the legacy of the avant-gardes by seemingly doing
its exact opposite: refusing innovation.

1917 – Industrial modernization, accelerated temporality and revolutionary subjectivity

In his text on ‘Modernity and Revolution’ Perry Anderson (1984) provides an
unexpected yet cogent explanation of the immense explosion of artistic
creativity in the short span of time between the late nineteenth and early
twentieth century that is commonly periodized as modernism (or avant-garde,
which he uses sparsely yet interchangeably). Rather than collapsing these wildly
diverging movements and geographic variations of artistic practices into a
monolithic formation, he defines modernism as a broad field of singular
responses resulting from the larger socio-political conjuncture of industrial
modernity. The very different and sometimes antithetical currents of symbolism,
constructivism, futurism, expressionism or suprematism that emerge in
modernism’s fold were defined by three coordinates: 1) an opposition to the
academicism in the art of the ancien régime, which modernist art tendencies both
draw from and position themselves against, 2) a transformative use of
technologies and means of communication that were still in their promising
infancy and not fully integrated into the exigencies of capitalist accumulation and
3) a fundamental ambivalence vis-à-vis the future social formation – capitalism or
socialism, state or soviet – that the process of modernization would eventually
lead to. As Anderson summarizes:
European modernism in the first years of this century thus flowered in the space
between a still usable classical past, a still indeterminate technical present, and a
still unpredictable political future. Or, put another way, it arose at the intersection
between a semi-aristocratic ruling order, a semi-industrialized capitalist economy,
and a semi-emergent, or -insurgent, labour movement. (Anderson, 1984: 150)

Thus these different modernisms emerged operating within the coordinates of
their historical present – committed to a substantive subversion of tradition or to
an acceleration of social development. In his influential theory of the avant-garde,
Peter Bürger (1984) roots its development in the critique of the autonomy that art
seemingly achieved with the rise of capitalist modernity between the eighteenth
and late nineteenth century. The emergence of bourgeois society allowed artists
to attain autonomy in a triple sense: art was no longer bound to the
representational hierarchies of the feudal system; it was now produced
individually and by individual fiat of the artist; and it was produced for individual
appreciation, universally, by all members of society. Starting from the ideal of
aesthetic autonomy enshrined in the works of Kant and Schiller, art eventually
severed its links from the boundedness of social reality and made this freedom
into its subject matter. As the markets for literary and fine artworks were
emerging, artists were gaining material independence from feudal patronage, the
institutions of bourgeois art were being established, and ‘[a]estheticism had made
the distance from the praxis of life the content of works’ (Bürger, 1984: 49).
While capitalism was becoming the dominant reality, the freedom of art was
working to suppress the incursion of that reality in art. It was that distance,
between art and life, that the historical avant-gardes would undertake to eliminate
when they took aim at bourgeois art. With the ‘pathos of historical
progressiveness on their side’ (Bürger, 1984: 50), the early avant-gardes were
thus out to relate and transform art and life in one go.
Early industrial capitalism unleashed an enormous social transformation
through the formalization and rationalization of processes, the coordination and
homogenization of everyday life, and the introduction of permanent innovation.
Thus emerged modern bureaucracy, mass society and technological revolutions.
Progress became the telos of social development. Productive forces and global
expansion of capitalist relations made humanity and the world into a new
horizon of both charitable and profitable endeavors, emancipatory and imperial.
The world became a project (Krajewski, 2014).
The avant-gardes around the turn of the 20th century integrated and critically
inflected these transformations. In the spirit of the October Revolution, their
revolutionary subjectivity approached social reality as eminently transformable.
And yet, a recurrent concern of artists was with the practical challenges and
innovations of accelerated modernization: how to control, coordinate and socially
integrate the immense expansionary forces of early industrialization. This was an
invitation to insert one’s own radical visions into life and create new forms of
standardization and rationality that would bring society out of its pre-industrial
backwardness. Central to the avant-garde was abolishing the old and creating the
new, while overcoming the separation of art and social practice. Unleashing
imaginary and constructive forces in a reality that has become rational, collective
and universal: that was its utopian promise; that was its radical innovation. Yet,
paradoxically, it is only once there is the new that the previously existing social
world can be formalized and totalized as the old and the traditional. As Boris
Groys (2014) insisted, the new can be only established once it stands in a relation
to the archive and the museum. This tendency was probably nowhere more in
evidence than, as Sven Spieker documents in his book ‘The Big Archive: Art
from Bureaucracy’ (2008), in the obsession of Soviet constructivists and
suprematists with the archival ordering of the flood of information that the
emergent bureaucratic administration and industrial management were creating
on an unprecedented scale.
The libraries and the universities followed a similar path. As the world became a
project, the aggregation and organization of all knowledge about the world
became a new frontier. The pioneers of library science, Paul Otlet and Melvil
Dewey, consummating the work of centuries of librarianship, assembled index
card catalogs of everything and devised classificatory systems that were powerful
formalizations of the increasingly complex world. These index card catalogs were
a ‘precursor of computing: universal paper machine’ (Krajewski, 2011), pre-dating the ‘universal Turing machine’ and its hardware implementations by
Konrad Zuse and John von Neumann by almost half a century. Knowledge thus
became universal and universalizable: while libraries were transforming into
universal information infrastructures, they were also transforming into places of
popular reading culture and popular pedagogy. Libraries thus were gaining
centrality in the dissemination of knowledge and culture, as the reading culture
was becoming a massive and general phenomenon. Moreover, during the second
part of the nineteenth and the first part of the twentieth century, the working
class would struggle to transform not only libraries, but also universities, into
public institutions providing free access to culture and really useful knowledge
necessary for the self-development and self-organization of the masses (Johnson,
2014).
While universities across the modernizing Europe, US and USSR would see their
opening to the masses only in the decades to come, they shyly started to
welcome the working class and women. And yet, universities and schools were
intense places of experimentation and advancement. The Moscow design school
VKhUTEMAS, for instance, carried over the constructivists’ concerns into the
practicalities of the everyday, constructing socialist objects for a new collective
life, novyi byt, in the spirit of ‘Imagine no possessions’ (2005), as Christina Kiaer
has punned in the title of her book. But more importantly, the activities of
universities were driven by the promise that there are no limits to scientific
discovery and that a Leibnizian dream of universal formalization of language
can be achieved through advances in mathematics and logic.

1967 – Mature capitalism, spectacle, resistant subjectivity
In this periodization, the central contention is that the radical gesture of
destruction of the old and creation of the new that was characteristic of the avant-garde has mutated as the historic coordinates of its emergence have mutated too.
Over the last century the avant-garde has divested from the radical gestures and
has assumed a relation to the transformation of social reality that is much more
complicated than its erstwhile cohort in disruptive change – technological
innovation – continues to offer. If technological modernization and the avant-garde
were traveling companions at the turn of the twentieth century, after
WWII they gradually parted ways. While the avant-garde rather critically
inflects what capitalist modernity is doing at a particular moment of its
development, technological innovation remained in the same productivist pattern
of disruption and expansion. That technological innovation would remain
beholden to the cyclical nature of capitalist accumulation is, however, no mere
ideological blind-spot. Machinery and technology, as Karl Marx insists in The
Grundrisse, is after all ‘the most adequate form of capital’ (1857) and thus vital to
its dynamic. Hence it comes as no surprise that the trajectory of the avant-garde
is not only a continued substantive subversion of the ever new separations that
capitalist system produces in the social reality, but also a growing critical distance
to technology’s operation within its development.
Thus we skip forward half a century. The year is 1967. Industrial development is
at its apex. The despotism of mass production and its attendant consumerist
culture rules over the social landscape. After WWII, the working class has
achieved great advances in welfare. The ‘control crisis’ (Beniger, 1989), resulting



from an enormous expansion of production, distribution and communication in
the 19th century, and necessitating the emergence of the capacity for
coordination of complex processes in the form of modern bureaucracy and
information technology, persists. As the post-WWII golden period of gains in
productivity, prosperity and growth draws to a close, automation and
computerization start to make their way from the war room to the shop floor.
Growing labor power at home and decolonization abroad make the leading
capitalist economies increasingly struggle to keep profit rates at levels of the
previous two decades. Socialist economies struggle to overcome the initial
disadvantages of belated modernization and instill the discipline over labor in
order to compete in the dual world-system. It is still a couple of years before the
first oil crisis breaks out and the neo-liberal retrenchment begins.
The revolutionary subjectivity of 1917 is now replaced by resistant militancy.
Facing the monotony of continuous-flow production and the prospect of bullshit
jobs in service industries that start to expand through the surplus of labor time
created by technological advances (Graeber, 2013), the workers perfect their
ingenuity in shirking the intensity and dullness of work. The consumerist culture
instills boredom (Vaneigem, 2012), the social division of labor produces
gendered exploitation at home (James, 2012), the paternalistic welfare provision
results in loss of autonomy (Oliver, 1990).
Sensibility is shaped by mass media whose form and content are structured by
the necessity of creating aggregate demand for the ever greater mass of
commodities and thus the commodity spectacle comes to mediate social
relations. In 1967 Guy Debord’s ‘The Society of the Spectacle’ is published. The
book analyses the totalizing capture of Western capitalist society by commodity
fetishism, which appears as objectively given. Commodities and their mediatized
simulacra become the unifying medium of social integration that obscures
separations within the society. So, as the crisis of the 1970s approaches, the avant-garde
makes its return. It operates now within the coordinates of the mature
capitalist conjuncture. Thus re-semantization, détournement and manipulation
become the representational equivalent of simulating busyness at work, playing
the game of hide-and-seek with the capitalist spectacle and turning the spectacle
onto itself. While the capitalist development avails itself of media and computers
to transform the reality into the simulated and the virtual, the avant-garde’s
subversive twist becomes to take the simulated and the virtual as reality and re-appropriate them for playful transformations. Critical distance is no longer
possible under the centripetal impact of images (Foster, 1996), there’s no
revolutionary outside from which to assail the system, just one to escape from.


Thus, the exodus and autonomy from the dominant trajectory of social
development rather than the revolutionary transformation of the social totality
become the prevailing mode of emancipatory agency. Autonomy through forms
of communitarian experimentation attempts to overcome the separation of life
and work, home and workplace, reproduction and production and their
concealment in the spectacle by means of micro-political experiments.
The university – in the meantime transformed into an institution of mass
education, accessible to all social strata – suddenly catapults itself center-stage,
placing the entire post-WWII political edifice with its authoritarian, repressive
and neo-imperial structure into question, as students make radical demands of
solidarity and liberation. The waves of radical political movements in which
students play a central role spread across the world: the US, Czechoslovakia,
France, West Germany, Yugoslavia, Pakistan, and so on. The institution
becomes a site from which and against which mass civil rights, anti-imperial,
anti-nuclear, environmental, feminist and various other new left movements
emerge.
It is in the context of exodus and autonomy that new formalizations and
paradigms of organizing knowledge emerge. Distributed, yet connected. Built
from bottom up, yet powerful enough to map, reduce and abstract all prior
formalizations. Take, for instance, Ted Nelson’s Project Xanadu that introduced
to the world the notion of hypertext and hyperlinking. Pre-dating the World Wide
Web by a good 25 years, Xanadu implemented the idea that a body of written
texts can be understood as a network of two-way references. With the advent of
computer networks, whose early adopters were academic communities, that
formalization materialized in real infrastructure, paving the way for a new
instantiation of the idea that the entire world of knowledge can be aggregated,
linked and made accessible to the entire world. As Fred Turner documents in
‘From Counterculture to Cyberculture’ (2010), the links between autonomy-seeking dropouts and early cyberculture in the US were intimate.
Countercultural ideals of personal liberation at a distance from the society
converged with the developments of personal computers and computer networks
to pave the way for early Internet communities and Silicon Valley
entrepreneurialism.
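
Xanadu’s two-way reference is easiest to see in contrast with the web’s one-way hyperlink: a link is registered on, and traversable from, both of the documents it connects. The following minimal sketch (in Python, with purely illustrative names; a schematic reading of the idea, not Nelson’s actual design) shows what that bidirectional bookkeeping amounts to.

```python
# A toy sketch of Xanadu-style two-way referencing. Illustrative only:
# the names and structure are ours, not Project Xanadu's actual design.
from collections import defaultdict

class Corpus:
    """A body of texts whose links are visible from both endpoints."""

    def __init__(self):
        self.links = defaultdict(set)  # document id -> linked document ids

    def link(self, a, b):
        # A single act of linking registers the connection on both
        # documents, unlike the one-way hyperlink of the later web.
        self.links[a].add(b)
        self.links[b].add(a)

    def references(self, doc):
        # Everything `doc` cites and everything that cites `doc`.
        return self.links[doc]

corpus = Corpus()
corpus.link("essay", "source-text")
print(corpus.references("source-text"))  # {'essay'}: traversable backwards
```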
No less characteristic of the period were new formalizations and paradigms of
technologically-mediated subjectivity. The tension between the virtual and the
real, autonomy and simulation of autonomy, was not only present in the avant-garde’s playful takes on mass media. By the end of the 1950s, the development of
computer hardware reached a stage where it was running fast enough to cheat
human perception in the same way moving images on film and television did. In
the computer world, that illusion was time-sharing. Before the illusion could
work, the concept of an individual computer user had to be introduced (Hu,
2015). Mainframe computer systems such as the IBM 360/370 were fast enough
to run a software-simulated (‘virtual’) clone of the system for every user (Pugh et
al., 1991). This allowed users to access the mainframe not sequentially one after
the other, but at the same time – sharing the process-cycles among themselves.
Every user was made to feel as if they were running their own separate (‘real’)
computer. The computer experience thus became personal and subjectivities
individuated. This interplay of simulation and reality became common in the late
1960s. Fifty years later this interplay would become essential for the massive
deployment of cloud computing, where all computer users leave traces of their
activity in the cloud, but only a few can tell what is virtual (i.e. simulated) and what
is real (i.e. ‘bare machine’).
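
The time-sharing illusion described above reduces to a simple scheduling pattern: one processor hands out short slices to many user sessions in turn, quickly enough that each user experiences a seemingly dedicated machine. The sketch below is a deliberately schematic toy (not the IBM 360/370’s actual virtual machine implementation) illustrating round-robin slicing in Python.

```python
# A schematic toy illustrating time-sharing: a single processor
# interleaves slices of several users' programs. This is an analogy,
# not the IBM 360/370's actual virtual machine monitor.

class Session:
    """One user's simulated ('virtual') computer."""

    def __init__(self, user, steps):
        self.user = user
        self.program = iter(range(steps))  # stand-in for a user program

    def run_slice(self):
        step = next(self.program, None)
        if step is None:
            return False  # this user's program has finished
        print(f"{self.user}: step {step}")
        return True

sessions = [Session("alice", 3), Session("bob", 3)]

# Round-robin scheduling: each pass gives every live session one slice,
# so the users' programs advance in interleaved, 'simultaneous' fashion.
while sessions:
    sessions = [s for s in sessions if s.run_slice()]
```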
The libraries followed the same double trajectory as the universities. In the 1960s,
the library field started to call into question the merit of objectivity and neutrality
that librarianship embraced in the 1920s with its induction into the status of
science. In the context of social upheavals of the 1960s and 1970s, librarians
started to question ‘The Myth of Library Neutrality’ (Branum, 2008). With the
transition to a knowledge economy and the transformation of information into a
commodity, librarians could no longer ignore that neutrality had the effect of
perpetuating the implicit structural exclusions of class, gender and race and that
they were the gatekeepers of epistemic and material privilege (Jansen, 1989;
Iverson 1999). The egalitarian politics written into the de-commodifying and
enabling social mission of public libraries started to trump neutrality. Thus
libraries came to acknowledge their commitment to the marginalized, their
pedagogies and their struggles.
At the same time, library science expanded and became enmeshed with
information science. The capacity to aggregate, organize and classify huge bodies
of information, to view it as an interlinked network of references indexed in a
card catalog, sat well with the developments in the computer world. In return, the
expansion of access to knowledge that the new computer networks promised fell
in line with the promise of public libraries.

2017 – Crisis in the present, financialization, compromised subjectivity
We arrive in the present. The effects of neo-liberal restructuring, the global
division of labor and supply-chain economy are petering out. Global capitalism
struggles to maintain growth, while at the same time failing to slow down
accelerating consumption of energy and matter. It thus arrives at a double crisis
– a crisis of growth and a crisis of planetary boundaries. Against the profit
squeeze of the 1970s, fixes were applied in the form of the relocation of production,
the breaking-up of organized labor and the integration of free markets across the
world. Yet those fixes have not stopped the long downturn of the capitalist system
that culminated in the crisis of 2008 (Brenner, 2006). Currently capital prefers to
sit on US$ 13.4 trillion of negative-yielding bonds rather than risk investing into
production (Wigglesworth and Platt, 2016). Financialization is driving the efforts
to quickly boost and capture value where long-term investment makes little
sense. Finance capital privileges short-term value maximization through
economic rents over long-term investment into growth. Its logic dominates all
aspects of the economy and the everyday (Brown, 2015). When it is betting on
long-term changes in production, capital is rather picky and chooses to bet on
technologies that are the harbingers of future automation. Those technologies
might be the death knell of the social expectation of full employment, creating a
reserve army of labor that will be pushed to various forms of casualized work,
work on demand and workfare. The brave new world of the gig-economy awaits.
The accelerated transformation of the labor market has made adaptation through
education and re-skilling difficult. Stable employment is mostly available in
sectors where highly specialized technological skills are required. Yet those
sectors need far fewer workers than mass manufacture required. Re-skilling is
only made more difficult by the fact that austerity policies are reducing the
universal provision of social support needed to allow workers to adapt to these
changes: workfare, the housing crisis, cuts in education and arts have converged
to make it so. The growing precarity of employment is doing away with the
separation between working time and free time. The temporal decomposition is
accompanied by the decomposition of workplace and living space. Fewer and
fewer jobs have a defined time and place in which they are performed (Huws,
2016) and while these processes are general, the conditions of precarity diverge
greatly from profession to profession, from individual to individual.
At the same time, we are living through record global warming, the sixth great
extinction and the destabilization of Earth’s biophysical systems. Globally, we’re
overshooting Earth’s regenerative capacities by a factor of 1.6 (Latouche, 2009),
some countries such as the US and the Gulf by a factor of 5 (Global Footprint
Network, 2013). And the environmental inequalities within countries are greater
than those between the countries (Piketty and Chancel, 2015). Unless by some
wonder the almost non-existent negative emissions technologies do materialize
(Anderson and Peters, 2016), we are on a path of global destabilization of socio-environmental metabolisms that no rate of technological change can realistically
mitigate (Loftus et al., 2015). Betting on settling on Mars is equally plausible.




So, if the avant-garde has at the beginning of the 20th century responded to the
mutations of early modernization, in the 1960s to the integrated spectacle of
mature capitalism, where is the avant-garde in the present?
Before we try to address the question, we need to return to our two public
institutions of mass education and research – the university and the library.
Where is their equalizing capacity in a historical conjuncture marked by the
rising levels of inequality? In the accelerating ‘race against the machine’
(Brynjolfsson and McAfee, 2012), with the advances in big data, AI and
robotization threatening to obliterate almost half of the jobs in advanced
economies (Frey and Osborne, 2013; McKinsey Global Institute, 2018), the
university is no longer able to fulfill the promise that it can provide both the
breadth and the specialization that are required to stave off the effect of a
runaway technological unemployment. It is no surprise that it can’t, because this
is ultimately a political question of changing the present direction of
technological and social development, and not a question of institutional
adaptation.
Yet while the university’s performance becomes increasingly scrutinized on the
basis of what its work is contributing to the stalling economy and challenges of
the labor market, on the inside it continues to be entrenched in defending
hierarchies. The uncertainty created by assessment-tied funding puts academics
on the defensive and wary of experimentation and resistance. Imperatives of
obsessive administrative reporting, performance metrics and short-term
competition for grant-based funding have, in Stefan Collini’s words, led to ‘a
cumulative reduction in the autonomy, status and influence of academics’, where
‘[s]ystemic underfunding plus competition and punitive performance-management is seen as lean efficiency and proper accountability’ (Collini, 2017:
ch.2). Assessment-tied activities produce a false semblance of academic progress
by creating impact indicators that are frequently incidental to the research, while
at the same time demanding an enormous amount of wasted effort that goes into
unsuccessful application proposals (Collini, 2017). Rankings based on
comparative performance metrics then allow university managers in the
monetized higher education systems such as the UK to pitch to prospective students
how best to invest the debt they will incur in the future, in order to pay for the
growing tuition fees and cost of study, making the prospect of higher education
altogether less plausible for the majority in the long run (Bailey and Freedman,
2011).
Given that universities are not able to easily provide evidence that they are
contributing to the stalling economy, they are asked by the funders to innovate
instead. To paraphrase Marx: ‘Innovate, innovate! That is their Moses and the
prophets’. Innovation is a popular catch-all word with governments and
institutional administrators, gleaned from the entrepreneurial language of
techno-capitalism to denote interventions, measures and adaptations in the
functioning of all kinds of processes – interventions that promise to bring
radical, disruptive change as a remedy for the failures to respond to the
disruptive challenges unleashed by that very same techno-capitalism.
For instance, higher education policy makers such as former UK universities
minister David Willetts advocate that universities themselves should use their
competitive advantage, embrace the entrepreneurial opportunity in the global
academic marketplace and transform themselves into startups. Universities have
to become the ‘equivalent of higher education Google or Amazon’ (Gill, 2015). As
Gary Hall reports in his ‘Uberfication of the university’ (2016), a survey of UK
vice-chancellors has detected a number of areas where universities under their
command should become more disruptively innovative:
Among them are “uses of student data analytics for personalized services” (the
number one innovation priority for 90 percent of vice-chancellors); “uses of
technology to transform learning experiences” (massive open online courses
[MOOCs]; mobile virtual learning environments [VLEs]; “anytime-anywhere
learning” (leading to the demise of lectures and timetables); and “student-driven
flexible study modes” (“multiple entry points” into programs, bringing about an
end to the traditional academic year). (Hall, 2016: n.p.)

Universities in the UK are thus pushed to constantly create trendy programs,
‘publish or perish’, perform and assess, hire and fire, find new sources of
funding, recruit students, court the interest of parents, vie for public
attention and produce evidence of immediate impact. All we can expect from such
attempts to transform universities into Googles and Amazons is that we will end
up with an
oligopoly of a few prestige brands franchised all around the world – if the
strategy proves ‘successful’, or – if not – just with a world in which universities
go on faking disruptive innovations while waiting for some miracle to happen
and redeem them in the eyes of neoliberal policy makers.
These are all short-term strategies modeled on the quick extraction of value that
Wendy Brown calls the ‘financialization of everything’ (Brown, 2015: 70).
However, the best in the game of such quick rent-seeking are, as always, those
universities that carry the most prestige, have the most assets and need to be
least afraid for their future, whereas the rest are simply struggling with the
prospect of reduced funding.
Those universities in ‘peripheral’ countries, which rarely show up anywhere near
the top of the global rankings, are in a particularly disadvantaged situation. As
Danijela Dolenec has calculated:

[T]he whole region [of Western Balkans] invests approximately EUR 495 million in
research and development per year, which is equivalent of one (second-largest) US
university. Current levels of investment cannot have a meaningful impact on the
current model of economic development ... (Dolenec, 2016: 34)

So, these universities don’t have much capacity to capture value in the global
marketplace. In fact, their work in educating the masses matters less to their
economies, as these economies are largely based on selling cheap low-skilled
labor. So, their public funders leave them in their underfunded torpor to
improvise their way through education and research processes. It is these
institutions that depend the most on the Library Genesis and Science Hubs of
this world. If we look at the download data of Library Genesis, as Balázs Bodó
(2015) has done, we can discern a clear pattern: users in the rich economies use
these shadow libraries to find publications that are not available in digital
form or are paywalled, while users in the developing economies use them to
find publications they don’t have access to in print to start with.
As for libraries, in the shift to the digital they were denied the right to
provide the kind of access that has now radically expanded (Sullivan, 2012), and
so they are losing their central position in the dissemination of and access to
knowledge. The decades of
retrenchment in social security, unemployment support, social housing, arts and
education have made libraries, with their resources open to broad communities,
into a stand-in for failing welfare institutions (Mattern, 2014). But with the
onset of the 2008 crisis, libraries have been subjected to brutal cuts, affecting
their ability to stay open and serve their communities, in particular
marginalized groups and children (Kean, 2017). Like universities, libraries have
thus seen
their capacity to address structural exclusions of marginalized groups and
provide support to those affected by precarity compromised.
Libraries thus find themselves struggling to provide legitimation for the support
they receive. So they re-invent and re-brand themselves as ‘third places’ of
socialization for the elderly and the youth (Engel-Johnson, 2017), spaces where
the unemployed can find assistance with their job applications and the socially
marginalized a public location with no economic pressures. None of these
functions, however, is new to public libraries; they have long performed them
alongside their primary function – providing universal access to all written
knowledge – in which they are nowadays, in the digital economy, severely
limited.
All that innovation that universities and libraries are undertaking seems to
amount to little innovation at all. It is rather a game of hide and seek, behind
which these
institutions are struggling to maintain their substantive mission and operation.
So, what are we to make of this position of compromised institutional agency? In
a situation where progressive social agency no longer seems to be within the
remit of these institutions? The fact is that with the growing crisis of
precarity and social reproduction – where fewer and fewer people have time free
from casualized work to study, the conditions to do so at home, or the financial
prospects to justify incurring debt by enrolling in a university – these
institutions should, could and sometimes do
provide sustaining social arrangements and resources – not only to academics,
students and patrons, but also to a general public – that can reduce economic
imperatives and diminish insecurities. In doing this they also create
institutional preconditions that, unlike business-cycle-driven organizations,
can support the structural repair that the present double crisis demands.
If the historical avant-garde was the birthing of the new, repeating its
radicalism nowadays would seem to imply cutting through the fog of innovation.
Its radicalism would be to inhabit the non-new: the non-new that persists and in
the background sustains the broken social and technological world that techno-
capitalist innovation wants to disrupt and transcend. Bullshit jobs and
simulating
busyness at work are correlative of the fact that free time and the abundance of
social wealth created by growing productivity have paradoxically resulted in
underemployment and inequality. We’re at a juncture: accelerated crisis of
capitalism, accelerated climate change, accelerated erosion of political systems
are trajectories that leave little space for repair. The full surrender of
technological development into the hands of market forces leaves even less.
The avant-garde radicalism nowadays is standing with the social institutions that
permit, speaking with Lauren Berlant, the ‘loose convergence’ of social
heterogeneity needed to construct ‘transitional form[s]’ (2016: 394). Unlike the
solutionism of techno-communities (Morozov, 2013) that tend to reduce
uncertainty of situations and conflict of values, social institutions permit
negotiating conflict and complexity in the situations of crisis that Jerome Ravetz
calls postnormal – situations ‘where facts are uncertain, values in dispute, stakes
high and decisions urgent’ (Ravetz, 2003: 75). On that view, libraries and
universities, as social infrastructures, provide a chance for retardation and
slowdown, and a capacity for collective disobedience. Against the radicalizing
exclusions of property and labor market, they can lower insecurities and
disobediently demand universal access to knowledge and education, a mass
intellectuality and autonomous critical pedagogy that increasingly seems a thing
of the past. Against the imposition to translate quality into metrics and capture
short-term values through assessment, they can resist the game of simulation.
While the playful simulation of reality was a thing in 1967, in 2017 it is no
longer. Libraries and universities can stop faking ‘innovativity’, ‘efficiency’ and
‘utility’.

Custodians.online, the second letter
On 30 November 2016, a second missive was published by Custodians.online
(2016). On the twentieth anniversary of UbuWeb, ‘the single-most important
archive of avant-garde and outsider art’ on the Internet, the drafters of the letter
followed up on their initial call to acts of care for the infrastructure of our shared
knowledge commons that the first letter ended with. The second letter was a gift
card to Ubu, announcing that it had received two mirrors, i.e. exact copies of the
Ubu website accessible from servers in two different locations – one in Iceland,
supported by a cultural activist community, and another one in Switzerland,
supported by a major art school – whose maintenance should ensure that Ubu
remains accessible even if its primary server is taken down.
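The idea behind such a mirror is technically modest: because Ubu is a static
site of plain HTML files, a complete copy can be produced by crawling it and
saving every same-host page and file to disk. What follows is only an
illustrative sketch of such a crawl – it is not the actual tooling behind the
Custodians.online mirrors, and, the start URL aside, the crawl limit, politeness
delay and output directory are hypothetical choices made for this example:

```python
import os
import time
import urllib.parse
import urllib.request
from html.parser import HTMLParser


class LinkExtractor(HTMLParser):
    """Collect href/src attribute values from an HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.links.append(value)


def mirror(start_url, out_dir="mirror", max_pages=50, delay=1.0):
    """Breadth-first crawl that saves same-host resources to local files."""
    host = urllib.parse.urlparse(start_url).netloc
    queue, seen = [start_url], set()
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read()
        except OSError:
            continue  # skip unreachable resources
        # Derive a local file path from the URL path.
        path = urllib.parse.urlparse(url).path or "/"
        if path.endswith("/"):
            path += "index.html"
        local = os.path.join(out_dir, path.lstrip("/"))
        os.makedirs(os.path.dirname(local), exist_ok=True)
        with open(local, "wb") as f:
            f.write(body)
        # Follow links, but only within the mirrored host.
        if path.endswith(".html"):
            parser = LinkExtractor()
            parser.feed(body.decode("utf-8", errors="replace"))
            for link in parser.links:
                absolute, _ = urllib.parse.urldefrag(
                    urllib.parse.urljoin(url, link))
                if urllib.parse.urlparse(absolute).netloc == host:
                    queue.append(absolute)
        time.sleep(delay)  # be polite to the origin server


if __name__ == "__main__":
    mirror("https://www.ubu.com/")
```

Keeping the copy as ordinary files on disk also honors the advice of the letter
quoted below: a mirror of this kind functions offline, and one can ‘take the
hard disk and run’.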
McKenzie Wark in their text on UbuWeb poignantly observes that shadow
libraries are:
tactics for intervening in three kinds of practices, those of the art-world, of
publishing and of scholarship. They respond to the current institutional, technical
and political-economic constraints of all three. As it says in the Communist
Manifesto, the forces for social change are those that ask the property question.
While détournement was a sufficient answer to that question in the era of the
culture industries, they try to formulate, in their modest way, a suitable tactic for
answering the property question in the era of the vulture industries. (Wark, 2015:
116)

As we claimed, the avant-garde radicalism can be recuperated for the present
through the gestures of disobedience, deceleration and demands for
inclusiveness. Ubu already hints toward such recuperation on three coordinates:
1) practiced opposition to the regime of intellectual property, 2) transformative
use of old technologies, and 3) a promise of universal access to knowledge and
education, helping to foster mass intellectuality and critical pedagogy.
The first Custodians.online letter was drafted to voice the need for a collective
disobedience. Standing up openly in public for the illegal acts of piracy, which
are, however, made legitimate by the fact that students, academics and
researchers across the world massively contribute and resort to pirate repositories
of scholarly texts, holds the potential to overturn the noxious pattern of court
cases that have consistently led to such resources being shut down.
However, the acts of disobedience need not be made explicit in the language of
radicalism. For a public institution, disobedience can also be doing what should
not be done: long-term commitment to maintenance – for instance, of a mirror –
while dealing institutionally with all the conflicts and challenges that doing this
publicly entails.

The second Custodians.online letter was drafted to suggest that opportunity:
In a world of money-crazed start-ups and surveillance capitalism, copyright
madness and abuse, Ubu represents an island of culture. It shows what a single
person, with dedication and focus, can achieve. There are lessons to be drawn
from this:

1) Keep it simple and avoid constant technology updates. Ubu is plain
HTML, written in a text-editor.
2) Even a website should function offline. One should be able to take the
hard disk and run. Avoid the cloud – computers of people you don’t
know and who don’t care about you.
3) Don’t ask for permission. You would have to wait forever, turning
yourself into an accountant and a lawyer.
4) Don’t promise anything. Do it the way you like it.
5) You don’t need search engines. Rely on word-of-mouth and direct
linking to slowly build your public. You don’t need complicated
protocols, digital currencies or other proxies. You need people who
care.
6) Everything is temporary, even after 20 years. Servers crash, disks die,
life changes and shit happens. Care and redundancy is the only path to
longevity. Care and redundancy is the reason why we decided to run
mirrors. We care and we want this resource to exist… should shit
happen, this multiplicity of locations and institutions might come in
handy. We will see. Find your Ubu. It’s time to mirror each other in
solidarity. (Custodians.online, 2016)

references
Anderson, K. and G. Peters (2016) ‘The trouble with negative emissions’, Science,
354 (6309): 182– 183.
Anderson, P. (1984) ‘Modernity and revolution’, New Left Review, (144): 96– 113.
Bailey, M. and D. Freedman (2011) The assault on universities: A manifesto for
resistance. London: Pluto Press.
Baty, P. (2017) ‘These maps could change how we understand the role of the
world’s top universities’, Times Higher Education, 27 May. [https://www.timeshighereducation.com/blog/these-maps-could-change-how-we-understand-role-worlds-top-universities]
Beniger, J. (1989) The control revolution: Technological and economic origins of the
information society. Cambridge: Harvard University Press.
Berlant, L. (2016) ‘The commons: Infrastructures for troubling times’,
Environment and Planning D: Society and Space, 34 (3): 393– 419.
Bodó, B. (2015) ‘Libraries in the post-scarcity era’, in H. Porsdam (ed.)
Copyrighting creativity: Creative values, cultural heritage institutions and systems
of intellectual property. London: Routledge.
Bower, J.L. and C.M. Christensen. (1996) ‘Disruptive technologies: Catching the
wave’, The Journal of Product Innovation Management, 1(13): 75– 76.
Branum, C. (2008) ‘The myth of library neutrality’. [https://candisebranum.wordpress.com/2014/05/15/the-myth-of-library-neutrality/]
Brenner, R. (2006) The economics of global turbulence: The advanced capitalist
economies from long boom to long downturn, 1945-2005. London: Verso.
Brown, W. (2015) Undoing the demos: Neoliberalism’s stealth revolution. Cambridge:
MIT Press.
Brynjolfsson, E. and A. McAfee (2012) Race against the machine: How the digital
revolution is accelerating innovation, driving productivity, and irreversibly
transforming employment and the economy. Lexington: Digital Frontier Press.
Bürger, P. (1984) Theory of the avant-garde. Manchester: Manchester University
Press.
Collini, S. (2017) Speaking of universities. London: Verso.
Critchley, S. (2007) Infinitely demanding: Ethics of commitment, politics of
resistance. London: Verso.
Custodians.online (2015) ‘In solidarity with Library Genesis and Sci-Hub’.
[http://custodians.online]
Custodians.online (2016) ‘Happy birthday, Ubu.com’. [http://custodians.online/ubu]
Dolenec, D. (2016) ‘The implausible knowledge triangle of the Western Balkans’,
in S. Gupta, J. Habjan and H. Tutek (eds.) Academic labour, unemployment and
global higher education: Neoliberal policies of funding and management. New
York: Springer.
Engel-Johnson, E. (2017) ‘Reimagining the library as an anti-café’, Discover
Society. [http://discoversociety.org/2017/04/05/reimagining-the-library-as-an-anti-cafe/]
Foster, H. (1996) The return of the real: The avant-garde at the end of the century.
Cambridge: MIT Press.
Frey, C.B. and M. Osborne (2013) The future of employment: How susceptible are
jobs to computerisation? Oxford: Oxford Martin School. [http://www.oxfordmartin.ox.ac.uk/publications/view/1314]
Gill, J. (2015) ‘Losing our place in the vanguard?’, Times Higher Education, 18
June. [https://www.timeshighereducation.com/opinion/losing-our-place-vanguard]
Global Footprint Network (2013) Ecological wealth of nations. [http://www.footprintnetwork.org/ecological_footprint_nations/]
Graeber, D. (2013) ‘On the phenomenon of bullshit jobs’, STRIKE! Magazine.
[https://strikemag.org/bullshit-jobs/]
Graham, S. and N. Thrift (2007) ‘Out of order: Understanding repair and
maintenance’, Theory, Culture & Society, 24(3): 1-25.
Groys, B. (2014) On the new. London: Verso.
Hall, G. (2016) The uberfication of the university. Minneapolis: University of
Minnesota Press.
Harris, M.H. (1999) History of libraries of the western world. London: Scarecrow
Press.
Henning, B. (2017) ‘Unequal elite: World university rankings 2016/17’, Views of
the World. [http://www.viewsoftheworld.net/?p=5423]
Hu, T.-H. (2015) A prehistory of the cloud. Cambridge: MIT Press.
Hughes, T.P. (1993) Networks of power: Electrification in Western society, 1880-1930.
Baltimore: JHU Press.
Huws, U. (2016) ‘Logged labour: A new paradigm of work organisation?’, Work
Organisation, Labour and Globalisation, 10(1): 7– 26.
Iverson, S. (1999) ‘Librarianship and resistance’, Progressive Librarian, 15:
14-19.
Jackson, S.J. (2014) ‘Rethinking repair’, in T. Gillespie, P.J. Boczkowski and K.A.
Foot (eds.), Media technologies: Essays on communication, materiality, and
society. Cambridge: MIT Press.
James, S. (2012) ‘A woman’s place’, in S. James, Sex, race and class, the perspective
of winning: A selection of writings 1952-2011. Oakland: PM Press.
article | 365



Jansen, S.C. (1989) ‘Gender and the information society: A socially structured
silence’. Journal of Communication, 39(3): 196– 215.
Johnson, R. (2014) ‘Really useful knowledge’, in A. Gray, J. Campbell, M.
Erickson, S. Hanson and H. Wood (eds.) CCCS selected working papers: Volume
1. London: Routledge.
Kean, D. (2017) ‘Library cuts harm young people’s mental health services, warns
lobby’, The Guardian, 13 January. [http://www.theguardian.com/books/2017/jan/13/library-cuts-harm-young-peoples-mental-health-services-warns-lobby]
Kiaer, C. (2005) Imagine no possessions: The socialist objects of Russian
Constructivism. Cambridge: MIT Press.
Krajewski, M. (2014) World projects: Global information before World War I.
Minneapolis: University of Minnesota Press.
Krajewski, M. (2011) Paper machines: About cards & catalogs, 1548-1929.
Cambridge: MIT Press.
Larivière, V., S. Haustein and P. Mongeon (2015) ‘The oligopoly of academic
publishers in the digital era’, PLoS ONE, 10(6). [http://dx.doi.org/10.1371/journal.pone.0127502]
Latouche, S. (2009) Farewell to growth. Cambridge: Polity Press.
Liang, L. (2012) ‘Shadow libraries’, e-flux, (37). [http://www.e-flux.com/journal/37/61228/shadow-libraries/]
Loftus, P.J., A.M. Cohen, J.C.S. Long and J.D. Jenkins (2015) ‘A critical review of
global decarbonization scenarios: What do they tell us about feasibility?’ Wiley
Interdisciplinary Reviews: Climate Change, 6(1): 93– 112.
Marx, K. (1973 [1857]) The grundrisse. London: Penguin Books in association with
New Left Review. [https://www.marxists.org/archive/marx/works/1857/grundrisse/ch13.htm]
Mattern, S. (2014) ‘Library as infrastructure’, Places Journal. [https://placesjournal.org/article/library-as-infrastructure/]
McKinsey Global Institute (2018) Harnessing automation for a future that works.
[https://www.mckinsey.com/global-themes/digital-disruption/harnessing-automation-for-a-future-that-works]
Morozov, E. (2013) To save everything, click here: The folly of technological
solutionism. New York: PublicAffairs.
Mumford, L. (1967) The myth of the machine: Technics and human development.
New York: Harcourt Brace Jovanovich.
Noble, D.F. (2011) Forces of production. New Brunswick: Transaction Publishers.
Oliver, M. (1990) The politics of disablement. London: Macmillan Education.
Perez, C. (2011) ‘Finance and technical change: A long-term view’, African
Journal of Science, Technology, Innovation and Development, 3(1): 10-35.
Piketty, T. and L. Chancel (2015) Carbon and inequality: From Kyoto to Paris –
Trends in the global inequality of carbon emissions (1998-2013) and prospects for
an equitable adaptation fund. Paris: Paris School of Economics.
[http://piketty.pse.ens.fr/files/ChancelPiketty2015.pdf]
Pugh, E.W., L.R. Johnson and J.H. Palmer (1991) IBM’s 360 and early 370 systems.
Cambridge: MIT Press.
Ravetz, J. (2003) ‘Models as metaphor’, in B. Kasemir, J. Jaeger, C.C. Jaeger and
M.T. Gardner (eds.) Public participation in sustainability science: A handbook,
Cambridge: Cambridge University Press.
Sample, I. (2012) ‘Harvard university says it can’t afford journal publishers’
prices’, The Guardian, 24 April. [https://www.theguardian.com/science/2012/apr/24/harvard-university-journal-publishers-prices]
Schumpeter, J.A. (2013 [1942]) Capitalism, socialism and democracy. London:
Routledge.
Spieker, S. (2008) The big archive: Art from bureaucracy. Cambridge: MIT Press.
Sullivan, M. (2012) ‘An open letter to America’s publishers from ALA President
Maureen Sullivan’, American Library Association, 28 September. [http://www.ala.org/news/2012/09/open-letter-america%E2%80%99s-publishers-ala-president-maureen-sullivan]
‘The Cost of Knowledge’ (2012). [http://thecostofknowledge.com/]
Times Higher Education (2017) ‘World university rankings’. [https://www.timeshighereducation.com/world-university-rankings/2018/world-ranking]
Turner, F. (2010) From counterculture to cyberculture: Stewart Brand, the Whole
Earth Network, and the rise of digital utopianism. Chicago: University of Chicago
Press.
Vaneigem, R. (2012) The revolution of everyday life. Oakland: PM Press.
Wark, M. (2015) ‘Metadata punk’, in T. Medak and M. Mars (eds.) Public library.
Zagreb: What, How; for Whom/WHW & Multimedia Institute.

Wigglesworth, R. and E. Platt (2016) ‘Value of negative-yielding bonds hits
$13.4tn’, Financial Times, 12 August. [http://www.ft.com/cms/s/0/973b6060-60ce-11e6-ae3f-77baadeb1c93.html]

the authors
Marcell Mars is a research associate at the Centre for Postdigital Cultures at Coventry
University (UK). Mars is one of the founders of Multimedia Institute/MAMA in Zagreb.
His research ‘Ruling Class Studies’, started at the Jan van Eyck Academy (2011),
examines state-of-the-art digital innovation, adaptation, and intelligence created by
corporations such as Google, Amazon, Facebook, and eBay. He is a doctoral student at
Digital Cultures Research Lab at Leuphana University, writing a thesis on ‘Foreshadowed
Libraries’. Together with Tomislav Medak he founded Memory of the World/Public
Library, for which he develops and maintains software infrastructure.
Email: ki.be@rkom.uni.st
Tomislav Medak is a doctoral student at the Centre for Postdigital Cultures at Coventry
University. Medak is a member of the theory and publishing team of the Multimedia
Institute/MAMA in Zagreb, as well as an amateur librarian for the Memory of the
World/Public Library project. His research focuses on technologies, capitalist
development, and postcapitalist transition, particularly on economies of intellectual
property and unevenness of technoscience. He authored two short volumes: ‘The Hard
Matter of Abstraction—A Guidebook to Domination by Abstraction’ and ‘Shit Tech for A
Shitty World’. Together with Marcell Mars he co-edited ‘Public Library’ and ‘Guerrilla
Open Access’.
Email: tom@mi2.hr


Mars & Medak
System of a Takedown
2019


System of a Takedown: Control and De-commodification in the Circuits of Academic Publishing
Marcell Mars and Tomislav Medak

Since 2012 the Public Library/Memory of the World1 project has
been developing and publicly supporting scenarios for massive
disobedience against the current regulation of production and
circulation of knowledge and culture in the digital realm. While
the significance of that year may not be immediately apparent to
everyone, across the peripheries of an unevenly developed world
of higher education and research it produced a resonating void.
The takedown of the book-­sharing site Library.nu in early 2012
gave rise to an anxiety that the equalizing effect that its piracy
had created—­the fact that access to the most recent and relevant
scholarship was no longer a privilege of rich academic institutions
in a few countries of the world (or, for that matter, the exclusive
preserve of academia to begin with)—­would simply disappear into
thin air. While alternatives within these peripheries quickly filled
the gap, it was only through an unlikely set of circumstances that
they were able to do so, let alone continue to exist in light of the
legal persecution they now also face.

The starting point for the Public Library/Memory of the World
project was a simple consideration: the public library is the institutional form that societies have devised in order to make knowledge
and culture accessible to all their members regardless of social or
economic status. There’s a political consensus that this principle of
access is fundamental to the purpose of a modern society. Yet, as
digital networks have radically expanded the access to literature
and scientific research, public libraries were largely denied the
ability to extend to digital “objects” the kind of de-­commodified
access they provide in the world of print. For instance, libraries
frequently don’t have the right to purchase e-­books for lending and
preservation. If they do, they are limited by how many times—­
twenty-­six in the case of one publisher—­and under what conditions
they can lend them before not only the license but the “object”
itself is revoked. In the case of academic journals, it is even worse:
as they move to predominantly digital models of distribution,
libraries can provide access to and “preserve” them only for as
long as they pay extortionate prices for ongoing subscriptions. By
building tools for organizing and sharing electronic libraries, creating digitization workflows, and making books available online, the
Public Library/Memory of the World project is aimed at helping to
fill the space that remains denied to real-­world public libraries. It is
obviously not alone in this effort. There are many other platforms,
some more public, some more secretive, working to help people
share books. And the practice of sharing is massive.
—­https://www.memoryoftheworld.org

Capitalism and Schizophrenia
New media remediate old media. Media pay homage to their
(mediatic) predecessors, which themselves pay homage to their
own (mediatic) predecessors. Computer graphics remediate film,
which remediates photography, which remediates painting, and so
on (McLuhan 1965, 8; Bolter and Grusin 1999). Attempts to understand new media technologies always settle on a set of metaphors
(of the old and familiar), in order to approximate what is similar,
and yet at the same time name the new. Every such metaphor has
its semiotic distance, decay, or inverse-square law that draws the limit to how
far the metaphor can go in its explanation of the phenomenon to which it is
applied. The intellectual work in the Age of
Mechanical Reproduction thus received an unfortunate metaphor:
intellectual property. A metaphor modeled on the scarce and
exclusive character of property over land. As the Age of Mechanical
Reproduction became more and more the Age of Discrete and
Digital Reproduction, another metaphor emerged, one that reveals
the quandary left after decades of decay resulting from the increasing distanciation of intellectual property from the intellectual work
it seeks to regulate, and that metaphor is: schizophrenia.
Technologies compete with each other—­the discrete and the
digital thus competes with the mechanical—­and the aftermath of
these clashes can be dramatic. People lose their jobs, companies
go bankrupt, disciplines lose their departments, and computer
users lose their old files. More often than not, clashes between
competing technologies create antagonisms between different
social groups. Their voices are (sometimes) heard, and society tries
to balance their interests.
If the institutional remedies cannot resolve the social antagonism,
the law is called on to mediate. Yet in the present, the legal system
only reproduces the schizoid impasse where the metaphor of property over land is applied to works of intellect that have in practical
terms become universally accessible in the digital world. Court
cases do not result in a restoration of balance but rather in the
confirmation of entrenched interests. It is, however, not necessary
that courts act in such a one-­sided manner. As Cornelia Vismann
(2011) reminds us in her analysis of the ancient roots of legal mediation, the juridical process has two facets: first, a theatrical aspect
that has common roots with the Greek dramatic theatre and its
social function as a translator of a matter of conflict into a case for
weighted juridical debate; second, an agonistic aspect not unlike a
sporting competition where a winner has to be decided, one that
leads to judgment and sanction. In the matter of copyright versus
access, however, the fact that courts cannot look past the metaphor of intellectual property, which reduces any understanding of
our contemporary technosocial condition to an analogy with the
scarcity-­based language of property over land, has meant that they
have failed to adjudicate a matter of conflict between the equalizing effects of universal access to knowledge and the guarantees of
rightful remuneration for intellectual labor into a meaningful social
resolution. Rather they have primarily reasserted the agonistic
aspect by supporting exclusively the commercial interests of large
copyright industries that structure and deepen that conflict at the
societal level.
This is not surprising. As many other elements of contemporary
law, the legal norms of copyright were articulated and codified
through the centuries-­long development of the capitalist state
and world-system. The legal system is, as Nicos Poulantzas (2008,
25–­26) suggests, genetically structured by capitalist development.
And yet at the same time it is semi-­autonomous; the development
of its norms and institutional aspects is largely endogenous and
partly responsive to the specific needs of other social subsystems.
Still, if the law and the courts are the codified and lived rationality
of a social formation, then the choice of intellectual property as a
metaphor in capitalist society comes as no surprise, as its principal
objective is to institute a formal political-­economic framework for
the commodification of intellectual labor that produces knowledge
and culture. There can be no balance, only subsumption and
accumulation. Capitalism and schizophrenia.
Schizophrenia abounds wherever the discrete and the digital
breaking barriers to access meets capitalism. One can only wonder
how the conflicting interests of different divisions get disputed
and negotiated in successful corporate giants like Sony Group
where Sony Pictures Entertainment,2 Sony Music Entertainment3
and Sony Computer Entertainment coexist under the same roof
with the Sony Electronics division, which invented the Walkman
back in 1979 and went on to manufacture devices and gadgets like
home (and professional) audio and video players/recorders (VHS,
Betamax, TV, HiFi, cassette, CD/DVD, mp3, mobile phones, etc.),
storage devices, personal computers, and game consoles. In the
famous 1984 Betamax case (“Sony Corp. of America v. Universal
City Studios, Inc.,” Wikipedia 2015), Universal Studios and the Walt
Disney Company sued Sony for aiding copyright infringement with
their Betamax video recorders. Sony won. The court decision in
favor of fair use rather than copyright infringement laid the legal
ground for home recording technology as the foundation of future
analog, and subsequently digital, content sharing.
Five years later, Sony bought its first major Hollywood studio:
Columbia Pictures. In 2004 Sony Music Entertainment merged with
Bertelsmann Music Group to create Sony BMG. However, things
changed as Sony became the content producer and we entered the
age of the discrete and the digital. Another five years later, in 2009,
Sony BMG sued Joel Tenenbaum for downloading and then sharing
thirty-­one songs. The jury awarded US$675,000 to the music
companies (US$22,000 per song). This is known as “the second
file-­sharing case.” “The first file-­sharing case” was 2007’s Capitol Records, Inc. v. Thomas-­Rasset, which concerned the downloading of
twenty-four songs. In that first file-sharing case, the jury awarded the music
companies US$1,920,000 in statutory damages (US$80,000
per song). The defendant, Jammie Thomas, was a Native American
mother of four from Brainerd, Minnesota, who worked at the time
as a natural resources coordinator for the Mille Lacs Band of the
Native American Ojibwe people. The conflict between access and
copyright thus stood out in sharp social relief.
Encouraged by the court decisions in the years that followed, the
movie and music industries have started to publicly claim staggering numbers in annual losses: US$58 billion and 370,000 lost jobs
in the United States alone. The purported losses in sales were,
however, at least seven times bigger than the actual losses and,
if the jobs figures had been true, after only one year there would
have been no one left working in the content industry (Reid 2012).
Capitalism and schizophrenia.
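
The per-song figures in these two verdicts can be checked directly from the
totals cited above; a minimal recomputation, using only the numbers given in
the text (the Tenenbaum figure is rounded, as it is in the text):

```python
# Sony BMG v. Tenenbaum: US$675,000 for thirty-one songs.
print(round(675_000 / 31))   # 21774, roughly the US$22,000 per song cited
# Capitol Records v. Thomas-Rasset: US$1,920,000 for twenty-four songs.
print(1_920_000 // 24)       # 80000, the US$80,000 per song cited
```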

If there is a reason to make an exception from the landed logic of
property being imposed onto the world of the intellect, a reason
to which few would object, it would be for access for educational
purposes. Universities in particular give an institutional form to
the premise that equal access to knowledge is a prerequisite for
building a society where all people are equal.
In this noble endeavor to make universal access to knowledge
central to social development, some universities stand out more
than the others. Consider, for example, the Massachusetts Institute
of Technology (MIT). The Free Culture and Open Access movements
have never hidden their origins, inspiration, and model in the
success of the Free Software Movement, which was founded in
1984 by Richard Stallman while he was working at the MIT Artificial
Intelligence lab. It was at the MIT Museum that the “Hall of Hacks”
was set up to proudly display the roots of hacking culture. Hacking
culture at MIT takes many shapes and forms. MIT hackers famously
put a fire truck (2006) and a campus police car (1994) onto the
roof of the Great Dome of the campus’s Building 10; they landed
(and then exploded) a weather balloon onto the pitch of Harvard
Stadium during a Harvard–­Yale football game; turned the quote
that “getting an education from MIT is like taking a drink from a Fire
Hose” into a literal fire hydrant serving as a drinking fountain in
front of the largest lecture hall on campus; and many, many other
“hacks” (Peterson 2011).
The World Wide Web Consortium was founded at MIT in 1993.
Presently its mission states as its goal “to enable human communication, commerce, and opportunities to share knowledge,”
on the principles of “Web for All” and the corresponding, more
technologically focused “Web on Everything.” Similarly, MIT began
its OpenCourseWare project in 2002 in order “to publish all of
[MIT’s] course materials online and make them widely available to
everyone” (n.d.). The One Laptop Per Child project was created in
2005 in order to help children “learn, share, create, and collaborate” (2010). Recently the MIT Media Lab (2017) has even started its
own Disobedience Award, which “will go to a living person or group
engaged in what we believe is extraordinary disobedience for
the benefit of society . . . seeking both expected and unexpected
nominees.” When it comes to the governance of access to MIT’s
own resources, it is well known that anyone who is registered and
connected to the “open campus” wireless network, either by being
physically present or via VPN, can search JSTOR, Google Scholar,
and other databases in order to access otherwise paywalled journals from major publishers such as Reed Elsevier, Wiley-­Blackwell,
Springer, Taylor and Francis, or Sage.
The MIT Press has also published numerous books that we love
and without which we would have never developed the Public
Library/Memory of the World project to the stage where it is now.
For instance, only after reading Markus Krajewski’s Paper Machines: About Cards & Catalogs, 1548–­1929 (2011) and learning how
conceptually close librarians came to the universal Turing machine
with the invention of the index card catalog did we center the
Public Library/Memory of the World around the idea of the catalog.
Eric von Hippel’s Democratizing Innovation (2005) taught us how end
users could become empowered to innovate and accordingly we
have built our public library as a distributed network of amateur
librarians acting as peers sharing their catalogs and books. Sven
Spieker’s The Big Archive: Art from Bureaucracy (2008) showed us the
exciting hybrid meta-­space between psychoanalysis, media theory,
and conceptual art one could encounter by visiting the world of
catalogs and archives. Understanding capitalism and schizophrenia would have been hard without Semiotext(e)’s translations of
Deleuze and Guattari, and remaining on the utopian path would
have been impossible if not for our reading of Cybernetic Revolutionaries (Medina 2011), Imagine No Possessions (Kiaer 2005), or Art
Power (Groys 2008).

Our Road into Schizophrenia, Commodity
Paradox, Political Strategy
Our vision for the Public Library/Memory of the World resonated
with many people. After the project initially gained a large number
of users, and was presented in numerous prominent artistic
venues such as Museum Reina Sofía, Transmediale, Württembergischer Kunstverein, Calvert22, 98weeks, and many more, it was no
small honor when Eric Kluitenberg and David Garcia invited us to
write about the project for an anthology on tactical media that was
to be published by the MIT Press. Tactical media is exactly where
we would situate ourselves on the map. Building on Michel de
Certeau’s concept of tactics as agency of the weak operating in the
terrain of strategic power, the tactical media (Tactical Media Files
2017) emerged in the political and technological conjuncture of the
1990s. Falling into the “art-­into-­life” lineage of historic avant-­gardes,
Situationism, DIY culture, techno-­hippiedom, and media piracy, it
constituted a heterogeneous field of practices and a manifestly
international movement that combined experimental media and
political activism into interventions that contested the post–­Cold
War world of global capitalism and preemptive warfare on a hybrid
terrain of media, institutions, and mass movements. Practices of
tactical media ranged from ephemeral media pranks, hoaxes, and
hacktivism to reappropriations of media apparatuses, institutional
settings, and political venues. We see our work as following in
that lineage of recuperation of the means of communication from
their capture by personal and impersonal structures of political or
economic power.
Yet the contract for our contribution that the MIT Press sent us in
early 2015 was an instant reminder of the current state of affairs
in academic publishing: in return for our contribution and transfer
of our copyrights, we would receive no compensation: no right to
wage and no right to further distribute our work.
Only weeks later our work would land us fully into schizophrenia:
the Public Library/Memory of the World received two takedown
notices from the MIT Press for books that could be found in its
back then relatively small yet easily discoverable online collection
located at https://library.memoryoftheworld.org, including a notice
for one of the books that had served as an inspiration to us: Art
Power. First, no wage and, now, no access. A true paradox of the
present-­day system of knowledge production: products of our
labor are commodities, yet the labor-­power producing them is
denied the same status. While the project’s vision resonates with
many, including the MIT Press, it has to be shut down. Capitalism
and schizophrenia.4
Or maybe not. Maybe we don’t have to end up in that impasse.
Starting from the two structural circumstances imposed on us by
the MIT Press—­the denial of wage and the denial of access—­we
can begin to analyze why copyright infringement is not merely, as
the industry and the courts would have it, a matter of illegality, but
rather a matter of legitimate action.
Over the past three decades a deep transformation, induced by
the factors of technological change and economic restructuring,
has been unfolding at different scales, changing the way works
of culture and knowledge are produced and distributed across
an unevenly developed world. As new technologies are adopted,
generalized, and adapted to the realities of the accumulation
process—­a process we could see unfolding with the commodification of the internet over the past fifteen years—­the core and
the periphery adopt different strategies of opposition to the
inequalities and exclusions these technologies start to reproduce.
The core, with its emancipatory and countercultural narratives,
pursues strategies that develop legal, economic, or technological
alternatives. However, these strategies frequently fail to secure
broader transformative effects as the competitive forces of the
market appropriate, marginalize, or make obsolete the alternatives
they advocate. Such seems to have been the destiny of much of the
free software, open access, and free culture alternatives that have
developed over this period.
In contrast, the periphery, in order to advance, relies on strategies
of “stealing” that bypass socioeconomic barriers by refusing to
submit to the harmonized regulation that sets the frame for global
economic exchange. The piracy of intellectual property or industrial
secrets thus creates a shadow system of exchange resisting the
asymmetries of development in the world economy. However, its
illegality serves as a pretext for the governments and companies of
the core to devise and impose further controls over the technosocial systems that facilitate these exchanges.
Both strategies develop specific politics—­a politics of reform, on
the one hand, and a politics of obfuscation and resistance, on the
other—­yet both are defensive politics that affirm the limitations
of what remains inside and what remains outside of the politically
legitimate.
The copyright industry giants of the past and the IT industry giants
of the present are thus currently sorting out to whose greater
benefit this new round of commodification will work. For those
who find themselves outside of the camps of these two factions
of capital, there’s a window of opportunity, however, to reconceive
the mode of production of literature and science that has been
with us since the beginning of the print trade and the dawn of capitalism. It’s a matter of change, at the tail end of which ultimately
lies a dilemma: whether we’re going to live in a more equal or a
more unjust, a more commonised or a more commodified world.

Authorship, Law, and Legitimacy
Before we can talk of such structural transformation, the normative
question we expect to be asked is whether something that is considered a matter of law and juridical decision can be made a matter
of politics and political process. Let’s see.
Copyright has a fundamentally economic function—­to unambiguously establish individualized property in the products of creative
labor. A clear indication of this economic function is the substantive requirement of originality that the work is expected to have
in order to be copyrightable. Legal interpretations set a very low
standard on what counts as original, as their function is no more
than to demarcate one creative contribution from another. Once
a legal title is unambiguously assigned, there is a person holding
property with whose consent the contracting, commodification,
and marketing of the work can proceed.5 In that respect copyright
is not that different from the requirement of formal freedom that
is granted to a laborer to contract out their own labor-­power as a
commodity to capital, giving capital authorization to extract maximum productivity and appropriate the products of the laborer’s
labor.6 Copyright might be just a more efficient mechanism of
exploitation as it unfolds through selling of produced commodities
and not labor-power. The art market obscures and mediates the
capital-labor relation.
When we talk today of illegal copying, we primarily mean an
infringement of the legal rights of authors and publishers. There’s an
immediate assumption that the infringing practice of illegal copying
and distribution falls under the domain of juridical sanction, that it is
a matter of law. Yet if we look to the history of copyright, the illegality
of copying was a political matter long before it became a legal one.
Publisher’s rights, author’s rights, and mechanisms of reputation—­
the three elements that are fundamental to the present-­day
copyright system—­all have their historic roots in the context of
absolutism and early capitalism in seventeenth-­and eighteenth-­
century Europe. Before publishers and authors were given a
temporary monopoly over the exploitation of their publications
instituted in the form of copyright, they were operating in a system
where they were forced to obtain a privilege to print books from
royal censors. The first printing privileges granted to publishers, in
early seventeenth-­century Great Britain,7 came with the responsibility of publishers to control what was being published and
disseminated in a growing body of printed matter that started to
reach the public in the aftermath of the invention of print and the
rise of the reading culture. The illegality in these early days of print
referred either to printing books without the permission of the
censor or printing books that were already published by another
printer in the territory where the censor held authority. The transition from the privilege tied to the publisher to the privilege tied to
the natural person of the author would unfold only later.

In the United Kingdom this transition occurred as the guild of
printers, the Stationers’ Company, failed to secure the extension of its
printing monopoly and thus, in order to continue with its business,
decided to advocate the introduction of copyright for the authors
instead. This resulted in the passing of the Copyright Act of 1709,
also known as the Statute of Anne (Rose 2010). The censoring
authority and enterprising publishers now proceeded in lockstep to
isolate the author as the central figure in the regulation of literary
and scientific production. Not only did the author receive exclusive
rights to the work, the author was also made—­as Foucault has
famously analyzed (Foucault 1980, 124)—­the identifiable subject of
scrutiny, censorship, and political sanction by the absolutist state.
Although the Romantic author slowly took the center stage in
copyright regulations, economic compensation for the work would
long remain no more than honorary. Until well into the eighteenth
century, literary writing and creativity in general were regarded as
resulting from divine inspiration and not the individual genius of
the author. Writing was a work of honor and distinction, not something requiring an honest day’s pay.8 Money earned in the growing
printing industry mostly stayed in the pockets of publishers, while
the author received literally an honorarium, a flat sum that served
as a “token of esteem” (Woodmansee 1996, 42). It is only once
authors began to voice demands for securing their material and
political independence from patronage and authority that they also
started to make claims for rightful remuneration.
Thus, before it was made a matter of law, copyright was a matter of
politics and economy.

Copyright, Labor, and Economic Domination
The full-­blown affirmation of the Romantic author-­function marks
the historic moment where a compromise is established between
the right of publishers to the economic exploitation of works and
the right of authors to rightful compensation for those works. Economically, this redistribution from publishers to authors was made
possible by the expanding market for printed books in the eighteenth and nineteenth centuries, while politically this was catalyzed
by the growing desire for the autonomy of scientific and literary
production from the system of feudal patronage and censorship
in gradually liberalizing and modernizing capitalist societies. The
newfound autonomy of production was substantially coupled to
production specifically for the market. However, this irenic balance
could not last for very long. Once the production of culture and
science was subsumed under the exigencies of the generalized
market, it had to follow the laws of commodification and competition from which no form of commodity production can escape.
By the beginning of the twentieth century, copyright expanded to
a number of other forms of creativity, transcending its primarily
literary and scientific ambit and becoming part of the broader
set of intellectual property rights that are fundamental to the
functioning and positioning of capitalist enterprise. The corporatization of the production of culture and knowledge thus brought
about a decisive break from the Romantic model that singularized
authorship in the person of the author. The production of cultural
commodities nowadays involves a number of creative inputs from
both credited (but mostly unwaged) and uncredited (but mostly
waged) contributors. The “moral rights of the author,” a substantive
link between the work and the person of the author, are markedly
out of step with these realities, yet they still perform an important
function in the moral economy of reputation, which then serves as
the legitimation of copyright enforcement and monopoly. Moral
rights allow easy attribution; incentivize authors to subsidize
publishers by self-­financing their own work in the hope of topping
the sales charts, rankings, or indexes; and help markets develop
along winner-­takes-­all principles.
The level of concentration in industries primarily concerned with
various forms of intellectual property rights is staggering. The film
industry is a US$88 billion industry dominated by six major studios
(PwC 2015c). The recorded music industry is an almost US$20
billion industry dominated by only three major labels (PwC 2015b).

The publishing industry is a US$120 billion industry where the
leading ten companies earn in revenues more than the next forty
largest publishing groups (PwC 2015a; Wischenbart 2014).

The Oligopoly and Academic Publishing
Academic publishing in particular draws the state of play into stark
relief. It’s a US$10 billion industry dominated by five publishers and
financed up to 75 percent from library subscriptions. It’s notorious
for achieving extreme year-­on-­year profit margins—­in the case of
Reed Elsevier regularly over 30 percent, with Taylor and Francis,
Springer, Wiley-­Blackwell and Sage barely lagging behind (Larivière,
Haustein, and Mongeon 2015). Given that the work of contributing
authors is not paid but rather financed by their institutions (provided, that is, that they are employed at an institution) and that
these publications nowadays come mostly in the form of electronic
articles licensed under subscription for temporary use to libraries
and no longer sold as printed copies, the public interest could be
served at a much lower cost by leaving commercial closed-­access
publishers out of the equation entirely.
But that cannot be done, of course. The chief reason for this is that
the system of academic reputation and ranking based on publish-­
or-­perish principles is historically entangled with the business of
academic publishers. Anyone who doesn’t want to put their academic career at risk is advised to steer away from being perceived
as reneging on that not-­so-­tacit deal. While this is patently clear
to many in academia, opting for the alternative of open access
means not playing by the rules, and not playing by the rules can
have real-­life consequences, particularly for younger academics.
Early career scholars have to publish in prestigious journals if they
want to advance in the highly competitive and exclusive system of
academia (Kendzior 2012).
Copyright in academic publishing has thus become simply a mechanism of the direct transfer of economic power from producers to
publishers, giving publishers an instrument for maintaining their
stranglehold on the output of academia. But publishers also have
control over metrics and citation indexes, pandering to the authors
with better tools for maximizing their impact and self-­promotion.
Reputation and copyright are extortive instruments that publishers
can wield against authors and the public to prevent an alternative
from emerging.9
The state of the academic publishing business signals how the
“copyright industries” in general might continue to control the
field as their distribution model now transitions to streaming or
licensed-­access models. In the age of cloud computing, autonomous infrastructures run by communities of enthusiasts are
becoming increasingly a thing of the past. “Copyright industries,”
supported by the complicit legal system, can now pressure proxies
for these infrastructures, such as providers of server colocation,
virtual hosting, and domain-­name network services, to enforce
injunctions for them without ever getting involved in direct, costly
infringement litigation. Efficient shutdowns of precarious shadow
systems allow for a corporate market consolidation wherein the
majority of streaming infrastructures end up under the control of a
few corporations.

Illegal Yet Justified, Collective Civil
Disobedience, Politicizing the Legal
However, when companies do resort to litigation or get involved in
criminal proceedings, they can rest assured that the prosecution
and judicial system will uphold their interests over the right of the
public to access culture and knowledge, even when the irrationality
of the copyright system lies in plain sight, as it does in the case of
academic publishing. Let’s look at two examples:
On January 6, 2011, Aaron Swartz, a prominent programmer
and hacktivist, was arrested by the MIT campus police and U.S.
Secret Service on charges of having downloaded a large number
of academic articles from the JSTOR repository. While JSTOR, with
whom Swartz reached a settlement and to whom he returned the
files, and, later, MIT, would eventually drop the charges, the federal
prosecution decided nonetheless to indict Swartz on thirteen
criminal counts, potentially leading to fifty years in prison and a
US$1 million fine. Under growing pressure from the prosecution,
Swartz committed suicide on January 11, 2013.
Given his draconian treatment at the hands of the prosecution
and the absence of institutions of science and culture that would
stand up and justify his act on political grounds, much of Swartz’s
defense focused on trying to exculpate his acts, to make them less
infringing or less illegal than the charges brought against him had
claimed, a rational course of action in irrational circumstances.
However, this was unfortunately becoming an uphill battle as the
prosecution’s attention was accidentally drawn to a statement
written by Swartz in 2008 wherein he laid bare the dysfunctionality
of the academic publishing system. In his Guerrilla Open Access
Manifesto, he wrote: “The world’s entire scientific and cultural heritage, published over centuries in books and journals, is increasingly
being digitized and locked up by a handful of private corporations. . . . Forcing academics to pay money to read the work of their
colleagues? Scanning entire libraries but only allowing the folks at
Google to read them? Providing scientific articles to those at elite
universities in the First World, but not to children in the Global
South? It’s outrageous and unacceptable.” After a no-­nonsense
diagnosis followed an even more clear call to action: “We need
to download scientific journals and upload them to file sharing
networks. We need to fight for Guerilla Open Access” (Swartz 2008).
Where a system has failed to change unjust laws, Swartz felt, the
responsibility was on those who had access to make injustice a
thing of the past.
Whether Swartz’s intent actually was to release the JSTOR repository remains subject to speculation. The prosecution has never proven that it was. In the context of the legal process, his call to action was simply taken as a matter of law and not for what it was—a matter of politics. Yet, while his political action was preempted, others have continued pursuing his vision by committing small acts of illegality on a massive scale. In June 2015 Elsevier won an injunction against Library Genesis, the largest illegal repository of electronic books, journals, and articles on the Web, and its subsidiary platform for accessing academic journals, Sci-hub. A voluntary and noncommercial project of anonymous scientists mostly from Eastern Europe, Sci-hub provides, as of the end of 2015, access to more than 41 million academic articles either stored in its database or retrieved by bypassing the paywalls of academic publishers.
academic publishers. The only person explicitly named in Elsevier’s
lawsuit was Sci-­hub’s founder Alexandra Elbakyan, who minced no
words: “When I was working on my research project, I found out
that all research papers I needed for work were paywalled. I was
a student in Kazakhstan at the time and our university was not
subscribed to anything” (Ernesto 2015). Being a computer scientist,
she found the tools and services on the internet that allowed her to
bypass the paywalls. At first, she would make articles available on
internet forums where people would file requests for the articles
they needed, but eventually she automated the process, making
access available to everyone on the open web. “Thanks to Elsevier’s
lawsuit, I got past the point of no return. At this time I either have
to prove we have the full right to do this or risk being executed like
other ‘pirates’ . . . If Elsevier manages to shut down our projects or
force them into the darknet, that will demonstrate an important
idea: that the public does not have the right to knowledge. . . .
Everyone should have access to knowledge regardless of their
income or affiliation. And that’s absolutely legal. Also the idea
that knowledge can be a private property of some commercial
company sounds absolutely weird to me” (Ernesto 2015).
If the issue of infringement is to become political, a critical mass of infringing activity has to be achieved, access has to be technologically organized, and civil disobedience collectively manifested. Only in this way do illegal acts stand a chance of being transformed into legitimate acts.


Where Law Was, There Politics Shall Be
And thus we have come full circle, back to where we started. The parallel development of liberalism, copyright, and capitalism has resulted in a system demanding that the contemporary subject act in accordance with two opposing tendencies: “more capitalist than capitalist and more proletarian than proletariat” (Deleuze and Guattari 1983, 34). Schizophrenia is, as Deleuze and Guattari argue, a condition that simultaneously embodies two disjunctive positions. Desire and blockage, flow and territory. Capitalism is the constant decoding of social blockages and territorializations aimed at liberating the production of desires and flows further and further, only to oppose them at its extreme limit. It decodes the old socius by means of private property and commodity production, privatization and abstraction, the flow of wealth and flows of workers (140). It allows contemporary subjects—including corporate entities such as the MIT Press or Sony—to embrace their contradictions and push them to their limits. But by capturing them in the orbit of the self-expanding production of value, it stops them from going beyond its own limit. It is this orbit that the law sanctions in the present, recoding schizoid subjects into the inevitability of capitalism. The result is the persistence of a capitalist reality antithetical to the common interest—commercial closed-access academic publishing—and the persistence of a hyperproletariat—an intellectual labor force too subsumed to organize and resist the reality that thrives parasitically on its social function. It’s a schizoid impasse sustained by a failed metaphor.
The revolutionary events of the Paris Commune of 1871, its mere “existence” as Marx called it,10 a brief moment of “communal luxury” set in practice, as Kristin Ross (2015) describes it, demanded that, in spite of any circumstances and reservations, one take a side. And such is our present moment of truth.
Digital networks have expanded the potential for access and created an opening for us to transform the production of knowledge and culture in the contemporary world. And yet they have likewise facilitated the capacity of intellectual property industries to optimize, to cut out the cost of printing and physical distribution. Digitization is increasingly helping them to control access, expand copyright, impose technological protection measures, consolidate the means of distribution, and capture the academic valorization process.
As the potential opening for universalizing access to culture and
knowledge created by digital networks is now closing, attempts at
private legal reform such as Creative Commons licenses have had
only a very limited effect. Attempts at institutional reform such as
Open Access publishing are struggling to go beyond a niche. Piracy
has mounted a truly disruptive opposition, but given the legal
repression it has met with, it can become an agent of change only if
it is embraced as a kind of mass civil disobedience. Where law was,
there politics shall be.
Many will object to our demand to replace the law with politicization. Transitioning from politics to law was a social achievement
as the despotism of political will was suppressed by legal norms
guaranteeing rights and liberties for authors; this much is true. But
in the face of the draconian, failed juridical rationality sustaining
the schizoid impasse imposed by economic despotism, these developments hold little justification. Thus we return once more to the
words of Aaron Swartz to whom we remain indebted for political
inspiration and resolve: “There is no justice in following unjust laws.
It’s time to come into the light and, in the grand tradition of civil
disobedience, declare our opposition to this private theft of public
culture. . . . With enough of us, around the world, we’ll not just send
a strong message opposing the privatization of knowledge—we’ll
make it a thing of the past. Will you join us?” (Swartz 2008).

Notes

1. We initially named our project Public Library because we have developed it as a technosocial project from a minimal definition that defines a public library as constituted by three elements: free access to books for every member of society, a library catalog, and a librarian (Mars, Zarroug, and Medak 2015). However, this definition covers both public libraries and the shadow libraries complementing the work of public libraries in providing digital access. We have thus decided to rename our project Memory of the World, after our project’s initial domain name. This is a phrase coined by Henri La Fontaine, whose mention we found in Markus Krajewski’s Paper Machines (2011). It turned out that UNESCO runs a project under the same name with the objective of preserving valuable archives for the whole of humanity. We have appropriated that objective. Given that this change has happened since we drafted the initial version of this text in 2015, in this text we will refer to our project by the double name Public Library/Memory of the World.
2. Sony Pictures Entertainment became the owner of two (MGM, Columbia Pictures) out of eight Golden Age major movie studios (“Major Film Studio,” Wikipedia 2015).
3. In 2012 Sony Music Entertainment was one of the Big Three majors (“Record Label,” Wikipedia 2015).
4. Since this anecdote was recounted by Marcell in his opening keynote at the Terms of Media II conference at Brown University, we have received another batch of takedown notices from the MIT Press. It seemed no small irony, because at the time the Terms of Media conference reader was rumored to be distributed by the MIT Press.
5. “In law, authorship is a point of origination of a property right which, thereafter, like other property rights, will circulate in the market, ending up in the control of the person who can exploit it most profitably. Since copyright serves paradoxically to vest authors with property only to enable them to divest that property, the author is a notion which needs only to be sustainable for an instant” (Bently 1994).
6. For more on the formal freedom of the laborer to sell his labor-power, see chapter 6 of Marx’s Capital (1867).
7. For a more detailed account of the history of printing privilege in Great Britain, but also of the emergence of peer review out of the self-censoring performed by the Royal Academy and the Académie des sciences in return for the printing privilege, see Biagioli 2002.
8. The transition of authorship from honorific to professional is traced in Woodmansee 1996.
9. Not all publishers are necessarily predatory. For instance, scholar-led open-access publishers, such as those working under the banner of Radical Open Access (http://radicaloa.disruptivemedia.org), have been experimenting with alternatives to the dominant publishing models, workflows, and metrics, radicalizing the work of conventional open access, which has by now increasingly been recuperated by big for-profit publishers, who see in open access an opportunity to assume control over the economy of data in academia. Some established academic publishers, too, have been open to experiments that go beyond mere open access and are trying to redesign how academic writing is produced, made accessible, and valorized. This essay has the good fortune of appearing as a joint publication of two such publishers: Meson Press and University of Minnesota Press.
10. “The great social measure of the Commune was its own working existence” (Marx 1871).

References

Bently, Lionel. 1994. “Copyright and the Death of the Author in Literature and Law.” The Modern Law Review 57, no. 6: 973–86. Accessed January 2, 2018. doi:10.1111/j.1468-2230.1994.tb01989.x.
Biagioli, Mario. 2002. “From Book Censorship to Academic Peer Review.” Emergences: Journal for the Study of Media & Composite Cultures 12, no. 1: 11–45.
Bolter, Jay David, and Richard Grusin. 1999. Remediation: Understanding New Media. Cambridge, Mass.: MIT Press.
Deleuze, Gilles, and Félix Guattari. 1983. Anti-Oedipus: Capitalism and Schizophrenia. Minneapolis: University of Minnesota Press.
Ernesto. 2015. “Sci-Hub Tears Down Academia’s ‘Illegal’ Copyright Paywalls.” TorrentFreak, June 27. Accessed October 18, 2015. https://torrentfreak.com/sci-hub-tears-down-academias-illegal-copyright-paywalls-150627/.
Foucault, Michel. 1980. “What Is an Author?” In Language, Counter-Memory, Practice: Selected Essays and Interviews, ed. Donald F. Bouchard, 113–38. Ithaca, N.Y.: Cornell University Press.
Groys, Boris. 2008. Art Power. Cambridge, Mass.: MIT Press.
Kendzior, Sarah. 2012. “Academic Paywalls Mean Publish and Perish.” Al Jazeera English, October 2. Accessed October 18, 2015. http://www.aljazeera.com/indepth/opinion/2012/10/20121017558785551.html.
Kiaer, Christina. 2005. Imagine No Possessions: The Socialist Objects of Russian Constructivism. Cambridge, Mass.: MIT Press.
Krajewski, Markus. 2011. Paper Machines: About Cards & Catalogs, 1548–1929. Cambridge, Mass.: MIT Press.
Larivière, Vincent, Stefanie Haustein, and Philippe Mongeon. 2015. “The Oligopoly of Academic Publishers in the Digital Era.” PLoS ONE 10, no. 6. Accessed January 2, 2018. doi:10.1371/journal.pone.0127502.
Mars, Marcell, Marar Zarroug, and Tomislav Medak. 2015. “Public Library (essay).” In Public Library, ed. Marcell Mars and Tomislav Medak. Zagreb: Multimedia Institute & What, How & for Whom/WHW.
Marx, Karl. 1867. Capital, Vol. 1. Available at Marxists.org. Accessed April 9, 2017. https://www.marxists.org/archive/marx/works/1867-c1/ch06.htm.
Marx, Karl. 1871. “The Civil War in France.” Available at Marxists.org. Accessed April 9, 2017. https://www.marxists.org/archive/marx/works/1871/civil-war-france/.
McLuhan, Marshall. 1965. Understanding Media: The Extensions of Man. New York: McGraw-Hill.
Medina, Eden. 2011. Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile. Cambridge, Mass.: MIT Press.
MIT Media Lab. 2017. “MIT Media Lab Disobedience Award.” Accessed April 10, 2017. https://media.mit.edu/disobedience/.
MIT OpenCourseWare. n.d. “About OCW | MIT OpenCourseWare | Free Online Course Materials.” Accessed October 28, 2015. http://ocw.mit.edu/about/.
One Laptop per Child. 2010. “One Laptop per Child (OLPC): Vision.” Accessed October 28, 2015. http://laptop.org/en/vision/.
Peterson, T. F., ed. 2011. Nightwork: A History of Hacks and Pranks at MIT. Cambridge, Mass.: MIT Press.
Poulantzas, Nicos. 2008. The Poulantzas Reader: Marxism, Law, and the State. London: Verso.
PwC. 2015a. “Book Publishing.” Accessed October 18, 2015. http://www.pwc.com/gx/en/industries/entertainment-media/outlook/segment-insights/book-publishing.html.
PwC. 2015b. “Filmed Entertainment.” Accessed October 18, 2015. http://www.pwc.com/gx/en/industries/entertainment-media/outlook/segment-insights/filmed-entertainment.html.
PwC. 2015c. “Music: Growth Rates of Recorded and Live Music.” Accessed October 18, 2015. http://www.pwc.com/gx/en/global-entertainment-media-outlook/assets/2015/music-key-insights-1-growth-rates-of-recorded-and-live-music.pdf.
Reid, Rob. 2012. “The Numbers behind the Copyright Math.” TED Blog, March 20. Accessed October 28, 2015. http://blog.ted.com/the-numbers-behind-the-copyright-math/.
Rose, Mark. 2010. “The Public Sphere and the Emergence of Copyright.” In Privilege and Property: Essays on the History of Copyright, ed. Ronan Deazley, Martin Kretschmer, and Lionel Bently, 67–88. Open Book Publishers.
Ross, Kristin. 2015. Communal Luxury: The Political Imaginary of the Paris Commune. London: Verso.
Spieker, Sven. 2008. The Big Archive: Art from Bureaucracy. Cambridge, Mass.: MIT Press.
Swartz, Aaron. 2008. “Guerilla Open Access Manifesto.” Internet Archive. Accessed October 18, 2015. https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt.
Tactical Media Files. 2017. “The Concept of Tactical Media.” Accessed May 4, 2017. http://www.tacticalmediafiles.net/articles/44999.
Vismann, Cornelia. 2011. Medien der Rechtsprechung. Frankfurt a.M.: S. Fischer Verlag.
von Hippel, Eric. 2005. Democratizing Innovation. Cambridge, Mass.: MIT Press.
Wikipedia, the Free Encyclopedia. 2015a. “Major Film Studio.” Accessed January 2, 2018. https://en.wikipedia.org/w/index.php?title=Major_film_studio&oldid=686867076.
Wikipedia, the Free Encyclopedia. 2015b. “Record Label.” Accessed January 2, 2018. https://en.wikipedia.org/w/index.php?title=Record_label&oldid=685380090.
Wikipedia, the Free Encyclopedia. 2015c. “Sony Corp. of America v. Universal City Studios, Inc.” Accessed January 2, 2018. https://en.wikipedia.org/w/index.php?title=Sony_Corp._of_America_v._Universal_City_Studios,_Inc.&oldid=677390161.
Wischenbart, Rüdiger. 2015. “The Global Ranking of the Publishing Industry 2014.” Accessed October 18, 2015. http://www.wischenbart.com/upload/Global-Ranking-of-the-Publishing-Industry_2014_Analysis.pdf.
Woodmansee, Martha. 1996. The Author, Art, and the Market: Rereading the History of Aesthetics. New York: Columbia University Press.
World Wide Web Consortium. n.d. “W3C Mission.” Accessed October 28, 2015. http://www.w3.org/Consortium/mission.


Mars, Medak & Sekulic
Taken Literally
2016


Taken literally
Marcell Mars
Tomislav Medak
Dubravka Sekulic

Free people united in building a society of
equals, embracing those whom previous
efforts have failed to recognize, are the historical foundation of the struggle against
enslavement, exploitation, discrimination
and cynicism. Building a society has never
been an easy-going pastime.
During the turbulent 20th century, different trajectories of social transformation moved within the horizon set by the revolutions of the 18th and 19th centuries: equality, brotherhood and liberty – and class struggle. The 20th century experimented with various combinations of economic and social rationales in the arrangement of social reproduction. The processes of struggle, negotiation, empowerment and inclusion of discriminated-against social groups constantly complexified and dynamised the basic concepts regulating social relations. However, after the process of intensive socialisation, in the form of either the welfare state or socialism, that dominated a good part of the 20th century, the end of the century was marked by a return of the regulation of social relations to the model of market domination and private appropriation. Such simplification and fall from complexity into a formulaic state of affairs is not merely a symptom of overall exhaustion, loss of imagination and a lack of perspective on further social development; rather, it indicates a cynical abandonment of the effort to build society, its idea, its vision – and, as some would want, of society altogether.
In this article, we wish to revisit the evolution of the regulation of ownership in the fields of intellectual production and housing, as two examples of the historical dead-end in which we find ourselves.
THE CAPITALIST MODE OF PRODUCTION

According to the textbook definition, the capitalist mode of production is the first historical organisation of socio-economic relations in which the appropriation of surplus from producers depends not on force, but on the neutral laws of economic processes, on the basis of which the capitalist and the worker enter voluntarily into a relation of production. While under feudalism it was the aristocratic oligopoly on violence that secured a hereditary hierarchy of appropriation, under capitalism the neutral logic of appropriation was secured by the state monopoly on violence. However, given that early capitalist relations in the English countryside did not emerge outside the existing feudal inequalities, and that the process of generalisation of capitalist relations, particularly after the rise of industrialisation, resulted in even greater and even more hardened stratification, the state monopoly on violence securing the neutral logic of appropriation ended up mostly securing the hereditary hierarchy of appropriation. Although in the new social formation neither the capitalist nor the worker was born capitalist or born worker, the capitalist would rarely become a worker, and the worker would become a capitalist even more rarely. However, under conditions where the state monopoly on violence could no longer coerce workers to voluntarily sell their labour, and where their resistance to accepting the existing class relations could be expressed in the withdrawal of their labour power from the production process, their consent became a problem for the existing social model. That problem found its resolution through a series of conflicts that resulted in the historical concessions and gains of class struggle, ranging from guaranteed labour rights, through the institutions of the welfare state, to socialism.
The fundamental property relation in the capitalist mode of production is that the worker has exclusive ownership of his or her own labour power, while the capitalist has ownership of the means of production. By purchasing the worker's labour power, the capitalist obtains the exclusive right to appropriate the entire product of the worker's labour. However, as the regulation of property in such an unconditional, formulaic form quickly results in deep inequalities, it could not be maintained beyond the early days of capitalism. The resulting class struggles and compromises would achieve a series of conditions that successively complexified property relations.
Therefore, the issue of private property – which goods we have the right to call our own to the exclusion of others: our clothes, the flat in which we live, the means of production, the profit from the production process, the beach upon which we wish to enjoy ourselves alone or to utilise by renting it out, the unused land in our neighbourhood – is not merely a question of the optimal economic allocation of goods, but also a question of the social rights and emancipatory opportunities that are required in order to secure the continuous consent of society's members to its organisational arrangements.

OWNERSHIP REGIMES

Both the concept of private property over land and the concepts of copyright and intellectual property have their shared evolutionary beginnings in early capitalist England, at a time when the newly emerging capitalist class was building up its position in relation to the aristocracy and the Church. In both cases, new actors entered into the processes of political articulation, decision-making and redistribution of power. However, the basic process of (re)defining relations has remained (until today) one of spatial demarcation: the question of who is excluded or remains outside, and how.
① In the early period of trade in books, after the invention of the printing press in the 15th century, the exclusive rights to commercial exploitation of written works were obtained through special permits from the Royal Censors, issued solely to politically loyal printers. Copyright itself was constituted only in the 17th century. Its economic function is to unambiguously establish the ownership title over the products of intellectual labour. Once that title is established, there is a person with whose consent the publisher can proceed in commodifying and distributing the work, to the exclusion of others from its exploitation. And while that right to economic benefit was at the outset exclusively that of the publishers, as authors became increasingly aware that the income from books guaranteed them autonomy from the sponsorship of the King and the aristocracy, in the 19th century copyright gradually transformed into a legal right that protected the author and the publisher in equal measure. Patent rights underwent a similar development. They were standardised in the 17th century as a precondition for industrial development, and were soon established as a balance between the rights of the individual inventor and the commercial interest of the manufacturer.
However, the balance of interests between productive creative individuals and the corporations handling production and distribution did not last long and, with time, started to lean ever further towards protecting the interests of the corporations. With the growing complexity of companies and their growing dependence on intellectual property rights as instruments in the competitive struggles of the 20th century, the economic aspect of intellectual property increasingly passed to the corporation, while the author/inventor was left only with the moral and reputational element. The growing importance of intellectual property rights for the capitalist economy has been evident over the last three decades in the regular expansions of the subject matter and duration of protection, but, most important of all – within the larger process of integration of the capitalist world-system – in the global harmonisation and enforcement of rights protection. Despite the fact that the interests of authors and the interests of corporations, of the global south and the global north, of the public interest and the corporate interest do not coincide, we are being given a global and uniform – formulaic – rule of the abstract logic of ownership, notwithstanding the diverging circumstances and interests of different societies in the context of uneven development.
No-one is surprised today that, in spite of their initial promises, the technological advances brought by the Internet, once saddled with the existing copyright regulation, did not enhance and expand access to knowledge. But that dysfunction is nowhere more evident than in academic publishing. This is a global industry the size of the music recording industry, dominated by an oligopoly of five major commercial publishers: Reed Elsevier, Taylor & Francis, Springer, Wiley-Blackwell and Sage. While scientists write their papers, do peer reviews and edit journals for free, these publishers have over the past decades taken advantage of their oligopolistic position to raise the rates of the subscriptions they sell, mostly to publicly financed libraries at academic institutions, so that the majority of libraries, even in the rich centres of the global north, are unable to afford access to many journals. The fantastic profit margins of over 30% that these publishers reap year after year are premised on denying access to scientific publications and the latest developments in science not only to the general public, but also to students and scholars around the world. Although that oligopoly rests largely on the rights of the authors, the authors receive no benefit from that copyright. An even greater irony is that, if they want to make their work open access to others, the authors themselves, or the institutions that financed the underlying research through the proxy of the author, are obliged to pay the publishers additionally for that ‘service’. ×

② With the proliferation of enclosures and signposts prohibiting access, picturesque rural arcadias became landscapes of capitalist exploitation. Those evicted by the process of enclosure moved to the cities and became wage workers. Far away from the parts of the cities around the factories, where working families lived squeezed into one room with no natural light or ventilation, areas of the city sprang up in which the capitalists built their mansions. At that time, the very possibility of participation in political life was conditioned on private property, thus excluding and discriminating against entire social groups by legal means. Women had neither the right to property ownership nor inheritance rights.
Engels' description of the humiliating living conditions of Manchester workers in the 19th century pointed to the catastrophic effects of industrialisation on the situation of the working class (e.g. lower pay than during the pre-industrial era) and indicated that the housing problem was not a direct consequence of exploitation, but rather a problem arising from the inequitable redistribution of assets. The idea that living quarters for the workers could be pleasant, healthy and safe places in which privacy was possible, and that this was not the exclusive right of the rich, became an integral part of the struggle for labour rights, and part of the consciousness of progressive, socially minded architects and all others dedicated to solving the housing problem.
Just as joining forces was the foundation of their struggle for labour and political rights, joining forces was and has remained the mechanism for addressing inadequate housing conditions. As early as the 19th century, the Dutch working class and impoverished bourgeoisie joined forces in forming housing co-operatives and housing societies, squatting and building without permits on the edges of the cities. The workers' struggle, the enlightened bourgeoisie, continued industrial development, as well as the phenomenon of utopian socialist-capitalists like Jean-Baptiste André Godin – who, under the influence of Charles Fourier's ideas, built a palace for workers, the Familistery – all exerted pressure on the system and contributed to the improvement of housing conditions for workers. Still, the dominant model continued to replicate the rentier system, in which even those with inadequate housing found someone to whom they could rent out a segment of their housing unit.
The general social collapse after World War I, the Socialist Revolution and the coming to power of the social democrats in certain European cities brought new urban strategies. In ‘red’ Vienna, initially under the urban planning leadership of Otto Neurath, socially just housing policy and the provision of adequate housing were regarded as the city's responsibility. The city considered the workers who were impoverished by the war, and who sought a way out of their homelessness by building housing themselves and tilling gardens, as a phenomenon that should be integrated, not as an error that needed to be rectified. Sweden throughout the 1930s continued with its right-to-housing policy and served as an example right up until the mid-1970s, both to the socialist and the (capitalist) welfare states. The idea of (private) ownership became complexified by the idea of social ownership (in Yugoslavia) and public/social housing elsewhere, but since the bureaucratic-technological system responsible for implementation was almost exclusively linked with the State, housing ended up in unwieldy, complicated systems suffering from under-investment in maintenance. That crisis was exploited as an excuse to impose, as a necessity, the paradigmatic changes that we today regard as the beginning of neo-liberal policies.
At the beginning of the 1980s in Great Britain, Margaret Thatcher created an atmosphere of a state of emergency around the issue of housing ownership and, with the passing of the Housing Act in 1980, set in motion a reform that would deeply transform the lives of Britons. The promise of a better life based merely on the opportunity to buy and become a (private) owner never materialised. The transition from the ‘right to housing’ to the ‘right to (participation in the market through) purchase’ left housing to the market. There, prices first fell drastically at the beginning of the 1990s. That was followed by financialisation and speculation on the property market, making housing space in cities like London primarily an avenue of investment, a currency, a tax haven and a mechanism by which the rich could store their wealth. In today's generation, the working and lower classes, and sometimes even the upper middle class, can no longer even dream of buying a flat in London. ×

PLATFORMISATION

Social ownership and housing – understood both literally as living space and as the articulation of the right to a decent life for all members of society – had already been under attack for decades when it was caught completely unprepared by the information revolution and its zero-marginal-cost economy. Take, for example, internet innovation: after a brief period of comradely couch-surfing, the company Airbnb transformed, in an even shorter period, from a service allowing small enterprising home owners to rent out their vacant rooms into a catalyst for amassing ownership over housing stock with the sole purpose of renting it out through Airbnb. In the last phase of that transformation, new start-ups appeared offering the newly consolidated feudal lords easier management of their housing ‘fleet’, where the innovative approach boils down to the summoning of service workers who, just like Uber drivers, seek out blue dots on their smartphone maps, desperately rushing – in fear of a bad rating, for a minimal fee and no taxes paid – to turn up before their equally precarious competition does. With these innovations, residents end up being offered ever shorter but increasingly expensive rental contracts, while in the worst case the flats are left unoccupied, because the rich owner-investors have realised that an unoccupied flat is a more profitable deal than a risky investment in a market in crisis.


The information revolution stepped out onto the historical stage with the promise of a radical democratisation of communication, culture and politics. Anyone could become the media and address the global public, emancipate themselves from the constrictive space of identity, and obtain access to the entire knowledge of the world. However, with the handing over of the Internet and technological innovation to the market in the 1990s, instead of democratising and emancipatory processes, the result was the gradual disruption of previous social arrangements in the allocation of goods and the intensification of the process of commodification. That trajectory reached its full-blown development in the form of Internet platforms that simultaneously enabled the old owners of goods to control their accessibility more closely and permitted new owners to seek out new forms of commercial exploitation. Take for example Google Books, where the digitization of the entire printed culture of the world resulted in no more than an ad and retail space where only a few books can be accessed for free. Or the Amazon Kindle, where the owner of the platform has such dramatic control over books that, at the behest of copyright holders, it can remotely delete a purchased copy of a book, as quite indicatively happened in 2009 with Orwell's 1984. The technological innovation that promised a new turn of complexity in the social allocation of goods resulted instead in simplification and the reduction of everything to private property.
The history of resistance to such extreme forms of enclosure of culture and knowledge is only a bit younger than the processes of commodification themselves, which began with the rise of the trade in books. As early as the French Revolution, the confiscation of books from the libraries of the clergy and aristocracy and their transfer into national and provincial libraries signalled that the right of access to knowledge was a precondition for full participation in society. For its part, the British labour movement of the mid-19th century had to resort to opening workers' reading rooms, to projects of proletarian self-education and to class struggle in order to achieve the establishment of the institution of the public library, financed by taxes, and thereby the right of access to knowledge and culture for all members of society.
SHADOW PUBLIC LIBRARIES

The public library, as a space of exemption from the commodification of knowledge and culture, is an institution that complexifies the unconditional and formulaic application of intellectual property rights, making them conditional on the public interest that all members of society have the right of access to knowledge. However, with the transition to the digital, public libraries have been radically limited in acquiring anything to which they could later provide decommodified access. Publishers do not wish to sell electronic books to libraries, and when they do decide to grant them a lending licence, that licence runs out after 26 lendings. Closed platforms for electronic publications, where the publishers technologically control both the medium and the ways the work can be used, take us back to the original and not very well-conceived metaphor of ownership – anyone who owns the land can literally control everything that happens on that land – even if that land is the collective process of writing and reading. Such a limited space for the activity of public libraries stands in radical contrast to the potential for universal access to all of culture and knowledge that digital distribution could make possible at a very low cost, though with a considerable change in the regulation of intellectual production in society.
Since such a change would not be in the interest of the formulaic application of intellectual property, acts of civil disobedience against that regime have over the last twenty years created a number of 'shadow public libraries' that provide universal access to knowledge and culture in the digital domain in the way that public libraries are not allowed to: Library Genesis, Science Hub, Aaaaarg, Monoskop, Memory of the World or Ubuweb. They all have a simple objective – to provide access to books, journals and digitised knowledge to all who find themselves outside the rich academic institutions of the West and who do not have the privilege of institutional access.
These shadow public libraries bravely remind society of all the watershed moments in the struggles and negotiations that resulted in the establishment of social institutions – institutions that first enabled the transition from an unjust, discriminatory and exploitative society to a better one, and later guaranteed that these gains would not be dismantled or rescinded. That reminder is, however, more than a mere hacker pastime, just as the reactions of the corporations are not easy-going at all: in mid-2015, Reed Elsevier initiated a court case against Library Genesis and Science Hub, and by the end of 2015 the court in New York issued a preliminary injunction ordering the shutdown of their domains and access to their servers. At the same time, a court case was brought against Aaaaarg in Quebec.
Shadow public libraries are also a reminder that technological complexity does not have to be harnessed only for converting socialised resources back into the simplified, formulaic logic of private property; that we can take technology into our own hands – into the hands of a society that is not dismantling its own foundations, but rather taking care of and preserving what is worthwhile and already built, and thus building itself further. But, most powerfully, shadow public libraries remind us that the focus and objective of our efforts should not be a world that can be readily managed algorithmically, but a world in which our much greater achievement is the rights guaranteed by institutions – envisioned, demanded, struggled for and negotiated – that is, a society. Platformisation, corporate concentration, financialisation and speculation, although complex in themselves, serve the process of de-socialisation. Only by reintroducing the complexity of socialised management and the collective re-appropriation of resources can technological complexity, in a world of escalating expropriation, be given the perspective of universal sisterhood, equality and liberation.


Mattern
Library as Infrastructure
2014


# Library as Infrastructure

Reading room, social service center, innovation lab. How far can we stretch
the public library?

Shannon Mattern

June 2014


[![](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-1x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-1x.jpg)Left: Rijksmuseum Library, Amsterdam. [Photo by [Ton Nolles](https://www.flickr.com/photos/tonnolles/9428619486/)] Right: Google data center in Council Bluffs, Iowa. [Photo by Google/Connie Zhou]

Melvil Dewey was a one-man Silicon Valley born a century before Steve Jobs. He
was the quintessential Industrial Age entrepreneur, but unlike the Carnegies
and Rockefellers, with their industries of heavy materiality and heavy labor,
Dewey sold ideas. His ambition revealed itself early: in 1876, shortly after
graduating from Amherst College, he copyrighted his library classification
scheme. That same year, he helped found the American Library Association,
served as founding editor of _Library Journal_, and launched the American
Metric Bureau, which campaigned for adoption of the metric system. He was 24
years old. He had already established the Library Bureau, a company that sold
(and helped standardize) library supplies, furniture, media display and
storage devices, and equipment for managing the circulation of collection
materials. Its catalog (which would later include another Dewey invention,
[the hanging vertical
file](http://books.google.com/books?id=_YuWb0uptwAC&pg=PA112&dq=vertical+file+%22library+bureau%22+date:1900-1900&lr=&as_brr=0#v=onepage&q=vertical%20file%20%22library%20bureau%22%20date%3A1900-1900&f=false))
represented the library as a “machine” of uplift and enlightenment that
enabled proto-Taylorist approaches to public education and the provision of
social services. As chief librarian at Columbia College, Dewey established the
first library school — called, notably, the School of Library _Economy_ —
whose first class was 85% female; then he brought the school to Albany, where
he directed the New York State Library. In his spare time, he founded the Lake
Placid Club and helped win the bid for the 1932 Winter Olympics.

Dewey was thus simultaneously in the furniture business, the office-supply
business, the consulting business, the publishing business, the education
business, the human resources business, and what we might today call the
“knowledge solutions” business. Not only did he recognize the potential for
monetizing and cross-promoting his work across these fields; he also saw that
each field would be the better for it. His career (which was not without its
[significant
controversies](http://query.nytimes.com/gst/abstract.html?res=9A06E0D7163DE733A25755C1A9649C946497D6CF))
embodied a belief that classification systems and labeling standards and
furniture designs and people work best when they work towards the same end —
in other words, that intellectual and material systems and labor practices are
mutually constructed and mutually reinforcing.

Today’s libraries, Apple-era versions of the Dewey/Carnegie institution,
continue to materialize, at multiple scales, their underlying bureaucratic and
epistemic structures — from the design of their web interfaces to the
architecture of their buildings to the networking of their technical
infrastructures. This has been true of knowledge institutions throughout
history, and it will be true of our future institutions, too. I propose that
thinking about the library as a network of integrated, mutually reinforcing,
evolving _infrastructures_ — in particular, architectural, technological,
social, epistemological and ethical infrastructures — can help us better
identify what roles we want our libraries to serve, and what we can reasonably
expect of them. What ideas, values and social responsibilities can we scaffold
within the library’s material systems — its walls and wires, shelves and
servers?

[![Dictionary stands from the Library Bureau’s 1890 catalog.](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-2x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-2x.jpg) Dictionary stands from the [Library Bureau’s 1890 catalog](http://books.google.com/books?id=rwdwAAAAIAAJ&dq=library+bureau+catalog+1890&source=gbs_navlinks_s).

## Library as Platform

For millennia libraries have acquired resources, organized them, preserved
them and made them accessible (or not) to patrons. But the [forms of those
resources](http://www.spl.org/prebuilt/cen_conceptbook/page16.htm) have
changed — from scrolls and codices; to LPs and LaserDiscs; to e-books,
electronic databases and open data sets. Libraries have had at least to
comprehend, if not become a key node within, evolving systems of media
production and distribution. Consider the medieval scriptoria where
manuscripts were produced; the evolution of the publishing industry and book
trade after Gutenberg; the rise of information technology and its webs of
wires, protocols and regulations. 1 At every stage, the contexts — spatial,
political, economic, cultural — in which libraries function have shifted; so
they are continuously [reinventing
themselves](http://www.spl.org/prebuilt/cen_conceptbook/page18.htm) and the
means by which they provide those vital information services.

Libraries have also assumed a host of ever-changing social and symbolic
functions. They have been expected to symbolize the eminence of a ruler or
state, to integrally link “knowledge” and “power” — and, more recently, to
serve as “community centers,” “public squares” or “think tanks.” Even those
seemingly modern metaphors have deep histories. The ancient Library of
Alexandria was a prototypical think tank, 2 and the early Carnegie buildings
of the 1880s were community centers with swimming pools and public baths,
bowling alleys, billiard rooms, even rifle ranges, as well as book stacks. 3
As the Carnegie funding program expanded internationally — to more than 2,500
libraries worldwide — secretary James Bertram standardized the design in his
1911 pamphlet “Notes on the Erection of Library Buildings,” which offered
grantees a choice of six models, believed to be the work of architect Edward
Tilton. Notably, they all included a lecture room.

In short, the library has always been a place where informational and social
infrastructures intersect within a physical infrastructure that (ideally)
supports that program.

Now we are seeing the rise of a new metaphor: the library as “platform” — a
buzzy word that refers to a base upon which developers create new
applications, technologies and processes. In an [influential 2012 article in
_Library Journal_](http://lj.libraryjournal.com/2012/09/future-of-libraries
/by-david-weinberger/), David Weinberger proposed that we think of libraries
as “open platforms” — not only for the creation of software, but also for the
development of knowledge and community. 4 Weinberger argued that libraries
should open up their entire collections, all their metadata, and any
technologies they’ve created, and allow anyone to build new products and
services on top of that foundation. The platform model, he wrote, “focuses our
attention away from the provisioning of resources to the foment” — the “messy,
rich networks of people and ideas” — that “those resources engender.” Thus the
ancient Library of Alexandria, part of a larger museum with botanical gardens,
laboratories, living quarters and dining halls, was a _platform_ not only for
the translation and copying of myriad texts and the compilation of a
magnificent collection, but also for the launch of works by Euclid,
Archimedes, Eratosthenes and their peers.

[![Dominique Perrault, La bibliothèque nationale de France, literally elevated on a platform.](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-3x-1020x679.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-3x.jpg) Dominique Perrault, La bibliothèque nationale de France, literally elevated on a platform. [Photo by [Jean-Pierre Dalbera](https://www.flickr.com/photos/dalbera/4944528385/)]

Yet the platform metaphor has limitations. For one thing, it smacks of Silicon
Valley entrepreneurial epistemology, which prioritizes “monetizable”
“knowledge solutions.” Further, its association with new media tends to
bracket out the similarly generative capacities of low-tech, and even _non_
-technical, library resources. One key misperception of those who proclaim the
library’s obsolescence is that its function as a knowledge institution can be
reduced to its technical services and information offerings. Knowledge is
never solely a product of technology and the information it delivers.

Another problem with the platform model is the image it evokes: a flat, two-
dimensional stage on which resources are laid out for users to _do stuff
with_. The platform doesn’t have any implied depth, so we’re not inclined to
look underneath or behind it, or to question its structure. Weinberger
encourages us to “think of the library not as a portal we go through on
occasion but as infrastructure that is as ubiquitous and persistent as the
streets and sidewalks of a town.” It’s like a “canopy,” he says — or like a
“cloud.” But these metaphors are more poetic than critical; they obfuscate all
the wires, pulleys, lights and scaffolding that you inevitably find underneath
and above that stage — and the casting, staging and direction that determine
what happens _on_ the stage, and that allow it to function _as_ a stage.
Libraries are infrastructures not only because they are ubiquitous and
persistent, but also, and primarily, because they are made of interconnected
networks that undergird all that foment, that create what Pierre Bourdieu
would call “[structuring
structures](http://books.google.com/books?id=WvhSEMrNWHAC&lpg=PA72&ots=puRmifuGmb&dq=bourdieu%20%22structuring%20structures%22&pg=PA72#v=onepage)”
that support Weinberger’s “messy, rich networks of people and ideas.”

It can be instructive for our libraries’ publics — and critical for our
libraries’ leaders — to assess those structuring structures. In this age of
e-books, smartphones, firewalls, proprietary media platforms and digital
rights management; of atrophying mega-bookstores and resurgent independent
bookshops and a metastasizing Amazon; of Google Books and Google Search and
Google Glass; of economic disparity and the continuing privatization of public
space and services — which is simultaneously an age of democratized media
production and vibrant DIY and activist cultures — libraries play a critical
role as mediators, at the hub of all the hubbub. Thus we need to understand
how our libraries function _as_, and as _part of_, infrastructural ecologies
— as sites where spatial, technological, intellectual and social
infrastructures shape and inform one another. And we must consider how those
infrastructures can embody the epistemological, political, economic and
cultural values that we _want_ to define our communities. 5

[![Hammond, Beeby and Babka, Harold Washington Library Center, Chicago Public Library.](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-4x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-4x.jpg) Hammond, Beeby and Babka, Harold Washington Library Center, Chicago Public Library. [Photo by Robert Dawson, from _[Public Library: An American Commons](https://placesjournal.org/article/public-library-an-american-commons/)_]

## Library as Social Infrastructure

Public libraries are often seen as “opportunity institutions,” opening doors
to, and for, the disenfranchised. 6 People turn to libraries to access the
internet, take a GED class, get help with a resumé or job search, and seek
referrals to other community resources. A [recent
report](http://nycfuture.org/research/publications/branches-of-opportunity) by
the Center for an Urban Future highlighted the benefits to immigrants,
seniors, individuals searching for work, public school students and aspiring
entrepreneurs: “No other institution, public or private, does a better job of
reaching people who have been left behind in today’s economy, have failed to
reach their potential in the city’s public school system or who simply need
help navigating an increasingly complex world.” 7

The new Department of Outreach Services at the Brooklyn Public Library, for
instance, partners with other organizations to bring library resources to
seniors, school children and prison populations. The Queens Public Library
employs case managers who help patrons identify public benefits for which
they’re eligible. “These are all things that someone could dub as social
services,” said Queens Library president Thomas Galante, “but they’re not. … A
public library today has information to improve people’s lives. We are an
enabler; we are a connector.” 8

Partly because of their skill in reaching populations that others miss,
libraries have recently reported record circulation and visitation, despite
severe budget cuts, decreased hours and the [threatened closure or
sale](http://www.nydailynews.com/new-york/civic-group-city-bail-cash-strapped-
brooklyn-public-library-system-mired-300-million-repair-article-1.1748855) of
“underperforming” branches. 9 Meanwhile the Pew Research Center has released a
[series of studies](http://libraries.pewinternet.org/) about the materials and
services Americans want their libraries to provide. [Among the
findings](http://libraries.pewinternet.org/2013/12/11/libraries-in-
communities/): 90 percent of respondents say the closure of their local public
library would have an impact on their community, and 63 percent describe that
impact as “major.”

[![Toyo Ito, Sendai Mediatheque.](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-5x-1020x757.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-5x.jpg) Toyo Ito, Sendai Mediatheque. [Photo by [Forgemind Archimedia](https://www.flickr.com/photos/eager/11996856324/)]

Libraries also bring communities together in times of calamity or disaster.
Toyo Ito, architect of the acclaimed [Sendai
Mediatheque](http://en.wikipedia.org/wiki/Sendai_Mediatheque), recalled that
after the 2011 earthquake in Japan, local officials reopened the library
quickly even though it had sustained minor damage, “because it functions as a
kind of cultural refuge in the city.” He continued, “Most people who use the
building are not going there just to read a book or watch a film; many of them
probably do not have any definite purpose at all. They go just to be part of
the community in the building.” 10

We need to attend more closely to such “social infrastructures,” the
“facilities and conditions that allow connection between people,” says
sociologist Eric Klinenberg. In [a recent
interview](http://urbanomnibus.net/2013/10/toward-a-stronger-social-
infrastructure-a-conversation-with-eric-klinenberg/), he argued that urban
resilience can be measured not only by the condition of transit systems and
basic utilities and communication networks, but also by the condition of
parks, libraries and community organizations: “open, accessible, and welcoming
public places where residents can congregate and provide social support during
times of need but also every day.” 11 In his book _Heat Wave_, Klinenberg
noted that a vital public culture in Chicago neighborhoods drew people out of
sweltering apartments during the 1995 heat wave, and into cooler public
spaces, thus saving lives.

The need for physical spaces that promote a vibrant social infrastructure
presents many design opportunities, and some libraries are devising innovative
solutions. Brooklyn and other cultural institutions have
[partnered](http://www.informationforfamilies.org/Theres_No_Place_Like_Home/Jobs_68.html)
with the [Uni](http://www.theuniproject.org/find-the-uni/), a modular,
portable library that [I wrote about earlier in this
journal](https://placesjournal.org/article/marginalia-little-libraries-in-the-
urban-margins/). And modular solutions — kits of parts — are under
consideration in a design study sponsored by the Center for an Urban Future
and the Architectural League of New York, which aims to [reimagine New York
City’s library branches](http://urbanomnibus.net/2014/06/request-for-
qualifications-re-envisioning-branch-libraries/) so that they can more
efficiently and effectively serve their communities. CUF also plans to
publish, at the end of June, an audit of, and a proposal for, New York’s three
library systems. 12 _New York Times_ architecture critic Michael Kimmelman,
reflecting on the roles played by New York libraries [during recent
hurricanes](http://www.npr.org/2013/08/12/210541233/for-disasters-pack-a
-first-aid-kit-bottled-water-and-a-library-card), goes so far as to
[suggest](http://www.nytimes.com/2013/10/03/arts/design/next-time-libraries-
could-be-our-shelters-from-the-storm.html) that the city’s branch libraries,
which have “become our de facto community centers,” “could be designed in the
future with electrical systems out of harm’s way and set up with backup
generators and solar panels, even kitchens and wireless mesh networks.” 13

[![Bobst Library, New York University, after Hurricane Sandy.](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-6x-1020x551.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-6x.jpg) Bobst Library, New York University, after Hurricane Sandy. [Photos by [bettyx1138](https://www.flickr.com/photos/bettyx1138/8151244029/)]

But is it too much to expect our libraries to serve as soup kitchens and
recovery centers when they have so many other responsibilities? The library’s
broad mandate means that it often picks up the slack when other institutions
fall short. “It never ceases to amaze me just what libraries are looked upon
to provide,” says Ruth Faklis, director of the Prairie Trail Public Library
District in suburban Chicago:

> This includes, but is not limited to, [serving as] keepers of the homeless …
while simultaneously offering latch-key children a safe and activity-filled
haven. We have been asked to be voter-registration sites, warming stations,
notaries, technology-terrorism watchdogs, senior social-gathering centers,
election sites, substitute sitters during teacher strikes, and the latest —
postmasters. These requests of society are ever evolving. Funding is not
generally attached to these magnanimous suggestions, and when it is, it does
not cover actual costs of the additional burden, thus stretching the library’s
budget even further. I know of no other government entity that is asked to
take on additional responsibilities not necessarily aligned with its mission.
14

In a Metafilter discussion about funding cuts in California, one librarian
offered this poignant lament:

> Every day at my job I helped people just barely survive. … Forget trying to
be the “people’s university” and create a body of well informed citizens.
Instead I helped people navigate through the degrading hoops of modern online
society, fighting for scraps from the plate, and then kicking back afterwards
by pretending to have a farm on Facebook.

[Read the whole story](http://www.metafilter.com/112698/California-
Dreamin#4183210). It’s quite a punch to the stomach. Given the effort
librarians expend in promoting basic literacies, how much more can this social
infrastructure support? Should we welcome the “design challenge” to engineer
technical and architectural infrastructures to accommodate an ever-
diversifying program — or should we consider that we might have stretched this
program to its limit, and that no physical infrastructure can effectively
scaffold such a motley collection of social services?

Again, we need to look to the infrastructural ecology — the larger network of
public services and knowledge institutions of which each library is a part.
How might towns, cities and regions assess what their various public (and
private) institutions are uniquely qualified and sufficiently resourced to do,
and then deploy those resources most effectively? Should we regard the library
as the territory of the civic _mind_ and ask other social services to attend
to the civic _body_? The assignment of social responsibility isn’t so black
and white — nor are the boundaries between mind and body, cognition and affect
— but libraries do need to collaborate with other institutions to determine
how they leverage the resources of the infrastructural ecology to serve their
publics, with each institution and organization contributing what it’s best
equipped to contribute — and each operating with a clear sense of its mission
and obligation.

Libraries have a natural affinity with cultural institutions. Just this
spring, New York Mayor Bill de Blasio [appointed Tom
Finkelpearl](http://www.nytimes.com/2014/04/07/arts/design/mayor-de-blasio-
names-tom-finkelpearl-of-the-queens-museum.html?_r=1) as the city’s new
Commissioner of Cultural Affairs. A former president of the Queens Museum,
Finkelpearl oversaw the first phase of a renovation by Grimshaw Architects,
which, in its next phase, will incorporate a Queens Public Library branch — an
effective pairing, given the commitment of both institutions to education and
local culture. Similarly, Lincoln Center houses the New York Public Library
for the Performing Arts. As commissioner, Finkelpearl could broaden support
for mixed-use development that strengthens infrastructural ecologies. The
[CUF/Architectural League project](http://urbanomnibus.net/2014/06/request-
for-qualifications-re-envisioning-branch-libraries/) is also considering how
collaborative partnerships can inform library program and design.

[![Bohlin Cywinski Jackson, Ballard Library and Neighborhood Service Center, Seattle. \[Photo by Jules Antonio\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-7x-1020x724.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-7x.jpg)Bohlin Cywinski Jackson, Ballard Library and Neighborhood Service Center, Seattle. [Photo by [Jules Antonio](https://www.flickr.com/photos/julesantonio/8152446538/)]

I’ve recently returned from Seattle, where I revisited [OMA’s Central
Library](https://placesjournal.org/article/seattle-central-library-civic-
architecture-in-the-age-of-media/) on its 10th anniversary and toured several
new branch libraries. 15 Under the 1998 bond measure “Libraries for All,”
citizens voted to tax themselves to support construction of the Central
Library and four new branches, and to upgrade _every_ branch in the system.
The [vibrant, sweeping Ballard branch](http://www.archdaily.com/100821
/ballard-library-and-neighborhood-service-center-bohlin-cywinski-jackson/)
(2005), by Bohlin Cywinski Jackson, includes a separate entrance for the
Ballard Neighborhood Service Center, a “[little city
hall](http://www.seattle.gov/neighborhood-service-centers)” where residents
can find information about public services, get pet licenses, pay utility
bills, and apply for passports and city jobs. While the librarians undoubtedly
field questions about such services, they’re also able to refer patrons next
door, where city employees are better equipped to meet their needs — thus
affording the library staff more time to answer reference questions and host
writing groups and children’s story hours.

Seattle’s City Librarian, Marcellus Turner, is big on partnerships — with
cultural institutions, like local theaters, as well as commercial
collaborators, like the Seahawks football team. 16 After taking the helm in
2011, he identified [five service priorities](http://www.spl.org/about-the-
library/mission-statement) — youth and early learning, technology and access,
community engagement, Seattle culture and history, and re-imagined spaces —
and tasked working groups with developing proposals for how the library can
better address those needs. Each group must consider marketing, funding, staff
deployment and partnership opportunities that “leverage what we have with what
[the partners] have.” For instance, “Libraries that focus on early-childhood
education might employ educators, academicians, or teachers to help us with
research into early-childhood learning and teaching.” 17

The “design challenge” is to consider what physical infrastructures would be
needed to accommodate such partnerships. 18 Many libraries have continued
along a path laid by library innovators from Ptolemy to Carnegie, renovating
their buildings to incorporate public gathering, multi-use, and even
commercial spaces. In Seattle’s Ballard branch, a large meeting room hosts
regular author readings and a vibrant writing group that typically attracts 30
or more participants. In Salt Lake City, the [library
plaza](http://www.slcpl.lib.ut.us/shops) features an artist co-op, a radio
station, a community writing center, the Library Store, and a few cafes — all
private businesses whose ethos is consistent with the library’s. The New York
Public Library has [recently announced](http://www.nypl.org/press/press-
release/april-30-2014/new-york-public-library-opens-doors-coursera-students)
that some of its branches will serve as “learning hubs” for Coursera, the
provider of “massive open online courses.” And many libraries have classrooms
and labs where they offer regular technical training courses.

[![Moshe Safdie, Salt Lake City Public Library. \[Photo by Pedro Szekely\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-8x-1020x678.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-8x.jpg)Moshe Safdie, Salt Lake City Public Library. [Photo by [Pedro Szekely](https://www.flickr.com/photos/pedrosz/5139398125/)]

These entrepreneurial models reflect what seems to be an increasingly
widespread sentiment: that while libraries continue to serve a vital role as
“opportunity institutions” for the disenfranchised, this cannot be their
primary self-justification. They cannot duplicate the responsibilities of our
community centers and social service agencies. “Their narrative” — or what I’d
call an “epistemic framing,” by which I mean the way the library packages its
program as a knowledge institution, and the infrastructures that support it —
“must include everyone,” says the University of Michigan’s Kristin
Fontichiaro. 19 What programs and services are consistent with an institution
dedicated to lifelong learning? Should libraries be reconceived as hubs for
civic engagement, where communities can discuss local issues, create media,
and archive community history? 20 Should they incorporate media production
studios, maker-spaces and hacker labs, repositioning themselves in an evolving
ecology of information and educational infrastructures?

These new social functions — which may require new physical infrastructures to
support them — broaden the library’s narrative to include _everyone_ , not
only the “have-nots.” This is not to say that the library should abandon the
needy and focus on an elite patron group; rather, the library should
incorporate the “enfranchised” as a key public, both so that the institution
can reinforce its mission as a social infrastructure for an inclusive public,
_and_ so that privileged, educated users can bring their knowledge and talents
_to_ the library and offer them up as social-infrastructural resources.

Many among this well-resourced population — those who have jobs and home
internet access and can navigate the government bureaucracy with relative ease
— already see themselves as part of the library’s public. They regard the
library as a space of openness, egalitarianism and freedom (in multiple senses
of the term), within a proprietary, commercial, segregated and surveilled
landscape. They understand that no matter how well-connected they are, [they
actually _don’t_ have the world at their
fingertips](https://placesjournal.org/article/marginalia-little-libraries-in-
the-urban-margins/) — that “material protected by stringent copyright and held
in proprietary databases is often inaccessible outside libraries” and that,
“as digital rights management becomes ever more complicated, we … rely even
more on our libraries to help us navigate an increasingly fractured and
litigious digital terrain.” 21 And they recognize that they cannot depend on
Google to organize the world’s information. As the librarian noted in [that
discussion](http://www.metafilter.com/112698/California-Dreamin#4183210) on
Metafilter:

> The [American Library Association] has a proven history of commitment to
intellectual freedom. The public service that we’ve been replaced with has a
spotty history of “not being evil.” When we’re gone, you middle class, you
wealthy, you tech-savvy, who will fight for that with no profit motivation?
Even if you never step foot in our doors, and all of your media comes to a
brightly lit screen, we’re still working for you.

The library’s social infrastructure thus benefits even those who don’t have an
immediate need for its space or its services.

[![David Adjaye, Francis Gregory Neighborhood Library, Washington, D.C. \[Photo by Edmund Sumner\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-9x-1020x694.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-9x.jpg)David Adjaye, Francis Gregory Neighborhood Library, Washington, D.C. [Photo by Edmund Sumner]

Finally, we must acknowledge the library’s role as a civic landmark — a symbol
of what a community values highly enough to place on a prominent site, to
materialize in dignified architecture that communicates its openness to
everyone, and to support with sufficient public funding despite the fact that
it’ll never make a profit. A well-designed library — a contextually-designed
library — can reflect a community’s character back to itself, clarifying who
it is, in all its multiplicity, and what it stands for. 22 David Adjaye’s
[Bellevue](http://www.archdaily.com/258098/bellevue-library-adjaye-
associates/) and [Francis Gregory](http://www.archdaily.com/258109/francis-
gregory-library-adjaye-associates/) branch libraries, in historically
underserved neighborhoods of Washington, D.C., have been lauded for performing
precisely this function. [As Sarah Williams Goldhagen
writes](http://www.newrepublic.com/article/112443/revolution-your-community-
library):

> Adjaye is so attuned to the nuances of urban context that one might be hard
pressed to identify them as the work of one designer. Francis Gregory is steel
and glass, Bellevue is concrete and wood. Francis Gregory presents a single
monolithic volume, Bellevue an irregular accretion of concrete pavilions.
Context drives the aesthetic.

His designs “make of this humble municipal building an arena for social
interaction, …a distinctive civic icon that helps build a sense of common
identity.” This kind of social infrastructure serves a vital need for an
entire community.

[![Stacks at the Stephen A. Schwarzman Building, New York Public Library. \[Published in a 1911 issue of Scientific American\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-10x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-10x.jpg)Stacks at the Stephen A. Schwarzman Building, New York Public Library. [Published in a 1911 issue of _Scientific American_ ]

## Library as Technological-Intellectual Infrastructure

Of course, we must not forget the library collection itself. The old-fashioned
bookstack was [at the center of the recent
debate](http://online.wsj.com/news/articles/SB10001424127887323751104578151653883688578)
over the proposed renovation of the New York Public Library’s Schwarzman
Building on 42nd Street, which was
[cancelled](http://www.nytimes.com/2014/05/08/arts/design/public-library-
abandons-plan-to-revamp-42nd-street-building.html) last month after more than
a year of lawsuits and protests. This storage infrastructure, and the delivery
system it accommodates, have tremendous significance even in a digital age.
For scholars, the stacks represent near-instant access to any materials within
the extensive collection. Architectural historians defended the historical
significance of the stacks, and engineers argued that they are critical to the
structural integrity of the building.

The way a library’s collection is stored and made accessible shapes the
intellectual infrastructure of the institution. The Seattle Public Library
uses [translucent acrylic
bookcases](http://blog.spacesaver.com/StoragesolvedwithSpacesaver/bid/33285
/You-re-not-going-crazy-Library-book-stacks-ARE-cool) made by Spacesaver — and
even here this seemingly mundane, utilitarian consideration cultivates a
character, an ambience, that reflects the library’s identity and its
intellectual values. It might sound corny, but the luminescent glow permeating
the stacks acts as a beacon, a welcoming gesture. There are still many
contemporary libraries that privilege — perhaps even fetishize — the book and
the bookstack: take MVRDV’s [Book
Mountain](http://www.mvrdv.nl/projects/spijkenisse/) (2012), for a town in the
Netherlands; or TAX arquitectura’s [Biblioteca Jose
Vasconcelos](http://www.designboom.com/architecture/biblioteca-vasconcelos-by-
tax-arquitectura-alberto-kalach/) (2006) in Mexico City.

Stacks occupy a different, though also fetishized, space in Helmut Jahn’s
[Mansueto Library](http://www.archdaily.com/143532/joe-and-rika-mansueto-
library-murphy-jahn/) (2011) at the University of Chicago, which mixes diverse
infrastructures to accommodate media of varying materialities: a grand reading
room, a conservation department, a digitization department, and [a
subterranean warehouse of books retrieved by
robot](https://www.youtube.com/watch?v=ESCxYchCaWI&feature=youtu.be). (It’s
worth noting that Boston and other libraries contained [book
railways](http://libraryhistorybuff.blogspot.com/2010/12/book-retrieval-
systems.html) and conveyor belt retrieval systems — proto-robots — a century
ago.) Snøhetta’s [James B. Hunt Jr.
Library](http://www.ncsu.edu/huntlibrary/watch/) (2013) at North Carolina
State University also incorporates a robotic storage and retrieval system, so
that the library can store more books on site, as well as meet its goal of
providing seating for 20 percent of the student population. 23 Here the
patrons come before the collection.

[![Rem Koolhaas/OMA, Seattle Central Library, Spacesaver bookshelves. \[Photo by brewbooks\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-11x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-11x.jpg)Rem Koolhaas/OMA, Seattle Central Library, Spacesaver bookshelves. [Photo by [brewbooks](https://www.flickr.com/photos/brewbooks/4472712525/)]

[![MVRDV, Book Mountain, Spijkenisse, The Netherlands. \[Photo via MVRDV\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-12x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-12x.jpg)MVRDV, Book Mountain, Spijkenisse, The Netherlands. [Photo via MVRDV]

[![TAX, Biblioteca Vasconcelos, Mexico City. \[Photo by Clinker\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-13x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-13x.jpg)TAX, Biblioteca Vasconcelos, Mexico City. [Photo by [Clinker](https://www.flickr.com/photos/photos_clinker/295038829/)]

[![Helmut Jahn, Mansueto Library, University of Chicago, reading room above underground stacks. \[Photo by Eric Allix Rogers\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-14x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-14x.jpg)Helmut Jahn, Mansueto Library, University of Chicago, reading room above underground stacks. [Photo by [Eric Allix Rogers](https://www.flickr.com/photos/reallyboring/5766873063/)]

[![Mansueto Library stacks. \[Photo by Corey Seeman\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-15x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-15x.jpg)Mansueto Library stacks. [Photo by [Corey Seeman](https://www.flickr.com/photos/cseeman/14148827344/)]

Back in the early aughts, when I spent a summer touring libraries, the
institutions on the leading edge were integrating media production facilities,
recognizing that media “consumption” and “creation” lie on a gradient of
knowledge production. Today there’s a lot of talk about — [and action
around](http://www.infodocket.com/2013/12/16/results-of-makerspaces-in-
libraries-study-released/) — integrating hacker labs and maker-spaces. 24 As
Anne Balsamo explains, these sites offer opportunities — embodied, often
inter-generational learning experiences that are integral to the development
of a “technological imagination” — that are rarely offered in formal learning
institutions. 25

The Hunt Library has a maker-space, a GameLab, various other production labs
and studios, an immersion theater, and, rather eyebrow-raisingly, an Apple
Technology Showcase (named after library donors whose surname is Apple, with
an intentional pun on the electronics company). 26 One might think major
funding is needed for those kinds of programs, but the trend actually began in
2011 in tiny Fayetteville, New York (pop. 4,373), thought to be [the first
public library](http://www.forbes.com/sites/tjmccue/2011/11/15/first-public-
library-to-create-a-maker-space/) to have incorporated a maker-space. The
following year, the Carnegie Libraries of Pittsburgh — which for years has
hosted film competitions, gaming tournaments, and media-making projects for
youth — [launched](http://www.libraryasincubatorproject.org/?p=6653), with
Google and Heinz Foundation support, [The
Labs](http://www.clpgh.org/teens/events/programs/thelabs/): weekly workshops
at three locations where teenagers can access equipment, software and mentors.
Around the same time, Chattanooga — a city blessed with a [super-high-speed
municipal fiber network](http://www.washingtonpost.com/blogs/the-
switch/wp/2013/09/17/how-chattanooga-beat-google-fiber-by-half-a-decade/) —
opened its lauded [4th Floor](http://chattlibrary.org/4th-floor), a
12,000-square-foot “public laboratory and educational facility” that “supports
the production, connection, and sharing of knowledge by offering access to
tools and instruction.” Those tools include 3D printers, laser cutters and
vinyl cutters, and the instruction includes everything from tech classes, to
incubator projects for female tech entrepreneurs, to [business pitch
competitions](http://www.nooga.com/158480/hundreds-attend-will-this-float-
business-pitch-event/).

Last year, the Brooklyn Public Library, just a couple blocks from where I
live, opened its [Levy Info
Commons](http://www.bklynlibrary.org/locations/central/infocommons), which
includes space for laptop users and lots of desktop machines featuring
creative software suites; seven reservable teleconference-ready meeting
rooms, including one that doubles as a recording studio; and a training lab,
which offers an array of digital media workshops led by a local arts and
design organization and also invites patrons to lead their own courses. A
typical month on their robust event calendar includes resume editing
workshops, a Creative Business Tech prototyping workshop, individual meetings
with business counselors, Teen Tech tutorials, computer classes for seniors,
workshops on podcasting and oral history and “adaptive gaming” for people with
disabilities, and even an audio-recording and editing workshop targeted to
poets, to help them disseminate their work in new formats. Also last year, the
Martin Luther King, Jr., Memorial Library in Washington, D.C., opened its
[Digital Commons](http://www.washingtonpost.com/blogs/the-switch/wp/2013/08/07
/the-digital-age-is-forcing-libraries-to-change-heres-what-that-looks-like/),
where patrons can use a print-on-demand bookmaking machine, a 3D printer, and
a co-working space known as the “Dream Lab,” or try out a variety of e-book
readers. The Chicago Public Library partnered with the Museum of Science and
Industry to open [a pop-up maker lab](http://arstechnica.com/gadgets/2013/07
/3d-printing-for-all-inside-chicago-librarys-new-pop-up-maker-lab/) featuring
open-source design software, laser cutters, a milling machine, and (of course)
3D printers — not one, but _three_.

[![Chattanooga Public Library, 4th Floor. \[Photo by Larry Miller\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-17x-1020x680.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-17x.jpg)Chattanooga Public Library, 4th Floor. [Photo by [Larry Miller](https://www.flickr.com/photos/drmillerlg/9228431656/sizes/l)]

[![Snøhetta, James B. Hunt, Jr. Library, North Carolina State University, MakerBot in Apple Technology Showcase. \[Photo by Mal Booth\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-16x-1020x680.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-16x.jpg)Snøhetta, James B. Hunt, Jr. Library, North Carolina State University, MakerBot in Apple Technology Showcase. [Photo by [Mal Booth](https://www.flickr.com/photos/malbooth/10401308096/sizes/l)]

[![Hunt Library, iPearl Immersion Theater. \[Photo by Payton Chung\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-18x-1020x573.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-18x.jpg)Hunt Library, iPearl Immersion Theater. [Photo by [Payton Chung](https://www.flickr.com/photos/paytonc/8758630775/sizes/l)]

Some have proposed that libraries — following in the tradition of Alexandria’s
“think tank,” and compelled by a desire to “democratize entrepreneurship” —
make for ideal [co-working or incubator
spaces](http://www.citylab.com/work/2013/02/why-libraries-should-be-next-
great-startup-incubators/4733/), where patrons with diverse skill sets can
organize themselves into start-ups-for-the-people. 27 Others recommend that
librarians entrepreneurialize _themselves_ , rebranding as professional
consultants in a complex information economy. Librarians, in this
view, are uniquely qualified digital literacy tutors; experts in “copyright
compliance, licensing, privacy, information use, and ethics”; gurus of
“aligning … programs with collections, space, and resources”; skilled creators
of “custom ontologies, vocabularies, taxonomies” and structured data; adept
practitioners of data mining. 28 Others recommend that libraries get into the
content production business. In the face of increasing pressure to rent and
license proprietary digital content with stringent use policies, why don’t
libraries do more to promote the creation of independent media or develop
their own free, open-source technologies? Not many libraries have the time and
resources to undertake such endeavors, but [NYPL
Labs](http://www.nypl.org/collections/labs) and Harvard’s [Library Test
Kitchen](http://www.librarytestkitchen.org/) have demonstrated what’s
possible when even back-of-house library spaces become sites of technological
praxis. Unfortunately, those innovative projects are typically hidden behind
the interface (as with so much library labor). Why not bring those operations
to the front of the building, as part of the public program?

Of course, with all these new activities come new spatial requirements.
Library buildings must incorporate a wide variety of furniture arrangements,
lighting designs, acoustical conditions, etc., to accommodate multiple sensory
registers, modes of working, postures and more. Librarians and designers are
now acknowledging — and designing _for_ , rather than designing _out_ —
activities that make noise and can occasionally be a bit messy. I did a study
several years ago on the evolution of library sounds and found widespread
recognition that knowledge-making doesn’t readily happen when “shhh!” is the
prevailing rule. 29

These new physical infrastructures create space for an epistemology embracing
the integration of knowledge consumption and production, of thinking and
making. Yet sometimes I have to wonder, given all the hoopla over “making”:
_are_ tools of computational fabrication really the holy grail of the
knowledge economy? What _knowledge_ is produced when I churn out, say, a
keychain on a MakerBot? I worry that the boosterism surrounding such projects
— and the much-deserved acclaim they’ve received for “rebranding” the library
— glosses over the neoliberal values that these technologies sometimes embody.
Neoliberalism channels the pursuit of individual freedom through property
rights and free markets 30 — and what better way to express yourself than by
3D-printing a bust of your own head at the library, or using the library’s CNC
router to launch your customizable cutting board business on Etsy? While
librarians have long been advocates of free and democratic access to
information, I trust — I hope — that they’re helping their patrons to
cultivate a [critical perspective](https://placesjournal.org/article
/tedification-versus-edification/) regarding [the politics of “technological
innovation”](http://en.wikipedia.org/wiki/The_Californian_Ideology) — and the
potential instrumentalism of makerhood. Sure, Dewey was part of this
instrumentalist tradition, too. But our contemporary pursuit of “innovation”
promotes the idea that “making new stuff” = “producing knowledge,” which can
be a dangerous falsehood.

Library staff might want to take up the critique of “innovation,” too. Each
new Google product release, new mobile technology development, new e-reader
launch brings new opportunities for the library to innovate in response. And
while “keeping current” is a crucial goal, it’s important to place that
pursuit in a larger cultural, political-economic and institutional context.
Striving to stay technologically relevant can backfire when it means merely
responding to the profit-driven innovations of commercial media; we see these
mistakes — innovation for innovation’s sake — in the [ed-
tech](http://en.wikipedia.org/wiki/Educational_technology) arena quite often.

[![George Peabody Library, The Johns Hopkins University. \[Photo by Thomas Guignard\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-19x-1020x680.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-19x.jpg)George Peabody Library, The Johns Hopkins University. [Photo by [Thomas Guignard](https://www.flickr.com/photos/timtom/5304555668/)]

## Reading across the Infrastructural Ecology

Libraries need to stay focused on their long-term cultural goals — which
should hold true regardless of what Google decides to do tomorrow — and on
their place within the larger infrastructural ecology. They also need to
consider how their various infrastructural identities map onto each other, or
don’t. Can an institution whose technical and physical infrastructure is
governed by the pursuit of innovation also fulfill its obligations as a social
infrastructure serving the disenfranchised? What ethics are embodied in the
single-minded pursuit of “the latest” technologies, or the equation of
learning with entrepreneurialism?

As Zadie Smith [argued
beautifully](http://www.nybooks.com/blogs/nyrblog/2012/jun/02/north-west-
london-blues/) in the _New York Review of Books_ , we risk losing the
library’s role as a “different kind of social reality (of the three
dimensional kind), which by its very existence teaches a system of values
beyond the fiscal.” 31 Barbara Fister, a librarian at Gustavus Adolphus
College, offered an [equally eloquent
plea](http://www.insidehighered.com/blogs/library-babel-fish/some-assumptions-
about-libraries#sthash.jwJlhrsD.dpbs) for the library as a space of exception:

> Libraries are not, or at least should not be, engines of productivity. If
anything, they should slow people down and seduce them with the unexpected,
the irrelevant, the odd and the unexplainable. Productivity is a destructive
way to justify the individual’s value in a system that is naturally communal,
not an individualistic or entrepreneurial zero-sum game to be won by the most
industrious. 32

Libraries, she argued, “will always be at a disadvantage” to Google and Amazon
because they value privacy; they refuse to exploit users’ private data to
improve the search experience. Yet libraries’ failure to compete in
_efficiency_ is what affords them the opportunity to offer a “different kind
of social reality.” I’d venture that there _is_ room for entrepreneurial
learning in the library, but there also has to be room for that alternate
reality where knowledge needn’t have monetary value, where learning isn’t
driven by a profit motive. We can accommodate both spaces for entrepreneurship
_and_ spaces of exception, provided the institution has a strong _epistemic
framing_ that encompasses both. This means that the library needs to know how
to read _itself_ as a social-technical-intellectual infrastructure.

It’s particularly important to cultivate these critical capacities — the
ability to “read” our libraries’ multiple infrastructures and the politics and
ethics they embody — when the concrete infrastructures look like San Antonio’s
[BiblioTech](http://bexarbibliotech.org/), a “bookless” library featuring
10,000 e-books, downloadable via the 3M Cloud App; 600 circulating “stripped
down” 3M e-readers; 200 “enhanced” tablets for kids; and, for use on-site, 48
computers, plus laptops and iPads. The library, which opened last fall, also
offers computer classes and meeting space, but it’s all locked within a
proprietary platformed world.

[![Bexar County BiblioTech, San Antonio, Texas. \[Photo by Bexar BiblioTech\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-21x-1020x573.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-21x.jpg)Bexar County BiblioTech, San Antonio, Texas. [Photo by Bexar BiblioTech]

[![Screenshot of the library’s fully digital collection. \[Photo by Bexar BiblioTech\]](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-20x.jpg)](https://placesjournal.org/wp-content/uploads/2014/06/mattern-library-infrastructure-20x.jpg)Screenshot of the library’s fully digital collection. [Photo by Bexar BiblioTech]

In libraries like BiblioTech — and the [Digital Public Library of
America](http://dp.la/) — the collection itself is off-site. Do _patrons_
wonder where, exactly, all those books and periodicals and cloud-based
materials _live_? What’s under, or floating above, the “platform”? Do they
think about the algorithms that lead them to particular library materials, and
the conduits and protocols through which they access them? Do they consider
what it means to supplant bookstacks with server stacks — whose metal racks we
can’t kick, lights we can’t adjust, knobs we can’t fiddle with? Do they think
about the librarians negotiating access licenses and adding metadata to
“digital assets,” or the engineers maintaining the servers? With the
increasing recession of these technical infrastructures — and the human labor
that supports them — further off-site, [behind the
interface](https://placesjournal.org/article/interfacing-urban-intelligence/),
deeper inside the black box, how can we understand the ways in which those
structures structure our intellect and sociality?

We need to develop — both among library patrons and librarians themselves —
new critical capacities to understand the _distributed_ physical, technical
and social architectures that scaffold our institutions of knowledge and
program our values. And we must consider where those infrastructures intersect
— where they should be, and perhaps aren’t, reinforcing one another.
When do our social obligations compromise our intellectual aspirations, or
vice versa? And when do those social or intellectual aspirations for the
library exceed — or fail to fully exploit — the capacities of our
architectural and technological infrastructures? Ultimately, we need to ensure
that we have a strong epistemological framework — a narrative that explains
how the library promotes learning and stewards knowledge — so that everything
hangs together, so there’s some institutional coherence. We need to sync the
library’s intersecting infrastructures so that they work together to support
our shared intellectual and ethical goals.


###### Author's Note

I’d like to thank the students in my “Archives, Libraries and Databases”
seminar and my “Digital Archives” studio at The New School, who’ve given me
much food for thought over the years. Thanks, too, to my colleagues at the
[Architectural League of New York](http://archleague.org/) and the [Center for
an Urban Future](http://nycfuture.org/). I owe a debt of gratitude also to
Gabrielle Dean, her students, and her colleagues at Johns Hopkins, who gave me
an opportunity to share a preliminary draft of this work. They, along with my
colleagues Julie Foulkes and Aleksandra Wagner, offered feedback for which I’m
very grateful.

###### Notes

1. See Matthew Battles, _Library: An Unquiet History_ (New York: W.W. Norton, 2003); Lionel Casson, _Libraries in the Ancient World_ (New Haven: Yale University Press, 2001); Fred Lerner, _The Story of Libraries_ (New York: Continuum, 1999).
2. Casson explains that when Alexandria was a brand new city in the third century B.C., its founders enticed intellectuals to the city — in an attempt to establish it as a cultural center — with the famous Museum, “a figurative temple for the muses, a place for cultivating the arts they symbolized. It was an ancient version of a think-tank: the members, consisting of noted writers, poets, scientists, and scholars, were appointed by the Ptolemies for life and enjoyed a handsome salary, tax exemption … free lodging, and food. … It was for them that the Ptolemies founded the library of Alexandria” [33-34].
3. Donald Oehlerts, _Books and Blueprints: Building America’s Public Libraries_ (New York: Greenwood Press, 1991): 62.
4. David Weinberger, “[Library as Platform](http://lj.libraryjournal.com/2012/09/future-of-libraries/by-david-weinberger/),” _Library Journal_ (September 4, 2012).
5. For more on “infrastructural ecologies,” see Reyner Banham, _Los Angeles: The Architecture of Four Ecologies_ (Berkeley: University of California Press, 2009 [1971]); Alan Latham, Derek McCormack, Kim McNamara and Donald McNeil, _Key Concepts in Urban Geography_ (Thousand Oaks, CA: Sage, 2009): 32; Ming Xu and Josh P. Newell, “[Infrastructure Ecology: A Conceptual Model for Understanding Urban Sustainability](http://css.snre.umich.edu/publication/infrastructure-ecology-conceptual-model-understanding-urban-sustainability),” Sixth International Conference of the International Society for Industrial Ecology (ISIE) Proceedings, Berkeley, CA, June 7-10, 2011; Anu Ramaswami, Christopher Weible, Deborah Main, Tanya Heikkila, Saba Siddiki, Andrew Duvail, Andrew Pattison and Meghan Bernard, “A Social-Ecological-Infrastructural Systems Framework for Interdisciplinary Study of Sustainable City Systems,” _Journal of Industrial Ecology_ 16:6 (December 2012): 801-13. Most references to infrastructural ecologies — and there are few — pertain to systems at the urban scale, but I believe a library is a sufficiently complicated institution, residing at the nexus of myriad networks, that it constitutes an infrastructural ecology in its own right.
6. Center for an Urban Future, [“Opportunity Institutions” Conference](http://nycfuture.org/events/event/opportunity-institutions) (March 11, 2013). See also Jesse Hicks and Julie Dressner’s video “[Libraries Now: A Day in the Life of NYC’s Branches](http://nymag.com/daily/intelligencer/2014/05/libraries-now-new-york-video.html)” (May 16, 2014).
7. Center for an Urban Future, _[Branches of Opportunity](http://nycfuture.org/research/publications/branches-of-opportunity)_ (January 2013): 3.
8. Quoted in Katie Gilbert, “[What Is a Library?](http://narrative.ly/long-live-the-book/what-is-a-library/)” _Narratively_ (January 2, 2014).
9. Real estate sales are among the most controversial elements in the New York Public Library’s much-disputed Central Library Plan, which is premised on the sale of the library’s Mid-Manhattan branch and its Science, Industry and Business Library. See Scott Sherman, “[The Hidden History of New York City’s Central Library Plan](http://www.thenation.com/article/175966/hidden-history-new-york-citys-central-library-plan),” _The Nation_ (August 28, 2013).
10. Toyo Ito, “The Building After,” _Artforum_ (September 2013).
11. Eric Klinenberg, “[Toward a Stronger Social Infrastructure: A Conversation with Eric Klinenberg](http://urbanomnibus.net/2013/10/toward-a-stronger-social-infrastructure-a-conversation-with-eric-klinenberg/),” _Urban Omnibus_ (October 16, 2013).
12. I’m a member of the organizing team for this project, and I hope to write more about its outcomes in a future article for this journal.
13. Michael Kimmelman, “[Next Time, Libraries Could Be Our Shelters From the Storm](http://www.nytimes.com/2013/10/03/arts/design/next-time-libraries-could-be-our-shelters-from-the-storm.html),” _New York Times_ (October 2, 2013).
14. Ruth Faklis, in Joseph Janes, Ed., _Library 2020: Today’s Leading Visionaries Describe Tomorrow’s Library_ (Lanham: Scarecrow Press, 2013): 96-7.
15. The Seattle Central Library was a focus of [my first book](http://www.upress.umn.edu/book-division/books/the-new-downtown-library), on public library design. See _The New Downtown Library: Designing With Communities_ (Minneapolis: University of Minnesota Press, 2007).
16. Personal communication with Marcellus Turner, March 21, 2014.
17. Marcellus Turner in _Library 2020_ : 92.
18. Ken Worpole addresses library partnerships, and their implications for design, in his _Contemporary Library Architecture: A Planning and Design Guide_ (New York: Routledge, 2013). The book offers a comprehensive look at the public roles that libraries serve, and how they inform library planning and design.
19. Kristin Fontichiaro in _Library 2020_ : 8.
20. See Bill Ptacek in _Library 2020_ : 119.
21. The quotations are from my earlier article for Places, “[Marginalia: Little Libraries in the Urban Margins](http://places.designobserver.com/feature/little-libraries-and-tactical-urbanism/33968/).” Within mass-digitization projects like Google Books, as Elisabeth Jones explains, “works that are still in copyright but out of print and works of indeterminate copyright status and/or ownership” will fall between the cracks (in _Library 2020_ : 17).
22. I dedicate a chapter in _The New Downtown Library_ to what makes a library “contextual” — and I address just how slippery that term can be.
23. This sentence was amended after publication to note the multiple motives of implementing the bookBot storage and retrieval system; its compact storage allowed the library to reintegrate some collections that were formerly stored off-site. The library has also developed a Virtual Browse catalog system, which aims to promote virtual discovery that isn’t possible in the physical stacks.
24. According to a late 2013 web-based survey of libraries, 41 percent of respondents provide maker-spaces or maker activities in their libraries, and 36 percent plan to create such spaces in the near future. Most maker-spaces, 51 percent, are in public libraries; 36 percent are in academic libraries; and 9 percent are in school libraries. And among the most popular technologies or technological processes supported in those spaces are computer workstations (67 percent), 3D printers (46 percent), photo editing (45 percent), video editing (43 percent), computer programming/software (39 percent). 33 percent accommodated digital music recording; 31 percent accommodated 3D modeling, and 30 percent featured work with Arduino and Raspberry Pi circuit boards (Gary Price, “[Results From ‘Makerspaces in Libraries’ Study Released](http://www.infodocket.com/2013/12/16/results-of-makerspaces-in-libraries-study-released/),” _Library Journal_ (December 16, 2013)). See also James Mitchell, “[Beyond the Maker Space](http://lj.libraryjournal.com/2014/05/opinion/backtalk/beyond-the-maker-space-backtalk/),” _Library Journal_ (May 27, 2014).
25. Anne Balsamo, “[Videos and Frameworks for ‘Tinkering’ in a Digital Age](http://spotlight.macfound.org/blog/entry/anne-balsamo-tinkering-videos/),” Spotlight on Digital Media and Learning (January 30, 2009).
26. This sentence was amended after publication to note that the Apple Technology Showcase was named after former NCSU faculty member Dr. J. Lawrence Apple and his wife, Ella Apple; in an email to the author, library director Carolyn Argentati wrote that the corporate pun was intentional.
27. Emily Badger, “[Why Libraries Should Be the Next Great Start-Up Incubators](http://www.citylab.com/work/2013/02/why-libraries-should-be-next-great-startup-incubators/4733/),” _Atlantic Cities_ (February 19, 2013).
28. Stephen Abram in _Library 2020_ : 46; Courtney Greene in _Library 2020_ : 51.
29. See my “[Resonant Texts: Sounds of the Contemporary American Public Library](http://www.wordsinspace.net/publications/Mattern_Senses%20and%20Society.pdf),” _The Senses & Society_ 2:3 (Fall 2007): 277-302.
30. See David Harvey, _A Brief History of Neoliberalism_ (New York: Oxford University Press, 2005).
31. Zadie Smith, “[The North West London Blues](http://www.nybooks.com/blogs/nyrblog/2012/jun/02/north-west-london-blues/),” _New York Review of Books_ Blog (June 2, 2012).
32. Barbara Fister, “[Some Assumptions About Libraries](http://www.insidehighered.com/blogs/library-babel-fish/some-assumptions-about-libraries#sthash.jwJlhrsD.dpbs),” _Inside Higher Ed_ (January 2, 2014).

###### Cite

Shannon Mattern, "Library as Infrastructure," _Places Journal_ , June 2014.
Accessed 09 Jun 2019.


Mattern
Making Knowledge Available
2018


# Making Knowledge Available

## The media of generous scholarship

[Shannon Mattern](http://www.publicseminar.org/author/smattern/ "Posts by
Shannon Mattern") -- [March 22, 2018](http://www.publicseminar.org/2018/03
/making-knowledge-available/ "Permalink to Making Knowledge Available")


[ ![](http://www.publicseminar.org/wp-content/uploads/2018/03
/6749000895_ea0145ed2d_o-750x375.jpg) ](http://www.publicseminar.org/wp-
content/uploads/2018/03/6749000895_ea0145ed2d_o.jpg "Making Knowledge
Available")

Visible Knowledge © Jasinthan Yoganathan | Flickr

A few weeks ago, shortly after reading that Elsevier, the world’s largest
academic publisher, had made over €1 billion in profit in 2017, I received
notice of a new journal issue on decolonization and media.* “Decolonization”
denotes the dismantling of imperialism, the overturning of systems of
domination, and the founding of new political orders. Recalling Achille
Mbembe’s exhortation that we seek to decolonize our knowledge production
practices and institutions, I looked forward to exploring this new collection
of liberated learning online – amidst that borderless ethereal terrain where
information just wants to be free. (…Not really.)

Instead, I encountered a gate whose keeper sought to extract a hefty toll: $42
to rent a single article for the day, or $153 to borrow it for the month. The
keeper of that particular gate, mega-publisher Taylor & Francis, like the
keepers of many other epistemic gates, has found toll-collecting to be quite a
profitable business. Some of the largest academic publishers have, in recent
years, achieved profit margins of nearly 40%, higher than those of Apple and
Google. Granted, I had access to an academic library and an InterLibrary Loan
network that would help me to circumvent the barriers – yet I was also aware
of just how much those libraries were paying for that access on my behalf; and
of all the un-affiliated readers, equally interested and invested in
decolonization, who had no academic librarians to serve as their liaisons.

I’ve found myself standing before similar gates in similar provinces of
paradox: the scholarly book on “open data” that sells for well over $100; the
conference on democratizing the “smart city,” where tickets sell for ten times
as much. Librarian Ruth Tillman was [struck with “acute irony
poisoning”](https://twitter.com/ruthbrarian/status/932701152839454720) when
she encountered a costly article on rent-seeking and value-grabbing in a
journal of capitalism and socialism, which was itself rentable by the month
for a little over $900.

We’re certainly not the first to acknowledge the paradox. For decades, many
have been advocating for open-access publishing, authors have been campaigning
for less restrictive publishing agreements, and librarians have been
negotiating with publishers over exorbitant subscription fees. That fight
continues: in mid-February, over 100 libraries in the UK and Ireland
[submitted a letter](https://www.sconul.ac.uk/page/open-letter-to-the-
management-of-the-publisher-taylor-francis) to Taylor & Francis protesting
their plan to lock up content more than 20 years old and sell it as a separate
package.

My coterminous discoveries of Elsevier’s profit and that decolonization-
behind-a-paywall once again highlighted the ideological ironies of academic
publishing, prompting me to [tweet
something](https://twitter.com/shannonmattern/status/969418644240420865) half-
baked about academics perhaps giving a bit more thought to whether the
politics of their publishing  _venues_  – their media of dissemination –
matched the politics they’re arguing for in their research. Maybe, I proposed,
we aren’t serving either ourselves or our readers very well by advocating for
social justice or “the commons” – or sharing progressive research on labor
politics and care work and the elitism of academic conventions – in journals
that extract huge profits from free labor and exploitative contracts and fees.

Despite my attempt to drown my “call to action” in a swamp of rhetorical
conditionals – “maybe” I was “kind-of” hedging “just a bit”? – several folks
quickly, and constructively, pointed out some missing nuances in my tweet.
[Librarian and LIS scholar Emily Drabinski
noted](https://twitter.com/edrabinski/status/969629307147563008) the dangers
of suggesting that individual “bad actors” are to blame for the hypocrisies
and injustices of a broken system – a system that includes authors, yes, but
also publishers of various ideological orientations, libraries, university
administrations, faculty review committees, hiring committees, accreditors,
and so forth.

And those authors are not a uniform group. Several junior scholars replied to
say that they think  _a lot_  about the power dynamics of academic publishing
(many were “hazed,” at an early age, into the [Impact
Factor](https://en.wikipedia.org/wiki/Impact_factor) Olympics, encouraged to
obsessively count citations and measure “prestige”). They expressed a desire
to experiment with new modes and media of dissemination, but lamented that
they had to bracket their ethical concerns and aesthetic aspirations. Because
tenure. Open-access publications, and more-creative-but-less-prestigious
venues, “don’t count.” Senior scholars chimed in, too, to acknowledge that
scholars often publish in different venues at different times for different
purposes to reach different audiences (I’d add, as well, that some
conversations need to happen in enclosed, if not paywalled, environments
because “openness” can cultivate dangerous vulnerabilities). Some also
concluded that, if we want to make “open access” and public scholarship – like
that featured in  _Public Seminar_  – “count,” we’re in for a long battle: one
that’s best waged within big professional scholarly associations. Even then,
there’s so much entrenched convention – so many naturalized metrics and
administrative structures and cultural habits – that we’re kind-of stuck with
these rentier publishers (to elevate the ingrained irony: in August 2017,
Elsevier acquired bepress, an open-access digital repository used by many
academic institutions). They need our content and labor, which we willingly give
away for free, because we need their validation even more.

All this is true. Still, I’d prefer to think that we  _can_ actually resist
rentierism, reform our intellectual infrastructures, and maybe even make some
progress in “decolonizing” the institution over the next years and decades. As
a mid-career scholar, I’d like to believe that my peers and I, in
collaboration with our junior colleagues and colleagues-to-be, can espouse new
values – which include attention to the political, ethical, and even aesthetic
dimensions of the means and  _media_ through which we do our scholarship – in
our search committees, faculty reviews, and juries. Change  _can_  happen at
the local level; one progressive committee can set an example for another, and
one college can do the same. Change can take root at the mega-institutional
scale, too. Several professional organizations, like the Modern Language
Association and many scientific associations, have developed policies and
practices to validate open-access publishing. We can look, for example, to the
[MLA Commons](https://mla.hcommons.org/) and the [Manifold publishing
platform](https://manifold.umn.edu/). We can also look to Germany, where a
nationwide consortium of libraries, universities, and research institutes has
been battling Elsevier since 2016 over their subscription and access policies.
Librarians have long been advocates for ethical publishing, and [as Drabinski
explains](https://crln.acrl.org/index.php/crlnews/article/view/9568/10924),
they’re equipped to consult with scholars and scholarly organizations about
the publication media and platforms that best reinforce their core values.
Those values are the chief concern of the [HuMetricsHSS
initiative](http://humetricshss.org/about-2/), which is imagining a “more
humane,” values-based framework for evaluating scholarly work.

We also need to acknowledge the work of those who’ve been advocating for
similar ideals – and working toward a more ethically reflective publishing
culture – for years. Let’s consider some examples from the humanities and
social sciences – like the path-breaking [Institute for the Future of the
Book](http://www.futureofthebook.org/), which provided the platform where my
colleague McKenzie Wark publicly edited his [ _Gamer
Theory_](http://futureofthebook.org/gamertheory2.0/) back in 2006. Wark’s book
began online and became a print book, published by Harvard. Several
institutions – MIT; [Minnesota](https://www.upress.umn.edu/book-division/series/forerunners-ideas-first); [Columbia’s Graduate School of Architecture, Planning, and Preservation](https://www.arch.columbia.edu/books) (whose publishing unit is led by a New
School alum, James Graham, who also happens to be a former thesis advisee);
Harvard’s [Graduate School of Design](http://www.gsd.harvard.edu/publications/) and
[metaLab](http://www.hup.harvard.edu/collection.php?cpk=2006); and The New
School’s own [Vera List Center](http://www.veralistcenter.org/engage/publications/1993/entry-pointsthe-vera-list-center-field-guide-on-art-and-social-justice-no-1/) – have been
experimenting with the printed book. And individual scholars and
practitioners, like Nick Sousanis, who [published his
dissertation](http://www.hup.harvard.edu/catalog.php?isbn=9780674744431) as a
graphic novel, regard the bibliographic form as integral to their arguments.

Kathleen Fitzpatrick has also been a vibrant force for change, through her
work with the [MediaCommons](http://mediacommons.futureofthebook.org/) digital
scholarly network, her two [open-review](http://www.plannedobsolescence.net/peer-to-peer-review-and-its-aporias/) books, and [her
advocacy](http://www.plannedobsolescence.net/evolving-standards-and-practices-
in-tenure-and-promotion-reviews/) for more flexible, more thoughtful faculty
review standards. Her new manuscript,  _Generous Thinking_ , which lives up to
its name, proposes [public intellectualism](https://generousthinking.hcommons.org/4-working-in-public/public-intellectuals/) as one such generous practice and advocates for [its positive
valuation](https://generousthinking.hcommons.org/5-the-university/) within the
academy. “What would be required,” she asks, “for the university to begin
letting go of the notion of prestige and of the competition that creates it in
order to begin aligning its personnel processes with its deepest values?” Such
a realignment, I want to emphasize, need not mean a reduction in rigor, as
some have worried; we can still have standards, while insisting that they
correspond to our values. USC’s Tara McPherson has modeled generous and
careful scholarship through her own work and her collaborations in developing
the [Vectors](http://vectors.usc.edu/issues/index.php?issue=7) and
[Scalar](https://scalar.me/anvc/scalar/) publishing platforms, which launched
in 2005 and 2013, respectively.  _Public Seminar_  is [part of that long
tradition](http://www.publicseminar.org/2017/09/the-life-of-the-mind-online/),
too.

Individual scholars – particularly those who enjoy some measure of security –
can model a different pathway and advocate for a more sane, sustainable, and
inclusive publication and review system. Rather than blaming the “bad actors”
for making bad choices and perpetuating a flawed system, let’s instead
incentivize the good ones to practice generosity.

In that spirit, I’d like to close by offering a passage I included in my own
promotion dossier, where I justified my choice to prioritize public
scholarship over traditional peer-reviewed venues. I aimed here to make my
values explicit. While I won’t know the outcome of my review for a few months,
and thus I can’t say whether or not this passage successfully served its
rhetorical purpose, I do hope I’ve convincingly argued here that, in
researching media and technology, one should also think critically about the
media one chooses to make that research public. I share this in the hope that
it’ll be useful to others preparing for their own job searches and faculty
reviews, or negotiating their own politics of practice. The passage is below.

* * *

…[A] concern with public knowledge infrastructures has… informed my choice of
venues for publication. Particularly since receiving tenure I’ve become much
more attuned to publication platforms themselves as knowledge infrastructures.
I’ve actively sought out venues whose operational values match the values I
espouse in my research – openness and accessibility (and, equally important,
good design!) – as well as those that The New School embraces through its
commitment to public scholarship and civic engagement. Thus, I’ve steered away
from those peer-reviewed publications that are secured behind paywalls and
rely on uncompensated editorial labor while their parent companies uphold
exploitative copyright policies and charge exorbitant subscription fees. I’ve
focused instead on open-access venues. Most of my articles are freely
available online, and even my 2015 book,  _Deep Mapping the Media City_ ,
published by the University of Minnesota Press, has been made available
through the Mellon Foundation-funded Manifold open-access publishing platform.
In those cases in which I have been asked to contribute work to a restricted
peer-reviewed journal or costly edited volume, I’ve often negotiated with the
publisher to allow me to “pre-print” my work as an article in an open-access
online venue, or to preview an unedited copy.

I’ve been invited to address the ethics and epistemologies of scholarly
publishing and pedagogical platforms in a variety of venues, A, B, C, D, and
E. I also often chat with graduate students and junior scholars about their
own “publication politics” and appropriate venues for their work, and I review
their prospectuses and manuscripts.

The most personally rewarding and professionally valuable publishing
experience of my post-tenure career has been my collaboration with  _Places
Journal_ , a highly regarded non-profit, university-supported, open-access
venue for public scholarship on landscape, architecture, and urbanism. After
having written thirteen (fifteen by Fall 2017) long-form pieces for  _Places_
since 2012, I’ve effectively assumed their “urban data and mediated spaces”
beat. I work with paid, professional editors who care not only about subject
matter – they’re just as much domain experts as any academic peer reviewer
I’ve encountered – but also about clarity and style and visual presentation.
My research and writing process for  _Places_ is no less time- and labor-
intensive, and the editorial process is no less rigorous, than would be
required for a traditional academic publication, but  _Places_  allows my work
to reach a global, interdisciplinary audience in a timely manner, via a
smartly designed platform that allows for rich illustration. This public
scholarship has a different “impact” than pay-walled publications in prestige
journals. Yet the response to my work on social media, the number of citations
it’s received (in both scholarly and popular literature), and the number of
invitations it’s generated, suggest the significant, if incalculable, value of
such alternative infrastructures for academic publishing. By making my work
open and accessible, I’ve still managed to meet many of the prestige- and
scarcity-driven markers of academic excellence (for more on my work’s impact,
see Appendix A).

_* I’ve altered some details so as to avoid sanctioning particular editors or
authors._

_Shannon Mattern is Associate Professor of Media Studies at The New School and
author of numerous books with University of Minnesota Press. Find her on
Twitter [@shannonmattern](http://www.twitter.com/shannonmattern)._


Medak
Death and Survival of Dead Labor
2016


# Death and Survival of Dead Labor

by Tomislav Medak — Jan 08, 2016

![](https://schloss-post.com/content/uploads/public-library_wuerttembergischer-kunstverein-600x450.jpg)

»Public Library. Rethinking the Infrastructures of
Knowledge Production«
Exhibition at Württembergischer Kunstverein Stuttgart, 2014

**The present-day social model of authorship is co-substantive with the
normative regime of copyright. Copyright’s avowed role is to triangulate a
balance between the rights of authors, cultural industries, and the public.
Its legal foundation is in the natural right of the author over the products
of intellectual labor. The recurrent claims of the death of the author,
disputing the primacy of the author over the work, have failed to do much to
displace the dominant understanding of the artwork as an extension of the
personality of the author.**

The structuralist criticism positing an impersonal structuring structure
within which the work operates; the hypertextual criticism dissolving the
boundaries of the work in the arborescent web of referentiality; and the remix
culture’s hypostatisation of the collective and re-appropriative nature of all
creativity – while changing the meaning we ascribe to the works of culture –
have all failed to leave an impact on how the production of works is
normativized and regulated.

And yet the nexus author–work–copyright has transformed in fundamental ways,
though in ways opposite to what these openings in our social epistemology have
suggested. The figure of the creator, with the attendant apotheosis of
individual creativity and originality, is nowadays more forcefully than ever
before being mobilized and animated by the efforts to expand the exclusive
realm of exploitation of the work under copyright. That forcefulness, though,
speaks of a deep-seated neurosis, intimating that the purported balance might
not be what copyright advocates claim it to be. Much is revealed as we descend
into the hidden abode of production.

## _Of Copyright and Authorship_

Copyright has principally an economic function: to unambiguously establish
individualized property in the products of intellectual labor. Once the legal
title is assigned, there is a property holder with whose consent the
contracting, commodification, and marketing of the work can proceed. In that
aspect, copyright is not very different from the requirement of formal freedom
that is granted to the laborer to contract out their own labor power as a
commodity to capital, allowing capital to maximize productivity and
appropriate the products of the worker’s labor – which is, in Marx’s terms,
»dead labor.« In fact, the analogy between the contracting of labor power and
the contracting of intellectual work does not stop there. They also share a
common history.

The liberalism of rights and the commodification of labor emerged from the
context of waning absolutism and incipient capitalism in the Europe of the
seventeenth and eighteenth centuries. Before publishers and authors could have
their monopoly over the exploitation of their publications instituted in the
form of copyright, they had to obtain a privilege to print a book from royal
censors. The first printing privileges granted to publishers, for instance in
early seventeenth-century Great Britain, came with the burden placed on
publishers to facilitate censorship and control over the dissemination of the
growing body of printed matter in the aftermath of the invention of movable
type printing.

The evolution of the regulatory mechanisms of contemporary copyright from the
context of absolutism and early capitalism comes into full relief if one
considers how peer review emerged as a self-censoring mechanism within the
Royal Society and the Académie des sciences. [1] The internal peer review
process helped the academies maintain the privilege to print the works of
their members, which was given to them only under the condition that the works
they publish limit themselves to matters of science and make no political
statements that could otherwise sour the benevolence of the monarch. Once they
expanded to print in their almanacs, journals, and books the works of authors
outside of the academy ranks, they expanded both their scientific authority
and their regulating function to the entire nascent field of modern science.

The transition from the privilege tied to the publisher to the privilege tied
to the natural person of the author would unfold only later. In Great Britain
this occurred as the printers’ guild, the Stationers’ Company, failed to
secure the extension of its printing privilege and thus, in order to continue
with the business of printing books, decided to advocate a copyright for
authors instead, which resulted in the passing of the Copyright Act of 1709,
also known as the Statute of Anne. Thus the author became the central figure
in the regulation of literary and scientific production. Not only did the
author now receive the exclusive rights to the work, the author was also made
– as Foucault has famously analyzed – the identifiable subject of scrutiny,
censorship, and political sanction by the absolutist state or the church.

And yet, although the romantic author now took center stage, the economic
compensation for the work under copyright regulation would long remain no more
than an honorary one. Until well into the eighteenth century, literary writing
and creativity in general were regarded as resulting from divine inspiration
and not from the individual genius of the author. Money earned in the growing
business with books mostly stayed in the hands of the publishers, while the
author received an honorarium, a flat sum that served as a »token of
esteem.« [2] It was only as authors became increasingly vocal in demanding
material and political independence from patronage and authority that they
started to make claims for rightful remuneration.

## _Of Compensation and Exploitation_

The moment of full-blown affirmation of the romantic author-function marks a
historic moment of redistribution and the establishment of a compromise
between the right of publishers to the economic exploitation of works and the
right of authors to rightful compensation for those works. Economically this
was made possible by the expanding market for printed books in the eighteenth
and nineteenth centuries, while politically it was catalyzed by the growing
desire for autonomy of scientific and literary production from the system of
feudal patronage and censorship in gradually liberalizing modern capitalist
societies. The autonomy of production was substantially coupled to production
for the market. However, the irenic balance could not last unobstructed. Once
the production of culture and science was subsumed under the exigencies of the
market, it had to follow the laws of commodification and competition that no
commodity production can escape.

With the development of the big corporation and monopoly capitalism, [3] the
purported balance between the author and the publisher, the innovator or
scientist and the company, labor and capital, public circulation and the
pressures of monetization has become unhinged. While the legislative
expansions of protections, court decisions, and multilateral treaties are
legitimated on the basis of the rights of creators, they have become the
economic basis on which the monopolies dominating the commanding heights of
the global economy protect their dominant position in the world market. The
levels of concentration in the industries with large portfolios of various
forms of intellectual property rights are staggering. The film industry is a
US$88 billion industry dominated by six major studios. The recorded music
industry is an almost US$20 billion industry dominated by three major labels.
The publishing industry is a US$120 billion industry, where the leading ten
earn more in revenues than the next 40 largest publishing groups. Among
patent-holding industries the situation is a little more diversified, but big
patent portfolios in general dictate the dynamics of market power.

Academic publishing in particular throws the state of play into stark relief.
It is a US$10 billion industry dominated by five publishers and financed up to
75% by library subscriptions. It is notorious for achieving extreme
year-on-year profit margins – in the case of Reed Elsevier regularly well over
20%, with Taylor & Francis, Springer, and Wiley-Blackwell only just lagging
behind. [4] Given that the work of contributing authors is not paid but
financed by their institutions (provided they are employed at an institution),
and that publications nowadays come mostly in the form of electronic articles
licensed under subscription for temporary use by libraries and no longer sold
as printed copies, the public interest could be served at a much lower cost by
leaving commercial closed-access publishers out of the equation. However,
given the entrenched position of these publishers and their control over the
moral economy of reputation in academia, the public disservice that they do
cannot be addressed within the historic ambit of copyright. It requires
politicization.

## _Of Law and Politics_

When we look back on the history of copyright, before there was legality there
was legitimacy. In the context of an almost completely naturalized and
harmonized global regulation of copyright, the political question of
legitimacy seems to be no longer on the table. An illegal copy is an object of
exchange that unsettles the existing economies of cultural production. And
yet, copyright nowadays marks a production model that serves the publishers’
power of appropriation from the author, and their market power, much more than
it serves the labor of cultural producers. Hence the illegal copy is again an
object that raises the question of what we do at a rare juncture when a
historic opening presents itself to reorganize how a good, such as knowledge
and culture, is produced and distributed in a society. We are at such a
juncture – a juncture where the regime regulating legality and illegality
might be opened to the questioning of its legitimacy or illegitimacy.

1. For a more detailed account of this development, as well as for the history of the printing privilege in Great Britain, see Mario Biagioli: »From Book Censorship to Academic Peer Review,« in: _Emergences: Journal for the Study of Media & Composite Cultures_ 12, no. 1 [2002], pp. 11–45.
2. The transition of authorship from honorific to professional is traced in Martha Woodmansee: _The Author, Art, and the Market: Rereading the History of Aesthetics_. New York 1996.
3. When referencing monopoly markets, we do not imply purely monopolistic markets, where one company is the only enterprise selling a product, but rather markets where a small number of companies hold most of the market. In monopolistic competition, oligopolies profit from not competing on prices. Rather, »all the main players are large enough to survive a price war, and all it would do is shrink the size of the industry revenue pie that the firms are fighting over. Indeed, the price in an oligopolistic industry will tend to gravitate toward what it would be in a pure monopoly, so the contenders are fighting for slices of the largest possible revenue pie.« Robert W. McChesney: _Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy_. New York 2013, pp. 37f. The immediate effect of monopolistic competition in culture is that consumption is shaped to conform to the needs of the large enterprise, i.e. to accommodate economies of scale, narrowing the range of styles, expressions, and artists published and promoted in public.
4. Vincent Larivière, Stefanie Haustein, and Philippe Mongeon: »The Oligopoly of Academic Publishers in the Digital Era,« in: _PLoS ONE_ 10, no. 6 [June 2015]: e0127502, doi:10.1371/journal.pone.0127502.


[Tomislav Medak](https://schloss-post.com/person/tomislav-medak/),
Zagreb/Croatia — Performing Arts, Solitude fellow 2013–2015

Tomislav Medak is a philosopher with interests in contemporary political
philosophy, media theory and aesthetics. He is coordinating the theory program
and publishing activities of the Multimedia Institute/MAMA (Zagreb/Croatia),
and works in parallel with the Zagreb-based theatre collective BADco.


Mars & Medak
Knowledge Commons and Activist Pedagogies
2017


KNOWLEDGE COMMONS AND ACTIVIST PEDAGOGIES: FROM IDEALIST POSITIONS TO COLLECTIVE ACTIONS
Conversation with Marcell Mars and Tomislav Medak (co-authored with Ana Kuzmanic)

Marcell Mars is an activist, independent scholar, and artist. His work has been
instrumental in the development of civil society in Croatia and beyond. Marcell
is one of the founders of the Multimedia Institute – mi2 (1999) (Multimedia
Institute, 2016a) and of the Net.culture club MaMa in Zagreb (2000) (Net.culture
club MaMa, 2016a). He is a member of Creative Commons Team Croatia (Creative
Commons, 2016). He initiated the GNU GPL publishing label EGOBOO.bits (2000)
(Monoskop, 2016a) and the Skill sharing meetings of technical enthusiasts
(Net.culture club MaMa, 2016b), as well as various events and gatherings in the
fields of hackerism, digital cultures, and new media art. Marcell regularly talks
and runs workshops about hacking, free software philosophy, digital cultures,
social software, the semantic web, etc. In 2011–2012 Marcell conducted research
on Ruling Class Studies at the Jan van Eyck Academie in Maastricht, and in 2013
he held a fellowship at Akademie Schloss Solitude in Stuttgart. Currently, he is
a PhD researcher at the Digital Cultures Research Lab at Leuphana Universität
Lüneburg.
Tomislav Medak is a cultural worker and theorist interested in political
philosophy, media theory, and aesthetics. He is an advocate of free software and
free culture, and the Project Lead of Creative Commons Croatia (Creative
Commons, 2016). He works as coordinator of theory and publishing activities at
the Multimedia Institute/MaMa (Zagreb, Croatia) (Net.culture club MaMa, 2016a).
Tomislav is an active contributor to the Croatian Right to the City movement
(Pravo na grad, 2016). He has translated numerous books into Croatian, including
Multitude (Hardt & Negri, 2009) and A Hacker Manifesto (Wark, 2006c). He is an
author and performer with the internationally acclaimed Zagreb-based performance
collective BADco (BADco, 2016). Tomislav writes and talks about the politics of
technological development, and about politics and aesthetics.
Tomislav and Marcell have been working together for almost two decades.
Their recent collaborations include a number of activities around the Public
Library project, including the HAIP festival (Ljubljana, 2012), exhibitions in
the Württembergischer Kunstverein (Stuttgart, 2014) and Galerija Nova (Zagreb,
2015), as well as the coordinated digitization projects Written-off (2015),
Digital Archive of Praxis and the Korčula Summer School (2016), and Catalogue of
Liberated Books (2013) (in Monoskop, 2016b).
Ana Kuzmanic is an artist based in Zagreb and Associate Professor at the
Faculty of Civil Engineering, Architecture and Geodesy at the University of
Split (Croatia), lecturing in drawing, design, and architectural presentation.
She is a member of the Croatian Association of Visual Artists. Since 2007 she
has held more than a dozen individual exhibitions and taken part in numerous
collective exhibitions in Croatia, the UK, Italy, Egypt, the Netherlands, the
USA, Lithuania, and Slovenia. In 2011 she co-founded the international artist
collective Eastern Surf, which has “organised, produced and participated in a
number of projects including exhibitions, performance, video, sculpture,
publications and web based work” (Eastern Surf, 2017). Ana’s artwork critically
deconstructs dominant social readings of reality. It tests the traditional
roles of artists and viewers, giving the observer an active part in the
creation of the artwork, thus creating spaces of dialogue and alternative
learning experiences as platforms for emancipation and social transformation.
Grounded within a postdisciplinary conceptual framework, her artistic practice
is produced via research and expression in diverse media located at the
boundaries between reality and virtuality.
ABOUT THE CONVERSATION

I have known Marcell Mars since our student days, yet our professional paths
have crossed only sporadically. In 2013 I asked for Marcell’s input about
potential interlocutors for this book, and he connected me to McKenzie Wark. In
late 2015, when we started working on our own conversation, Marcell involved
Tomislav Medak. Marcell’s and Tomislav’s recent works are closely related to
the arts, so I requested Ana Kuzmanic’s input in these matters. Since the
beginning of the conversation, Marcell, Tomislav, Ana, and I have occasionally
discussed its generalities in person. Yet the conversation presented here took
place in a shared online document between November 2015 and December 2016.
NET.CULTURE AT THE DAWN OF THE CIVIL SOCIETY

Petar Jandrić & Ana Kuzmanic (PJ & AK): In 1999, you established the
Multimedia Institute – mi2 (Multimedia Institute, 2016a); in 2000, you established
the Net.culture club MaMa (both in Zagreb, Croatia). The Net.culture club MaMa
has the following goals:
To promote innovative cultural practices and broadly understood social
activism. As a cultural center, it promotes wide range of new artistic and
cultural practices related in the first place to the development of
communication technologies, as well as new tendencies in arts and theory:
from new media art, film and music to philosophy and social theory,
publishing and cultural policy issues.
As a community center, MaMa is Zagreb’s alternative ‘living room’ and
a venue free of charge for various initiatives and associations, whether they
are promoting minority identities (ecological, LGBTQ, ethnic, feminist and
others) or critically questioning established social norms. (Net.culture club
MaMa, 2016a)
Please describe the main challenges and opportunities from the dawn of Croatian
civil society. Why did you decide to establish the Multimedia Institute – mi2 and
the Net.culture club MaMa? How did you go about it?
Marcell Mars & Tomislav Medak (MM & TM): The formative context for
our work was marked by the dissolution of Yugoslavia, the ensuing civil wars,
and the rise of authoritarian nationalisms in the early 1990s. Amidst the
general turmoil and internecine bloodshed, three factors would come to define
what we consider today as civil society in the Croatian context. First, the newly
created Croatian state – in its pursuit of ethnic, religious and social homogeneity –
was premised on the radical exclusion of minorities. Second, the newly created
state dismantled the broad institutional basis of social and cultural diversity that
existed under socialism. Third, the newly created state pursued its own nationalist
project within the framework of capitalist democracy. In consequence, politically
undesirable minorities and dissenting oppositional groups were pushed to the
fringes of society, and yet, in keeping with the democratic system, had to be
allowed to legally operate outside of the state, its loyal institutions and its
nationalist consensus – as civil society. Under the circumstances of inter-ethnic
conflict, which put many people in direct or indirect danger, anti-war and human
rights activist groups such as the Anti-War Campaign provided an umbrella under
which political, student and cultural activists of all hues and colours could find a
common context. It is also within this context that the high modernism of
cultural production from the Yugoslav period, driven out of public
institutions, found its recourse and its continuity.
Our loose collective, which would later come together around the Multimedia
Institute and MaMa, had been decisively shaped by two circumstances. The first
was the participation of the Anti-War Campaign, with its BBS network ZaMir
(Monoskop, 2016c) and in particular its journal Arkzin, in the early European
network culture. Second, the Open Society Institute, which had financed much of
the alternative and oppositional activities during the 1990s, started to wind
down its operations towards the end of the millennium. As the Open Society Institute started to spin off its
diverse activities into separate organizations, giving rise to the Croatian Law
Center, the Center for Contemporary Art and the Center for Drama Art, activities
related to Internet development ended up with the Multimedia Institute. The first
factor shaped us as activists and early adopters of critical digital culture, and the
second factor provided us with an organizational platform to start working
together. In 1998 Marcell was the first person invited to work with the Multimedia
Institute. He invited Vedran Gulin and Teodor Celakoski, who in turn invited other
people, and the group organically grew to its present form.
Prior to our coming together around the Multimedia Institute, we had been
working on various projects such as setting up the cyber-culture platform
Labinary in the space run by the artist initiative Labin Art Express in the
former mining town of Labin, located in the north-western region of Istria. As
we started working
together, however, we began to broaden these activities and explore various
opportunities for political and cultural activism offered by digital networks. One of
the early projects was ‘Radioactive’ – an initiative bringing together a broad group
of activists, which was supposed to result in a hybrid Internet/FM radio. The
radio never came into being, yet the project fostered many follow-up activities
around
new media and activism in the spirit of ‘don’t hate the media, become the media.’
In these early days, our activities had been strongly oriented towards technological
literacy and education; also, we had a strong interest in political theory and
philosophy. Yet, the most important activity at that time was opening the
Net.culture club MaMa in Zagreb in 2000 (Net.culture club MaMa, 2016a).
PJ & AK: What inspired you to found the Net.culture club MaMa?
MM & TM: We were not keen on continuing the line of work that the
Multimedia Institute was doing under the Open Society Institute, which included,
amongst other activities, setting up the first non-state owned Internet service
provider ZamirNet. The growing availability of Internet access and computer
hardware had made the task of helping political, cultural and media activists get
online less urgent. Instead, we thought that it would be much more important to
open a space where those activists could work together. At the brink of the
millennium, institutional exclusion and access to physical resources (including
space) needed for organizing, working together and presenting that work was a
pressing problem. MaMa was one of the only three independent cultural spaces in
Zagreb – capital city of Croatia, with almost one million inhabitants! The Open
Society Institute provided us with a grant to adapt a former downtown leather-shop
in the state of disrepair and equip it with latest technology ranging from servers to
DJ decks. These resources were made available to all members of the general
public free of charge. Immediately, many artists, media people, technologists, and
political activists started initiating own programs in MaMa. Our activities ranged
from establishing art servers aimed at supporting artistic and cultural projects on
the Internet (Monoskop, 2016d) to technology-related educational activities,
cultural programs, and publishing. By 2000, nationalism had slowly been losing its
stranglehold on our society, and issues pertaining to capitalist globalisation had
arrived into prominence. At MaMa, the period was marked by alter-globalization,
Indymedia, web development, East European net.art and critical media theory.
The confluence of these interests and activities resulted in many important
developments. For instance, soon after the opening of MaMa in 2000, a group of
young music producers and enthusiasts kicked off a daily music program with live
acts, DJ sessions and meetings to share tips and tricks about producing electronic
music. In parallel, we had been increasingly drawn to free software and its
underlying ethos and logic. The Yugoslav legacy of social ownership over the
means of production and worker self-management made us think about how
collectivized forms of cultural production, without the exclusions of private
property, could be expanded beyond the world of free software. We thus talked
some of our musician friends
into opening the free culture label EGOBOO.bits and publishing their music,
together with films, videos and literary texts of other artists, under the GNU
General Public License. The EGOBOO.bits project had soon become uniquely
246

KNOWLEDGE COMMONS AND ACTIVIST PEDAGOGIES

successful: producers such as Zvuk broda, Blashko, Plazmatick, Aesqe, No Name
No Fame, and Ghetto Booties were storming the charts, the label gradually grew to
fifty producers and formations, and we had the artists give regular workshops in
DJ-ing, sound editing, VJ-ing, video editing and collaborative writing at schools
and our summer camp Otokultivator. It inspired us to start working on alternatives
to the copyright regime and on issues of access to knowledge and culture.
PJ & AK: Civil society is the collective conscience, which provides leverage
against national and corporate agendas and serves as a powerful social
corrective. Thus, at the outbreak of the US invasion of Iraq, the Net.culture
club MaMa rejected a $100 000 USAID grant because the invasion was:
a) a precedent based on the rationale of pre-emptive war, b) being waged in
disregard of legitimate processes of the international community, and c)
guided by corporate interests to control natural resources (Multimedia
Institute, 2003 in Razsa, 2015: 82).
Yet, only a few weeks later, MaMa accepted a $100 000 grant from the German
state – and this provoked a wide public debate (Razsa, 2015; Kršić, 2003; Stubbs,
2012).
Now that the heat of the moment has died down, what is your view of this
debate? More generally, how do you decide whose money to accept and whose
money to reject? How do you decide where to publish, where to exhibit, whom to
work with? What is the relationship between idealism and pragmatism in your
work?
MM & TM: Our decision seems justified yet insignificant in the face of the
aftermath of that historical moment. The unilateral decision of the US and its
allies to invade Iraq in March 2003 encapsulated both the defeat of the global
protest movements that had contested neoliberal globalisation since the early
1990s and the epochal carnage that the War on Terror, in its never-ending
iterations, is still wreaking today. Nowadays, the weaponized and privatized
security regime follows the networks of supply chains that cut across the logic
of borders and have become vital for the global circuits of production and
distribution (see Cowen, 2014). For the US, our global policeman, the
introduction of unmanned weaponry and all sorts of asymmetric war technologies
has reduced the human cost of war to zero. By deploying drones and killer
robots, it did away with the fundamental reality check of its own human
casualties and made endless war politically plausible. The low cost of war has
resulted in the growing side-lining of international institutions responsible
for the peaceful resolution of international conflicts, such as the UN.
Our 2003 decision carried hard consequences for the organization. In a
capitalist society, one can ensure wages by relying either on the market, on
the state, or on private funding. The USAID grant was our first larger grant
after the initial spin-off money from the Open Society Institute, and it meant
that we could employ some people from our community over the next two years.
Yet at the same time, the USAID had become directly involved in Iraq, aiding
the US forces and various private contractors such as Halliburton in the
dispossession and plunder of the Iraqi economy. Therefore, it was
unconscionable to continue receiving money from them. In light of its moral and
existential weight, the decision to return the money thus had to be made by the
general assembly of our association.
The people who were left without wages were part and parcel of the community
that we had built between 2000 and 2003, primarily through the Otokultivator
Summer Camps and the Summer Source Camp (Tactical Tech Collective, 2016). The
other grant we would receive later that year, from the Federal Cultural
Foundation of the German government, was split amongst a number of cultural
organizations and paid for activities that eventually paved the way for Right
to the City (Pravo na grad, 2016). However, we still could not pay the people
who had decided to return the USAID money, so they had to find other jobs.
Money never comes without conditionalities, and passing judgements while
disregarding the specific economic, historic, and organizational context can
easily lead to apolitical moralizing.
We do have certain principles that we would not want to compromise – we do
not work with corporations, we are egalitarian in terms of income, our activities are
free for the public. In political activities, however, idealist positions make sense
only for as long as they are effective. Therefore, our idealism is through and
through pragmatic. It is in a similar manner that we invoke the ideal of the
library. We are well aware that reality is more complex than our ideals. However,
the collective sense of purpose inspired by an ideal can carry over into useful
collective action. This is the core of our interest …
PJ & AK: There has been a lot of water under the bridge since the 2000s. From
a ruined post-war country, Croatia has become an integral part of the European
Union – with all the associated advantages and problems. What are today’s main
challenges in maintaining the Multimedia Institute and its various projects?
What are your future plans?
MM & TM: From the early days, the Multimedia Institute/MaMa has taken a
twofold approach. It has always supported people working in and around the
organization in their heterogeneous interests, including but not limited to
digital technology and information freedoms, political theory and philosophy,
contemporary digital art, music, and cinema. Simultaneously, it has been
strongly focused on social and institutional transformation.
The moment zero of Croatian independence in 1991, which was marked by war,
ethnic cleansing, and the forceful imposition of a contrived mono-national
identity, saw progressive and modernist culture embracing the political
alternative of the anti-war movement. It is within these conditions, which
entailed exclusion from access to public resources, that Croatian civil society
developed throughout the 1990s. To address this denial of access to financial
and spatial resources to civil society, since 2000 we have been organizing
collective actions with a number of cultural actors across the country to
create alternative routes of access to resources – mutual support networks,
shared venues, public funding, alternative forms of funding. All the while,
that organizational work has been implicitly situated in an understanding of
the commons that draws on two sources – the social contract of the free
software community, and the legacy of social ownership under socialism.
Later on, this line of work developed towards intersectional struggles around
spatial justice and against the privatisation of public services, which
coalesced around the Right to the City movement (2007 to the present) (Pravo na
grad, 2016) and the 2015 campaign against the monetization of the national
highway network.
In early 2016, with the arrival of the short-lived Croatian government formed
by a coalition of inane technocracy and rabid right-wing radicals, many
institutional achievements of the last fifteen years seemed likely to be
dismantled in a matter of months. At the time of writing this text, the
collapse of the broader social and institutional context is (again) an imminent
threat. In a way, our current situation echoes the atmosphere of the Yugoslav
civil wars in the 1990s. Yet the Croatian turn to the right is structurally
parallel to the recent turn to the right taking place in most parts of Europe
and the world at large. In the aftermath of the global neoliberal race to the
bottom and the War on Terror, the disenfranchised working class vents its fears
over immigration and insists on the return of nationalist values in various
forms suggested by irresponsible political establishments. If they are not spared the
humiliating sense of being outclassed and disenfranchised by the neoliberal race to
the bottom, why should they be sympathetic to those arriving from the
impoverished (semi)-periphery or to victims of turmoil unleashed by the endless
War on Terror? If globalisation is reducing their life prospects to nothing, why
should they not see the solution to their own plight in the return of the regime of
statist nationalism?
At the Multimedia Institute/MaMa we intend to continue our work against this
collapse of context through intersectionalist organizing and activism. We will
continue to do cultural programs, publish books, and organise the Human Rights
Film Festival. In order to articulate, formulate and document years of practical
experience, we aim to strengthen our focus on research and writing about cultural
policy, technological development, and political activism. The Memory of the
World/Public Library project will continue to develop alternative
infrastructures for access, and to develop new and existing networks of
solidarity and public advocacy for the knowledge commons.
LOCAL HISTORIES AND GLOBAL REALITIES

PJ & AK: Your interests and activities are predominantly centred around
information and communication technologies. Yet a big part of your social
engagement takes place in Eastern Europe, which is not exactly at the forefront
of technological innovation. Can you describe the dynamics of working from the
periphery on issues developed in global centres of power (such as Silicon
Valley)?
MM & TM: Computers in their present form were developed primarily in the
post-World War II United States. Their development started from the military
need for the mathematics and physics behind nuclear weapons and counter-air
defense, but it was soon combined with efforts to address accounting,
logistics, and administration problems in fields as diverse as commercial air
traffic, governmental services, banking, and finance. Finally, this interplay
of the military and the economy was joined by enthusiasts, hobbyists, and
amateurs, giving the development of the (mainframe, micro, and personal)
computer its final historical blueprint. This story is written in canonical
computing history books such as The Computer Boys Take Over: Computers,
Programmers, and the Politics of Technical Expertise. There, Nathan Ensmenger
(2010: 14) writes: “the term computer boys came to refer more generally not
simply to actual computer specialists but rather to the whole host of smart,
ambitious, and technologically inclined experts that emerged in the immediate
postwar period.”
Very few canonical computing history books cover other histories. But when one
does, we learn a lot. Be it Slava Gerovitch’s From Newspeak to Cyberspeak
(2002), which recounts the history of Soviet cybernetics; Eden Medina’s
Cybernetic Revolutionaries (2011), which revisits the history of the socialist
cybernetics project in Chile during Allende’s government; or the recent book by
Benjamin Peters, How Not to Network a Nation (2016), which describes the
history of the Soviet development of Internet infrastructure. Many (other)
histories are yet to be heard and written down. And when these histories get
written down, diverse things come into view: geopolitics, class, gender, race,
and many more.
With their witty play and experiments with the medium, the early days of the
Internet were highly exciting. Big corporate websites were not much different
from amateur websites and even spoofs. A (different-than-usual) proximity of
positions of power enabled by the Internet allowed many (media-art)
interventions, (rebirths of) manifestos, the establishment of
(pseudo-)institutions … In this early period of the Internet’s history and
geography, (the Internet subculture of) Eastern Europe played a very important
part. Inspired by Alexei Shulgin, Lev Manovich wrote ‘On Totalitarian
Interactivity’ (1996), where he famously addressed important differences
between understandings of the Internet in the West and in the East. For the
West, claims Manovich, interactivity was a perfect vehicle for the ideas of
democracy and equality. For the East, however, interactivity was merely another
form of (media) manipulation. Twenty years later, it seems that Eastern Europe
was well prepared for what the Internet has become today.
PJ & AK: The dominant (historical) narrative of information and communication
technologies is based in the United States. However, Silicon Valley is not the
only game in town … What are the main differences between approaches to digital
technologies in the US and in Europe?
MM & TM: In the nineties, the lively European scene, which equally included
Eastern Europe, was the centre of critical reflection on the Internet and its
spontaneous ‘Californian ideology’ (Barbrook & Cameron, 1996). Critical culture
in Europe and its Eastern ‘countries in transition’ had a very specific
institutional landscape. In Western Europe, art, media, culture, and
‘post-academic’ research in the humanities were by and large publicly funded.
In Eastern Europe, the development of civil society was funded by various
international foundations such as the Open Society Institute, aka the Soros
Foundation. The critical new media and critical art scene played an important
role in that landscape. A wide range of initiatives, medialabs, mailing lists,
festivals, and projects like Next5minutes (Amsterdam/Rotterdam), Nettime &
Syndicate (mailing lists), Backspace & Irational.org (London), Ljudmila
(Ljubljana), Rixc (Riga), C3 (Budapest), and others constituted a loose network
of researchers, theorists, artists, activists, and other cultural workers.
This network was far from exclusively European. It was very well connected to
projects and initiatives from the United States such as Critical Art Ensemble,
Rhizome, and Thing.net, to projects in India such as Sarai, and to the
struggles of the Zapatistas in Chiapas. A significant feature of this loose
network was its mutually beneficial relationship with relevant European art
festivals and institutions such as Documenta (Kassel), Transmediale/HKW
(Berlin), or Ars Electronica (Linz). As a rule of thumb, critical new media and
art could only be considered in a conceptual setup of hybrid institutions,
conferences, forums, festivals, (curated) exhibitions, and performances – and
all of that at once! The Multimedia Institute was an active part of that
history, so it is hardly a surprise that the Public Library project took a
similar path of development and contextualization.
However, European hacker communities rarely hung out with the critical digital
culture crowds. This is not the place to extensively present the historic
trajectory of different hacker communities, but, risking a gross
simplification, here is a very short genealogy. The earliest European hacker
association was the German Chaos Computer Club (CCC), founded in 1981. Already
in the early 1980s, the CCC started to publicly reveal (security) weaknesses of
corporate and governmental computer systems. However, their focus on digital
rights, privacy, cyberpunk/cypherpunk, encryption, and security issues
prevailed over other forms of political activism. The CCC were very successful
in raising issues, shaping public discussions, and influencing a wide range of
public actors, from digital rights advocacy to political parties (such as the
Greens and the Pirate Party). However, unlike the Italian and Spanish hackers,
the CCC did not merge paths with other social and/or political movements.
Italian and Spanish hackers, for instance, were much more integral to
autonomist/anarchist political and social movements, and they have kept this
tradition until the present day.
PJ & AK: Can you expand this analysis to Eastern Europe, and ex-Yugoslavia
in particular? What were the distinct features of (the development of) hacker
culture in these areas?
MM & TM: Continuing to risk a gross simplification in the genealogy, Eastern
European hacker communities formed rather late – probably because of the
turbulent economic and political changes that Eastern Europe went through after
1989.
In MaMa, we used to run the programme g33koskop (2006–2012) with the goal to
“explore the scope of (term) geek” (Multimedia Institute, 2016b). An important
part of the program was to collect stories from enthusiasts, hobbyists, or
‘geeks’ who used to be involved in do-it-yourself communities during the early
days of (personal) computing in Yugoslavia. From these makers of the first
8-bit computers, editors of do-it-yourself magazines, and other early-day
enthusiasts, we could learn that technical and youth culture was strongly
institutionally supported (e.g. with nation-wide clubs called People’s
Technics). However, the socialist regime did not adequately recognize the
importance and the horizon of the social changes coming from the (mere)
education in and (widely distributed) use of personal computers. Instead, it
insisted on the impossible mission of its own industrial computer production in
order to preserve autonomy in the global information technology market. What a
horrible mistake … To be fair, many other countries during this period felt
able to achieve their own, autonomous production of computers – so the mistake
reflected the spirit of the times and the conditions of uneven economic and
scientific development.
Looking back on the early days of computing in the former Yugoslavia, many
geeks now see themselves as social visionaries and the avant-garde. During the
1990s across Eastern Europe, unfortunately, they failed to articulate a
significant political agenda beyond fighting the monopoly of telecom companies.
In their daily lives, most of these people enjoyed the opportunities and
privileges of working in a rapidly growing information technology market.
Across the former Yugoslavia, enthusiasts started local Linux User Groups: HULK
in Croatia, LUGOS in Slovenia, LUGY in Serbia, Bosnia and Hercegovina, and
Macedonia. In the spirit of their times, many of these groups focused on
attempts to convince businesses that free and open source software (at the time
GNU/Linux, Apache, Exim …) was a viable IT solution.
PJ & AK: Please describe further developments in the struggle between
proponents of proprietary software and the Free Software Movement.
MM & TM: That was the time before Internet giants such as Google, Amazon,
eBay, or Facebook built their empires on top of Free/Libre/Open Source
Software. The GNU General Public License, with its famous slogan “free as in
free speech, not free as in free beer” (Stallman, 2002), was strong enough to
challenge the property regime of the world of software production. Meanwhile,
Silicon Valley experimented with various approaches against the challenge of
free software, such as ‘tivoization’ (systems that incorporate copyleft-based
software but impose hardware restrictions on software modification), ‘walled
gardens’ (systems where carriers or service providers control applications,
content, and media, while preventing them from interacting with the wider
Internet ecosystem), and ‘software-as-a-service’ (systems where software is
hosted centrally and licensed through subscription). In order to support these
strategies of enclosure and turn them into profit, Silicon Valley developed the
investment strategies of venture capital and leveraged buyouts by private
equity, closing the proprietary void left by the success of commons-based peer
production projects, in which large numbers of people develop software
collaboratively over the Internet without exclusion by property (Benkler,
2006).
There was a period when it seemed that cultural workers, artists, and hackers
would follow the successful model of the Free Software Movement and build a
universal commons-based platform for peer-produced, shared, and distributed
culture, art, science, and knowledge – that was the time of the Creative
Commons movement. But that vision never materialized. It did not help, either,
that start-ups with no business models whatsoever (e.g. Del.icio.us
(bookmarks), Flickr (photos), YouTube (videos), Google Reader (RSS aggregator),
Blogspot, and others) were happy to give their services away for free, let
contributors use Creative Commons licences (mostly licenses limiting commercial
use and adaptations), let news curators share and aggregate relevant content,
and let Time magazine claim that “You” (meaning “All of us”) are The Person of
the Year (Time Magazine, 2006).
PJ & AK: Please describe the interplay between the Free Software Movement
and the radically capitalist Silicon Valley start-up culture, and place it into the
larger context of the political economy of software development. What are its
consequences for the hacker movement?
MM & TM: Before the 2008 economic crash, in the course of only a few years,
most of those start-ups and services were sold off to the few businesspeople
who were able to monetize their platforms, users, and usees (mostly via
advertisement) or to crowd them out (mostly via the exponential growth of
Facebook and its ‘magic’ network effect). In the end, almost all of the
affected start-ups and services got shut down (especially those bought by
Yahoo). Nevertheless, the ‘golden’ corporate start-up period brought about huge
enthusiasm and the belief that entrepreneurial spirit, fostered either by an
individual genius or by collective (a.k.a. crowd) endeavour, could save the
world. During that period, unsurprisingly, the idea of hacker labs/spaces
exploded.
Fabulous (self-)replicating rapid prototypes, 3D printers, do-it-yourself, and
the Internet of Things started to resonate with (young) makers all around the
world. Unfortunately, the GNU GPL (v.3 at the time) ceased to be a priority.
The infrastructure of free software had become taken for granted, and
enthusiastic dancing on the shoulders of giants became the most popular
exercise. Rebranding existing Unix services (finger > twitter, irc > slack,
talk > im), and/or designing the ‘last mile’ of user experience (often as
trivial as adding rounded corners to buttons), would often be a good enough
reason to enclose a project, do the slideshow pitch, create a new start-up
backed by an angel investor, and hope to win in the game of network effect(s).
Typically, the software stack running these projects would be (almost)
completely GNU GPL (server + client), but the parts made on OS X (endorsed for
being ‘true’ Unix under the hood) would stay enclosed. In this way, projects
would shift from the world of the commons to the world of business. In order to
pay respect to the open source community, and to keep one’s reputation as ‘the
good citizen,’ many software components would get their source code published
on GitHub – which is a prime example of that game of enclosure in its own
right. Such developments transformed the hacker movement from a genuine
political challenge to the property regime into a science fiction fantasy that
sharing knowledge while keeping the hackers’ meritocracy regime intact could
fix all the world’s problems – if only we, the hackers, are left alone to play,
optimize, innovate, and make that amazing technology!
THE SOCIAL LIFE OF DIGITAL TECHNOLOGIES

PJ & AK: This brings back the old debate between technological determinism and
social determinism, which never seems to go out of fashion. What is your take,
as active hackers and social activists, on this debate? What is the role of
(information) technologies in social development?
MM & TM: Any discussion of information technologies and social
development requires the following parenthesis: notions used for discussing
technological development are shaped by the context of parallel US hegemony
over capitalist world-system and its commanding role in the development of
information technologies. Today’s critiques of the Internet are far from celebration
of its liberatory, democratizing potential. Instead, they often reflect frustration over
its instrumental role in the expansion of social control. Yet, the binary of freedom
and control (Chun, 2008), characteristic of ideological frameworks pertaining to
liberal capitalist democracies, is increasingly hard pressed to explain what has become
evident with the creeping commercialization and concentration of market power in
digital networks. Information technologies are no different from other general-purpose technologies on which they depend – such as mass manufacture, logistics,
or energy systems.
Information technologies shape capitalism – in return, capitalism shapes
information technologies. Technological innovation is driven by the interests of
investors to profit from new commodity markets, and by their capacity to optimize
and increase the productivity of other sectors of the economy. The public has some
influence over the development of information technologies. In fact, publicly funded
research and development has created and helped commercialize most of the
fundamental building blocks of our present digital infrastructures ranging from
microprocessors, touch-screens all the way to packet switching networks
(Mazzucato, 2013). However, public influence on commercially matured
information technologies has become limited, constrained by the imperatives of
accumulation and the regulatory hegemony of the US.
When considering the structural interplay between technological development
and larger social systems, we cannot accept the position of technological
determinism – particularly not in the form of Promethean figures of entrepreneurs,
innovators and engineers who can solve the problems of the world. Technologies
are shaped socially, yet the position of outright social determinism is not acceptable
either. The reproduction of social relations depends on contingencies of
technological innovation, just as the transformation of social relations depends on
contingencies of actions by individuals, groups and institutions. Given the
asymmetries that exist between the capitalist core and the capitalist periphery, from
which we hail, strategies for using technologies as agents of social change differ
significantly.
PJ & AK: Based on your activist experience, what is the relationship between
information technologies and democracy?
MM & TM: This relation is typically discussed within the framework of
communicative action (Habermas, 1984 [1981], 1987 [1981]) which describes how
the power to speak to the public has become radically democratized, how digital
communication has coalesced into a global public sphere, and how digital
communication has catalysed the power of collective mobilization. Information
technologies have done all that – but the framework of communicative action
describes only a part of the picture. Firstly, as Jodi Dean warns us in her critique of
communicative capitalism (Dean, 2005; see also Dean, 2009), the self-referential
intensity of communication frequently ends up as a substitute for the hard (and
rarely rewarding) work of political organization. Secondly, and more importantly,
Internet technologies have created ‘winner takes all’ markets and benefited
the more highly skilled workforce, thus helping to create extreme forms of economic
inequality (Brynjolfsson & McAfee, 2011). Thus, in any list of the world’s richest
people, one can find an inordinate number of entrepreneurs from the information
technology sector. This feeds deeply into the neoliberal transformation of capitalist
societies, with growing (working and unemployed) populations left out of social
welfare, which need to be actively appeased or policed. This is the structural
problem behind the crisis of liberal democracies, the electoral successes of the radical right, and
global “Trumpism” (Blyth, 2015). Intrinsic to contemporary capitalism,
information technologies reinforce its contradictions and pave its unfortunate trail
of destruction.
PJ & AK: Access to digital technologies and digital materials is dialectically
intertwined with human learning. For instance, Stallman’s definition of free
software directly addresses this issue in two freedoms: “Freedom 1: The freedom
to study how the program works, and change it to make it do what you wish,” and
“Freedom 3: The freedom to improve the program, and release your improvements
(and modified versions in general) to the public, so that the whole community
benefits” (Stallman, 2002: 43). Please situate the relationship between access and
learning in the contemporary context.
MM & TM: The relationships between digital technologies and education are
marked by the same contradictions and processes of enclosure that have befallen
free software. Therefore, Eastern European scepticism towards free software is
equally applicable to education. The flip side of interactivity is audience
manipulation; the flip side of access and availability is (economic) domination.
Eroded by rising tuitions, expanding student debt, and poverty-level wages for
adjunct faculty, higher education is getting more and more exclusive. However,
the occasional spread of enthusiasm through ideas such as MOOCs does not bring
about more emancipation and equality. While they preach loudly about unlimited
access for students at the periphery, neoliberal universities (backed by venture
capital) are actually hoping to increase their recruitment business (models).
MOOCs predominantly serve members of privileged classes who already have
access to prestige universities, and who are “self-motivated, self-directed, and
independent individuals who would push to succeed anywhere” (Konnikova,
2014). It is a bit worrying that such a rise of inequality results from attempts to
provide materials freely to everyone with Internet access!
The question of access to digital books for public libraries is different. Libraries
cannot afford digital books from the world’s largest publishers (Digitalbookworld,
2012), and the small number of already acquired e-books must be destroyed after only
twenty-six lendings (Greenfield, 2012). Thus, the issue of access is effectively left
to competition between Amazon, Google, Apple and other companies. The state of
affairs in scientific publishing is not any better. As we wrote in the collective open
letter ‘In solidarity with Library Genesis and Sci-Hub’ (Custodians.online, 2015),
five for-profit publishers (Elsevier, Springer, Wiley-Blackwell, Taylor & Francis
and Sage) own more than half of all existing databases of academic material, which
are licensed at prices so scandalously high that even Harvard, the richest university
of the Global North, has complained that it cannot afford them any longer. Robert
Darnton, the past director of Harvard Library, says: “We faculty do the research,
write the papers, referee papers by other researchers, serve on editorial boards, all
of it for free … and then we buy back the results of our labor at outrageous prices.”
For all the work supported by public money benefiting scholarly publishers,
particularly the peer review that grounds their legitimacy, prices of journal articles
prohibit access to science to many academics – and all non-academics – across the
world, and render it a token of privilege (Custodians.online, 2015).
PJ & AK: Please describe the existing strategies for struggle against these
developments. What are their main strengths and weaknesses?
MM & TM: Contemporary problems in the field of production, access,
maintenance and distribution of knowledge, regulated by the globally harmonized
intellectual property regime, have brought about tremendous economic, social,
political and institutional crises and deadlock(s). Therefore, we need to revisit and
rethink our politics, strategies and tactics. We could perhaps find inspiration in the
world of free software production, where it seems that common effort, courage and
charming obstinacy are able to build alternative tools and infrastructures. Yet, this
model might be insufficient for the whole scope of the crisis facing knowledge
production and dissemination. The aforementioned corporate appropriations of free
software such as ‘tivoizations,’ ‘walled gardens,’ ‘software-as-a-service’ etc. bring
about the problem of longevity of commons-based peer-production.
Furthermore, the sense of entitlement to build alternatives to dominant
modes of oppression can only arise in close proximity to capitalist centres of
power. The periphery (of capitalism), in contrast, relies on strategies of ‘stealing’
and bypassing socio-economic barriers by refusing to submit to the harmonized
regulation that sets the frame for global economic exchange. If we honestly look
back and try to compare the achievements of digital piracy vs. the achievements of
reformist Creative Commons, it is obvious that the struggle for access to
knowledge is still alive mostly because of piracy.
PJ & AK: This brings us to the struggle against (knowledge as) private
property. What are the main problems in this struggle? How do you go about them?
MM & TM: Many projects addressing the crisis of access to knowledge
originated in Eastern Europe. Examples include Library Genesis, Science Hub,
Monoskop and Memory of the World. Balázs Bodó’s research (2016) on the ethos
of Library Genesis and Science Hub resonates with our belief, shared across all the
abovementioned projects, that the concept of private property should not be taken
for granted. Private property can and should be permanently questioned,
challenged and negotiated. This is especially the case in the face of artificial
scarcity (such as lack of access to knowledge caused by intellectual property in
the context of digital networks) or selfish speculation over scarce basic human
resources (such as problems related to housing, water or waterfront development)
(Mars, Medak, & Sekulić, 2016).
The struggle to challenge the property regime used to be at the forefront of the
Free Software Movement. In the spectacular chain of recent events, where the
revelations of sweeping control and surveillance of electronic communications
brought about new heroes (Manning, Assange, Snowden), the hacker is again
reduced to the heroic cypherpunk outlaw. This fits firmly within the old Cold War
paradigm of us (the good guys) vs. them (the bad guys). However, only rare and
talented people are able to master cryptography, follow exact security protocols,
practice counter-control, and leak information. Unsurprisingly, these
people are usually white, male, well-educated, native speakers of English.
Therefore, the narrative of us vs. them is not necessarily the most empowering, and
we feel that it requires a complementary strategy that challenges the property
regime as a whole. As our letter at Custodians.online says:
We find ourselves at a decisive moment. This is the time to recognize that the
very existence of our massive knowledge commons is an act of collective
civil disobedience. It is the time to emerge from hiding and put our names
behind this act of resistance. You may feel isolated, but there are many of us.
The anger, desperation and fear of losing our library infrastructures, voiced
across the Internet, tell us that. This is the time for us custodians, being dogs,
humans or cyborgs, with our names, nicknames and pseudonyms, to raise our
voices. Share your writing – digitize a book – upload your files. Don’t let our
knowledge be crushed. Care for the libraries – care for the metadata – care
for the backup. (Custodians.online, 2015)
FROM CIVIL DISOBEDIENCE TO PUBLIC LIBRARY

PJ & AK: Started in 2012, The Public Library project (Memory of the World,
2016a) is an important part of the struggle against the commodification of knowledge.
What is the project about; how did it come into being?
MM & TM: The Public Library project develops and affirms scenarios for
massive disobedience against the current regulation of production and circulation of
knowledge and culture in the digital realm. Started in 2012, it created a lot of
resonance across the peripheries of an unevenly developed world of study and
learning. Earlier that year, the takedown of the book-sharing site Library.nu produced
the anxiety that the equalizing effects brought about by piracy would be rolled
back. With the takedown, the fact that access to the most recent and most relevant
knowledge was (finally) no longer a privilege of the rich academic institutions in a
few countries of the Global West, or the exclusive preserve of academia, simply
disappeared into thin air. Certainly, various alternatives from the
deep semi-periphery have quickly filled the gap. However, it is almost a miracle
that they continue to exist in spite of the prosecution they face on an everyday basis.
Our starting point for the Public Library project is simple: the public library is the
institutional form devised by societies in order to make knowledge and culture
accessible to all their members regardless of social or economic status. There is a
political consensus across the board that this principle of access is fundamental to
the purpose of a modern society. Only educated and informed citizens are able to
claim their rights and fully participate in the polity for the common good. Yet, as
digital networks have radically expanded the availability of literature and science,
the provision of de-commodified access to digital objects has been by and large denied
to public libraries. For instance, libraries frequently do not have the right to
purchase e-books for lending and preservation. If they do, they are limited with
regard to how many times and under what conditions they can lend digital objects
before the license and the object itself is revoked (Greenfield, 2012). The case of
academic journals is even worse. As journals become increasingly digital, libraries
can provide access and ‘preserve’ them only for as long as they pay extortionate
subscriptions. The Public Library project fills in the space that remains denied to
real-world public libraries by building tools for organizing and sharing electronic
libraries, creating digitization workflows and making books available online.
Obviously, we are not alone in this effort. There are many other platforms, public
and hidden, that help people to share books. And the practice of sharing is massive.
PJ & AK: The Public Library project (Memory of the World, 2016a) is a part of
a wider global movement based, amongst other influences, on the seminal work of
Aaron Swartz. This movement consists of various projects including but not
limited to Library Genesis, Aaaaarg.org, UbuWeb, and others. Please situate The
Public Library project in the wider context of this movement. What are its distinct
features? What are its main contributions to the movement at large?
MM & TM: The Public Library project is informed by two historic moments in
the development of the institution of the public library. The first defining moment
happened during the French Revolution – the seizure of library collections from
aristocracy and clergy, and their transfer to the Bibliothèque Nationale and
municipal libraries of the post-revolutionary Republic. The second defining
moment happened in England through working-class struggles to make knowledge
accessible. After the revolution of 1848, that struggle resulted
in tax-supported public libraries. This was an important part of the larger attempt
by the Chartist movement to provide workers with “really useful knowledge”
aimed at raising class consciousness by explaining the functioning of capitalist
domination and exploring ways of building workers’ own autonomous culture
(Johnson, 1988). These defining revolutionary moments have instituted two
principles underpinning the functioning of public libraries: a) general access to
knowledge is fundamental to full participation in society, and b)
commodification of knowledge in the form of the book trade needs to be limited by
public de-commodified non-monetary forms of access through public institutions.
In spite of the enormous expansion, brought about by digital technologies, of the
potential to provide access to knowledge to all regardless of social status or
geographic location, public libraries have been radically limited in pursuing their
mission. This results in the side-lining of public libraries amid the enormous expansion of
commodification of knowledge in the digital realm, and brings huge profits to
academic publishers. In response to these limitations, a number of projects have
sprung up in order to serve the public interest by illegal means.
PJ & AK: Can you provide a short genealogy of these projects?
MM & TM: Founded in 1996, Ubu was one of the first online repositories.
Then, in 2001, Textz.com started distributing texts in critical theory. After
Textz.com got shut down in early 2004, it took another year for Aaaaarg to emerge
and Monoskop followed soon thereafter. In the latter part of the 2000s, Gigapedia
started a different trajectory of providing access to comprehensive repositories.
Gigapedia was a game changer, because it provided access to thousands and
thousands of scholarly titles, so that access to that large corpus was no longer limited
to those working or studying in the rich institutions of the Global North. In 2012
the publishing industry shut down Gigapedia (by then known as Library.nu).
Fortunately, the resulting vacuum did not last for long, as the Library.nu repository was
merged into the holdings of Library Genesis. Building on the legacy of Soviet
scholars who devised ways of shadow production and distribution of
knowledge in the form of samizdat and early digital distribution of texts in the
post-Soviet period (Balázs, 2014), Library Genesis has built a robust infrastructure
with the mission to provide access to the largest online library in existence while
keeping a low profile. At this moment Library Genesis provides access to books,
and its sister project Science Hub provides access to academic journals. Both
projects are under threat of closure by the largest academic publisher, Reed
Elsevier. Together with the Public Library project, they articulate a position of civil
disobedience.
PJ & AK: Please elaborate the position of civil disobedience. How does it
work; when is it justified?
MM & TM: Legitimating discourses usually claim that shadow libraries fall
into the category of non-commercial fair use. These arguments are definitely valid,
yet they do not build a particularly strong ground for defending knowledge
commons. Once they come under attack, therefore, shadow libraries are typically
shut down. In our call for collective disobedience, we want to make a
larger claim. Access to knowledge as a universal condition could not exist if we –
academics and non-academics across the unevenly developed world – did not
create our own ways of commoning knowledge that we partake in producing and
learning. By introducing the figure of the custodian, we are turning the notion of
property upside down. Paraphrasing the Little Prince, to own something is to be
useful to that which you own (Saint-Exupéry, 1945). Custodians are the political
subjectivity of that disobedient work of care.
Practices of sharing, downloading, and uploading are massive. So, if we want to
prevent our knowledge commons from being taken away over and over again, we
need to publicly and collectively stand behind our disobedient behaviour. We
should not fall into the trap of the debate about legality or illegality of our
practices. Instead, we should acknowledge that our practices, which have been
deemed illegal, are politically legitimate in the face of uneven opportunities
between the Global North and the Global South, in the face of commercialization
of education and student debt in the Global North … This is the meaning of civil
disobedience – to take responsibility for breaking unjust laws.
PJ & AK: We understand your lack of interest in debating legality –
nevertheless, legal services are very interested in your work … For instance,
Marcell has recently been involved in a law suit related to Aaaaarg. Please describe
the relationship between morality and legality in your (public) engagement. When,
and under which circumstances, can one’s moral actions justify breaking the law?
MM & TM: Marcell has been recently drawn into a lawsuit that was filed
against Aaaaarg for copyright infringement. Marcell, Sean Dockray (the founder of
Aaaaarg), and a number of institutions ranging from universities to continental-scale intergovernmental organizations are being sued by a small publisher from
Quebec whose translation of André Bazin’s What is Cinema? (1967) was twice
scanned and uploaded to Aaaaarg by an unknown user. The book was removed
each time the plaintiff issued a takedown notice, resulting in minimal damages, but
these people are nonetheless being sued for 500,000 Canadian dollars. Should
Aaaaarg not be able to defend its existence on the principle of fair use, a valuable
common resource will yet again be lost and its founder will pay a high price. In this
lawsuit, ironically, there is little economic interest. But many smaller publishers
find themselves squeezed between the privatization of education, which leaves
students and adjuncts with little money for books, and the rapid concentration of
academic publishing. For instance, Taylor and Francis acquired the smaller
humanities publisher Ashgate and shut it down in a matter of months (Save
Ashgate Publishing petition, 2015).
The system of academic publishing is patently broken. It syphons off public
funding of science and education into huge private profits, while denying living
wages and access to knowledge to its producers. This business model is legal, but
deeply illegitimate. Many scientists and even governments agree with this
conclusion – yet the situation cannot be easily changed because of entrenched power
passed down from the old models of publishing and their imbrication with the
allocation of academic prestige. Therefore, the continued existence of this model
calls for civil disobedience.
PJ & AK: The Public Library project (Memory of the World, 2016a) operates
in various public domains including art galleries. Why did you decide to develop
The Public Library project in the context of arts? How do you conceive the
relationship between arts and activism?
MM & TM: We tend to easily conflate the political with the aesthetic.
Moreover, when an artwork expressly claims political character, this seems to
grant it recognition and appraisal. Yet, the socially reflective character of an artwork
and its consciously critical position toward social reality might not be outright
political. Political action remains a separate form of agency, different from
that of socially reflexive, situated and critical art. It operates along a different logic
of engagement. It requires collective mobilization and social transformation.
Having said that, socially reflexive, situated and critical art cannot remain detached
from the present conjuncture and cannot exist outside the political space. Within
the world of arts, alternatives to existing social sensibilities and realities can be
articulated and tested without paying a lot of attention to consistency and
plausibility. Activism, by contrast, generally leaves less room for unrestricted
articulation, because it needs to produce real and plausible effects.
With the generous support of the curatorial collective What, How and for Whom
(WHW) (2016), the Public Library project was surprisingly welcomed by the art
world, and this provided us with a stage to build the project, sharpen its arguments
and assert the legitimacy of its political demands. The project was exhibited, with
WHW and other curators, in some of the foremost art venues such as Reina Sofía
in Madrid, Württembergischer Kunstverein in Stuttgart, 98 Weeks in Beirut,
Museum of Contemporary Art Metelkova in Ljubljana, and Calvert 22 in London.
It is great to have a stage where we can articulate social issues and pursue avenues
of action that other social institutions might find risky to support. Yet, while the
space of art provides a safe haven from the adversarial world of political reality, we
think that the addressed issues need to be politicized and that other institutions,
primarily institutions of education, need to stand behind the demand for universal
access. For instance, teaching and research at the University of Zagreb critically
depend on the capacity of its faculty and students to access books and journals
from sources that are deemed illegal – in our opinion, therefore, the University
needs to take a public stand for these forms of access. In the world of
commercialized education and infringement liability, expecting the University to
publicly support us seems highly improbable. However, it is not impossible! This
was recently demonstrated by the Zürich Academy of Arts, which now hosts a
mirror of Ubu – a crucial resource for its students and faculty alike
(Custodians.online, 2016).
PJ & AK: In the current climate of economic austerity, the question of
resources has become increasingly important. For instance, Web 2.0 has narrowed
available spaces for traditional investigative journalism, and platforms such as
Airbnb and Uber have narrowed spaces for traditional labor. Following the same
line of argument, placing activism into art galleries clearly narrows available
spaces for artists. How do you go about this problem? What, if anything, should be
done with the activist takeover of traditional forms of art? Why?
MM & TM: Art can no longer stand outside of the political space, and it can no
longer be safely stowed away into a niche of supposed autonomy within bourgeois
public sphere detached from commodity production and the state. However, art
academies in Croatia and many other places throughout the world still churn out
artists on the premise that art is apolitical. In this view artists can specialize in a
medium and create in the isolation of their studios – if their artwork is recognized as
masterful, it will be bought on the marketplace. This is patently a lie! Art in Croatia
depends on bonds of solidarity and public support.
Frequently it is the art that seeks political forms of engagement rather than vice
versa. A lot of headspace for developing a different social imaginary can be gained
from that venturing aspect of contemporary art. Having said that, art does not need
to be political in order to be relevant and strong.

THE DOUBLE LIFE OF HACKER CULTURE

PJ & AK: The Public Library project (Memory of the World, 2016a) is essentially
pedagogical. When everyone is a librarian, and all books are free, living in the
world transforms into living with the world – so The Public Library project is also
essentially anti-capitalist. This brings us to the intersections between critical
pedagogy of Paulo Freire, Peter McLaren, Henry Giroux, and others – and the
hacker culture of Richard Stallman, Linus Torvalds, Steven Lévy, and others. In
spite of various similarities, however, critical pedagogy and hacker culture disagree
on some important points.
With its deep roots in Marxism, critical theory always insists on class analysis.
Yet, imbued with the Californian ideology (Barbrook and Cameron, 1996), the hacker
culture is predominantly individualist. How do you go about the tension between
individualism and collectivism in The Public Library project? How do you balance
these forces in your overall work?
MM & TM: Hacker culture has always lived a double life. Personal computers
and the Internet have set up a perfect projection screen for a mind-set which
understands autonomy as a pursuit of personal self-realisation. Such a mind-set sees
technology as a frontier of limitless and unconditional freedom, and easily melds
with entrepreneurial culture of the Silicon Valley. Therefore, it is hardly a surprise
that individualism has become the hegemonic narrative of hacker culture.
However, not all hacker culture is individualist and libertarian. Since the 1990s,
hacker culture has been heavily divided between radical individualism and radical
mutualism. Fred Turner (2006), Richard Barbrook and Andy Cameron (1996) have
famously shown that radical individualism was built on the freewheeling counterculture of the American hippie movement, while radical mutualism was built on the
collective leftist traditions of anarchism and Marxism. This is evident in the Free
Software Movement, which has placed ethics and politics before economy and
technology. In her superb ethnographic work, Biella Coleman (2013) has shown
that projects such as the GNU/Linux distribution Debian have espoused radically
collective subjectivities. In that regard, these projects stand closer to mutualist,
anarchist and communist traditions where collective autonomy is the foundation of
individual freedom.
Our work stands in that lineage. Therefore, we invoke two collective figures – the
amateur librarian and the custodian. These figures highlight the labor of communizing
knowledge and maintaining infrastructures of access, refuse to leave the commons
to the authority of professions, and create openings where technologies and
infrastructures can be re-claimed for radically collective and redistributive
endeavours. In that context, we are critical of recent attempts to narrow hacker
culture down to issues of surveillance, privacy and cryptography. While these
issues are clearly important, they (again) reframe the hacker community through
the individualist dichotomy of freedom and privacy, and, more broadly, through
the hegemonic discourse of the post-historical age of liberal capitalism. In this
way, the essential building blocks of the hacker culture – relations of production,
relations of property, and issues of redistribution – are being drowned out, and
the collective and massive endeavour of commoning is being eclipsed by the
capacity of a few crypto-savvy tricksters to avoid government control.
Obviously, we strongly disagree with the individualist, privative and 1337 (elite)
thrust of these developments.
PJ & AK: The Public Library project (Memory of the World, 2016a) arrives
very close to visions of deschooling offered by authors such as Ivan Illich (1971),
Everett Reimer (1971), Paul Goodman (1973), and John Holt (1967). Recent
research indicates that digital technologies offer some fresh opportunities for the
project of deschooling (Hart, 2001; Jandrić, 2014, 2015b), and projects such as
Monoskop (Monoskop, 2016) and The Public Library project (Memory of the
World, 2016a) provide important stepping-stones for emancipation of the
oppressed. Yet, such forms of knowledge and education are hardly – if at all –
recognised by the mainstream. How do you go about this problem? Should these
projects try and align with the mainstream, or act as subversions of the mainstream,
or both? Why?
MM & TM: We are currently developing a more fine-tuned approach to
the educational aspects of amateur librarianship. The forms of custodianship over
knowledge commons that underpin the practices behind Monoskop, Public Library,
Aaaaarg, Ubu, Library Genesis, and Science Hub are part and parcel of our
contemporary world – whether you are a non-academic with no access to scholarly
libraries, or a student or faculty member outside of the few well-endowed academic institutions
in the Global North. As much as commercialization and privatization of education
are becoming mainstream across the world, so are the strategies of reproducing
one’s knowledge and academic research that depend on the de-commodified access
provided by shadow libraries.
Academic research papers are narrower in scope than textbooks, and Monoskop
is thematically more specific than Library Genesis. However, all these practices
exhibit ways in which our epistemologies and pedagogies are built around
institutional structures that reproduce inequality and differentiated access based on
race, gender, class and geography. By building our own knowledge infrastructures, we
build different bodies of knowledge and different forms of relating to our realities –
in the words of Walter Mignolo (2009), we create new forms of epistemic disobedience.
Through the Public Library project, we have digitized and made available several
collections that represent epistemologically different corpuses of knowledge. A
good example of that is the digital collection of books selected by Black Panther
Herman Wallace as his dream library for political education (Memory of the
World, 2016b).
PJ & AK: Your work breaks traditional distinctions between professionals and
amateurs – when everyone becomes a librarian, the concepts of ‘professional
librarian’ and ‘amateur librarian’ become obsolete. Arguably, this tension is an
inherent feature of the digital world – similar trends can be found in various
occupations such as journalism and arts. What are the main consequences of the
new (power) dynamics between professionals and amateurs?
MM & TM: There are many tensions between amateurs and professionals.
There is the general tension, which you refer to as “the inherent feature of the
digital world,” but there are also more historically specific tensions. We, amateur
librarians, are mostly interested in seizing various opportunities to politicize and
renegotiate the positions of control and empowerment in the tensions that are
already there. We have found that storytelling is a particularly useful, efficient and
engaging means of politicization. The naïve and oft-overused claim – particularly
during the Californian nineties – of the revolutionary potential of emerging digital
networks turned out to be a good candidate for replacement by a story dating back
two centuries earlier – the story of the emergence of public libraries in the early days
of the French bourgeois revolution at the end of the 18th century.
The seizure of book collections from the Church and the aristocracy in the
course of revolutions casts an interesting light on the tensions between the
professionals and the amateurs. Namely, the seizure of book collections didn’t lead
to an Enlightenment in the understanding of the world – a change in the paradigm of
how we humans learn, write and teach each other about the world. The steam engine,
the steam-powered rotary press, railroads, electricity and other revolutionary
technological innovations were not seen as results of scientific inquiry. Instead,
they were by and large understood as developments in disciplines such as
mechanics, engineering and practical crafts, which did not challenge religion as the
foundational knowledge about the world.
Consequently, public prayers continued to act as “hoped for solutions to cattle
plagues in 1865, a cholera epidemic in 1866, and a case of typhoid suffered by the
young Prince (Edward) of Wales in 1871” (Gieryn, 1983). Scientists of the time
had to demarcate science from both religion and mechanics to provide a
rationale for its superiority over the domains of spiritual and technical
discovery. Depending on whom they talked to, asserts Thomas F. Gieryn, scientists
would choose to describe science as either theoretical or empirical, pure or
applied, often in contradictory ways, but with the clear goal of legitimating to the
authorities both the scientific endeavor and its claim to resources. Boundary-work of
demarcation had the following characteristics:
(a) when the goal is expansion of authority or expertise into domains claimed
by other professions or occupations, boundary-work heightens the contrast
between rivals in ways flattering to the ideologists’ side;
(b) when the goal is monopolization of professional authority and resources,
boundary-work excludes rivals from within by defining them as outsiders
with labels such as ‘pseudo,’ ‘deviant,’ or ‘amateur’;
(c) when the goal is protection of autonomy over professional activities,
boundary-work exempts members from responsibility for consequences of
their work by putting the blame on scapegoats from outside. (Gieryn, 1983:
791–792)
Once institutionally established, modern science and its academic system have
become the exclusive instances from which emerging disciplines now had to seek
recognition and acceptance. The new disciplines (and their respective professions),
in order to become acknowledged by the scientific community as legitimate, had to
repeat the same boundary-work that science in general had once gone through.
The moral of this story is that the best way for a new scientific discipline to
claim its territory was to articulate the specificity and importance of its insights in a
domain no other discipline claimed. It could achieve that by theorizing,
formalizing, and writing its own vocabulary, methods and curricula, and finally by
asking society to see its own benefit in acknowledging the discipline, its
practitioners and its practices as a separate profession – giving it the green light to
create its own departments and eventually join the productive forces of the world.
This is how democratization of knowledge led to the professionalization of science.
Another frequent reference in our storytelling is the history of
professionalization of computing and its consequences for the fields and disciplines
where the work of computer programmers plays an important role (Ensmenger,
2010: 14; Krajewski, 2011). In his great book Paper Machines
(2011), looking back on the history of the index card catalog (an analysis that is
formative for our understanding of the significance of the library catalog as an
epistemic tool), Markus Krajewski introduced the thought-provoking idea of the logical equivalence of
the fully developed index card catalog and the Turing machine, thus making the library a
vanguard of computing. Granting that equivalence, we nevertheless think that the
professionalization of computing much better explains the challenges of today’s
librarianship and tensions between the amateur and professional librarians.
The world recognized the importance and potential of computer technology
long before computer science won its own autonomy in academia. Computer
science first had to struggle through its own historical phase of boundary-work. In 1965 the Association for Computing Machinery (ACM) decided to
pool together various attempts to define the terms and foundations of computer
science. Still, the field wasn’t given its definition until Donald Knuth
and his colleagues established the algorithm as the principal unit of analysis in
computer science in the first volume of Knuth’s canonical The Art of Computer
Programming (2011) [1968]. Only once the algorithm was posited as the main unit
of study of computer science, which also served as the basis for ACM’s
‘Curriculum ’68’ (Atchison et al., 1968), was the path properly paved for the future
departments of computer science in the university.
PJ & AK: What are the main consequences of these stories for computer
science education?
MM & TM: Not everyone was happy with the algorithm’s central position in
computer science. Furthermore, since the early days, the computer industry has been
complaining that the university does not provide students with practical
knowledge. Back in 1968, for instance, IBM researcher Hal Sackman said:
new departments of computer science in the universities are too busy
teaching simon-pure courses in their struggle for academic recognition to pay
serious time and attention to the applied work necessary to educate
programmers and systems analysts for the real world. (in Ensmenger, 2010:
133)
The computer world remains a weird hybrid where knowledge is produced in both
academic and non-academic settings, through academic curricula – but also
through fairs, informal gatherings, homebrew computer clubs, hacker communities
and the like. Without the enthusiasm and the experiments with ways in which
knowledge can be transferred and circulated between peers, we would probably
never have arrived at the Personal Computer Revolution at the beginning of the
1980s. Without the number of personal computers already in use, we would probably
never have experienced the Internet revolution at the beginning of the 1990s. It is
through such historical development that computer science became the academic
centre of the larger computer universe which spread its tentacles into almost all
other known disciplines and professions.
PJ & AK: These stories describe the process of professionalization. How do
you go about its mirror image – the process of amateurisation?
MM & TM: Systematization, vocabulary, manuals, tutorials, curricula – all the
processes necessary for achieving academic autonomy and importance in the world
– prime a discipline for the automatization of its various skills and workflows into
software tools. That happened to photography (Photoshop, 1990; Instagram, 2010),
architecture (AutoCAD, 1982), journalism (Blogger, 1999; WordPress, 2003),
graphic design (Adobe Illustrator, 1986; Pagemaker, 1987; Photoshop, 1988;
Freehand, 1988), music production (Steinberg Cubase, 1989), and various other
disciplines (Memory of the World, 2016b).
Usually, after such a software tool gets developed and introduced into the
discipline, a period begins during which a number of amateurs start to ‘join’ that
profession. An army of enthusiasts with a specific skill, many of them self-trained and with
an understanding of a wide range of software tools, joins. This phenomenon often
marks a crisis as amateurs coming from different professional backgrounds start to
compete with certified and educated professionals in that field. Still, the future
development of the same software tools remains under the control of software
engineers, who become experts in established workflows, and who promise further
optimizations in the field. This crisis of old professions becomes even more
pronounced if the old business models – and their corporate monopolies – are
challenged by the transition to the digital network economy and possibly face the
algorithmic replacement of their workforce and assets.
For professions under these challenging conditions, today it is often too late for the
boundary-work described in our earlier answer. Instead of maintaining authority
and expertise by labelling upcoming enthusiasts as ‘pseudo,’ ‘deviant,’ or
‘amateur,’ therefore, contemporary disciplines need to revisit their own roots, values,
vision and benefits for society and then (re-)articulate the corpus of knowledge that
the discipline should maintain for the future.
PJ & AK: How does this relate to the dichotomy between amateur and
professional librarians?
MM & TM: We regard the e-book management software Calibre (2016),
written by Kovid Goyal, as a software tool which has benefited from the
knowledge produced, passed on and accumulated by librarians for centuries.
Calibre has made the task of creating and maintaining the catalog easy.
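To make this concrete: Calibre keeps the catalog of each library in a single SQLite file, metadata.db, in the library folder. A minimal sketch, assuming a standard Calibre library layout with the stock books, authors and books_authors_link tables, of how an amateur librarian might read her own catalog programmatically:

```python
# A minimal sketch, assuming a standard Calibre library layout:
# Calibre stores the catalog in a SQLite file named metadata.db,
# with (among others) the tables `books`, `authors` and the join
# table `books_authors_link`.
import sqlite3
from pathlib import Path

def list_catalog(library_path: str):
    """Yield (title, author) pairs from a Calibre library's catalog."""
    db = Path(library_path).expanduser() / "metadata.db"
    con = sqlite3.connect(db)
    try:
        yield from con.execute(
            """
            SELECT books.title, authors.name
            FROM books
            JOIN books_authors_link ON books.id = books_authors_link.book
            JOIN authors ON authors.id = books_authors_link.author
            ORDER BY books.title
            """
        )
    finally:
        con.close()

if __name__ == "__main__":
    # The path is an assumption; point it at any local Calibre library.
    for title, author in list_catalog("~/Calibre Library"):
        print(f"{author}: {title}")
```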
Our vision is to make sharing, aggregating and accessing catalogs easy and
playful. We like the idea that every rendered catalog is stored on a local hard disk,
that an amateur librarian can choose when to share, and that when she decides to
share, the catalog gets aggregated into a library together with the collections of
other fellow amateur librarians (at https://library.memoryoftheworld.org). For the
purpose of sharing we wrote the Calibre plugin named let’s share books and set up
the related server infrastructure – both of which are easily replicable and
deployable into distributed clones.
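The aggregation step can be illustrated with a toy sketch. Assuming, purely for illustration, that each amateur librarian exposes her rendered catalog as a JSON list of records (the actual let’s share books plugin and the server at library.memoryoftheworld.org work differently), merging the shared catalogs into one de-duplicated library might look like this:

```python
# A toy sketch of catalog aggregation, not the actual `let's share books`
# protocol: we assume each librarian shares her catalog as a JSON list of
# records like {"title": ..., "authors": [...]} at some URL.
import json
from urllib.request import urlopen

# Hypothetical endpoints standing in for shared catalogs.
SHARED_CATALOGS = [
    "https://example.org/librarian-a/catalog.json",
    "https://example.org/librarian-b/catalog.json",
]

def aggregate(urls):
    """Merge shared catalogs, de-duplicating by (title, authors)."""
    seen, library = set(), []
    for url in urls:
        with urlopen(url) as response:
            for record in json.load(response):
                key = (record["title"].lower(), tuple(record["authors"]))
                if key not in seen:
                    seen.add(key)
                    library.append(record)
    return library

if __name__ == "__main__":
    print(len(aggregate(SHARED_CATALOGS)), "unique books aggregated")
```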
Together with Voja Antonić, the legendary inventor of the first eight-bit
computer in Yugoslavia, we also designed and developed a series of book scanners
and used them to digitize hundreds of books focused on Yugoslav humanities, such
as the Digital Archive of Praxis and the Korčula Summer School (2016), the Catalogue
of Liberated Books (2013), Written-off (2015), a collection of books thrown away
from Croatian public libraries during the ideological cleansing of the 1990s, and the collection of
books selected by the Black Panther Herman Wallace as his dream library for
political education (Memory of the World, 2016b).
In our view, amateur librarians are complementary to professional librarians,
and there is so much to learn and share with each other. Amateur librarians care,
with curiosity, passion and love, for books which are not (yet) digitally curated;
they dare to disobey in pursuit of the emancipatory vision of the world which is
now under threat. If we, amateur librarians, ever succeed in our pursuits – that
should secure the existing jobs of professional librarians and open up many new
and exciting positions. When knowledge is easily accessed, (re)produced and
shared, there will be so much to follow up upon.
TOWARDS AN ACTIVIST PUBLIC PEDAGOGY

PJ & AK: You organize talks and workshops, publish books, and maintain a major
regional hub for people interested in digital cultures. In Croatia, your names are
almost synonymous with social studies of the digital – worldwide, you are
recognized as regional leaders in the field. Such engagement has a prominent
pedagogical component – arguably, the majority of your work can be interpreted as
public pedagogy. What are the main theoretical underpinnings of your public
pedagogy? How does it work in practice?
MM & TM: Our organization is a cluster of heterogeneous communities and
fields of interest. Therefore, our approaches to public pedagogy vary hugely. In
principle, we subscribe to the idea that all intelligences are equal and that all
epistemology is socially structured. In practice, this means that our activities are
syncretic and inclusive. They run in parallel without falling under the same
umbrella, and they bring together people of varying levels of skill – who bring in
various types of knowledge, and who arrive from various social backgrounds.
Working with hackers, we favour a hands-on approach. For a number of years
Marcell has organized the weekly Skill Sharing program (Net.culture club MaMa,
2016b), which started from very basic skills. The bar was incrementally raised to
today’s level of the highly specialized meritocratic community of 1337 hackers. As
the required skill level got too demanding, some original members left the group –
yet, the community continues to accommodate geeks and freaks. At the other end,
we maintain a theoretically inflected program of talks, lectures and publications.
Here we invite a mix of upcoming theorists and thinkers and some of the most
prominent intellectuals of today such as Jacques Rancière, Alain Badiou, Saskia
Sassen and Robert McChesney. This program creates a larger intellectual context,
and also provides space for our collaborators in various activities.
Our political activism, however, takes an altogether different approach. More
often than not, our campaigns are based on inclusive planning and direct decision-making processes with broad activist groups and the public. However, such
inclusiveness is usually made possible by a campaigning process that allows the
articulation of certain ideas in public and enables popular mobilization. For instance, before
the Right to the City campaign against privatisation of the pedestrian zone in
Zagreb’s Varšavska Street coalesced (Pravo na grad, 2016), we tactically
used media for more than a year to clarify underlying issues of urban development
and mobilize broad public support. At its peak, this campaign engaged no fewer than
200 activists in the direct decision-making process and thousands of
citizens in the streets. Its prerequisite was hard day-to-day work by a small group
of people organized by an important member of our collective, Teodor Celakoski.
PJ & AK: Your public pedagogy provides great opportunity for personal
development – for instance, talks organized by the Multimedia Institute have been
instrumental in shaping our educational trajectories. Yet, you often tackle complex
problems and theories, which are often described using complex concepts and
language. Consequently, your public pedagogy is inevitably restricted to those who
already possess considerable educational background. How do you balance the
popular and the elitist aspects of your public pedagogy? Do you intend to try and
reach wider audiences? If so, how would you go about that?
MM & TM: Our cultural work equally consists of more demanding and more
popular activities, which mostly work together in synergy. Our popular Human
Rights Film Festival (2016) reaches thousands of people; yet, its highly selective
programme echoes our (more) theoretical concerns. Our political campaigns aim
at scalability, too. Demanding and popular activities do not contradict
each other. However, they do require very different approaches and depend on
different contexts and situations. In our experience, a wide public response to a
social cause cannot be simply produced by shaping messages or promoting causes
in ways that are considered popular. The response of the public primarily depends
on a broadly shared understanding, no matter its complexity, that a certain course
of action has an actual capacity to transform a specific situation. Recognizing that
moment, and acting tactfully upon it, is fundamental to building a broad political
process.
This can be illustrated by the aforementioned Custodians.online letter (2015)
that we recently co-authored with a number of our fellow library activists against
the injunction that allows Elsevier to shut down the two most important repositories
providing access to scholarly writing: Science Hub and Library Genesis. The letter
is clearly a product of our specific collective work and dynamic. Yet, it clearly
articulates various aspects of discontent around this impasse in access to
knowledge, so it resonates with a huge number of people around the world and
gives them a clear indication that there are many who disobey the global
distribution of knowledge imposed by the likes of Elsevier.
PJ & AK: Your work is probably best described by John Holloway’s phrase
“in, against, and beyond the state” (Holloway, 2002, 2016). What are the main
challenges of working under such conditions? How do you go about them?
MM & TM: We could situate the Public Library project within the structure of
tactical agency, where one famously moves into the territory of institutional power
of others. While contesting the regulatory power of intellectual property over
access to knowledge, we thus resort to the appropriation of the universalist missions of
different social institutions – public libraries, UNESCO, museums. Operating in an
economic system premised on unequal distribution of means, they cannot but fail
to deliver on their universalist promise. Thus, while public libraries have a mission
to provide access to knowledge to all members of the society, they are severely
limited in what they can do to accomplish that mission in the digital realm. By
claiming the mission of universal access to knowledge for shadow libraries,
collectively built shared infrastructures redress the current state of affairs outside of
the territory of institutions. In this sense, these acts of commoning can indeed be
regarded as positioned beyond the state (Holloway, 2002, 2016).
Yet, while shadow libraries can complement public libraries, they cannot
replace public libraries. And this shifts the perspective from ‘beyond’ to ‘in and
against’: we all inhabit social institutions which reflect uneven development in and
between societies. Therefore, we cannot simply operate within binaries: powerful
vs. powerless, institutional vs. tactical. Our space of agency is much more complex
and blurry. Institutions and their employees resist imposed limitations, and
understand that their spaces of agency reach beyond institutional limitations.
Accordingly, the Public Library project enjoys the strong and unequivocal complicity
of art institutions, schools and libraries in its causes and activities. While
collectively building practices that abolish the present state of affairs and reclaim
the dream of universal access to knowledge, we rearticulate the vision of a
radically equal society equipped with institutions that can do justice to that
“infinite demand” (Critchley, 2013). We are collectively pursuing this
dream – in the words of our friend and continuing inspiration Aaron Swartz: “With
enough of us, around the world, we’ll not just send a strong message opposing the
privatization of knowledge – we’ll make it a thing of the past. Will you join us?”
(Swartz, 2008).


Medak, Mars & WHW
Public Library
2015


Public Library

may • 2015
price 50 kn

This publication is realized along with the exhibition
Public Library • 27/5 –13/06 2015 • Gallery Nova • Zagreb
Izdavači / Publishers
Editors
Tomislav Medak • Marcell Mars • What, How & for Whom / WHW
ISBN 978-953-55951-3-7 [Što, kako i za koga/WHW]
ISBN 978-953-7372-27-9 [Multimedijalni institut]
A CIP catalog record for this book is available from the National and University Library in Zagreb under 000907085

With the support of the Creative Europe Programme of the
European Union

ZAGREB • ¶ May • 2015

Public Library

1. Marcell Mars, Manar Zarroug & Tomislav Medak • Public Library (essay)
2. Paul Otlet • Transformations in the Bibliographical Apparatus of the Sciences (Repertory — Classification — Office of Documentation)
3. McKenzie Wark • Metadata Punk
4. Tomislav Medak • The Future After the Library: UbuWeb and Monoskop’s Radical Gestures

Marcell Mars, Manar Zarroug & Tomislav Medak

Public library (essay)

In What Was Revolutionary about the French Revolution? 01 Robert Darnton considers how a complete collapse of the social order (when absolutely
everything — all social values — is turned upside
down) would look. Such trauma happens often in
the life of individuals but only rarely on the level
of an entire society.
In 1789 the French had to confront the collapse of
a whole social order—the world that they defined
retrospectively as the Ancien Régime — and to find
some new order in the chaos surrounding them.
They experienced reality as something that could
be destroyed and reconstructed, and they faced
seemingly limitless possibilities, both for good and
evil, for raising a utopia and for falling back into
tyranny.02
The revolution bootstraps itself.
01 Robert H. Darnton, What Was Revolutionary about the
French Revolution? (Waco, TX: Baylor University Press,
1996), 6.
02 Ibid.

In the dictionaries of the time, the word revolution was said to derive from the verb to revolve and
was defined as “the return of the planet or a star to
the same point from which it parted.” 03 French political vocabulary spread no further than the narrow
circle of the feudal elite in Versailles. The citizens,
revolutionaries, had to invent new words, concepts
… an entire new language in order to describe the
revolution that had taken place.
They began with the vocabulary of time and space.
In the French revolutionary calendar used from 1793
until 1805, time started on 1 Vendémiaire, Year 1, a
date which marked the abolition of the old monarchy on (the Gregorian equivalent) 22 September
1792. With a decree in 1795, the metric system was
adopted. As with the adoption of the new calendar,
this was an attempt to organize space in a rational
and natural way. The gram became the unit of mass.
In Paris, 1,400 streets were given new names.
Every reminder of the tyranny of the monarchy
was erased. The revolutionaries even changed their
names and surnames. Le Roy or Leveque, commonly
used until then, were changed to Le Loi or Liberté.
To address someone, out of respect, with vous was
forbidden by a resolution passed on 24 Brumaire,
Year 2. Vous was replaced with tu. People are equal.
The watchwords Liberté, égalité, fraternité (freedom, equality, brotherhood)04 were built through
03 Ibid.
04 Slogan of the French Republic, France.fr, n.d.,
http://www.france.fr/en/institutions-and-values/slogan
-french-republic.html.

76

M. Mars • M. Zarroug • T. Medak

literacy, new epistemologies, classifications, declarations, standards, reason, and rationality. What first
comes to mind about the revolution will never again
be the return of a planet or a star to the same point
from which it departed. Revolution bootstrapped,
revolved, and hermeneutically circularized itself.
Melvil Dewey was born in the state of New York in
1851.05 His thirst for knowledge found its satisfaction in libraries. His knowledge about how to
gain knowledge was developed by studying libraries.
Grouping books on library shelves according to the
color of the covers, the size and thickness of the spine,
or by title or author’s name did not satisfy Dewey’s
intention to develop appropriate new epistemologies in the service of the production of knowledge
about knowledge. At the age of twenty-four, he had
already published the first of nineteen editions of
A Classification and Subject Index for Cataloguing
and Arranging the Books and Pamphlets of a Library,06 the classification system that still bears its
author’s name: the Dewey Decimal System. Dewey
had a dream: for his twenty-first birthday he had
announced, “My World Work [will be] Free Schools
and Free Libraries for every soul.”07
05 Richard F. Snow, “Melvil Dewey”, American Heritage 32,
no. 1 (December 1980),
http://www.americanheritage.com/content/melvil-dewey.
06 Melvil Dewey, A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a
Library (1876), Project Gutenberg e-book 12513 (2004),
http://www.gutenberg.org/files/12513/12513-h/12513-h.htm.
07 Snow, “Melvil Dewey”.


His dream came true. Public Library is an entry
in the catalog of History where a fantastic decimal08
describes a category of phenomenon that—together
with free public education, free public healthcare,
the scientific method, the Universal Declaration of
Human Rights, Wikipedia, and free software, among
others—we, the people, are most proud of.
The public library is a part of these invisible infrastructures that we start to notice only once they
begin to disappear. A utopian dream—about the
place from which every human being will have access to every piece of available knowledge that can
be collected—looked impossible for a long time,
until the egalitarian impetus of social revolutions,
the Enlightenment idea of the universality of knowledge,
and the exceptional suspension of the commercial
barriers to access to knowledge made it possible.
The internet has, as in many other situations, completely changed our expectations and imagination
about what is possible. The dream of a catalogue
of the world — a universal approach to all available
knowledge for every member of society — became
realizable. It was merely a question of the meeting of
curves on a graph: the point at which the line of
global distribution of personal computers meets
that of the critical mass of people with access to
the internet. Today nobody lacks the imagination
necessary to see public libraries as part of a global infrastructure of universal access to knowledge
for literally every member of society. However, the
08 “Dewey Decimal Classification: 001.”, Dewey.info, 27 October 2014, http://dewey.info/class/001/2009-08/about.en.


emergence and development of the internet is taking place precisely at the point at which an institutional crisis—one with traumatic and inconceivable
consequences—has also begun.
The internet is a new challenge, creating experiences commonly proffered as ‘revolutionary’. Yet the
true revolution of the internet is the universal access
to all knowledge that it makes possible. However,
unlike the new epistemologies developed during
the French Revolution, the tendency is to keep the
‘old regime’ (of intellectual property rights, market
concentration and control of access). The new possibilities for classification, development of languages,
invention of epistemologies which the internet poses,
and which might launch off into new orbits from
existing classification systems, are being suppressed.
In fact, the reactionary forces of the ‘old regime’
are staging a ‘Thermidor’ to suppress the public libraries from pursuing their mission. Today public
libraries cannot acquire, cannot even buy digital
books from the world’s largest publishers.09 The
few e-books that they have been able to acquire must be destroyed after only twenty-six
lendings.10 Libraries and the principle of universal
09 “American Library Association Open Letter to Publishers on
E-Book Library Lending”, Digital Book World, 24 September
2012, http://www.digitalbookworld.com/2012/american-library-association-open-letter-to-publishers-on-e-book-library-lending/.
10 Jeremy Greenfield, “What Is Going On with Library E-Book
Lending?”, Forbes, 22 June 2012, http://www.forbes.com/sites/jeremygreenfield/2012/06/22/what-is-going-on-with-library-e-book-lending/.


access to all existing knowledge that they embody
are losing, in every possible way, the battle with a
market dominated by new players such as Amazon.
com, Google, and Apple.
In 2012, Canada’s Conservative Party–led government cut financial support for Library and
Archives Canada (LAC) by Can$9.6 million, which
resulted in the loss of 400 archivist and librarian
jobs, the shutting down of some of LAC’s internet
pages, and the cancellation of the further purchase
of new books.11 In only three years, from 2010 to
2012, some 10 percent of public libraries were closed
in Great Britain.12
The commodification of knowledge, education,
and schooling (which are the consequences of a
globally harmonized, restrictive legal regime for intellectual property), together with neoliberal austerity politics,
curtails the possibilities of adapting to new sociotechnological conditions, let alone further development, innovation, or even basic maintenance of
public libraries’ infrastructure.
Public libraries are an endangered institution,
doomed to extinction.
11 Aideen Doran, “Free Libraries for Every Soul: Dreaming of the Online Library”, The Bear, March 2014, http://www.thebear-review.com/#!free-libraries-for-every-soul/c153g.
12 Alison Flood, “UK Lost More than 200 Libraries in 2012”, The Guardian, 10 December 2012, http://www.theguardian.com/books/2012/dec/10/uk-lost-200-libraries-2012.

Petit bourgeois denial prevents society from confronting this disturbing insight. As in many other fields, the only way out offered is innovative market-based entrepreneurship. Some have even suggested that the public library should become an
open software platform on top of which creative
developers can build app stores13 or Internet cafés
for the poorest, ensuring that they are only a click
away from the Amazon.com catalog or the Google
search bar. But these proposals overlook, perhaps
deliberately, the fundamental principles of access
upon which the idea of the public library was built.
Those who are well-meaning, intelligent, and
tactful will try to remind the public of all the many
sides of the phenomenon that the public library is:
major community center, service for the vulnerable,
center of literacy, informal and lifelong learning; a
place where hobbyists, enthusiasts, old and young
meet and share knowledge and skills.14 Fascinating. Unfortunately, for purely tactical reasons, this
reminder to the public does not always contain an
explanation of how these varied effects arise out of
the foundational idea of a public library: universal
access to knowledge for each member of the society produces knowledge, produces knowledge about
knowledge, produces knowledge about knowledge
transfer: the public library produces sociability.
The public library does not need the sort of creative crisis management that wants to propose what
13 David Weinberger, “Library as Platform”, Library Journal,
4 September 2012, http://lj.libraryjournal.com/2012/09/
future-of-libraries/by-david-weinberger/.
14 Shannon Mattern, “Library as Infrastructure”, Design
Observer, 9 June 2014, http://places.designobserver.com/
entryprint.html?entry=38488.


the library should be transformed into once our society, obsessed with market logic, has made it impossible for the library to perform its main mission. Such
proposals, if they do not insist on universal access
to knowledge for all members, are Trojan horses for
the silent but galloping disappearance of the public
library from the historical stage. Sociability—produced by public libraries, with all the richness of its
various appearances—will be best preserved if we
manage to fight for the values upon which we have
built the public library: universal access to knowledge for each member of our society.
Freedom, equality, and brotherhood need brave librarians practicing civil disobedience.
Library Genesis, aaaaarg.org, Monoskop, UbuWeb
are all examples of fragile knowledge infrastructures
built and maintained by brave librarians practicing
civil disobedience, on which the world of researchers
in the humanities relies. These projects are reinventing the public library in the gap left by today’s
institutions in crisis.
Library Genesis15 is an online repository with over
a million books and is the first project in history to
offer everyone on the Internet free download of its
entire book collection (as of this writing, about fifteen terabytes of data), together with all the metadata (a MySQL dump) and the PHP/HTML/JavaScript code for its webpages.
15 See http://libgen.org/.
The most popular earlier repositories, such as Gigapedia (later Library.nu), handled
their upload and maintenance costs by selling advertising space to the pornographic and gambling
industries. Legal action was initiated against them,
and they were closed.16 News of the termination of
Gigapedia/Library.nu strongly resonated among
academic and book-enthusiast circles and was
even noted in the mainstream Internet media, just
like other major world events. The decision by Library Genesis to share its resources has resulted
in a network of identical sites (so-called mirrors)
through the development of an entire range of Net
services of metadata exchange and catalog maintenance, thus ensuring an exceptionally resistant
survival architecture.
aaaaarg.org, started by the artist Sean Dockray, is
an online repository with over 50,000 books and
texts. A community of enthusiastic researchers from
critical theory, contemporary art, philosophy, architecture, and other fields in the humanities maintains,
catalogs, annotates, and initiates discussions around
it. It also serves as a courseware extension to the self-organized education platform The Public School.17
16 Andrew Losowsky, “Library.nu, Book Downloading Site,
Targeted in Injunctions Requested by 17 Publishers,” Huffington Post, 15 February 2012, http://www.huffingtonpost.
com/2012/02/15/librarynu-book-downloading-injunction_
n_1280383.html.
17 “The Public School”, The Public School, n.d.,
https://www.thepublicschool.org/.


UbuWeb18 is the most significant and largest online
archive of avant-garde art; it was initiated and is led
by conceptual artist Kenneth Goldsmith. UbuWeb,
although still informal, has grown into a relevant
and recognized critical institution of contemporary
art. Artists want to see their work in its catalog and
thus agree to a relationship with UbuWeb that has
no formal contractual obligations.
Monoskop is a wiki for the arts, culture, and media
technology, with a special focus on the avant-garde,
conceptual, and media arts of Eastern and Central
Europe; it was launched by Dušan Barok and others.
In the form of a blog, Dušan uploads to Monoskop.
org/log an online catalog of curated titles (at the
moment numbering around 3,000), and, as with
UbuWeb, it is becoming more and more relevant
as an online resource.
Library Genesis, aaaaarg.org, Kenneth Goldsmith,
and Dušan Barok show us that the future of the
public library does not need crisis management,
venture capital, start-up incubators, or outsourcing but simply the freedom to continue extending
the dreams of Melvil Dewey, Paul Otlet19 and other
visionary librarians, just as it did before the emergence of the internet.

18 See http://ubu.com/.
19 “Paul Otlet”, Wikipedia, 27 October 2014,
http://en.wikipedia.org/wiki/Paul_Otlet.


With the emergence of the internet and software
tools such as Calibre and “[let’s share books],”20 librarianship has been given an opportunity, similar to astronomy and the project SETI@home21, to
include thousands of amateur librarians who will,
together with the experts, build a distributed peer-to-peer network to care for the catalog of available
knowledge, because
a public library is:
— free access to books for every member of society
— library catalog
— librarian
With books ready to be shared, meticulously
cataloged, everyone is a librarian.
When everyone is librarian, library is
everywhere.22


20 “Tools”, Memory of the World, n.d.,
https://www.memoryoftheworld.org/tools/.
21 See http://setiathome.berkeley.edu/.
22 “End-to-End Catalog”, Memory of the World, 26 November 2012,
https://www.memoryoftheworld.org/end-to-end-catalog/.


Paul Otlet

Transformations
in the Bibliographical Apparatus
of the Sciences [1]
Repertory — Classification — Office
of Documentation
1. Because of its length, its extension to all countries,
the profound harm that it has created in everyone’s
life, the War has had, and will continue to have, repercussions for scientific productivity. The hour for
the revision of the old order is about to strike. Forced
by the need for economies of men and money, and
by the necessity of greater productivity in order to
hold out against all the competition, we are going to
have to introduce reforms into each of the branches
of the organisation of science: scientific research, the
preservation of its results, and their wide diffusion.
Everything happens simultaneously and the distinctions that we will introduce here are only to
facilitate our thinking. Always adjacent areas, or
even those that are very distant, exert an influence
on each other. This is why we should recognize the
impetus, growing each day even greater in the organisation of science, of the three great trends of
our times: the power of associations, technological
progress and the democratic orientation of institutions. We would like here to draw attention to some
of their consequences for the book in its capacity


as an instrument for recording what has been discovered and as a necessary means for stimulating
new discoveries.
The Book, the Library in which it is preserved,
and the Catalogue which lists it, have seemed for
a long time as if they had achieved their heights of
perfection or at least were so satisfactory that serious
changes need not be contemplated. This may have
been so up to the end of the last century. But for a
score of years great changes have been occurring
before our very eyes. The increasing production of
books and periodicals has revealed the inadequacy of
older methods. The increasing internationalisation
of science has required workers to extend the range
of their bibliographic investigations. As a result, a
movement has occurred in all countries, especially
Germany, the United States and England, for the
expansion and improvement of libraries and for
an increase in their numbers. Publishers have been
searching for new, more flexible, better-illustrated,
and cheaper forms of publication that are better-coordinated with each other. Cataloguing enterprises
on a vast scale have been carried out, such as the
International Catalogue of Scientific Literature and
the Universal Bibliographic Repertory. [2]
Three facts, three ideas, especially merit study
for they represent something really new which in
the future can give us direction in this area. They
are: The Repertory, Classification and the Office of
Documentation.
•••


2. The Repertory, like the book, has gradually been
increasing in size, and improvements in it suggest
the emergence of something new which will radically modify our traditional ideas.
From the point of view of form, a book can be
defined as a group of pages cut to the same format
and gathered together in such a way as to form a
whole. It was not always so. For a long time the
Book was a roll, a volumen. The substances which
then took the place of paper — papyrus and parchment — were written on continuously from beginning to end. Reading required unrolling. This was
certainly not very practical for the consultation of
particular passages or for writing on the verso. The
codex, which was introduced in the first centuries of
the modern era and which is the basis of our present
book, removed these inconveniences. But its faults
are numerous. It constitutes something completed,
finished, not susceptible of addition. The Periodical
with its successive issues has given science a continuous means of concentrating its results. But, in
its turn, the collections that it forms run into the
obstacle of disorder. It is impossible to link similar
or connected items; they are added to one another
pell-mell, and research requires handling great masses of heavy paper. Of course indexes are a help and
have led to progress — subject indexes, sometimes
arranged systematically, sometimes analytically,
and indexes of names of persons and places. These
annual indexes are preceded by monthly abstracts
and are followed by general indexes cumulated every
five, ten or twenty-five years. This is progress, but
the Repertory constitutes much greater progress.


The aim of the Repertory is to detach what the
book amalgamates, to reduce all that is complex to
its elements and to devote a page to each. Pages, here,
are leaves or cards according to the format adopted.
This is the “monographic” principle pushed to its
ultimate conclusion. No more binding or, if it continues to exist, it will become movable, that is to
say, at any moment the cards held fast by a pin or a
connecting rod or any other method of conjunction
can be released. New cards can then be intercalated,
replacing old ones, and a new arrangement made.
The Repertory was born of the Catalogue. In
such a work, the necessity for intercalations was
clear. Nor was there any doubt as to the unitary or
monographic notion: one work, one title; one title,
one card. As a result, registers which listed the same
collections of books for each library but which had
constantly to be re-done as the collections expanded,
have gradually been discarded. This was practical
and justified by experience. But upon reflection one
wonders whether the new techniques might not be
more generally applied.
What is a book, in fact, if not a single continuous line which has initially been cut to the length
of a page and then cut again to the size of a justified
line? Now, this cutting up, this division, is purely
mechanical; it does not correspond to any division
of ideas. The Repertory provides a practical means
of physically dividing the book according to the
intellectual division of ideas.
Thus, the manuscript library catalogue on cards
has been quickly followed by catalogues printed on
cards (American Library Bureau, the Catalogue of the Library of Congress in Washington) [3]; then by
bibliographies printed on cards (International Institute of Bibliography, Concilium Bibliographicum)
[4]; next, indices of species have been published on
cards (Index Speciorum) [5]. We have moved from
the small card to the large card, the leaf, and have
witnessed compendia abandoning the old form for
the new (Jurisclasseur, or legal digests in card form).
Even the idea of the encyclopedia has taken this
form (Nelson’s Perpetual Cyclopedia [6]).
Theoretically and technically, we now have in
the Repertory a new instrument for analytically or
monographically recording data, ideas, information. The system has been improved by divisionary cards of various shapes and colours, placed in
such a way that they express externally the outline
of the classification being used and reduce search
time to a minimum. It has been improved further
by the possibility of using, by cutting and pasting,
materials that have been printed on large leaves or
even books that have been published without any
thought of repertories. Two copies, the first providing the recto, the second the verso, can supply
all that is necessary. One has gone even further still
and, from the example of statistical machines like
those in use at the Census of Washington (sic) [7],
extrapolated the principle of “selection machines”
which perform mechanical searches in enormous
masses of materials, the machines retaining from
the thousands of cards processed by them only those
related to the question asked.
•••


3. But such a development, like the Repertory before it, presupposes a classification. This leads us to
examine the second practical idea that is bringing
about the transformation of the book.
Classification plays an enormous role in scientific thought. If one could say that a science was a
well-made language, one could equally assert that
it is a completed classification. Science is made up
of verified facts which are organised in a structure
of systems, hypotheses, theories, laws. If there is
a certain order in things, it is necessary to have it
also in science which reflects and explains nature.
That is why, since the time of Greek thought until
the present, constant efforts have been made to improve classification. These have taken three principal directions: classification studied as an activity
of the mind; the general classification and sequence
of the sciences; the systematization appropriate to
each discipline. The idea of order, class, genus and
species has been studied since Aristotle, in passing
by Porphyry, by the scholastic philosophers and by
modern logicians. The classification of knowledge
goes back to the Greeks and owes much to the contributions of Bacon and the Renaissance. It was posed
as a distinct and separate problem by D’Alembert
and the Encyclopédie, and by Ampère, Comte, and
Spencer. The recent work of Manouvrier, Durand
de Cros, Goblot, Naville, de la Grasserie, has focussed on various aspects of it. [8] As to systematics,
one can say that this has become the very basis of
the organisation of knowledge as a body of science.
When one has demonstrated the existence of 28 million stars, a million chemical compounds, 300,000


vegetable species, 200,000 animal species, etc., it is
necessary to have a means, an Ariadne’s thread, of
finding one’s way through the labyrinth formed by
all these objects of study. Because there are sciences of beings as well as sciences of phenomena, and
because they intersect with each other as we better
understand the whole of reality, it is necessary that
this means be used to retrieve both. The state of development of a science is reflected at any given time
by its systematics, just as the general classification
of the sciences reflects the state of development of
the encyclopedia, of the philosophy of knowledge.
The need has been felt, however, for a practical
instrument of classification. The classifications of
which we have just spoken are constantly changing, at least in their detail if not in broad outline. In
practice, such instability, such variability which is
dependent on the moment, on schools of thought
and individuals, is not acceptable. Just as the Repertory had its origin in the catalogue, so practical
classification originated in the Library. Books represent knowledge and it is necessary to arrange them
in collections. Schemes for this have been devised
since the Middle Ages. The elaboration of grand
systems occurred in the 17th and 18th centuries
and some new ones were added in the 19th century. But when bibliography began to emerge as an
autonomous field of study, it soon began to develop
along the lines of the catalogue of an ideal library
comprising the totality of what had been published.
From this to drawing on library classifications was
but a step, and it was taken under certain conditions
which must be stressed.


Up to the present time, 170 different classifications
have been identified. Now, no cooperation is possible if everyone stays shut up in his own system. It
has been necessary, therefore, to choose a universal
classification and to recommend it as such in the
same way that the French Convention recognized
the necessity of a universal system of weights and
measures. In 1895 the first International Conference
of Bibliography chose the Decimal Classification
and adopted a complete plan for its development. In
1904, the edition of the expanded tables appeared. A
new edition was being prepared when the war broke
out. Brussels, headquarters of the International Institute of Bibliography, which was doing this work,
was part of the invaded territory.
In its latest state, the Decimal Classification has
become an instrument of great precision which
can meet many needs. The printed tables contain
33,000 divisions and they have an alphabetical index consisting of about 38,000 words. Learning is
here represented in its entire sweep: the encyclopedia of knowledge. Its principle is very simple. The
empiricism of an alphabetical classification by subject-heading cannot meet the need for organising
and systematizing knowledge. There is scattering;
there is also the difficulty of dealing with the complex expressions which one finds in the modern terminology of disciplines like medicine, technology,
and the social sciences. Above all, it is impossible
to achieve any international cooperation on such
a national basis as language. The Decimal Classification is a vast systematization of knowledge, “the
table of contents of the tables of contents” of all


treatises. But, as it would be impossible to find a
particular subject’s relative place by reference to
another subject, a system of numbering is needed.
This is decimal, which an example will make clear.
Optical Physiology would be classified thus:
5th Class — Natural Sciences
3rd Group — Physics
5th Division — Optics
7th Sub-division — Optical Physiology

or 535.7
This number 535.7 is called decimal because all
knowledge is taken as one of which each science is
a fraction and each individual subject is a decimal
subdivided to a lesser or greater degree. For the sake
of abbreviation, the zero of the complete number,
which would be 0.5357, has been suppressed because
the zero would be repeated in front of each number.
The numbers 5, 3, 5, 7 (which one could call five hundred and thirty-five point seven and which could
be arranged in blocks of three as for the telephone,
or in groups of twos) form a single number when
the implied words, “class, group, division and subdivision,” are uttered.
The classification is also called decimal because
all subjects are divided into ten classes, then each
of these into at least ten groups, and each group
into at least ten divisions. All that is needed for the
number 535.7 always to have the same meaning is
to translate the tables into all languages. All that is
needed to deal with future scientific developments


in optical physiology in all of its ramifications is to
subdivide this number by further decimal numbers
corresponding to the subdivisions of the subject.
Finally, all that is needed to ensure that any document or item pertaining to optical physiology finds
its place within the sum total of scientific subjects
is to write this number on it. In the alphabetic index
to the tables references are made from each word
to the classification number just as the index of a
book refers to page numbers.
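To make the arithmetic of the notation concrete, here is a minimal sketch in Python — an editorial illustration, not anything from Otlet’s text; the function name and the digit-list representation are invented for the example.

```python
# A sketch of how a decimal classification number is formed: each
# hierarchical level contributes one digit, and by convention the point
# is written after the third digit, so 5-3-5-7 yields 535.7 (not 0.5357).

def decimal_number(digits):
    """Join classification digits, placing the conventional point
    after the third digit."""
    s = "".join(str(d) for d in digits)
    return s if len(s) <= 3 else s[:3] + "." + s[3:]

# Optical Physiology: 5th class (Natural Sciences), 3rd group (Physics),
# 5th division (Optics), 7th subdivision (Optical Physiology).
assert decimal_number([5, 3, 5, 7]) == "535.7"
```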
This first remarkable principle of the decimal
classification is generally understood. Its second,
which has been introduced more recently, is less
well known: the combination of various classification numbers whenever there is some utility in expressing a compound or complex heading. In the
social sciences, statistics is 31 and salaries, 331.2. By
a convention these numbers can be joined by the
simple sign : and one may write 31:331.2 statistics
of salaries.01
This indicates a general relationship, but a subject also has its place in space and time. The subject
may be salaries in France limited to a period such as
the 18th century (that is to say, from 1700 to 1799).
01 The first ten divisions are: 0 Generalities, 1 Philosophy, 2
Religion, 3 Social Sciences, 4 Philology, Language, 5 Pure
Sciences, 6 Applied Science, Medicine, 7 Fine Arts, 8 Literature, 9 History and Geography. The Index number 31 is
derived from: 3rd class social sciences, 1st group statistics. The
Index number 331.2 is derived from 3rd class social sciences,
3rd group political economy, 1st division topics about work,
2nd subdivision salaries.


The sign that characterises division by place being
the parenthesis and that by time quotation marks
or double parentheses, one can write:
31:331.2 (44) «17» statistics — of salaries — in
France — in the 18th century
or ten figures and three signs to indicate, in terms
of the universe of knowledge, four subordinated
headings comprising 42 letters. And all of these
numbers are reversible and can be used for geographic or chronologic classification as well as for
subject classification:
(44) 31:331.2 «17»
France — Statistics — Salaries — 18th Century
«17» (44) 31:331.2
18th Century — France — Statistics — Salaries
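The compound notation lends itself to the same kind of illustration. The following sketch is again a modern gloss with invented helper names, composing the facets exactly as in the examples above: relation marked by a colon, place by parentheses, time by « » marks.

```python
# Compose a compound classification string such as 31:331.2 (44) «17».
# The facet markers follow the conventions described in the text.

def compound(subject, relation=None, place=None, time=None):
    head = f"{subject}:{relation}" if relation else subject
    parts = [head]
    if place is not None:
        parts.append(f"({place})")   # place facet, e.g. (44) for France
    if time is not None:
        parts.append(f"«{time}»")    # time facet, e.g. «17» for 1700-1799
    return " ".join(parts)

print(compound("31", relation="331.2", place="44", time="17"))
# -> 31:331.2 (44) «17»  (statistics of salaries, in France, 1700-1799)
```

Because each facet is self-delimiting, the same elements can be re-ordered for a geographic or chronological arrangement — (44) 31:331.2 «17» or «17» (44) 31:331.2 — which is the reversibility the text describes.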
The subdivisions of relation and location explained
here are completed by documentary subdivisions
for the form and the language of the document (for
example, periodical, in Italian), and by functional
subdivisions (for example, in zoology all the divisions by species of animal being subdivided by biological aspects). It follows by virtue of the law of
permutations and combinations that the present
tables of the classification permit the formulation
at will of millions of classification numbers. Just as
arithmetic does not give us all the numbers ready-made but rather a means of forming them as we
need them, so the classification gives us the means
of creating classification numbers insofar as we have
compound headings that must be translated into a
notation of numbers.
Like chemistry, mathematics and music, bibliography thus has its own extremely simple notations:
numbers. Immediately and without confusion, it
allows us to find a place for each idea, for each thing
and consequently for each book, article, or document and even for each part of a book or document
Thus it allows us to take our bearings in the midst
of the sources of knowledge, just as the system of
geographic coordinates allows us to take our bearings on land or sea.
One may well imagine the usefulness of such a
classification to the Repertory. It has rid us of the
difficulty of not having continuous pagination. Cards
to be intercalated can be placed according to their
class number and the numbering is that of tables
drawn up in advance, once and for all, and maintained with an unvarying meaning. As the classification has a very general use, it constitutes a true
documentary classification which can be used in
various kinds of repertories: bibliographic repertories; catalogue-like repertories of objects, persons,
phenomena; and documentary repertories of files
made up of written or printed materials of all kinds.
The possibility can be envisaged of encyclopedic
repertories in which are registered and integrated
the diverse data of a scientific field and which draw
for this purpose on materials published in periodicals. Let each article, each report, each item of news
henceforth carry a classification number and, automatically, by clipping, encyclopedias on cards can


be created in which all the results of international
scientific cooperation are brought together at the
same number. This constitutes a profound change
in the technology of the Book, since the repertory
thus formed is simultaneously a constantly updated book and a cooperative book in which are found
printed elements produced in all locations.
•••
4. If we can realize the third idea, the Office of Documentation, then reform will be complete. Such an
office is the old library, but adapted to a new function. Hitherto the library has been a museum of
books. Works were preserved in libraries because
they were precious objects. Librarians were keepers.
Such establishments were not organised primarily
for the use of documents. Moreover, their outmoded
regulations, if they did not exclude the most modern
forms of publication, at least did not admit them.
They have poor collections of journals; collections
of newspapers are nearly nonexistent; photographs,
films, phonograph discs have no place in them, nor
do film negatives, microscopic slides and many other “documents.” The subject catalogue is considered
secondary in the library so long as there is a good
register for administrative purposes. Thus there is
little possibility of developing repertories in the
library, that is to say of taking publications to pieces and redistributing them in a more directly and
quickly accessible form. For want of personnel to
arrange them, there has not even been a place for
the cards that are received already printed.


The Office of Documentation, on the contrary, is
conceived of in such a way as to achieve all that is
lacking in the library. Collections of books are the
necessary basis for it, but books, far from being
considered as finished products, are simply materials which must be developed more fully. This
development consists in establishing the connections each individual book has with all of the other
books and forming from them all what might be
called The Universal Book. It is for this that we use
repertories: bibliographic repertories; repertories of
documentary dossiers gathering pamphlets and extracts together by subject; catalogues; chronological
repertories of facts or alphabetical ones of names;
encyclopedic repertories of scientific data, of laws,
of patents, of physical and technical constants, of
statistics, etc. All of these repertories will be set up
according to the method described above and arranged by the same universal classification. As soon
as an organisation to contain these repertories is
created, the Office of Documentation, one may be
sure that what happened to the book when libraries
first opened — scientific publication was regularised
and intensified — will happen to them. Then there
will be good reason for producing in bibliographies,
catalogues, and above all in books and periodicals
themselves, the rational changes which technology and the creative imagination suggest. What is
still an exception today will be common tomorrow.
New possibilities will exist for cooperative work
and for the more effective organisation of science.
•••


5. Repertory, Classification, Office of Documentation are therefore the three related elements of a
single reform in our methods of registering scientific discoveries and making them available to the
greatest number of people. Already one must speak
less of experiments and uncertain trials than of the
beginning of serious achievement. The International Institute of Bibliography in Brussels constitutes
a vast intellectual cooperative whose members are
becoming more numerous each day. Associations,
scientific establishments, periodical publications,
scientific and technical workers of every kind are
affiliating with it. Its repertories contain millions of
cards. There are sections in several countries.02 But
this was before the War. Since its outbreak, a movement in France, England and the United States has
been emerging everywhere to improve the organisation of the Book. The Office of Documentation has
been suggested as the solution for the requirements
that have been discussed.
It is important that the world of science and
technology should support this movement and
above all that it should endeavour to apply the new
methods to the works which it will be necessary to
re-organise. Among the most important of these is
the International Catalogue of Scientific Literature,
that fine and great work begun at the initiative of the
Royal Society of London. Until now, this work has
02 In France, the Bureau Bibliographique de Paris and great
associations such as the Société pour l’encouragement de
l’industrie nationale, l’Association pour l’avancement des
sciences, etc., are affiliated with it.


been carried on without relation to other works of
the same kind: it has not recognised the value of a
card repertory or a universal classification. It must
recognise them in the future.03 ❧

03 See Paul Otlet, “La Documentation et l’information au service de l’industrie”, Bulletin de la Société d’encouragement de l’industrie nationale, June 1917. — La Documentation au service de l’invention. Euréka, October 1917. — L’Institut International de Bibliographie, Bibliographie de la France, 21 December 1917. — La Réorganisation du Catalogue international de la littérature scientifique. Revue générale des sciences, 15 February 1918. The publications of the Institute, especially the expanded tables of the Decimal Classification, have been deposited at the Bureau Bibliographique de Paris, 44 rue de Rennes, at the apartments of the Société de l’encouragement. — See also the report presented by General Sebert [9] to the Congrès du Génie civil in March 1918, whose conclusions about the creation in Paris of a National Office of Technical Documentation have been adopted.


Editor’s Notes
[1] “Transformations opérées dans l’appareil bibliographique
des sciences,” Revue scientifique 58 (1918): 236-241.
[2] The International Catalogue of Scientific Literature, an enormous work, was compiled by a Central Bureau under the
sponsorship of the Royal Society from material sent in from
Regional Bureaus around the world. It was published annually beginning in 1902 in 17 parts each corresponding to
a major subject division and comprising one or more volumes. Publication was effectively suspended in 1914. By the
time war broke out, the Universal Bibliographic Repertory
contained over 11 million entries.
[3] For card publication by the Library Bureau and Library of
Congress, see Edith Scott, “The Evolution of Bibliographic
Systems in the United States, 1876–1945” and Editor’s Note
36 to the second paper and Note 5 to the seventh paper in
International Organisation and Dissemination of Knowledge; Selected Essays of Paul Otlet, translated and edited by
W. Boyd Rayward. Amsterdam: Elsevier, 1990: 148–156.
[4] Otlet refers to the Concilium Bibliographicum also in Paper
No. 7, “The Reform of National Bibliographies...” in International Organisation and Dissemination of Knowledge; Selected
Essays of Paul Otlet. See also Editor’s Note 5 in that paper
for the major bibliographies published by the Concilium
Bibliographicum.
[5] A possible example of what Otlet is referring to here is the
Gray Herbarium Index. This was “planned to provide cards
for all the names of vascular plant taxa attributable to the


Western Hemisphere beginning with the literature of 1886”
(Gray Herbarium Index, Preface, p. iii). Under its first compiler, 20 instalments consisting in all of 28,000 cards were
issued between 1894 and 1903. It has been continued after
that time and was for many years “issued quarterly at the
rate of about 4,000 cards per year.” At the time the cards
were reproduced in a printed catalogue by G. K. Hall in 1968,
there were 85 subscribers to the card sets.
[6] Nelson’s Perpetual Loose-Leaf Encyclopedia was a popular,
12-volume work which went through many editions, its
principle being set down at the beginning of the century.
It was published in binders and the publisher undertook to
supply a certain number of pages of revisions (or renewals)
semi-annually after each edition, the first of which appeared
in 1905. An interesting reference presumably to this work
occurs in a notice, “An Encyclopedia on the Card-Index System,” in the Scientific American 109 (1913): 213. The Berlin
Correspondent of the journal reports a proposal made in
Berlin which contains “an idea, in a sense ... already carried
out in an American loose-leaf encyclopedia, the publishers
of which supply new pages to take the place of those that
are obsolete” (Nelsons, an English firm, set up a New York
branch in 1896. Publication in the U.S. of works to be widely
circulated there was a requirement of the copyright law.)
The reporter observes that the principle suggested “affords
a means of recording all facts at present known as well as
those to be discovered in the future, with the same safety
and ease as though they were registered in our memory, by
providing a universal encyclopedia, incessantly keeping
abreast of the state of human knowledge.” The “bookish”
form of conventional encyclopedias acts against its future
success. “In the case of a mere storehouse of facts the infinitely more mobile form of the card index should however
be adopted, possibly,” the author goes on making a most interesting reference, “in conjunction with Dr. Goldschmidt’s
Microphotographic Library System.” The need for a central
institute, the nature of its work, the advantages of the work
so organised are described in language that is reminiscent
of that of Paul Otlet (see also the papers of Goldschmidt
and Otlet translated in International Organisation and
Dissemination of Knowledge; Selected Essays of Paul Otlet).
[7] These machines were derived from Herman Hollerith’s
punched cards and tabulating machines. Hollerith had
introduced them under contract into the U.S. Bureau of
the Census for the 1890 census. This equipment was later
modified and developed by the Bureau. Hollerith, his invention and his business connections lie at the roots of the
present IBM company. The equipment and its uses in the
census from 1890 to 1910 are briefly described in John H.
Blodgett and Claire K. Schultz, “Herman Hollerith: Data
Processing Pioneer,” American Documentation 20 (1969):
221-226. As they observe, suggesting the accuracy of Otlet’s
extrapolation, “his was not simply a calculating machine,
it performed selective sorting, an operation basic to all information retrieval.”
[8] The history of the classification of knowledge has been treated
in English in detail by E.C. Richardson in his Classification
Theoretical and Practical, the first edition of which appeared
in 1901 and was followed by editions in 1912 and 1930. A
different treatment is given in Robert Flint’s Philosophy as
Scientia Scientiarum: a History of the Classification of the
Sciences, which appeared in 1904. Neither of these works
deals with Manouvrier, a French anthropologist, or Durand
de Cros. Joseph-Pierre Durand, sometimes called Durand
de Cros after his birth place, was a French physiologist and
philosopher who died in 1900. In his Traité de documentation,
in the context of his discussion of classification, Otlet refers
to an Essai de taxonomie by Durand published by Alcan. It
seems that this is an error for Aperçus de taxonomie (Alcan,
1899).
[9] General Hippolyte Sebert was President of the Association française pour l’avancement des sciences, and the Société d’encouragement pour l’industrie nationale. He had
been active in the foundation of the Bureau bibliographique
de Paris. For other biographical information about him see
Editor’s Note 9 to Paper no. 17, “Henri La Fontaine”, in International Organisation and Dissemination of Knowledge;
Selected Essays of Paul Otlet.

English translation of Paul Otlet’s text published with the
permission of W. Boyd Rayward. The translation was originally
published as Paul Otlet, “Transformations in the Bibliographical
Apparatus of the Sciences: Repertory–Classification–Office of
Documentation”, in International Organisation and Dissemination of Knowledge; Selected Essays of Paul Otlet, translated and
edited by W. Boyd Rayward, Amsterdam: Elsevier, 1990: 148–156.


public library
http://aaaaarg.org/

McKenzie Wark

Metadata Punk

So we won the battle but lost the war. By “we”, I
mean those avant-gardes of the late twentieth century whose mission was to free information from the
property form. It was always a project with certain
nuances and inconsistencies, but overall it succeeded beyond almost anybody’s wildest dreams. Like
many dreams, it turned into a nightmare in the end,
the one from which we are now trying to awake.
The place to start is with what the situationists
called détournement. The idea was to abolish the
property form in art by taking all of past art and
culture as a commons from which to copy and correct. We see this at work in Guy Debord’s texts and
films. They do not quote from past works, as to do
so acknowledges their value and their ownership.
The elements of détournement are nothing special.
They are raw materials for constructing theories,
narratives, affects of a subjectivity no longer bound
by the property form.
Such a project was recuperated soon enough
back into the art world as “appropriation.” Richard
Prince is the dialectical negation of Guy Debord,


in that appropriation values both the original fragment and contributes not to a subjectivity outside of
property but rather makes a career as an art world
star for the appropriating artist. Of such dreams is
mediocrity made.
If there was a more promising continuation of
détournement it had little to do with the art world.
Détournement became a social movement in all but
name. Crucially, it involved an advance in tools,
from Napster to BitTorrent and beyond. It enabled
the circulation of many kinds of what Hito Steyerl
calls the poor image. Often low in resolution, these
détourned materials circulated thanks both to the
compression of information but also because of the
addition of information. There might be less data
but there’s added metadata, or data about data, enabling its movement.
Needless to say the old culture industries went
into something of a panic about all this. As I wrote
over ten years ago in A Hacker Manifesto, “information wants to be free but is everywhere in chains.”
It is one of the qualities of information that it is indifferent to the medium that carries it and readily
escapes being bound to things and their properties.
Yet it is also one of its qualities that access to it can
be blocked by what Alexander Galloway calls protocol. The late twentieth century was — among other
things — about the contradictory nature of information. It was a struggle between détournement and
protocol. And protocol nearly won.
The culture industries took both legal and technical steps to strap information once more to fixity
in things and thus to property and scarcity. Interestingly, those legal steps were not just a question of
pressuring governments to make free information
a crime. It was also a matter of using international
trade agreements as a place outside the scope of democratic oversight to enforce the old rules of property. Here the culture industries join hands with the
drug cartels and other kinds of information-based
industry to limit the free flow of information.
But laws are there to be broken, and so are protocols of restriction such as encryption. These were
only ever delaying tactics, meant to shore up old
monopoly business for a bit longer. The battle to
free information was the battle that the forces of
détournement largely won. Our defeat lay elsewhere.
While the old culture industries tried to put information back into the property form, there were
other kinds of strategy afoot. The winners were not
the old culture industries but what I call the vulture
industries. Their strategy was not to try to stop the
flow of free information but rather to see it as an
environment to be leveraged in the service of creating a new kind of business. “Let the data roam free!”
says the vulture industry (while quietly guarding
their own patents and trademarks). What they aim
to control is the metadata.
It’s a new kind of exploitation, one based on an
unequal exchange of information. You can have the
little scraps of détournement that you desire, in exchange for performing a whole lot of free labor—and
giving up all of the metadata. So you get your little
bit of data; they get all of it, and more importantly,
any information about that information, such as
the where and when and what of it.


It is an interesting feature of this mode of exploitation that you might not even be getting paid for your
labor in making this information—as Trebor Scholz
has pointed out. You are working for information
only. Hence exploitation can be extended far beyond
the workplace and into everyday life. Only it is not
so much a social factory, as the autonomists call it.
This is more like a social boudoir. The whole of social
space is in some indeterminate state between public
and private. Some of your information is private to
other people. But pretty much all of it is owned by
the vulture industry — and via them ends up in the
hands of the surveillance state.
So this is how we lost the war. Making information free seemed like a good idea at the time. Indeed, one way of seeing what transpired is that we
forced the ruling class to come up with these new
strategies in response to our own self-organizing
activities. Their actions are reactions to our initiatives. In this sense the autonomists are right, only
it was not so much the actions of the working class
to which the ruling class had to respond in this case,
as what I call the hacker class. They had to recuperate a whole social movement, and they did. So our
tactics have to change.
In the past we were acting like data-punks. Not
so much “here’s three chords, now form your band.”
More like: “Here’s three gigs, now go form your autonomous art collective.” The new tactic might be
more a question of being metadata-punks. On the one
hand, it is about freeing information about information rather than the information itself. We need
to move up the order of informational density and


control. On the other hand, it might be an idea to
be a bit discreet about it. Maybe not everyone needs
to know about it. Perhaps it is time to practice what
Zach Blas calls informatic opacity.
Three projects seem to embody much of this
spirit to me. One I am not even going to name or
discuss, as discretion seems advisable in that case.
It takes matters off the internet and out of circulation among strangers. Ask me about it in person if
we meet in person.
The other two are Monoskop Log and UbuWeb.
It is hard to know what to call them. They are websites, archives, databases, collections, repositories,
but they are also a bit more than that. They could be
thought of also as the work of artists or of curators;
of publishers or of writers; of archivists or researchers. They contain lots of files. Monoskop is mostly
books and journals; UbuWeb is mostly video and
audio. The work they contain is mostly by or about
the historic avant-gardes.
Monoskop Log bills itself as “an educational
open access online resource.” It is a component part
of Monoskop, “a wiki for collaborative studies of
art, media and the humanities.” One commenter
thinks they see the “fingerprint of the curator” but
nobody is named as its author, so let’s keep it that
way. It is particularly strong on Eastern European
avant-garde material. UbuWeb is the work of Kenneth Goldsmith, and is “a completely independent
resource dedicated to all strains of the avant-garde,
ethnopoetics, and outsider arts.”
There’s two aspects to consider here. One is the
wealth of free material both sites collect. For anybody trying to teach, study or make work in the
avant-garde tradition these are very useful resources.
The other is the ongoing selection, presentation and
explanation of the material going on at these sites
themselves. Both of them model kinds of ‘curatorial’
or ‘publishing’ behavior.
For instance, Monoskop has wiki pages, some
better than Wikipedia, which contextualize the work
of a given artist or movement. UbuWeb offers “top
ten” lists by artists or scholars which give insight
not only into the collection but into the work of the
person making the selection.
Monoskop and UbuWeb are tactics for intervening in three kinds of practices, those of the art world, of publishing and of scholarship. They respond to the current institutional, technical and
political-economic constraints of all three. As it
says in the Communist Manifesto, the forces for social change are those that ask the property question.
While détournement was a sufficient answer to that
question in the era of the culture industries, they try
to formulate, in their modest way, a suitable tactic
for answering the property question in the era of
the vulture industries.
This takes the form of moving from data to metadata, expressed in the form of the move from writing
to publishing, from art-making to curating, from
research to archiving. Another way of thinking this,
suggested by Hiroki Azuma, would be the move from
narrative to database. The object of critical attention
acquires a third dimension, a kind of informational
depth. The objects before us are not just a text or an
image but databases of potential texts and images,
with metadata attached.


The object of any avant-garde is always to practice the relation between aesthetics and everyday
life with a new kind of intensity. UbuWeb and
Monoskop seem to me to be intimations of just
such an avant-garde movement. One that does not
offer a practice but a kind of meta-practice for the
making of the aesthetic within the everyday.
Crucial to this project is the shifting of aesthetic
intention from the level of the individual work to the
database of works. They contain a lot of material, but
not just any old thing. Some of the works available
here are very rare, but not all of them are. It is not
just rarity, or that the works are available for free.
It is more that these are careful, artful, thoughtful
collections of material. There are the raw materials here with which to construct a new civilization.
So we lost the battle, but the war goes on. This
civilization is over, and even its defenders know it.
We live among ruins that accrete in slow motion.
It is not so much a civil war as an incivil war, waged
against the very conditions of existence of life itself.
So even if we have no choice but to use its technologies and cultures, the task is to build another way
of life among the ruins. Here are some useful practices, in and on and of the ruins. ❧


public library
http://midnightnotes.memoryoftheworld.org/

Tomislav Medak

The Future After the Library
UbuWeb and Monoskop’s
Radical Gestures

The institution of the public library has crystallized,
developed and advanced around historical junctures
unleashed by epochal economic, technological and
political changes. A series of crises since the advent
of print have contributed to the configuration of the
institutional entanglement of the public library as
we know it today:01 defined by a publicly available
collection, housed in a public building, indexed and
made accessible with the help of a public catalog, serviced by trained librarians and supported through
public financing. Libraries today embody the idea
of universal access to all knowledge, acting as custodians of a culture of reading, archivists of material
and ephemeral cultural production, go-betweens
of information and knowledge. However, libraries have also embraced a broader spirit of public
service and infrastructure: providing information,
01 For the concept and the full scope of the contemporary library
as institutional entanglement see Shannon Mattern, “Library
as Infrastructure”, Places Journal, accessed April 9, 2015,
https://placesjournal.org/article/library-as-infrastructure/.


education, skills, assistance and, ultimately, shelter
to their communities — particularly their most vulnerable members.
This institutional entanglement, consisting in
a comprehensive organization of knowledge, universally accessible cultural goods and social infrastructure, historically emerged with the rise of (information) science, social regulation characteristic
of modernity and cultural industries. Established
in its social aspect as the institutional exemption
from the growing commodification and economic
barriers in the social spheres of culture, education
and knowledge, it is a result of struggles for institutionalized forms of equality that still reflect the
best in solidarity and universality that modernity
had to offer. Yet, this achievement is marked by
contradictions that beset modernity at its core. Libraries and archives can be viewed as an organon
through which modernity has reacted to the crises
unleashed by the growing production and fixation
of text, knowledge and information through a history of transformations that we will discuss below.
They have been an epistemic crucible for the totalizing formalizations that have propelled both the
advances and pathologies of modernity.
Positioned at a slight monastic distance and indolence toward the forms of pastoral, sovereign or
economic domination that defined the surrounding world that sustained them, libraries could never
close the rift between the universalist aspirations
of knowledge and their institutional compromise.
Hence, they could never avoid being the battlefield
where their own, and modernity’s, ambivalent epistemic and social character was constantly re-examined and ripped asunder. It is this ambivalent
character that has been a potent motor for critical theory, artistic and political subversion — from
Marx’s critique of political economy, psychoanalysis
and historic avant-gardes, to revolutionary politics.
Here we will examine the formation of the library
as an epistemic and social institution of modernity
and the forms of critical engagement that continue
to challenge the totalizing order of knowledge and
appropriation of culture in the present.
Here Comes the Flood02
Prior to the advent of print, the collections held in
monastic scriptoria, royal courts and private libraries
typically contained a limited number of canonical
manuscripts, scrolls and incunabula. In Medieval
and early Renaissance Europe the canonized knowledge considered necessary for the administration of
heavenly and worldly affairs was premised on reading and exegesis of biblical and classical texts. It is
estimated that by the 15th century in Western Europe
there were no more than 5 million manuscripts held
mainly in the scriptoria of some 21,000 monasteries and a small number of universities. While the
number of volumes had grown sharply from less
than 0.8 million in the 12th century, the number of
monasteries had remained constant throughout that
period. The number of manuscripts read averaged
around 1,000 per million inhabitants, with the total
population of Europe peaking around 60 million.03
All in all, the book collections were small, access was
limited and reading culture played a marginal role.

02 The metaphor of the information flood, here incanted in the words of Peter Gabriel’s song with apocalyptic overtones, as well as a good part of the historic background of the development of the index card catalog in the following paragraphs, is based on Markus Krajewski, Paper Machines: About Cards & Catalogs, 1548–1929 (MIT Press, 2011). The organizing idea of Krajewski’s historical account, that the index card catalog can be understood as a Turing machine avant la lettre, served as a starting point for the understanding of the library as an epistemic institution developed here.

03 For an economic history of the book in Western Europe see Eltjo Buringh and Jan Luiten Van Zanden, “Charting the ‘Rise of the West’: Manuscripts and Printed Books in Europe, A Long-Term Perspective from the Sixth through Eighteenth Centuries”, The Journal of Economic History 69, No. 02 (June 2009): 409–45, doi:10.1017/S0022050709000837, particularly Tables 1–5.
The proliferation of written matter after the invention of mechanical movable type printing would
greatly increase the number of books, but also the
patterns of literacy and knowledge production. Already in the first fifty years after Gutenberg’s invention, 12 million volumes were printed, and from
this point onwards the output of printing presses
grew exponentially to 700 million volumes in the
18th century. In the aftermath of the explosion in
book production the cost of producing and buying
books fell drastically, reducing the economic barriers to literacy, but also creating a material vector
for a veritable shift of the epistemic paradigm. The
emerging reading public was gaining access to the
new works of a nascent Enlightenment movement,
ushering in the modern age of science. In parallel
with those larger epochal transformations, the explosion of print also created a rising tide of new books
that suddenly inundated the libraries. The libraries
now had to contend both with the orders-of-magnitude greater volume of printed matter and the
growing complexity of systematically storing, ordering, classifying and tracking all of the volumes
in their collection. A once almost static collection
of canonical knowledge became an ever expanding
dynamic flux. This flood of new books, the first of three, presented fundamental, infrastructural and organizational challenges to the library that radically transformed and coalesced its functions.
The epistemic shift created by this explosion of
library holdings led to a revision of the assumption
that the library is organized around a single holy
scripture and a small number of classical sources.
Coextensive with the emergence and multiplication of new sciences, the books that were entering
the library now covered an ever diversified scope
of topics and disciplines. And the sheer number of
new acquisitions demanded the physical expansion of libraries, which in turn required a radical
rethinking of the way the books were stored, displayed and indexed. In fact, the flood caused by the
printing press was nothing short of a revolution in
the organization, formalization and processing of
information and knowledge. This becomes evident
in the changes that unfolded between the 16th and
the early 20th century in the cataloging of library collections.


The initial listings of books were kept in bound
volumes, books in their own right. But as the number of items arriving into the library grew, the constant need to insert new entries made the bound
book format increasingly impractical for library
catalogs. To make things more complicated still,
the diversification of the printed matter demanded
a richer bibliographic description that would allow
better comprehension of what was contained in the
volumes. Alongside the name of the author and the
book’s title, the description now needed to include
the format of the volume, the classification of the
subject matter and the book’s location in the library.
As the pace of new arrivals accelerated, the effort to
create a library catalog became unending, causing a
true crisis in the emerging librarian profession. This
would result in a number of physical and epistemic
innovations in the organization and formalization
of information and knowledge. The requirement
to constantly rearrange the order of entries in the
listing led to the eventual unbinding of the bound
catalog into separate slips of paper and finally to the
development of the index card catalog. The unbound
index cards and their floating rearrangement, not
unlike that of the movable type, would in turn result in the design of filing cabinets. From Conrad Gessner’s Bibliotheca Universalis, a three-volume book-format catalog of around 3,000 authors and 10,000 texts, arranged alphabetically and topically, published in the period 1545–1548; through Gottfried Wilhelm Leibniz’s proposals for a universal library during his tenure at the Wolfenbüttel library in the late 17th century; to Gottfried van Swieten’s catalog
of the Viennese court library, the index card catalog and the filing cabinets would develop almost to
their present form.04
The unceasing inflow of new books into the library
prompted the need to spatially organize and classify
the arrangement of the collection. The simple addition of new books to the shelves by size, canonical relevance or alphabetical order made little sense
in a situation where the corpus of printed matter
was quickly expanding and no individual librarian
could retain an intimate overview of the library’s
entire collection. The inflow of books required that
the brimming shelf-space be planned ahead, while
the increasing number of expanding disciplines required that the collection be subdivided into distinct
sections by fields. First the shelves became classified
and then the books individually received a unique
identifier. With the completion of the Josephinian
catalog in the Viennese court library, every book became compartmentalized according to a systematic
plan of sciences and assigned a unique sequence of
a Roman numeral, a Roman letter and an Arabic
numeral by which it could be tracked down regardless of its physical location.05 The physical location
of the shelves in the library no longer needed to be
reflected in the ordering of the catalog, and the catalog became a symbolic representation of the freely
re-arrangeable library. In the technological lingo of
today, the library required storage, index, search
and address in order to remain navigable. It is this
formalization of a universal system of classification
of objects in the library with the relative location of
objects and re-arrangeable index that would then in
1876 receive its present standardized form in Melvil
Dewey’s Decimal System.

04 Krajewski, Paper Machines, op. cit., chapter 2.
05 Ibid., 30.
The development of the library as an institution of
public access and popular literacy did not proceed
apace with the development of its epistemic aspects.
It was only a series of social upheavals and transformations in the course of the 18th and 19th century
that would bring about another flood of books and
political demands, pushing the library to become
embedded in an egalitarian and democratic political culture. The first big step in that direction came
with the decision of the French revolutionary National Assembly of 2 November 1789 to seize all
book collections from the Church and aristocracy.
Millions of volumes were transferred to the Bibliothèque Nationale and local libraries across France.
In parallel, particularly in England, capitalism was
on the rise. It massively displaced the impoverished rural population into growing urban centers,
propelled the development of industrial production and, by the mid-19th century, introduced the
steam-powered rotary press into the book business.
As books became more easily and massively produced,
the commercial subscription libraries catering to the
better-off parts of society blossomed. This brought
the class aspect of the nascent demand for public
access to books to the fore. After the failed attempts
to introduce universal suffrage and end the system
of political representation based on property entitlements in the 1830s and 1840s, the English Chartist
movement started to open reading rooms and cooperative lending libraries that would quickly become
a popular hotbed of social exchanges between the
lower classes. In the aftermath of the revolutionary
upheavals of 1848, the fearful ruling classes heeded
the demand for tax-financed public libraries, hoping
that the access to literature and edification would
ultimately hegemonize the working class for the
benefit of capitalism’s culture of self-interest and
competition.06

06 For the social history of the public library see Matthew Battles, Library: An Unquiet History (Random House, 2014), chapter 5: “Books for all”.
The Avant-gardes in the Library
As we have just demonstrated, the public library
in its epistemic and social aspects coalesced in the
context of the broader social transformations of
modernity: early capitalism and processes of nation-building in Europe and the USA. These transformations were propelled by the advancement of
political and economic rationalization, public and
business administration, statistical and archival
procedures. Archives underwent a corresponding and largely concomitant development with the
libraries, responding with a similar apparatus of
classification and ordering to the exponential expansion of administrative records documenting the
social world and to the historicist impulse to capture the material traces of past events. Overlaying
the spatial organization of documentation, the rules
of its classification and symbolic representation of
the archive in reference tools, they tried to provide
a formalization adequate to the passion for capturing historical or present events. Characteristic
of the ascendant positivism of the 19th century, the
archivists’ and librarians’ epistemologies harbored
a totalizing tendency that would become subject to
subversion and displacement in the first decades of
the 20th century.
The assumption that the classificatory form can
fully capture the archival content would become
destabilized over and over by the early avant-gardist
permutations of formal languages of classification:
dadaist montage of the contingent compositional
elements, surrealist insistence on the unconscious
surpluses produced by automatized formalized language, constructivist foregrounding of dynamic and
spatialized elements in the acts of perception and
cognition of an artwork.07 The material composition
of the classified and ordered objects already contained formalizations deposited into those objects
by the social context of their provenance or projected onto them by the social situation of encounter
with them. Form could become content and content
could become form. The appropriations, remediations and displacements exacted by the neo-avant-gardes in the second half of the 20th century produced subversions, resignifications and simulacra that only further blurred the lines between histories and their construction, dominant classifications and their immanent instabilities.

07 Sven Spieker, The Big Archive: Art from Bureaucracy (MIT Press, 2008) provides a detailed account of strategies that the historic avant-gardes and the post-war art have developed toward the classificatory and ordering regime of the archive.
Where does the library fit into this trajectory? Operating around an uncertain and politically embattled universal principle of public access to knowledge
and organization of information, libraries continued being sites of epistemic and social antagonisms,
adaptations and resilience in response to the challenges created by the waves of radical expansion of
textuality and conflicting social interests between
the popular reading culture and the commodification of cultural consumption. This precarious position is presently being made evident by the third
big flood — after those unleashed by movable type
printing and the social context of industrial book
production — that is unfolding with the transition
of the book into the digital realm. Both the historic
mode of the institutional regulation of access and
the historic form of epistemic classification are
swept up in this transformation. While the internet
has made possible a radically expanded access to
digitized culture and knowledge, the vested interests of cultural industries reliant on copyright for
their control over cultural production have deepened the separation between cultural producers and
their readers, listeners and viewers. While the hypertextual capacity for cross-reference has blurred
the boundaries of the book, digital rights management technologies have transformed e-books into
closed silos. Both the decommodification of access
and the overcoming of the reified construct of the self-enclosed work in the form of a book come at
the cost of illegality.
Even the avant-gardes in all their inappropriable
and idiosyncratic recalcitrance fall no less under
the legally delimited space of copyrightable works.
As they shift format, new claims of ownership and
appropriation are built. Copyright is a normative
classification that is totalizing, regardless of the
effects of leaky networks speaking to the contrary.
Few efforts have insisted on the subverting of juridical classification by copyright more lastingly than
the UbuWeb archive. Espousing the avant-gardes’
ethos of appropriation, for almost 20 years it has
collected and made accessible the archives of the unknown, outsider, rare and canonized avant-gardes and contemporary art that would otherwise remain reserved for the vaults and restricted access
channels of esoteric markets, selective museological
presentations and institutional archives. Knowing
that asking to publish would amount to aligning itself with the totalizing logic of copyright, UbuWeb
has shunned the permission culture. At the level of
poetical operation, as a gesture of displacing the cultural archive from a regime of limited, into a regime
of unlimited access, it has created provocations and
challenges directed at the classifying and ordering
arrangements of property over cultural production.
One can only assume that as such it has become a
mechanism for small acts of treason for the artists,
who, short of turning their back fully on the institutional arrangements of the art world they inhabit,
use UbuWeb to release their own works into unlimited circulation on the net. Sometimes there might
be no way or need to produce a work outside the
restrictions imposed by those institutions, just as
it is sometimes impossible for academics to avoid
the contradictory world of academic publishing,
yet that is still no reason to keep one’s allegiance to
their arrangements.
At the same time UbuWeb has played the game
of avant-gardist subversion: “If it doesn’t exist on
the internet, it doesn’t exist”. Provocation is most
effective when it is ignorant of the complexities of
the contexts that it is directed at. Its effect starts
where fissures in the defense of the opposition start
to show. By treating UbuWeb as massive evidence
for the internet as a process of reappropriation, a
process of “giving to all”, its volunteering spiritus
movens, Kenneth Goldsmith, has been constantly rubbing copyright apologists up the wrong way.
Rather than producing qualifications, evasions and
ambivalences, straightforward affirmation of copying, plagiarism and reproduction as a dominant
yet suppressed mode of operation of digital culture re-enacts the avant-gardes’ gesture of taking
no hostages from the officially sanctioned systems
of classification. By letting the incumbents of control over cultural production react to the norm of
copying, you let them struggle to dispute the norm
rather than you having to try to defend the norm.
UbuWeb was an early-comer. Starting in 1996 and still functioning today on seemingly similar technology, it is a child of the early days of the World Wide Web and the promissory period of the experimental internet. It is resolutely Web 1.0, with
a single maintainer, idiosyncratically simple in its
layout and programmatically committed to
eventual obsolescence and sudden abandonment.
No platform, no generic design, no widgets, no
kludges and no community features. Only Beckett
avec links. Endgame.
A Book is an Index is an Index is an Index...
Since the first book flood, the librarian dream of
epistemological formalization has revolved around
the aspiration to cross-reference all the objects in
the collection. Within the physical library the topical designation has been relegated to the confines of
the index card catalog that remained isolated from the
structure of citations and indexes in the books themselves. With the digital transition of the book, the
time-shifted hypertextuality of citations and indexes
became realizable as the immediate cross-referentiality of the segments of individual text to segments
of other texts and other digital artifacts across now
permeable boundaries of the book.
Developed as a wiki for collaborative studies of
art, media and the humanities, Monoskop.org took
up the task of mapping and describing avant-gardes and media art in Europe. In its approach both
indexical and encyclopedic, it is an extension of
the collaborative editing made possible by wiki
technology. Wikis rose to prominence in the early
2000s allowing everyone to edit and extend websites running on that technology by mastering a
very simple markup language. Wikis have been the
harbinger of a democratization of web publishing
that would eventually produce the largest collaborative website on the internet — Wikipedia, as
well as a number of other collaborative platforms.
Monoskop.org embraces the encyclopedic spirit of
Wikipedia, focusing on its own specific topical and
topological interests. However, from its earliest days
Monoskop.org has also developed as a form of index
that maps out places, people, artworks, movements,
events and venues that compose the dense network
of European avant-gardes and media art.
If we take the index as a formalization of cross-referential relations between names of people, titles
of works and concepts that exist in the books and
across the books, what emerges is a model of a relational database reflecting the rich mesh of cultural
networks. Each book can serve as an index linking
its text to people, other books, and segments within them.
To provide a paradigmatic demonstration of that
idea, Monoskop.org has assembled an index of all
persons in Friedrich Kittler’s Discourse Networks,
with each index entry linking both to its location
in the digital version of the book displayed on the
aaaaarg.org archive and to relevant resources for
those persons on Monoskop.org and the internet. Hence, each object in the library, an index
in its own right, potentially allows one to initiate
the relational re-classification and re-organization
of all other works in the library through linkable
information.
Fundamental to the works of the post-socialist
retro-avant-gardes of the last couple of decades has
been the re-writing of a history of art in reverse.
In the works of IRWIN, Laibach or Mladen Stilinović, or the comparable work of Komar & Melamid, totalizing modernity is detourned by re-appropriating the forms of visual representation and classification that the institutions of modernity used to
construct a linear historical narrative of evolutions
and breaks in the 19th and 20th century. Genealogical
tables, events, artifacts and discourses of the past
were re-enacted, over-affirmed and displaced to
open up the historic past relegated to the archives
to an understanding that transformed the present
into something radically uncertain. The efforts of
Monoskop.org in digitizing the artifacts of the
20th century avant-gardes and playing with the
epistemic tools of early book culture is a parallel
gesture, with a technological twist. If big data and
the control over information flows of today increasingly naturalizes and re-affirms the 19th century
positivist assumptions of the steerability of society,
then the endlessly recombinant relations and affiliations between cultural objects threaten to overflow
that recurrent epistemic framework of modernity’s
barbarism in its cybernetic form.
The institution of the public library finds itself
today under a double attack. One unleashed by
the dismantling of the institutionalized forms of
social redistribution and solidarity. The other by
the commodifying forces of expanding copyright
protections and digital rights management, control
over the data flows and command over the classification and order of information. In a world of
collapsing planetary boundaries and unequal development, those who control the epistemic order control the future.08 The Googles and the NSAs run
on capturing totality — the world’s knowledge and
communication made decipherable, organizable and
controllable. The instabilities of the epistemic order
that the library continues to instigate at its margins
contribute to keeping the future open beyond the
script of ‘commodify and control’. In their acts of
re-appropriation UbuWeb and Monoskop.org are
but a reminder of the resilience of libraries’ instability that signals toward a future that can be made
radically open. ❧

08 In his article “Controlling the Future—Edward Snowden and
the New Era on Earth” (accessed April 13, 2015, http://www.eurozine.com/articles/2014-12-19-altvater-en.html), Elmar
Altvater makes a comparable argument that the efforts of
the “Five Eyes” to monitor the global communication flows,
revealed by Edward Snowden, and the control of the future
social development defined by the urgency of mitigating the
effects of the planetary ecological crisis cannot be thought
apart.


public library

http://kok.memoryoftheworld.org


Public Library
www.memoryoftheworld.org

Publishers
What, How & for Whom / WHW
Slovenska 5/1 • HR-10000 Zagreb
+385 (0) 1 3907261
whw@whw.hr • www.whw.hr
ISBN 978-953-55951-3-7 [Što, kako i za koga/WHW]
Multimedia Institute
Preradovićeva 18 • HR-10000 Zagreb
+385 (0)1 4856400
mi2@mi2.hr • www.mi2.hr
ISBN 978-953-7372-27-9 [Multimedijalni institut]
Editors
Tomislav Medak • Marcell Mars • What, How & for Whom / WHW
Copy Editor
Dušanka Profeta [Croatian]
Anthony Iles [English]
Translations
Una Bauer
Tomislav Medak
Dušanka Profeta
W. Boyd Rayward
Design & layout
Dejan Kršić @ WHW
Typography
MinionPro [robert slimbach • adobe]

English translation of Paul
Otlet’s text published with the permission of W. Boyd
Rayward. The translation was originally published as
Paul Otlet, “Transformations in the Bibliographical
Apparatus of the Sciences: Repertory–Classification–Office
of Documentation”, in International Organisation and
Dissemination of Knowledge; Selected Essays of Paul Otlet,
translated and edited by W. Boyd Rayward, Amsterdam:
Elsevier, 1990: 148–156. ❧
format / size
120 × 200 mm
pages
144
Paper
Agrippina 120 g • Rives Laid 300 g
Printed by
Tiskara Zelina d.d.
Print Run
1000
Price
50 kn
May • 2015

This publication, realized along with the exhibition
Public Library in Gallery Nova, Zagreb 2015, is a part of
the collaborative project This Is Tomorrow. Back to Basics:
Forms and Actions in the Future organized by What, How
& for Whom / WHW, Zagreb, Tensta Konsthall, Stockholm
and Latvian Center for Contemporary Art / LCCA, Riga, as a
part of the book edition Art As Life As Work As Art. ❧

Supported by
Office of Culture, Education and Sport of the City of Zagreb
Ministry of Culture of the Republic of Croatia
Croatian Government Office for Cooperation with NGOs
Creative Europe Programme of the European Commission.
National Foundation for Civil Society Development
Kultura Nova Foundation

This project has been funded with support
from the European Commission. This publication reflects
the views only of the authors, and the Commission
cannot be held responsible for any use which may be
made of the information contained therein. ❧
Publishing of this book is enabled by the financial support of
the National Foundation for Civil Society Development.
The content of the publication is the responsibility of
its authors and as such does not necessarily reflect
the views of the National Foundation. ❧
This project is financed
by the Croatian Government Office for Cooperation
with NGOs. The views expressed in this publication
are the sole responsibility of the publishers. ❧

This book is licensed under a Creative
Commons Attribution–ShareAlike 4.0
International License. ❧

Public Library

may • 2015
price 50 kn


Medak, Sekulic & Mertens
Book Scanning and Post-Processing Manual Based on Public Library Overhead Scanner v1.2
2014


PUBLIC LIBRARY
&
MULTIMEDIA INSTITUTE

BOOK SCANNING & POST-PROCESSING MANUAL
BASED ON PUBLIC LIBRARY OVERHEAD SCANNER

Written by:
Tomislav Medak
Dubravka Sekulić
With the help of:
An Mertens

Creative Commons Attribution - Share-Alike 3.0 Germany

TABLE OF CONTENTS

Introduction
I. Photographing a printed book
II. Getting the image files ready for post-processing
III. Transformation of source images into .tiffs
IV. Optical character recognition
V. Creating a finalized e-book file
VI. Cataloging and sharing the e-book
Quick workflow reference for scanning and post-processing
References

INTRODUCTION:
BOOK SCANNING - FROM PAPER BOOK TO E-BOOK
Initial considerations when deciding on a scanning setup
Book scanning tends to be a fragile and demanding process. Many factors can go wrong or produce results of varying quality from book to book or page to page, requiring experience or technical skill to resolve the issues that occur: cameras can fail to trigger, components can fail to communicate, files can get corrupted in transfer, a storage card doesn't get purged, focus fails to lock, lighting conditions change. There is a trade-off between automation, which is prone to instability, and robustness, which tends to be time-consuming.
Your initial choice of book scanning setup will have to take these trade-offs into consideration. If your scanning community is confined to your hacklab, you won't be risking much if technological sophistication and integration fail to function smoothly. But if you're aiming at a broad community of users, with varying levels of technological skill and patience, you want to create as much time-saving automation as possible while keeping maximum stability. Furthermore, if the time that individual members of your scanning community can contribute is limited, you might also want to divide some of the tasks between users according to their different skill levels.
This manual breaks down the process of digitization into a general description of the steps in the workflow leading from the printed book to a digital e-book. Each step can, in a concrete situation, be addressed in various manners depending on the scanning equipment, software, hacking skills and user skill level available to your book scanning project. Several of those steps can be handled by a single piece of equipment or software, or you might need to use a number of them -- your mileage will vary. Therefore, the manual will try to indicate the design choices you have in the process of planning your workflow and should help you decide which design is best for your situation.
Introducing book scanner designs
The book scanning starts with the capturing of digital image files on the scanning equipment. There are three principal types of book scanner designs:
- flatbed scanner
- single camera overhead scanner
- dual camera overhead scanner
Conventional flatbed scanners are widely available. However, given that they require the book to be spread wide open and pressed down with the platen in order to break the resistance of the binding and sufficiently expose the inner margin of the text, this is the most destructive approach for the book, as well as imprecise and slow.
Therefore, book scanning projects across the globe have taken to custom designing improvised setups or scanner rigs that are less destructive and better suited for fast turning and capturing of pages. Designs abound. Most include:
- one or two digital photo cameras of lesser or higher quality to capture the pages,
- a transparent V-shaped glass or Plexiglas platen to press the open book against a V-shaped cradle, and
- a light source.

The go-to web resource to help you make an informed decision is the DIY book scanning community at http://diybookscanner.org. A good place to start is their intro (http://wiki.diybookscanner.org/) and scanner build list (http://wiki.diybookscanner.org/scanner-build-list).
The book scanners with a single camera are substantially cheaper, but come with an added difficulty
of de-warping the distorted page images due to the angle that pages are photographed at, which can
sometimes be difficult to correct in the post-processing. Hence, in this introductory chapter we'll
focus on two-camera designs where the camera lens stands relatively parallel to the page. However,
with a bit of adaptation these instructions can be used to work with any other setup.
The Public Library scanner
The focus of this manual is the scanner built for the Public Library project, designed by Voja Antonić (see Illustration 1). The Public Library scanner was built with immediate use by a wide community of users in mind. Hence, the principal consideration in designing the Public Library scanner was less sophistication and more robustness, ease of use and a distributed process of editing.
The board designs can be found here: http://www.memoryoftheworld.org/blog/2012/10/28/our-beloved-bookscanner. The current iterations are using two Canon 1100D cameras with the kit lens Canon EF-S 18-55mm 1:3.5-5.6 IS. Cameras are auto-charging.

Illustration 1: Public Library Scanner
The scanner operates by automatically lowering the Plexiglas platen, illuminating the page and then triggering the camera shutters. The turning of pages and the adjustments of the V-shaped cradle holding the book are manual.
The scanner is operated by a two-button controller (see Illustration 2). The upper, smaller button breaks the capture process into two steps: the first click lowers the platen, increases the light level and allows you to adjust the book or the cradle; the second click triggers the cameras and lifts the platen. The lower button has two modes. A quick click will execute the whole capture process in one go. But if you hold it pressed longer, it will lower the platen, allowing you to adjust the book and the cradle, and lift it without triggering the cameras when you press again.

Illustration 2: A two-button controller

More on this manual: steps in the book scanning process
The book scanning process in general can be broken down in six steps, each of which will be dealt
in a separate chapter in this manual:
I. Photographing a printed book
II. Getting the image files ready for post-processing
III. Transformation of source images into .tiffs
IV. Optical character recognition
V. Creating a finalized e-book file
VI. Cataloging and sharing the e-book
A step by step manual for Public Library scanner
This manual is primarily meant to provide a detailed description and step-by-step instructions for an actual book scanning setup -- based on Voja Antonić's scanner design described above. This is a two-camera overhead scanner, currently equipped with two Canon 1100D cameras with the EF-S 18-55mm 1:3.5-5.6 IS kit lens. It can scan books of up to A4 page size.
The post-processing in this setup is based on a semi-automated transfer of files to a GNU/Linux
personal computer and on the use of free software for image editing, optical character recognition
and finalization of an e-book file. It was initially developed for the HAIP festival in Ljubljana in
2011 and perfected later at MaMa in Zagreb and Leuphana University in Lüneburg.
The Public Library scanner is characterized by a somewhat less automated yet more distributed scanning process than the highly automated and sophisticated scanner hacks developed at various hacklabs. A
brief overview of one such scanner, developed at the Hacker Space Bruxelles, is also included in
this manual.
The Public Library scanning process thus proceeds in the following discrete steps:

1. creating digital images of pages of a book,
2. manual transfer of image files to the computer for post-processing,
3. automated renaming of files, ordering of even and odd pages, rotation of images and upload to a
cloud storage,
4. manual transformation of source images into .tiff files in ScanTailor
5. manual optical character recognition and creation of PDF files in gscan2pdf
The detailed description of the Public Library scanning process follows below.
The Bruxelles hacklab scanning process
For purposes of comparison, here we'll briefly reference the scanner built by the Bruxelles hacklab
(http://hackerspace.be/ScanBot). It is a dual camera design too. With some differences in hardware functionality
(Bruxelles scanner has automatic turning of pages, whereas Public Library scanner has manual turning of pages), the
fundamental difference between the two is in the post-processing - the level of automation in the transfer of images
from the cameras and their transformation into PDF or DjVu e-book format.
The Bruxelles scanning process is different in so far as the cameras are operated by a computer and the images are
automatically transferred, ordered and made ready for further post-processing. The scanner is home-brew, but the
process is for advanced DIY'ers. If you want to know more on the design of the scanner, contact Michael Korntheuer at
contact@hackerspace.be.
The scanning and post-processing is automated by a single Python script that does all the work: http://git.constantvzw.org/?p=algolit.git;a=tree;f=scanbot_brussel;h=81facf5cb106a8e4c2a76c048694a3043b158d62;hb=HEAD
The scanner uses two Canon point and shoot cameras. Both cameras are connected to the PC with USB. They both run
PTP/CHDK (Canon Hack Development Kit). The scanning sequence is the following:
1. Script sends CHDK command line instructions to the cameras
2. Script sorts out the incoming files. This part is tricky. There is no reliable way to make a distinction between the left
and right camera, only between which camera was recognized by USB first. So the protocol is to always power up the
left camera first. See the instructions with the source code.
3. Collect images in a PDF file
4. Run a script to OCR a .PDF file to a plain .TXT file: http://git.constantvzw.org/?p=algolit.git;a=blob;f=scanbot_brussel/ocr_pdf.sh;h=2c1f24f9afcce03520304215951c65f58c0b880c;hb=HEAD

I. PHOTOGRAPHING A PRINTED BOOK
Technologically the most demanding part of the scanning process is creating digital images of the
pages of a printed book. It's a process that is very different from scanner design to scanner design,
from camera to camera. Therefore, here we will focus strictly on the process with the Public Library
scanner.
Operating the Public Library scanner
0. Before you start:
Better and more consistent photographs lead to a more optimized and faster post-processing and a
higher quality of the resulting digital e-book. In order to guarantee the quality of images, before you
start it is necessary to set up the cameras properly and prepare the printed book for scanning.
a) Loosening the book
Depending on the type and quality of binding, some books tend to be too resistant to opening fully
to reveal the inner margin under the pressure of the scanner platen. It is thus necessary to “break in”
the book before starting in order to loosen the binding. The best way is to open it as wide as
possible in multiple places in the book. This can be done against the table edge if the book is more
rigid than usual. (Warning – “breaking in” might create irreversible creasing of the spine or lead to
some pages breaking loose.)
b) Switch on the scanner
You start the scanner by pressing the main switch or plugging the power cable into the scanner.
This will also turn on the overhead LED lights.

c) Setting up the cameras
Place the cameras onto tripods. You need to move the lever on the tripod's head to allow the tripod
plate screwed to the bottom of the camera to slide into its place. Secure the lock by turning the lever
all the way back.
If the automatic chargers for the camera are provided, open the battery lid on the bottom of the
camera and plug in the automatic charger. Close the lid.
Switch on the cameras using the lever on the top right side of the camera's body and place it into the
aperture priority (Av) mode on the mode dial above the lever (see Illustration 3). Use the main dial
just above the shutter button on the front side of the camera to set the aperture value to F8.0.

Illustration 3: Mode and main dial, focus mode switch, zoom
and focus ring
On the lens, turn the focus mode switch to manual (MF), turn the large zoom ring to set the value
exactly midway between 24 and 35 mm (see Illustration 3). Try to set both cameras the same.
To focus each camera, open a book on the cradle, lower the platen by holding the big button on the
controller, and turn on the live view on the camera LCD by pressing the live view switch (see
Illustration 4). Now press the magnification button twice and use the focus ring on the front of the
lens to get a clear image view.

Illustration 4: Live view switch and magnification button

d) Connecting the cameras
Now connect the cameras to the remote shutter trigger cables that can be found lying on each side
of the scanner. They need to be plugged into a small round port hidden behind a protective rubber
cover on the left side of the cameras.
e) Placing the book into the cradle and double-checking the cameras
Open the book in the middle and place it on the cradle. Hold the large button on the controller pressed to lower the Plexiglas platen without triggering the cameras. Move the cradle so that the platen fits onto the middle of the book.
Turn on the live view on the cameras' LCD to see if the pages fit into the image and if the cameras are positioned parallel to the page.
f) Double-check storage cards and batteries
It is important that both storage cards in the cameras are empty before starting the scanning, in order not to mess up the page sequence when merging photos from the left and the right camera in the post-processing. To double-check, press the play button on the cameras and erase any photos left over from the previous scan: press the menu button, select the fifth menu from the left and then select 'Erase Images' -> 'All images on card' -> 'OK'.
If no automatic chargers are provided, double-check on the information screen that batteries are
charged. They should be fully charged before starting with the scanning of a new book.

g) Turn off the light in the room
Lighting conditions during scanning should be as constant as possible. To reduce glare and achieve maximum quality, remove any source of light that might reflect off the Plexiglas platen. Preferably turn off the light in the room or isolate the scanner with the black cloth provided.

1. Photographing a book
Now you are ready to start scanning. Place the book closed in the cradle and lower the platen by
holding the large button on the controller pressed (see Illustration 2). Adjust the position of the
cradle and lift the platen by pressing the large button again.
To scan you can now either use the small button on the controller to lower the platen, adjust and
then press it again to trigger the cameras and lift the platen. Or, you can just make a short press on
the large button to do it in one go.
ATTENTION: When the cameras are triggered, the shutter sound has to be heard coming
from both cameras. If one camera is not working, it's best to reconnect both cameras (see
Section 0), make sure the batteries are charged or adapters are connected, erase all images
and restart.
A mistake made in the photographing requires a lot of work in the post-processing, so it's
much quicker to repeat the photographing process.
If you make a mistake while flipping pages, or any other mistake, go back and scan from the page
you missed or incorrectly scanned. Note down the page where the error occurred; in the post-processing the redundant images will be removed.
ADVICE: The scanner has a digital counter. By turning the dial forward and backward, you
can set it to tell you what page you should be scanning next. This should help you avoid
missing a page due to a distraction.
While scanning, move the cradle a bit to the left from time to time, making sure that the tip of the V-shaped platen is aligned with the center of the book and the inner margin is exposed enough.

II. GETTING THE IMAGE FILES READY FOR POST-PROCESSING
Once the book pages have been photographed, they have to be transferred to the computer and
prepared for post-processing. With two-camera scanners, the capturing process will result in two
separate sets of images -- odd and even pages -- coming from the left and right cameras respectively
-- and you will need to rename and reorder them accordingly, rotate them into a vertical position
and collate them into a single sequence of files.
a) Transferring image files
For the transfer of files your principal process design choices are either to copy the files by
removing the memory cards from the cameras and copying them to the computer via a card reader
or to transfer them via a USB cable. The latter process can be automated by remotely operating your cameras from a computer; however, this can be done only with a certain number of Canon cameras
(http://bit.ly/16xhJ6b) that can be hacked to run the open Canon Hack Development Kit firmware
(http://chdk.wikia.com).
After transferring the files, you want to erase all the image files on the camera memory card, so that
they don't end up messing up the scan of the next book.
b) Renaming image files
As the left and right camera are typically operated in sync, the photographing process results in two
separate sets of images, with even and odd pages respectively, that have completely different file
names and potentially same time stamps. So before you collate the page images in the order how
they appear in the book, you want to rename the files so that the first image comes from the right
camera, the second from the left camera, the third comes again from the right camera and so on.
You probably want to do a batch renaming, where your right camera files start with n and are offset
by an increment of 2 (e.g. page_0000.jpg, page_0002.jpg,...) and your left camera files start with
n+1 and are also offset by an increment of 2 (e.g. page_0001.jpg, page_0003.jpg,...).
Batch renaming can be completed either from your file manager, in command line or with a number
of GUI applications (e.g. GPrename, rename, cuteRenamer on GNU/Linux).
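As an illustration, here is a minimal Python sketch of such an interleaved batch renaming. The folder names ("right", "left", "book") and the page_0000.jpg pattern are assumptions made for this example; it presumes that the files from each camera sit in separate folders and sort correctly by name. As a side effect it also collates everything into a single folder (see d) below).

import os

def interleave_rename(right_dir, left_dir, out_dir):
    # Right camera images become page_0000, page_0002, ...;
    # left camera images become page_0001, page_0003, ...
    os.makedirs(out_dir, exist_ok=True)
    for offset, src_dir in ((0, right_dir), (1, left_dir)):
        for i, name in enumerate(sorted(os.listdir(src_dir))):
            dst = os.path.join(out_dir, "page_%04d.jpg" % (2 * i + offset))
            os.rename(os.path.join(src_dir, name), dst)

interleave_rename("right", "left", "book")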
c) Rotating image files
Before you collate the renamed files, you might want to rotate them. This is a step that can be done
also later in the post-processing (see below), but if you are automating or scripting your steps this is
a practical place to do it. The images leaving your cameras will be positioned horizontally. In order
to position them vertically, the images from the camera on the right will have to be rotated by 90
degrees counter-clockwise, the images from the camera on the left will have to be rotated by 90
degrees clockwise.
Batch rotating can be completed in a number of photo-processing tools, in command line or
dedicated applications (e.g. Fstop, ImageMagick, Nautilus Image Converter on GNU/Linux).
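For instance, a minimal Python sketch using the Pillow imaging library, one option among the tools just mentioned; it assumes the renaming convention from the previous example, where even-numbered files come from the right camera:

import glob
from PIL import Image

for path in glob.glob("book/page_*.jpg"):
    number = int(path.split("_")[-1].split(".")[0])
    with Image.open(path) as im:
        # Image.rotate() turns counter-clockwise for positive angles:
        # right camera files (even numbers) get 90 degrees counter-clockwise,
        # left camera files (odd numbers) get 90 degrees clockwise.
        angle = 90 if number % 2 == 0 else -90
        im.rotate(angle, expand=True).save(path)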
d) Collating images into a single batch
Once you're done with the renaming and rotating of the files, you want to collate them into the same
folder for easier manipulation later.

Getting the image files ready for post-processing on the Public Library scanner
In the case of Public Library scanner, a custom C++ script was written by Mislav Stublić to
facilitate the transfer, renaming, rotating and collating of the images from the two cameras.
The script prompts the user to place into the card reader the memory card from the right camera
first, gives a preview of the first and last four images and provides an entry field to create a subfolder in a local cloud storage folder (path: /home/user/Copy).
It transfers, renames, rotates the files, deletes them from the card and prompts the user to replace the
card with the one from the left camera in order to transfer the files from there and place them in
the same folder. The script was created for the GNU/Linux system and it can be downloaded, together
with its source code, from: https://copy.com/nLSzflBnjoEB
If you have cameras other than Canon, you can edit line 387 of the source file to adapt to the naming convention of your cameras, and recompile by running the following command in your
terminal: "gcc scanflow.c -o scanflow -ludev `pkg-config --cflags --libs gtk+-2.0`"
In the case of the Hacker Space Bruxelles scanner, this is handled by the same script that operates the cameras; it can be downloaded from: http://git.constantvzw.org/?p=algolit.git;a=tree;f=scanbot_brussel;h=81facf5cb106a8e4c2a76c048694a3043b158d62;hb=HEAD

III. TRANSFORMATION OF SOURCE IMAGES INTO .TIFFS
Images transferred from the cameras are high definition full color images. You want your cameras
to shoot at the largest possible .jpg resolution in order for the resulting files to have at least 300 dpi (A4
at 300 dpi requires a 9.5 megapixel image). In the post-processing the size of the image files needs
to be reduced radically, so that several hundred images can be merged into an e-book file of a
tolerable size.
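As a rough sanity check of that resolution requirement, the following back-of-the-envelope calculation (a sketch assuming a plain A4 page of 210 x 297 mm) shows that the page content alone comes to about 8.7 megapixels at 300 dpi; the cameras also capture the surroundings of the page, which is why a sensor of around 9.5 megapixels leaves the needed headroom:

dpi = 300
width_px = 210 / 25.4 * dpi    # ~2480 pixels across an A4 page
height_px = 297 / 25.4 * dpi   # ~3508 pixels down the page
print(width_px * height_px / 1e6)  # ~8.7 megapixels for the page alone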
Hence, the first step in the post-processing is to crop the images from cameras only to the content of
the pages. The surroundings around the book that were captured in the photograph and the white
margins of the page will be cropped away, while the printed text will be transformed into black
letters on white background. The illustrations, however, will need to be preserved in their color or
grayscale form, and mixed with the black and white text. What were initially large .jpg files will
now become relatively small .tiff files that are ready for optical character recognition process
(OCR).
These tasks can be completed by a number of software applications. Our manual will focus on one
that can be used across all major operating systems -- ScanTailor. ScanTailor can be downloaded
from: http://scantailor.sourceforge.net/. A more detailed video tutorial of ScanTailor can be found
here: http://vimeo.com/12524529.
ScanTailor: from a photograph of a page to a graphic file ready for OCR
Once you have transferred all the photos from cameras to the computer, renamed and rotated them,
they are ready to be processed in ScanTailor.
1) Importing photographs to ScanTailor
- start ScanTailor and open ‘new project’
- for ‘input directory’ choose the folder where you stored the transferred and renamed photo images
- you can leave ‘output directory’ as it is, it will place your resulting .tiffs in an 'out' folder inside
the folder where your .jpg images are
- select all files (if you followed the naming convention above, they will be named
‘page_xxxx.jpg’) in the folder where you stored the transferred photo images, and click 'OK'
- in the dialog box ‘Fix DPI’ click on All Pages, and for DPI choose preferably '600x600', click
'Apply', and then 'OK'
2) Editing pages
2.1 Rotating photos/pages
If you've rotated the photo images in the previous step using the scanflow script, skip this step.
- Rotate the first photo counter-clockwise, click Apply and for scope select ‘Every other page’
followed by 'OK'
- Rotate the following photo clockwise, applying the same procedure as in the previous step
2.2 Deleting redundant photographs/pages
- Remove redundant pages (photographs of the empty cradle at the beginning and the end of the
book scanning sequence; book cover pages if you don’t want them in the final scan; duplicate pages
etc.) by right-clicking on a thumbnail of that page in the preview column on the right side, selecting
‘Remove from project’ and confirming by clicking on ‘Remove’.

# If you accidentally remove a wrong page, you can re-insert it by right-clicking on a page before/after the missing page in the sequence, selecting 'insert after/before' (depending on which page you selected) and choosing the file from the list. Before you finish adding, it is necessary to again go through the procedure of fixing DPI and rotating.
2.3 Adding missing pages
- If you notice that some pages are missing, you can recapture them with the camera and insert them
manually at this point using the procedure described above under 2.2.
3) Split pages and deskew
Steps ‘Split pages’ and ‘Deskew’ should work automatically. Run them by clicking the ‘Play’ button
under the 'Select content' function. This will do the three steps automatically: splitting of pages,
deskewing and selection of content. After this you can manually re-adjust splitting of pages and deskewing.
4) Selecting content
Step ‘Select content’ works automatically as well, but it is important to revise the resulting selection
manually page by page to make sure the entire content is selected on each page (including the
header and page number). Where necessary, use your pointer device to adjust the content selection.
If the inner margin is cut, go back to 'Split pages' view and manually adjust the selected split area. If
the page is skewed, go back to 'Deskew' and adjust the skew of the page. After this go back to
'Select content' and readjust the selection if necessary.
This is the step where you do visual control of each page. Make sure all pages are there and
selections are as equal in size as possible.
At the bottom of thumbnail column there is a sort option that can automatically arrange pages by
the height and width of the selected content, making the process of manual selection easier. Extreme differences in height should be avoided; try to make the selected areas as equal as possible, particularly in height, across all pages. The exception should be the cover and back pages, where we advise selecting the full page.
5) Adjusting margins
For best results, select in the previous step the content of the full cover and back pages. Now go to the
'Margins' step and set under Margins section both Top, Bottom, Left and Right to 0.0 and do 'Apply
to...' → 'All pages'.
In Alignment section leave 'Match size with other pages' ticked, choose the central positioning of
the page and do 'Apply to...' → 'All pages'.
6) Outputting the .tiffs
Now go to the 'Output' step. Ignore the 'Output Resolution' section.
Next review two consecutive pages from the middle of the book to see if the scanned text is too
faint or too dark. If the text seems too faint or too dark, use slider Thinner – Thicker to adjust. Do
'Apply to' → 'All pages'.
Next go to the cover page and select under Mode 'Color / Grayscale' and tick on 'White Margins'.
Do the same for the back page.
If there are any pages with illustrations, you can choose the 'Mixed' mode for those pages and then

under the 'Picture Zones' tab adjust the zones of the illustrations.
Now you are ready to output the files. Just press the 'Play' button under 'Output'. Once the computer is
finished processing the images, just do 'File' → 'Save as' and save the project.

IV. OPTICAL CHARACTER RECOGNITION
Before the edited-down graphic files are finalized as an e-book, we want to transform the image of
the text into an actual text that can be searched, highlighted, copied and transformed. That
functionality is provided by Optical Character Recognition. This is a technically difficult task - dependent on language, script, typeface and quality of print - and there aren't that many OCR tools
that are good at it. There is, however, a relatively good free software solution - Tesseract
(http://code.google.com/p/tesseract-ocr/) - that has solid performance, good language data and can
be trained for an even better performance, although it has its problems. Proprietary solutions (e.g.
Abby FineReader) sometimes provide superior results.
Tesseract primarily supports .tiff files as input format. It produces a plain text file that can be, with
the help of other tools, embedded as a separate layer under the original graphic image of the text in
a PDF file.
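As an illustration, a minimal Python sketch that shells out to the tesseract command-line tool, producing one .txt file per page image; the "out" folder, the page_*.tif pattern and the "eng" language code are assumptions carried over from the earlier examples:

import glob
import subprocess

for tif in sorted(glob.glob("out/page_*.tif")):
    base = tif.rsplit(".", 1)[0]
    # tesseract <image> <output base> -l <language>;
    # tesseract appends the .txt extension itself.
    subprocess.run(["tesseract", tif, base, "-l", "eng"], check=True)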
With the help of other tools, OCR can also be performed on other input files, such as graphic-only PDF files. This produces inferior results, depending again on the quality of the graphic files and the reproduction of text in them. One such tool is a bash script to OCR a PDF file that can be found here: https://github.com/andrecastro0o/ocr/blob/master/ocr.sh
As mentioned in the 'before scanning' section, the quality of the original book will influence the
quality of the scan and thus the quality of the OCR. For a comparison, have a look here:
http://www.paramoulipist.be/?p=1303
Once you have your .txt file, there is still some work to be done. Because OCR has difficulties interpreting particular elements of the layout and fonts, the TXT file comes with a lot of errors. Recurrent problems are:
- combinations of specific letters in some fonts (it can mistake 'm' for 'n' or 'I' for 'i' etc.);
- headers become part of body text;
- footnotes are placed inside the body text;
- page numbers are not recognized as such.
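Some of these recurrent errors can be caught with simple text processing before manual proofreading. A rough Python sketch follows; the file names and patterns are illustrative assumptions and will need to be adapted to the book at hand:

import re

with open("book.txt", encoding="utf-8") as f:
    text = f.read()

# Drop lines consisting only of a number (stray page numbers).
text = re.sub(r"(?m)^\s*\d{1,4}\s*$\n?", "", text)
# One common letter confusion: a capital I between digits read as 1.
text = re.sub(r"(?<=\d)I(?=\d)", "1", text)

with open("book-clean.txt", "w", encoding="utf-8") as f:
    f.write(text)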

V. CREATING A FINALIZED E-BOOK FILE
After the optical character recognition has been completed, the resulting text can be merged with
the images of pages and output into an e-book format. While proper e-book file formats such as ePub have increasingly been gaining ground, PDFs still remain popular because many people
tend to read on their computers, and they retain the original layout of the book on paper including
the absolute pagination needed for referencing in citations. DjVu is also an option, as an alternative
to PDF, used because of its purported superiority, but it is far less popular.
The export to PDF can be done again with a number of tools. In our case we'll complete the optical
character recognition and PDF export in gscan2pdf. Again, the proprietary Abbyy FineReader will
produce a bit smaller PDFs.
If you prefer to use an e-book format that works better with e-book readers, you will have
to remove some of the elements that appear in the book - headers, footers, footnotes and pagination.
This can be done earlier in the process, while cropping down the original .jpg image files (see under III),
or later by transforming the PDF files in Calibre (http://calibre-ebook.com): convert
the PDF into an ePub, which can then be further tweaked to better accommodate or remove
the headers, footers, footnotes and pagination.
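Calibre's conversion is also scriptable through its ebook-convert command line tool, so a whole batch of PDFs can be converted in one go. A minimal sketch, assuming Calibre is installed and the PDFs sit in the current folder:

```python
# Sketch: batch-convert PDFs to ePub with Calibre's ebook-convert CLI.
# Assumes Calibre is installed so that `ebook-convert` is on the PATH.
import pathlib
import subprocess

for pdf in pathlib.Path(".").glob("*.pdf"):
    epub = pdf.with_suffix(".epub")
    subprocess.run(["ebook-convert", str(pdf), str(epub)], check=True)
```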
Optical character recognition and PDF export in the Public Library workflow
Optical character recognition with the Tesseract engine can be performed on GNU/Linux by a
number of command line and GUI tools. Many of those tools also exist for other operating systems.
For the users of the Public Library workflow, we recommend using the gscan2pdf application both for
the optical character recognition and for the PDF or DjVu export.
To do so, start gscan2pdf and open your .tiff files. To OCR them, go to 'Tools' and select 'OCR'. In
the dialog box select the Tesseract engine and your language, then click 'Start OCR'. Once the OCR is
finished, export the graphic files and the OCR text to PDF by selecting 'Save as'.
However, given that the proprietary solutions sometimes produce better results, these tasks can also
be done, for instance, with Abbyy FineReader running on a Windows operating system inside
VirtualBox. The prerequisites are that you have both Windows and Abbyy FineReader to install
in VirtualBox. Once you have both installed, designate a shared folder in
VirtualBox and place the .tiff files there. You can then open them from the Abbyy FineReader
running in VirtualBox, OCR them and export them into a PDF.
To use Abbyy FineReader, transfer the output files in your 'out' folder to the shared folder of the
VirtualBox. Then start VirtualBox, start the Windows image and, in Windows, start Abbyy
FineReader. Open the files and let Abbyy FineReader read them. Once it's done, output the
result into a PDF.

VI. CATALOGING AND SHARING THE E-BOOK
Your road from a book on paper to an e-book is complete. If you want to maintain your library you
can use Calibre, a free software tool for e-book library management. You can add the metadata to
your book using the existing catalogues or you can enter metadata manually.
Now you may want to distribute your book. If the work you've digitized is in the public domain
(https://en.wikipedia.org/wiki/Public_domain), you might consider contributing it to the Gutenberg
project
(http://www.gutenberg.org/wiki/Gutenberg:Volunteers'_FAQ#V.1._How_do_I_get_started_as_a_Project_Gutenberg_volunteer.3F),
Wikibooks (https://en.wikibooks.org/wiki/Help:Contributing) or
Archive.org.
If the work is still under copyright, you might explore a number of different options for sharing.

QUICK WORKFLOW REFERENCE FOR SCANNING AND
POST-PROCESSING ON PUBLIC LIBRARY SCANNER
I. PHOTOGRAPHING A PRINTED BOOK
0. Before you start:
- loosen the book binding by opening it wide in several places
- switch on the scanner
- set up the cameras:
- place cameras on tripods and fit them tightly
- plug in the automatic chargers into the battery slot and close the battery lid
- switch on the cameras
- switch the lens to Manual Focus mode
- switch the cameras to Av mode and set the aperture to 8.0
- turn the zoom ring to set the focal length exactly midway between 24mm and 35mm
- focus by turning on the live view, pressing the magnification button twice and adjusting the
focus to get a clear view of the text
- connect the cameras to the scanner by plugging the remote trigger cable to a port behind a
protective rubber cover on the left side of the cameras
- place the book into the cradle
- double-check storage cards and batteries
- press the play button on the back of the camera to double-check if there are images on the
camera - if there are, delete all the images from the camera menu
- if using batteries, double-check that batteries are fully charged
- switch off the light in the room that could reflect off the platen and cover the scanner with the
black cloth
1. Photographing
- now you can start scanning either by pressing the smaller button on the controller once to
lower the platen and adjust the book, and then press again to increase the light intensity, trigger the
cameras and lift the platen; or by pressing the large button completing the entire sequence in one
go;
- ATTENTION: The shutter sound should come from both cameras - if one camera is not
working, it's best to reconnect both cameras, make sure the batteries are charged or the adapters
are connected, erase all images and restart.
- ADVICE: The scanner has a digital counter. By turning the dial forward and backward,
you can set it to tell you what page you should be scanning next. This should help you to
avoid missing a page due to a distraction.

II. Getting the image files ready for post-processing
- after finishing with scanning a book, transfer the files to the post-processing computer
and purge the memory cards
- if transferring the files manually:
- create two separate folders, one for each camera
- transfer the files from the image folders on the cards; using batch
renaming software, rename the files from the right camera following the convention
page_0001.jpg, page_0003.jpg, page_0005.jpg... -- and the files from the left camera
following the convention page_0002.jpg, page_0004.jpg, page_0006.jpg...
(a renaming sketch follows this section)
- collate image files into a single folder
- before ejecting each card, delete all the photo files on the card
- if using the scanflow script:
- start the script on the computer
- place the card from the right camera into the card reader
- enter the name of the destination folder following the convention
"Name_Surname_Title_of_the_Book" and transfer the files
- repeat with the other card
- script will automatically transfer the files, rename, rotate, collate them in proper
order and delete them from the card
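For those transferring manually, the interleaving logic of the naming convention can also be scripted. A minimal sketch, with hypothetical folder names standing in for the two cards:

```python
# Sketch of the renaming convention: photos from the right camera become
# the odd-numbered pages, photos from the left camera the even-numbered
# ones. Folder names are hypothetical; adapt them to your own layout.
import pathlib
import shutil

right = sorted(pathlib.Path("right_camera").glob("*.jpg"))
left = sorted(pathlib.Path("left_camera").glob("*.jpg"))

dest = pathlib.Path("Name_Surname_Title_of_the_Book")
dest.mkdir(exist_ok=True)

for i, src in enumerate(right):
    shutil.copy(src, dest / f"page_{2 * i + 1:04d}.jpg")  # 0001, 0003, ...
for i, src in enumerate(left):
    shutil.copy(src, dest / f"page_{2 * i + 2:04d}.jpg")  # 0002, 0004, ...
```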
III. Transformation of source images into .tiffs
ScanTailor: from a photograph of page to a graphic file ready for OCR
1) Importing photographs to ScanTailor
- start ScanTailor and open ‘new project’
- for ‘input directory’ choose the folder where you stored the transferred photo images
- you can leave ‘output directory’ as it is, it will place your resulting .tiffs in an 'out' folder
inside the folder where your .jpg images are
- select all files (if you followed the naming convention above, they will be named
‘page_xxxx.jpg’) in the folder where you stored the transferred photo images, and click
'OK'
- in the dialog box ‘Fix DPI’ click on All Pages, and for DPI choose preferably '600x600',
click 'Apply', and then 'OK'
2) Editing pages
2.1 Rotating photos/pages
If you've rotated the photo images in the previous step using the scanflow script, skip this step.
- rotate the first photo counter-clockwise, click Apply and for scope select ‘Every other
page’ followed by 'OK'
- rotate the following photo clockwise, applying the same procedure like in the previous
step

2.2 Deleting redundant photographs/pages
- remove redundant pages (photographs of the empty cradle at the beginning and the end;
book cover pages if you don’t want them in the final scan; duplicate pages etc.) by right-clicking on a thumbnail of that page in the preview column on the right, selecting ‘Remove
from project’ and confirming by clicking on ‘Remove’.
# If you accidentally remove the wrong page, you can re-insert it by right-clicking on a page
before/after the missing page in the sequence, selecting 'insert after/before' and choosing the file
from the list. Before you finish adding, it is necessary to go through the procedure of fixing DPI and
rotating again.
2.3 Adding missing pages
- If you notice that some pages are missing, you can recapture them with the camera and
insert them manually at this point using the procedure described above under 2.2.
3) Split pages and deskew
- Functions ‘Split Pages’ and ‘Deskew’ should work automatically. Run them by
clicking the ‘Play’ button under the 'Select content' step. This will do the three steps
automatically: splitting of pages, deskewing and selection of content. After this you can
manually re-adjust splitting of pages and de-skewing.

4) Selecting content and adjusting margins
- Step ‘Select content’ works automatically as well, but it is important to revise the
resulting selection manually page by page to make sure the entire content is selected on
each page (including the header and page number). Where necessary use your pointer device
to adjust the content selection.
- If the inner margin is cut, go back to 'Split pages' view and manually adjust the selected
split area. If the page is skewed, go back to 'Deskew' and adjust the skew of the page. After
this go back to 'Select content' and readjust the selection if necessary.
- This is the step where you do visual control of each page. Make sure all pages are there
and selections are as equal in size as possible.
- At the bottom of the thumbnail column there is a sort option that can automatically arrange
pages by the height and width of the selected content, making the process of manual
selection easier. Extreme differences in height should be avoided; try to make the
selected areas as equal as possible, particularly in height, across all pages. The
exception should be the cover and back pages, where we advise selecting the full page.

5) Adjusting margins
- Now go to the 'Margins' step and, under the Margins section, set Top, Bottom, Left and
Right to 0.0, then do 'Apply to...' → 'All pages'.
- In the Alignment section leave 'Match size with other pages' ticked, choose the central
positioning of the page and do 'Apply to...' → 'All pages'.
6) Outputting the .tiffs
- Now go to the 'Output' step.
- Review two consecutive pages from the middle of the book to see if the scanned text is
too faint or too dark. If the text seems too faint or too dark, use slider Thinner – Thicker to
adjust. Do 'Apply to' → 'All pages'.
- Next go to the cover page, select 'Color / Grayscale' under Mode and tick 'White
Margins'. Do the same for the back page.
- If there are any pages with illustrations, you can choose the 'Mixed' mode for those
pages and then adjust the zones of the illustrations under the 'Picture Zones' tab.
- To output the files, press the 'Play' button under 'Output'. Save the project.
IV. Optical character recognition & V. Creating a finalized e-book file
If using all free software:
1) open gscan2pdf (if not already installed on your machine, install gscan2pdf from the
repositories, and Tesseract plus the data for your language from https://code.google.com/p/tesseract-ocr/)
- point gscan2pdf to open your .tiff files
- for Optical Character Recognition, select 'OCR' under the drop down menu 'Tools',
select the Tesseract engine and your language, start the process
- once OCR is finished, to output to a PDF go under 'File' and select 'Save', edit the
metadata, select the format and save
If using non-free software:
2) open Abbyy FineReader in VirtualBox (note: only Abbyy FineReader 10 installs and works, with some limitations, under GNU/Linux)
- transfer files in the 'out' folder to the folder shared with the VirtualBox
- point it to the readied .tiff files and it will complete the OCR
- save the file

REFERENCES
For more information on the book scanning process in general and making your own book scanner
please visit:
DIY Book Scanner: http://diybookscanner.org
Hacker Space Bruxelles scanner: http://hackerspace.be/ScanBot
Public Library scanner: http://www.memoryoftheworld.org/blog/2012/10/28/our-beloved-bookscanner/
Other scanner builds: http://wiki.diybookscanner.org/scanner-build-list
For more information on automation:
Konrad Völkel's post-processing script (From Scan to PDF/A):
http://blog.konradvoelkel.de/2013/03/scan-to-pdfa/
Johannes Baiter's automation of scanning to PDF process: http://spreads.readthedocs.org
For more information on applications and tools:
Calibre e-book library management application: http://calibre-ebook.com/
ScanTailor: http://scantailor.sourceforge.net/
gscan2pdf: http://sourceforge.net/projects/gscan2pdf/
Canon Hack Development Kit firmware: http://chdk.wikia.com
Tesseract: http://code.google.com/p/tesseract-ocr/
Python script of Hacker Space Bruxelles scanner: http://git.constantvzw.org/?p=algolit.git;a=tree;f=scanbot_brussel;h=81facf5cb106a8e4c2a76c048694a3043b158d62;hb=HEAD


Murtaugh
A bag but is language nothing of words
2016


## A bag but is language nothing of words

### From Mondotheque


(language is nothing but a bag of words)

[Michael Murtaugh](/wiki/index.php?title=Michael_Murtaugh "Michael Murtaugh")

In text indexing and other machine reading applications the term "bag of
words" is frequently used to underscore how processing algorithms often
represent text using a data structure (word histograms or weighted vectors)
where the original order of the words in sentence form is stripped away. While
"bag of words" might well serve as a cautionary reminder to programmers of the
essential violence perpetrated on a text and a call to critically question the
efficacy of methods based on subsequent transformations, the expression's use
seems in practice more like a badge of pride or a schoolyard taunt that would
go: Hey language: you're nothin' but a big BAG-OF-WORDS.

## Bag of words

In information retrieval and other so-called _machine-reading_ applications
(such as text indexing for web search engines) the term "bag of words" is used
to underscore how in the course of processing a text the original order of the
words in sentence form is stripped away. The resulting representation is then
a collection of each unique word used in the text, typically weighted by the
number of times the word occurs.

Bags of words, also known as word histograms or weighted term vectors, are a
standard part of the data engineer's toolkit. But why such a drastic
transformation? The utility of "bag of words" is in how it makes text amenable
to code, first in that it's very straightforward to implement the translation
from a text document to a bag of words representation. More significantly,
this transformation then opens up a wide collection of tools and techniques
for further transformation and analysis purposes. For instance, a number of
libraries available in the booming field of "data sciences" work with "high
dimension" vectors; bag of words is a way to transform a written document into
a mathematical vector where each "dimension" corresponds to the (relative)
quantity of each unique word. While physically unimaginable and abstract
(imagine each of Shakespeare's works as points in a 14 million dimensional
space), from a formal mathematical perspective, it's quite a comfortable idea,
and many complementary techniques (such as principal component analysis) exist
to reduce the resulting complexity.
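The transformation itself takes only a few lines. A toy sketch (the sentence is invented):

```python
# A bag of words in a few lines: word order is discarded; what remains is
# each (normalized) word weighted by its number of occurrences.
from collections import Counter
import re

text = "the bag of words strips the order of the words"
bag = Counter(re.findall(r"[a-z']+", text.lower()))
print(bag.most_common())
# [('the', 3), ('of', 2), ('words', 2), ('bag', 1), ('strips', 1), ('order', 1)]
```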

What's striking about a bag of words representation, given its centrality in so
many text retrieval applications, is its irreversibility. Producing the original
text from a bag of words representation would require, in essence, the "brain" of
a writer to recompose sentences, working with the patience of a devoted cryptogram
puzzler to draw from the precise stock of available words. While "bag of words"
might well serve as a cautionary reminder to programmers of the essential violence
perpetrated on a text and a call to critically question the efficacy of methods
based on subsequent transformations, the expression's use seems in practice more
like a badge of pride or a schoolyard taunt that would go: Hey language: you're
nothing but a big BAG-OF-WORDS. Following this spirit of the term, "bag of
words" celebrates a perfunctory step of "breaking" a text into a purer form
amenable to computation, stripping language of its silly redundant
repetitions and foolishly contrived stylistic phrasings to reveal a purer
inner essence.

## Book of words

Lieber's Standard Telegraphic Code, first published in 1896 and republished in
various updated editions through the early 1900s, is an example of one of
several competing systems of telegraph code books. The idea was for both
senders and receivers of telegraph messages to use the books to translate
their messages into a sequence of code words which could then be sent for less
money, as telegraph messages were paid for by the word. In the front of the book, a
list of examples gives a sampling of how messages like "Have bought for your
account 400 bales of cotton, March delivery, at 8.34" can be conveyed by a
telegram with the message "Ciotola, Delaboravi". In each case the reduction in the
number of transmitted words is highlighted to underscore the efficacy of the
method. Like a dictionary or thesaurus, the book is primarily organized around
key words, such as _act_ , _advice_ , _affairs_ , _bags_ , _bail_ , and
_bales_ , under which exhaustive lists of useful phrases involving the
corresponding word are provided in the main pages of the volume. [1]

[![Liebers
P1016847.JPG](/wiki/images/4/41/Liebers_P1016847.JPG)](/wiki/index.php?title=File:Liebers_P1016847.JPG)

[![Liebers
P1016859.JPG](/wiki/images/3/35/Liebers_P1016859.JPG)](/wiki/index.php?title=File:Liebers_P1016859.JPG)

[![Liebers
P1016861.JPG](/wiki/images/3/34/Liebers_P1016861.JPG)](/wiki/index.php?title=File:Liebers_P1016861.JPG)

[![Liebers
P1016869.JPG](/wiki/images/f/fd/Liebers_P1016869.JPG)](/wiki/index.php?title=File:Liebers_P1016869.JPG)

> [...] my focus in this chapter is on the inscription technology that grew
parasitically alongside the monopolistic pricing strategies of telegraph
companies: telegraph code books. Constructed under the bywords “economy,”
“secrecy,” and “simplicity,” telegraph code books matched phrases and words
with code letters or numbers. The idea was to use a single code word instead
of an entire phrase, thus saving money by serving as an information
compression technology. Generally economy won out over secrecy, but in
specialized cases, secrecy was also important.[2]

In Katherine Hayles' chapter devoted to telegraph code books she observes how:

> The interaction between code and language shows a steady movement away from
a human-centric view of code toward a machine-centric view, thus anticipating
the development of full-fledged machine codes with the digital computer. [3]

[![Liebers
P1016851.JPG](/wiki/images/1/13/Liebers_P1016851.JPG)](/wiki/index.php?title=File:Liebers_P1016851.JPG)
Aspects of this transitional moment are apparent in a notice
prominently inserted in Lieber's code book:

> After July, 1904, all combinations of letters that do not exceed ten will
pass as one cipher word, provided that it is pronounceable, or that it is
taken from the following languages: English, French, German, Dutch, Spanish,
Portuguese or Latin -- International Telegraphic Conference, July 1903 [4]

Conforming to international conventions regulating telegraph communication at
that time, the stipulation that code words be actual words drawn from a
variety of European languages (many of Lieber's code words are indeed
arbitrary Dutch, German, and Spanish words) underscores this particular moment
of transition as reference to the human body in the form of "pronounceable"
speech from representative languages begins to yield to the inherent potential
for arbitrariness in digital representation.

What telegraph code books remind us of is the relation of language in
general to economy. Whether in economies of memory, attention, costs
paid to a telecommunications company, or computer processing time
and storage space, encoding language or knowledge in any form of writing is a
form of shorthand, and always involves an interplay with what one expects to
perform or "get out" of the resulting encoding.

> Along with the invention of telegraphic codes comes a paradox that John
Guillory has noted: code can be used both to clarify and occlude. Among the
sedimented structures in the technological unconscious is the dream of a
universal language. Uniting the world in networks of communication that
flashed faster than ever before, telegraphy was particularly suited to the
idea that intercultural communication could become almost effortless. In this
utopian vision, the effects of continuous reciprocal causality expand to
global proportions capable of radically transforming the conditions of human
life. That these dreams were never realized seems, in retrospect, inevitable.
[5]

[![Liebers
P1016884.JPG](/wiki/images/9/9c/Liebers_P1016884.JPG)](/wiki/index.php?title=File:Liebers_P1016884.JPG)

[![Liebers
P1016852.JPG](/wiki/images/7/74/Liebers_P1016852.JPG)](/wiki/index.php?title=File:Liebers_P1016852.JPG)

[![Liebers
P1016880.JPG](/wiki/images/1/11/Liebers_P1016880.JPG)](/wiki/index.php?title=File:Liebers_P1016880.JPG)

Far from providing a universal system of encoding messages in the English
language, Lieber's code is quite clearly designed for the particular needs and
conditions of its use. In addition to the phrases ordered by keywords, the
book includes a number of tables of terms for specialized use. One table lists
a set of words used to describe all possible permutations of numeric grades of
coffee (Choliam = 3,4, Choliambos = 3,4,5, Choliba = 4,5, etc.); another table
lists pairs of code words to express the respective daily rise or fall of the
price of coffee at the port of Le Havre in increments of a quarter of a Franc
per 50 kilos ("Chirriado = prices have advanced 1 1/4 francs"). From an
archaeological perspective, the Lieber's code book reveals a cross section of
the needs and desires of early 20th century business communication between the
United States and its trading partners.

The advertisements lining Lieber's code book further situate its use and
that of commercial telegraphy. Among the many advertisements for banking and
law services, office equipment, and alcohol are several ads for gun powder and
explosives, drilling equipment and metallurgic services all with specific
applications to mining. Building on telegraphy's formative role in ship-to-
shore and ship-to-ship communication for reasons of safety, commercial
telegraphy extended this network of communication to include those parties
coordinating the "raw materials" being mined, grown, or otherwise extracted
from overseas sources and shipped back for sale.

## "Raw data now!"

From [La ville intelligente - Ville de la connaissance](/wiki/index.php?title
=La_ville_intelligente_-_Ville_de_la_connaissance "La ville intelligente -
Ville de la connaissance"):

Étant donné que les nouvelles formes modernistes et l'utilisation de matériaux
propageaient l'abondance d'éléments décoratifs, Paul Otlet croyait en la
possibilité du langage comme modèle de « [données
brutes](/wiki/index.php?title=Bag_of_words "Bag of words") », le réduisant aux
informations essentielles et aux faits sans ambiguïté, tout en se débarrassant
de tous les éléments inefficaces et subjectifs.


From [The Smart City - City of Knowledge](/wiki/index.php?title
=The_Smart_City_-_City_of_Knowledge "The Smart City - City of Knowledge"):

As new modernist forms and use of materials propagated the abundance of
decorative elements, Otlet believed in the possibility of language as a model
of '[raw data](/wiki/index.php?title=Bag_of_words "Bag of words")', reducing
it to essential information and unambiguous facts, while removing all
inefficient assets of ambiguity or subjectivity.


> Tim Berners-Lee: [...] Make a beautiful website, but first give us the
unadulterated data, we want the data. We want unadulterated data. OK, we have
to ask for raw data now. And I'm going to ask you to practice that, OK? Can
you say "raw"?

>

> Audience: Raw.

>

> Tim Berners-Lee: Can you say "data"?

>

> Audience: Data.

>

> TBL: Can you say "now"?

>

> Audience: Now!

>

> TBL: Alright, "raw data now"!

>

> [...]

>

> So, we're at the stage now where we have to do this -- the people who think
it's a great idea. And all the people -- and I think there's a lot of people
at TED who do things because -- even though there's not an immediate return on
the investment because it will only really pay off when everybody else has
done it -- they'll do it because they're the sort of person who just does
things which would be good if everybody else did them. OK, so it's called
linked data. I want you to make it. I want you to demand it. [6]

## Un/Structured

As graduate students at Stanford, Sergey Brin and Lawrence (Larry) Page had an
early interest in producing "structured data" from the "unstructured" web. [7]

> The World Wide Web provides a vast source of information of almost all
types, ranging from DNA databases to resumes to lists of favorite restaurants.
However, this information is often scattered among many web servers and hosts,
using many different formats. If these chunks of information could be
extracted from the World Wide Web and integrated into a structured form, they
would form an unprecedented source of information. It would include the
largest international directory of people, the largest and most diverse
databases of products, the greatest bibliography of academic works, and many
other useful resources. [...]

>

> **2.1 The Problem**
> Here we define our problem more formally:
> Let D be a large database of unstructured information such as the World
Wide Web [...] [8]

In a paper titled _Dynamic Data Mining_ Brin and Page situate their research
looking for _rules_ (statistical correlations) between words used in web
pages. The "baskets" they mention stem from the origins of "market basket"
techniques developed to find correlations between the items recorded in the
purchase receipts of supermarket customers. In their case, they deal with web
pages rather than shopping baskets, and words instead of purchases. In
transitioning to the much larger scale of the web, they describe the
usefulness of their research in terms of its computational economy: the
ability to tackle the scale of the web on contemporary computing power and
still complete the task in a reasonably short amount
of time.

> A traditional algorithm could not compute the large itemsets in the lifetime
of the universe. [...] Yet many data sets are difficult to mine because they
have many frequently occurring items, complex relationships between the items,
and a large number of items per basket. In this paper we experiment with word
usage in documents on the World Wide Web (see Section 4.2 for details about
this data set). This data set is fundamentally different from a supermarket
data set. Each document has roughly 150 distinct words on average, as compared
to roughly 10 items for cash register transactions. We restrict ourselves to a
subset of about 24 million documents from the web. This set of documents
contains over 14 million distinct words, with tens of thousands of them
occurring above a reasonable support threshold. Very many sets of these words
are highly correlated and occur often. [9]
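Reduced to a toy, the "market basket" counting underlying this research looks something like the following sketch; the documents are invented, and the real algorithms add sampling and support thresholds to survive web scale:

```python
# Toy version of the "market basket" idea applied to words: count how
# often pairs of distinct words co-occur in the same document.
from collections import Counter
from itertools import combinations

documents = [
    "coffee prices advanced in Le Havre",
    "coffee prices fell in New York",
    "cotton bales shipped from New York",
]

pair_counts = Counter()
for doc in documents:
    words = sorted(set(doc.lower().split()))
    pair_counts.update(combinations(words, 2))

# pairs above a minimal "support" threshold of two documents
print([pair for pair, n in pair_counts.items() if n >= 2])
# [('coffee', 'in'), ('coffee', 'prices'), ('in', 'prices'), ('new', 'york')]
```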

## Un/Ordered

In programming, I've encountered a recurring "problem" that's quite
symptomatic. It goes something like this: you (the programmer) have managed to
cobble out a lovely "content management system" (either from scratch, or using
any number of helpful frameworks) where your user can enter some "items" into
a database, for instance to store bookmarks. The stored items are then
automatically presented in list form (say, on a web page). The author: It's
great, except... could this bookmark come before that one? The problem stems
from the fact that the database ordering (a core functionality provided by any
database) somehow applies a sorting logic that's almost but not quite right. A
typical example is the sorting of names, where the details (where to place a name
that starts with a Norwegian "Ø", for instance) are language-specific, and
where a mixture of languages occurs, no single ordering is necessarily
"correct". The (often) exasperated programmer might hastily add an additional
database field so that each item can also have an "order" (perhaps in the form
of a date or some other kind of (alpha)numerical "sorting" value) to be used
to correctly order the resulting list. Now the author has a means, awkward and
indirect but workable, to control the order of the presented data on the start
page. But one might well ask, why not just edit the resulting listing as a
document? Not possible! Contemporary content management systems are based on a
data flow from a "pure" source of a database, through controlling code and
templates, to produce a document as a result. The document isn't the data, it's
the end result of an irreversible process. This problem, in this and many
variants, is widespread and reveals an essential backwardness in a
particular "computer scientist" mindset about what constitutes "data",
and in particular its relationship to order, that turns what might be a
straightforward question of editing a document into an over-engineered
database.
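How language-specific "correct" ordering really is can be shown in a few lines. A sketch, assuming a system with the Norwegian nb_NO.UTF-8 locale installed (the names are invented):

```python
# Collation is language-specific: under a Norwegian locale, "Ø" and "Å"
# sort at the end of the alphabet, after "Z", and in that order.
# Assumes the nb_NO.UTF-8 locale is installed on the system.
import locale

names = ["Ås", "Øre", "Zebra"]
print(sorted(names))  # by Unicode code point: ['Zebra', 'Ås', 'Øre']

locale.setlocale(locale.LC_COLLATE, "nb_NO.UTF-8")
print(sorted(names, key=locale.strxfrm))  # e.g. ['Zebra', 'Øre', 'Ås']
```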

Nikolaos Vogiatzis, whose research explores playful and
radically subjective alternatives to the list, and with whom I have recently
been working, was struck by how the earliest specifications of HTML (still
valid today) have separate elements (OL and UL) for "ordered" and "unordered" lists.

> The representation of the list is not defined here, but a bulleted list for
unordered lists, and a sequence of numbered paragraphs for an ordered list
would be quite appropriate. Other possibilities for interactive display
include embedded scrollable browse panels. [10]

Vogiatzis' surprise lay in the idea of a list ever being considered
"unordered" (or in opposition to the language used in the specification, for
order to ever be considered "insignificant"). Indeed in its suggested
representation, still followed by modern web browsers, the only difference
between the two visually is that UL items are preceded by a bullet symbol,
while OL items are numbered.

The idea of ordering runs deep in programming practice, where essentially
different data structures are employed depending on whether order is to be
maintained. The indexes of a "hash" table, for instance (also known as an
associative array), are ordered in an unpredictable way governed by the
particular implementation. This data structure, extremely
prevalent in contemporary programming practice, sacrifices order to offer other
kinds of efficiency (fast text-based retrieval, for instance).
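A small demonstration of that trade-off, using (for flavour) the key words of Lieber's code book; in CPython, string hashes are randomized per interpreter run, so the set may list its elements in a different order on each run:

```python
# Iteration order of a hash-based structure is an artifact of the
# implementation, not of the data: CPython randomizes string hashes per
# run, so this set's order can differ between runs of the same program.
words = ["act", "advice", "affairs", "bags", "bail", "bales"]
print(set(words))          # order varies between runs
print(sorted(set(words)))  # an explicit sort restores a predictable order
```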

## Data mining

In announcing Google's impending data center in Mons, Belgian prime minister
Di Rupo invoked the link between the history of the mining industry in the
region and the present and future interest in "data mining" as practiced by IT
companies such as Google.

Whether speaking of bales of cotton, barrels of oil, or bags of words, what
links these subjects is the way in which the notion of "raw material" obscures
the labor and power structures employed to secure them. "Raw" is always
relative: "purity" depends on processes of "refinement" that typically carry
social/ecological impact.

Stripping language of order is an act of "disembodiment", detaching it from
the acts of writing and reading. The shift from (human) reading to machine
reading involves a shift of responsibility from the individual human body to
the obscured responsibilities and seemingly inevitable forces of the
"machine", be it the machine of a market or the machine of an algorithm.

From [X = Y](/wiki/index.php?title=X_%3D_Y "X = Y"):

Still, it is reassuring to know that the products hold traces of the work,
that even with the progressive removal of human signs in automated processes,
the workers' presence never disappears completely. This presence is proof of
the materiality of information production, and becomes a sign of the economies
and paradigms of efficiency and profitability that are involved.


The computer scientists' view of textual content as "unstructured", be it in a
webpage or the OCR scanned pages of a book, reflects a negligence toward the
processes and labor of writing, editing, design, layout, typesetting, and
eventually publishing, collecting and cataloging [11].

"Unstructured" to the computer scientist, means non-conformant to particular
forms of machine reading. "Structuring" then is a social process by which
particular (additional) conventions are agreed upon and employed. Computer
scientists often view text through the eyes of their particular reading
algorithm, and in the process (voluntarily) blind themselves to the work
practices which have produced and maintain these "resources".

Berners-Lee, in chastising his audience of web publishers not only to publish
online but to release "unadulterated" data, betrays a lack of imagination in
considering how language is itself structured, and a blindness to the need for
more than additional technical standards to connect to existing publishing
practices.

Last Revision: 2.08.2016

1. ↑ Benjamin Franklin Lieber, Lieber's Standard Telegraphic Code, 1896, New York;
2. ↑ Katherine Hayles, "Technogenesis in Action: Telegraph Code Books and the Place of the Human", How We Think: Digital Media and Contemporary Technogenesis, 2012
3. ↑ Hayles
4. ↑ Lieber's
5. ↑ Hayles
6. ↑ Tim Berners-Lee: The next web, TED Talk, February 2009
7. ↑ "Research on the Web seems to be fashionable these days and I guess I'm no exception." from Brin's [Stanford webpage](http://infolab.stanford.edu/~sergey/)
8. ↑ Extracting Patterns and Relations from the World Wide Web, Sergey Brin, Proceedings of the WebDB Workshop at EDBT 1998
9. ↑ Dynamic Data Mining: Exploring Large Rule Spaces by Sampling; Sergey Brin and Lawrence Page, 1998; p. 2
10. ↑ Hypertext Markup Language (HTML): "Internet Draft", Tim Berners-Lee and Daniel Connolly, June 1993
11. ↑

Retrieved from
[https://www.mondotheque.be/wiki/index.php?title=A_bag_but_is_language_nothing_of_words&oldid=8480](https://www.mondotheque.be/wiki/index.php?title=A_bag_but_is_language_nothing_of_words&oldid=8480)

Sekulic
Legal Hacking and Space
2015


# Legal hacking and space

## What can urban commons learn from the free software hackers?

* [Dubravka Sekulic](https://www.eurozine.com/authors/sekulic-dubravka/)

4 November 2015

There is now a need to readdress urban commons through the lens of the digital
commons, writes Dubravka Sekulic. The lessons to be drawn from the free
software community and its resistance to the enclosure of code will likely
prove particularly valuable where participation and regulation are concerned.

> Commons are a particular type of institutional arrangement for governing the
use and disposition of resources. Their salient characteristic, which defines
them in contradistinction to property, is that no single person has exclusive
control over the use and disposition of any particular resource. Instead,
resources governed by commons may be used or disposed of by anyone among some
(more or less defined) number of persons, under rules that may range from
"anything goes" to quite crisply articulated formal rules that are effectively
enforced.
> (Benkler 2003: 6)

The above definition of commons, from the seminal paper "The political economy
of commons" by Yochai Benkler, addresses any type of commons, whether analogue
or digital. In fact, the concept of commons entered the digital realm from
physical space in order to interpret the type of communities, relationships
and production that started to appear with the development of the free as
opposed to the proprietary. Peter Linebaugh charted in his excellent book
_Magna Carta Manifesto_ , how the creation and development of the concept of
commons were closely connected to constantly changing relationships of people
and communities to the physical space. Here, I argue that the concept was
enriched when it was implemented in the digital field. Readdressing urban
space through the lens of digital commons can enable another imagination and
knowledge to appear around urban commons.

The
notion of commons in (urban) space is often complicated by archaic models of
organization and management - "the pasture we knew how to share". There is a
tendency to give the impression that the solution is in reverting to the past
models. In the realm of digital though, there is no "pasture" from the Middle
Ages to fall back on. Digital commons had to start from scratch and define its
own protocols of production and reproduction (caring and sharing). Therefore,
the digital commons and free software community can be the one to turn to, not
only for inspiration and advice, but also as a partner when addressing
questions of urban commons. Or, as Marcell Mars would put it "if we could
start again with (regulating and defining) land, knowing what we know now
about digital networks, we could come up with something much better and
appropriate for today's world. That property wouldn't be private, maybe not
even property, but something else. Only then can we say we have learned
something from the digital" (2013).

## Enclosure as the trigger for action

The moment we turn to commons in relation to (urban) space is the moment in
which the pressure to privatize public space and to commodify every aspect of
urban life has become so strong that it can be argued that it mirrors a moment
in which Magna Carta Libertatum was introduced to protect the basic
reproduction of life for those whose sustenance was connected to the common
pastures and forests of England in the thirteenth century. At the end of the
twentieth century, urban space became the ultimate commodity, and increasing
privatization not only endangered the reproduction of everyday life in the
city; the rent extraction through privatized public space and housing
endangered bare life itself. Additionally, the city's continuous
privatization of its amenities transformed almost every action in the city, no
matter how mundane - drinking a glass of water from a tap, for example -
into an action that creates profit for some private entity and extracts it
from the community.
is not only alienated from, but also unaware of. David Harvey's statement
about the city replacing the factory as a site of class war seems to be not
only an apt description of the condition of life in the city, but also a cry
for action.

When Richard Stallman turned to the foundational gesture of the creation of
free software, GNU/GPL (General Public Licence) was his reaction to the
artificially imposed logic of scarcity on the world of code - and the
increasing and systematic enclosure that took place in the late 1970s and
1980s as "a tidal wave of commercialization transformed software from a
technical object into a commodity, to be bought and sold on the open market
under the alleged protection of intellectual property law" (Coleman 2012:
138). Stallman, who worked as a researcher at MIT's Artificial Intelligence
Laboratory, detected how "[m]any programmers are unhappy about the
commercialization of system software. It may enable them to make more money,
but it requires them to feel in conflict with other programmers in general
rather than feel as comrades. The fundamental act of friendship among
programmers is the sharing of programs; marketing arrangements now typically
used essentially forbid programmers to treat others as friends. The purchaser
of software must choose between friendship and obeying the law. Naturally,
many decide that friendship is more important. But those who believe in law
often do not feel at ease with either choice. They become cynical and think
that programming is just a way of making money" (Stallman 2002: 32).

In the period between 1980 and 1984, "one man [Stallman] envisioned a crusade
to change the situation" (Moglen 1999). Stallman understood that in order to
subvert the system, he would have to intervene in the protocols that regulate
the conditions under which the code is produced, and not the code itself;
although he did contribute some of the best lines of code into the compiler
and text editor - the foundational infrastructure for any development. The
gesture that enabled the creation of a free software community that yielded
the complex field of digital commons was not a perfect line of code. The
creation of GNU General Public License (GPL) was a legal hack to counteract
the imposing of intellectual property law on code. At that time, the only
license available for programmers wanting to keep the code free was public
domain, which gave no protection against the code being appropriated and
closed. GPL enabled free codes to become self-perpetuating. Everything built
using a free code had to be made available under the same condition, in order
to secure the freedom for programmers to continue sharing and not breaking the
law. "By working on and using GNU rather than proprietary programs, we can be
hospitable to everyone and obey the law. In addition, GNU serves as an example
to inspire and as a banner to rally others to join in sharing. This can give
us a feeling of harmony, which is impossible if we use software, which is not
free. For about half the programmers I talk to, this is an important happiness
that money cannot replace" (Stallman 2002: 33).

Architects and planners as well as environmental designers have for too long
believed the opposite, that a good enough design can subvert the logic of
enclosure that dominates the production and reproduction of space; that a good
enough design can keep space open and public by the sheer strength of spatial
intervention. Stallman rightfully understands that no design is strong enough
to keep private ownership from claiming what it believes belongs to it.
Digital and urban commons, despite operating in completely different realms
and economies, are under attack from the same threat of "market processes"
that "crucially depend upon the individual monopoly of capitalists (of all
sorts) over ownership of the means of production, including finance and land.
All rent, recall, is a return to the monopoly power of private ownership of
some crucial asset, such as land or a patent. The monopoly power of private
property is therefore both the beginning-point and the end-point of all
capitalist activity" (Harvey 2012: 100). Stallman envisioned a bleak future
(2003: 26-28) but found a way to "relate the means to the ends". He understood
that the emancipatory task of a struggle "is not only what has to be done, but
also how it will be done and who will do it" (Stavrides & De Angelis: 7).
Thus, to produce the necessary requirements - both for a community to emerge,
but also for the basis of future protocols - tools and methodologies are
needed for the community to create both free software and itself.

## Renegotiating (undoing) property, hacking the law, creating community

Property, as an instrument of allocation of resources, is a right that is
negotiated within society and by society and not written in stone or given as
such. The digital, more than any other field, discloses property as being
inappropriate for contemporary relationships between production and
reproduction and, additionally, proves how it is possible to fundamentally
rethink it. The digital offers this possibility as it is non-material, non-
rival and non-exclusive (Meretz 2013), unlike anything in the physical world.
And Elinor Ostrom's lifelong empirical research gives grounds for the belief
that eschewing property as the sole instrument of allocation can work as
a tool of management even for rival, excludable goods.
The value of information in digital form is not flat, but property is not the
way to protect that value, as the music industry realized during the course of
the last ten years. Once the copy is _out there_ , the cost of protecting its
exclusivity on the grounds of property becomes too high in relation to the
potential value to be extracted. For example, the value is extracted from
information through controlling the moment of its release and not through
subsequent exploitation. Stallman decided to tackle the imposition of the
concept of property on computer code (and by extension to the digital realm as
a whole) by articulating it in another field: just as property is the product
of constant negotiations within a society, so are legal regulations. After
some time, he was joined by "[m]any free software developers [who] do not
consider intellectual property instruments as the pivotal stimulus for a
marketplace of ideas and knowledge. Instead, they see them as a form of
restriction so fundamental (or poorly executed) that they need to be
counteracted through alternative legal agreements that treat knowledge,
inventions, and other creative expressions not as property but rather as
speech to be freely shared, circulated, and modified" (Coleman 2012: 26).

The digital sphere can give a valid example of how renegotiating regulation
can transform a resource from scarce to abundant. When the change from
analogue signal to packet switching began to take effect, the distribution of
finite territory and the way the radio frequency spectrum was managed were
renegotiated, and the number of slots to be allocated grew by an order
of magnitude while the absolute size of the spectrum stayed the same.
shift enabled Brecht's dream of a two-sided radio to become reality, thus
enabling what he had suggested: "change this apparatus over from distribution
to communication".1

According to Lawrence Lessig, what regulates behavior in cyberspace is an
interdependence of four constraints: market, law, architecture and norms
(Lessig 2012: 121-25). Analogously, space can be put in place of cyberspace,
as the regulation of space is the sum of these four constraints. These four
constraints are in a dynamic relationship in which the balance can be tilted
towards one, depending on how much each of these categories puts pressure on
the other three. Changes in any one reflect the regulation of the whole.
"Architecture" in Lessig's theory should be understood broadly as the "built
environment" that regulates behaviour in (cyber)space. In the last few decades
we have experienced the domination of the market reconfiguring the basis of
norms, law and architecture. In order to counteract this, the other three
constraints need to be re-negotiated. In digital space, this reconfiguration
happened by declaring the code - that is, the set of instructions written as
highly formalized text in a specific programming language to be executed
(usually) by the computer - to be considered as speech in front of the law,
and by hacking the law in order to disrupt the way that property relationships
are formed.

To put it simply, in order to create a change in dynamics between the
architecture, norms and the market, the law had to be addressed first. This is
not a novel procedure, "legal hacking is going on all the time, it is just
that politics is doing it under the veil of legality because they are the
parliament, they are Microsoft, which can hire a whole law firm to defend them
and find all the legal loopholes. Legal hacking is the norm actually" (Bailey
2013). When it comes to physical space, one of the most obvious examples of
the reconfiguration of regulations under the influence of the market is to
create legal provisions, norms and architecture to sustain the concept of
developing (and privatizing) public space through public-private partnerships.
The decision of the Italian parliament that the privatization of services
(specifically of water management) is legal and does not obstruct one's access
to water as a human right, is another example of a crude manipulation of the
law by the state in favour of the market. Unlike legal hacks by corporations
that aim to create a favourable legal climate for another round of
accumulation through dispossession, Stallman's hack tries to limit the impact
of the market and to create a space of freedom for the creation of a code and
of sharable knowledge, by questioning one of the central pillars of liberal
jurisprudence: (intellectual) property law.

Similarly, translated into physical space, one of the initiatives in Europe
that comes closest to creating a real existing urban commons, Teatro Valle
Occupato in Rome, is doing the same, "pushing the borders of legality of
private property" by legally hacking the institution of a foundation to "serve
a public, or common, purpose" and having "notarized [a] document registered
with the Italian state, that creates a precedent for other people to follow in
its way" (Bailey 2013). Sounds familiar to Stallman's hack as the fundamental
gesture by which community and the whole eco-system can be formed.

It is obvious that, in order to create and sustain that type of legal hack, it
is a necessity to have a certain level of awareness and knowledge of how
systems, both political and legal, work, i.e. to be politically literate.
"While in general", says Italian commons-activist and legal scholar Saki
Bailey, "we've become extremely lazy [when it comes to politics]. We've
started to become a kind of society of people who give up their responsibility
to participate by handing it over to some charismatic leaders, experts of [a]
different type" (2013). Free software hackers, in order to understand and take
part in a constant negotiation that takes place on a legal level between the
market that seeks to cloister the code and hackers who want to keep it free,
had to become literate in an arcane legal language. Gabriella Coleman notes in
_Coding Freedom_ that hacker forums sometimes tend to produce legal analysis
that is just as serious as one would expect to find in a law office. Like the
occupants of Teatro Valle, free software hackers understand the importance of
devoting time and energy to understand constraints and to find ways to
structurally divert them.

This type of knowledge is not shared and created in isolation, but in
socialization, in discussions in physical or cyber spaces (such as #irc chat
rooms, forums, mailing lists…), the same way free software hackers share their
knowledge about code. Through this process of socializing knowledge, "the
community is formed, developed, and reproduced through practices focused on
common space. To generalize this principle: the community is developed through
commoning, through acts and forms of organization oriented towards the
production of the common" (Stavrides 2012: 588). Thus forming a community is
another crucial element of the creation of digital commons, but even more
important are its development and resilience. The emerging community was not
given something to manage, it created something together, and together devised
rules of self-regulation and decision-making.

The prime example of this principle in the free software community is the
Debian Project, formed around the development of the Debian Linux
distribution. It is a volunteer organization consisting of around 3,000
developers that since its inception in 1993 has defined a set of basic
principles, laid out in the Debian Social Contract (DSC), by which the project
and its members conduct their affairs; this includes the introduction of new
people into the community. A special part of the DSC defines the criteria
for "free software", thus regulating technical aspects of the project and also
technical relations with the rest of the free software community.
Constitution, another document created by the community so it can govern
itself, describes the organizational structure for formal decision-making
within the project.

Another example is Wikipedia, where the community that makes the online
encyclopedia also takes part in creating regulations, with some aspects
debated almost endlessly on forums. It is even possible to detect a loose
community of "Internet users" who took to the streets all over the world when
SOPA (Stop Online Piracy Act) and PIPA (Preventing Real Online Threats to
Economic Creativity and Theft of Intellectual Property Act) threatened to
enclose the Internet as we know it; the proposed legislation was successfully
contested.

Free software projects that represent the core of the digital commons are most
of the time born of the initiative of individuals, but their growth and life
cycle depend on the fact that they get picked up by a community or generate
community around them that is allowed to take part in their regulation and in
decisions about which shape and forms the project will take in the future.
This is an important lesson to be transferred to the physical space in which
many projects fail because they do not get picked up by the intended
community, as the community is not offered a chance to partake in its creation
and, more importantly, its regulation.

## Building common infrastructure and institutions

"The expansion of intellectual property law" as the main vehicle of the trend
to enclose the code that leads to the act of the creation of free software
and, thus, digital commons, "is part and parcel of a broader neoliberal trend
to privatize what was once under public or under the state's aegis, such as
health provision, water delivery, and military services" (Coleman 2012: 16).
The structural fight headed by the GNU/GPL against the enclosure of code
"defines the contractual relationship that serves to secure the freedom of
means of production and to constitute a community of those participating in
the production and reproduction of free resources. And it is this constitutive
character, as an answer to an every time singular situation of appropriation
by the capital, that is a genuine political emancipation striving for an equal
and free collective production" (Mars & Medak 2004). Thus digital commons "is
based on the _communication_ among _singularities_ and emerges through
collaborative social processes of production" (Negri & Hardt 2005: 204).

The most important lesson urban commons can take from its digital counterpart
is at the same time the most difficult one: how to make a structural hack in
the moment of the creation of an urban commons that will enable it to become
structurally self-perpetuating, thus creating fertile ground not only for a
singular spatialization of urban commons to appear, but to multiply and create
a whole new eco-system. Digital commons was the first field in which what
Negri and Hardt (2009: 3-21) called the "republic of property" was challenged.
Urban commons, in order to really emerge as a spatialization of a new type of
relationship, need to start undoing property as well in order to socially re-
appropriate the city. Or in the words of Stavros Stavrides "the most urgent
and promising task, which can oppose the dominant governance model, is the
reinvention of common space. The realm of the common emerges in a constant
confrontation with state-controlled 'authorized' public space. This is an
emergence full of contradictions, perhaps, quite difficult to predict, but
nevertheless necessary. Behind a multifarious demand for justice and dignity,
new roads to collective emancipation are tested and invented. And, as the
Zapatistas say, we can create these roads only while walking. But we have to
listen, to observe, and to feel the walking movement. Together" (Stavrides
2012: 594).

The big task for both digital and urban commons is "[b]uilding a core common
infrastructure [which] is a necessary precondition to allow us to transition
away from a society of passive consumers buying what a small number of
commercial producers are selling. It will allow us to develop into a society
in which all can speak to all, and in which anyone can become an active
participant in political, social and cultural discourse" (Benkler 2003: 9).
This core common infrastructure has to be porous enough to include people that
are not similar, to provide "a ground to build a public realm and give
opportunities for discussing and negotiating what is good for all, rather than
the idea of strengthening communities in their struggle to define their own
commons. Relating commons to groups of "similar" people bears the danger of
eventually creating closed communities. People may thus define themselves as
commoners by excluding others from their milieu, from their own privileged
commons." (Stavrides 2010). If learning carefully from digital commons, urban
commons need to be conceptualized on the basis of the public, with a self-
regulating community that is open for others to join. That socializes
knowledge and thus produces and reproduces the commons, creating a space for
political emancipation that is capable of judicial arguments for the
protection and extension of regulations that are counter-market oriented.

## References

Bailey, Saki (2013): Interview by Dubravka Sekulic and Alexander de Cuveland.

Benkler, Yochai (2003): "The political economy of commons". _Upgrade_ IV, no.
3, 6-9, [www.benkler.org/Upgrade-
Novatica%20Commons.pdf](http://www.benkler.org/Upgrade-
Novatica%20Commons.pdf).

Benkler, Yochai (2006): _The Wealth of Networks: How Social Production
Transforms Markets and Freedom_. New Haven: Yale University Press.

Brecht, Bertolt (2000): "The radio as a communications apparatus". In: _Brecht
on Film and Radio_, edited by Marc Silberman. Methuen, 41-6.

Coleman, E. Gabriella (2012): _Coding Freedom: The Ethics and Aesthetics of
Hacking_. Princeton University Press / Kindle edition.

Hardt, Michael and Antonio Negri (2005): _Multitude: War and Democracy in the
Age of Empire_. Penguin Books.

Hardt, Michael and Antonio Negri (2011): _Commonwealth_. Belknap Press of
Harvard University Press.

Harvey, David (2012): "The art of rent". In: _Rebel Cities: From the Right to
the City to the Urban Revolution_, 1st ed. Verso, 94-118.

Hill, Benjamin Mako (2012): "Freedom for users, not for software". In:
Bollier, David & Helfrich, Silke (Eds.): _The Wealth of the Commons: A World
Beyond Market and State_. Levellers Press / E-book.

Lessig, Lawrence (2012): _Code: Version 2.0_. Basic Books.

Linebaugh, Peter (2008): _The Magna Carta Manifesto: Liberties and Commons for
All_. University of California Press.

Mars, Marcell (2013): Interview by Dubravka Sekulic.

Mars, Marcell and Tomislav Medak (2004): "Both devil and gnu",
[www.desk.org:8080/ASU2/newsletter.Zarez.N5M.MedakRomicTXT.EnGlish](http://www.desk.org:8080/ASU2/newsletter.Zarez.N5M.MedakRomicTXT.EnGlish).

Martin, Reinhold (2013): "Public and common(s): Places: Design observer",
[placesjournal.org/article/public-and-
commons](https://placesjournal.org/article/public-and-commons).

Meretz, Stefan (2010): "Commons in a taxonomy of goods",
[keimform.de/2010/commons-in-a-taxonomy-of-goods](http://keimform.de/2010/commons-in-a-taxonomy-of-goods/).

Mitrasinovic, Miodrag (2006): _Total Landscape, Theme Parks, Public Space_,
1st ed. Ashgate.

Moglen, Eben (1999): "Anarchism triumphant: Free software and the death of
copyright", First Monday,
[firstmonday.org/ojs/index.php/fm/article/view/684/594](http://firstmonday.org/ojs/index.php/fm/article/view/684/594).

Stallman, Richard and Joshua Gay (2002): _Free Software, Free Society:
Selected Essays of Richard M. Stallman_. GNU Press.

Stallman, Richard and Joshua Gay (2003): "The Right to Read". _Upgrade_ IV,
no. 3, 26-8.

Stavrides, Stavros (2012) "Squares in movement". _South Atlantic Quarterly_
111, no. 3, 585-96.

Stavrides, Stavros (2013): "Contested urban rhythms: From the industrial city
to the post-industrial urban archipelago". _The Sociological Review_ 61,
34-50.

Stavrides, Stavros, and Massimo De Angelis (2010): "On the commons: A public
interview with Massimo De Angelis and Stavros Stavrides". _e-flux_ 17, 1-17,
[www.e-flux.com/journal/on-the-commons-a-public-interview-with-massimo-de-angelis-and-stavros-stavrides/](http://www.e-flux.com/journal/on-the-commons-a-public-interview-with-massimo-de-angelis-and-stavros-stavrides/).

1

"[...] radio is one-sided when it should be two-. It is purely an apparatus
for distribution, for mere sharing out. So here is a positive suggestion:
change this apparatus over from distribution to communication". See "The radio
as a communications apparatus", Brecht 2000.

Published 4 November 2015
Original in English
First published in dérive 61 (2015)

Contributed by dérive © Dubravka Sekulic / dérive / Eurozine



Sekulic
On Knowledge and Stealing
2018


# Dubravka Sekulic: On Knowledge and 'Stealing'

This text was originally published in [The
Funambulist](https://thefunambulist.net/) - Issue 17, May-June 2018
"Weaponized Infrastructure".


In 2003 artist Jackie Summell started a correspondence with Herman Wallace,
who at the time was serving a life sentence in solitary confinement in the
Louisiana State Penitentiary in Angola, by asking him “What kind of a house
does a man who has lived in a 6′ x 9′ cell for over thirty years dream of?”
(1) The Louisiana State Penitentiary, the largest maximum-security prison in
the US, includes, besides inmate quarters, a prison plantation, the Prison
View Golf Course, and the Angola Airstrip, among other facilities. The
nickname Angola comes from the former slave plantation purchased for a prison
after the end of the Civil War – and where Herman Wallace became a prisoner
in 1971 on charges of armed robbery. He became politically active in the
prison's chapter of the Black Panther Party and campaigned for better
conditions in Angola, organizing petitions and hunger strikes against
segregation, rape, and violence. In 1973, together with Albert Woodfox, he
was convicted of the murder of a prison guard and both were put in solitary
confinement. Together with Robert
King, Wallace and Woodfox would become known as the Angola 3, the three prison
inmates who served the longest period in solitary confinement – 29, 41, and 43
years respectively. The House that Herman Built, Herman's virtual and
eventually physical dream house in his birth city of New Orleans, grew from
the correspondence between Jackie and Herman. At one point, Jackie asked
Herman to make a list of the books he would have on the bookshelf in his
dream house, the books which influenced his political awakening. At the time
Jackie was a fellow at Akademie Schloss Solitude in Stuttgart, which
supported the acquisition of the books; they became the foundation of
Herman's physical library on its premises, waiting to relocate once his
dream home was built.

In 2013 the conviction against Herman Wallace was thrown out and he was
released from jail. Three days later he passed away. He never saw his dream
house built, nor took a book from a shelf in his library in Solitude, which
remained accessible to fellows and visitors until 2014. In 2014 Public
Library/Memory of the World (2) digitized Herman's library to place it
online, thus making it permanently accessible to everyone with an Internet
connection (3). The spirit of Herman Wallace continued to live through the
collection that shaped him – works by Marxists, revolutionaries, anarchists,
abolitionists, and civil rights activists, some of whom were also prisoners
during their lifetime. Many books from Herman's library would not be
accessible to those serving time, as access to knowledge for the inmate
population in the US is increasingly being regulated. A peek at the list of
banned books, which at one point included Michelle Alexander's The New Jim
Crow (The New Press, 2010), reveals that the intent of the ban was to prevent
access to knowledge that would allow inmates to understand their position in
society and the workings of the prison-industrial complex. It is becoming
increasingly difficult for inmates to have chance encounters with a book that
could change their lives; given access to knowledge they could see their
position in life from another perspective; they could have a moment of
revelation like the one Cle Sloan had. Sloan, a member of the Los Angeles
gang the Bloods, encountered his neighborhood Athens Park on a 1972 Los
Angeles Police Department 'Gang Territories' map in Mike Davis' book City of
Quartz, which made him understand that gang violence in L.A. was a product of
institutional violence, structural racism, and the systematic dispersal of
the community support networks put in place by the Black Panther Party.

The books in Herman's library can be seen as a toolbox of “really useful
knowledge” for someone who has to conceive the notion of freedom. The term
“really useful knowledge” originated with workers' awareness of the need for
self-education in the early 19th century. It described a body of 'unpractical'
knowledge – politics, economics, and philosophy – that workers needed in
order to understand and change their position in society, as opposed to
'useful knowledge', the 'practical' skills which would make them useful to
the employer. As in the 19th century, the system sustains itself through the
continued exploitation of a population prevented from accessing, producing
and sharing the knowledge needed to understand the system built to oppress
them and to articulate a position from which they can act. Who controls the
networks of production and distribution of knowledge is an important issue,
as it determines which books are made accessible. Self-help and coloring books
are allowed and accessible to inmates so as to continue oppression and pacify
resistance. The crisis of access persists outside the prison walls with a
continuous decline in the number of public libraries and the books they offer
due to the double assault of austerity measures and a growing monopoly of the
corporate publishing industry.

Digital networks have incredible power to widely distribute content, and once
the (digital) content is out there it is relatively easy to share and access.
Digital networks can provide a solution to the enclosure of knowledge and
give the oppressed easier access to channels of distribution. At least that
was the promise – the Internet would enable a democratization of access.
However, digital networks also have a significant capacity for centralization
and control within the realm of knowledge distribution; one look at the
oligopoly of academic publishing and its impact on access and independent
production shows the contrary.

In June 2015 Elsevier won an injunction against Library Genesis and the
affiliated platform sci-hub.org, making them inaccessible in some countries and
via some commercial internet providers. Run by anonymous scientists mostly
from Eastern Europe, these voluntary and non-commercial projects are the
largest illegal repository of electronic books, journals, and articles on the
web (4). Most of the scientific articles collected in the repository bypassed
the paywalls of academic publishers using the solidary network of access
provided by those associated with universities rich enough to pay the
exorbitant subscription fees. The only person named in the court case was
Alexandra Elbakyan, who revealed her identity as the creator of sci-hub.org,
and explained she was motivated by the lack of access: “When I was working on
my research project, I found out that all research papers I needed for work
were paywalled. I was a student in Kazakhstan at the time and our university
was not subscribed to anything.”(5) The creation of sci-hub.org made
scientific knowledge accessible to anyone, not just to members of wealthy
academic institutions. The act of acknowledging responsibility for sci-hub
transformed what was seen as an act of illegality (piracy) into an act of
civil disobedience. In the context of sci-hub and Library Genesis, both
projects from the periphery of knowledge production, “copyright infringement
opens on to larger questions about the legitimacy of the historic compromise –
if indeed there ever even was one – between the labor that produces culture
and knowledge and its commodification as codified in existing copyright
regulations.”(6) Here, disobedience and piracy have an equalizing effect on
the asymmetries of access to knowledge.

In 2008, programmer and hacktivist Aaron Swartz published the Guerilla Open
Access Manifesto, triggered by the enclosure, via digitization, of past
scientific knowledge production, often already part of the public domain.
“The world's entire scientific and cultural heritage, published over
centuries in books and journals, is increasingly being digitized and locked
up by a handful of private corporations […] We need to download scientific journals and upload
them to file sharing networks. We need to fight for Guerilla Open Access.”(7)
On January 6, 2011, the MIT police and the US Secret Service arrested Aaron
Swartz on charges of having downloaded a large number of scientific articles
from one of the most used paywalled databases. The federal prosecution
decided to show the increasingly nervous publishing industry the lengths it
was willing to go to protect them, indicting Swartz on 13 criminal counts.
Facing the threat of 50 years in prison and a US$1 million fine, Aaron committed
suicide on January 11, 2013. But he left us with an assignment – if you have
access, you have a responsibility to share with those who do not; “with enough
of us, around the world, we'll not just send a strong message opposing the
privatization of knowledge — we'll make it a thing of the past. Will you join
us?” (8) He pointed to an important issue – every new cycle of technological
development (in this case the move from paper to digital) brings a new threat
of enclosure of the knowledge in the public domain.

While “the core and the periphery adopt different strategies of opposition to
the inequalities and exclusions [digital] technologies start to
reproduce”(9), some technologies used by corporations to enclose can be used to liberate
knowledge and make it accessible. The existence of projects such as Library
Genesis, sci-hub, Public Library/Memory of the World, aaaarg.org, monoskop,
and ubuweb, commonly known as shadow libraries, show how building
infrastructure for storing, indexing, and access, as well as supporting
digitization, can not only be put to use by the periphery, but used as a
challenge to the normalization of enclosure offered by the core. The people
building alternative networks of distribution also build networks of support
and solidarity. Those on the peripheries need to 'steal' the knowledge behind
paywalls in order to fight the asymmetries paywalls enforce – peripheries
“steal” in order to advance. Depending on the vantage point, digitizing a
book can be stealing it, or liberating it – returning the knowledge (from the
dusty closed stacks of libraries) back into circulation. “Old” knowledge can teach new
tricksters a handful of tricks.

In 2015 I realized that none of the architecture students at the major
European architecture schools could have a chance encounter with Architecture
and Feminism or Sexuality and Space, nor with many books on similar topics,
because they were typically located in the library’s closed stacks. Both
books were formative for me, and in 2005, as a student, I went to great
lengths to gain access to them. The library at the Faculty of Architecture in
Belgrade was starved of books due to a permanent financial crisis, and even
bestsellers such as Rem Koolhaas' S, M, L, XL were not available, let alone
books focused on feminism and architecture. At the time, the Internet could
tell me that edited volumes such as Architecture and Feminism and Sexuality
and Space existed, but nothing more. To satisfy my curiosity, and help me
write a paper, a friend sent – via another friend – her copies from London to
Belgrade, which I photocopied and returned. With time, I graduated to buying
my own second-hand copies of both books, which I digitized upon realizing access to them
still relied on access to a well-stocked specialist library. They became the
basis for my growing collection on feminism/gender/space I maintain as an
amateur librarian, tactically digitizing books to contribute to the growing
struggle to make architecture more equitable as both a profession and an
effect in space.

At the end, a confession, and an anecdote – since 2015, I have tried to
digitize a book a week and every year, I manage to digitize around 20 books,
so one can say I am not particularly good at meeting my goals. The books I do
digitize are related to feminism, space, race, urban riots, and struggle, and
I choose them for their (un)availability and urgency. Most of them are
published in the 1970s and 1980s, though some were published in the 1960s and
1990s. Some I bought as former library books, digitized on a DIY book scanner,
and uploaded to the usual digital repositories. It takes two to four hours to
make a neat and searchable PDF scan of a book. As a PDF, knowledge production
usually under the radar or long out of print becomes more accessible. One of
the first books I digitized was Robert Goodman's After the Planners, a
critique of urban planning and the limits of alternate initiatives in cities
written in the late 1960s. A few years after I scanned it, online photos from
a conference drew my attention – an important white male professor was
showing the front page of After the Planners on his slide. I quickly realized
the image bore the faint signature of the scanner I had used. While I do not know if
this act of digitization made a dent or was co-opted, seeing the image was a
small proof that digitization can bring books back into circulation and access
to them might make a difference – or that access to knowledge can be a weapon.
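
Since the essay only sketches this workflow in outline, here is a minimal
illustration of its final step: turning raw scanner output into the "neat and
searchable PDF" the text mentions. It assumes the open-source `ocrmypdf`
package (with the Tesseract OCR engine behind it) is installed; the file
names are hypothetical, and this is one possible pipeline, not the author's
actual setup.

```python
# A minimal sketch, assuming the ocrmypdf package and Tesseract are
# installed. "scan.pdf" is a hypothetical image-only PDF produced by a
# DIY book scanner.
import ocrmypdf

ocrmypdf.ocr(
    "scan.pdf",             # raw, image-only PDF from the book scanner
    "scan-searchable.pdf",  # output PDF with an invisible OCR text layer
    deskew=True,            # straighten pages captured at a slight angle
    language="eng",         # OCR language of the scanned book
)
```

The invisible text layer ocrmypdf adds is what makes the resulting PDF
searchable, and thus far more useful for circulation than a pile of page
images.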



[Dubravka Sekulic](https://www.making-futures.com/contributor/sekulic/) writes
about the production of space. She is an amateur-librarian at Public
Library/Memory of the World, where she maintains feminist, and space/race
collections. During Making Futures School, Dubravka will be figuring out the
future of education (on all things spatial) together with [Elise
Hunchuck](https://www.making-futures.com/contributor/hunchuck/), [Jonathan
Solomon](https://www.making-futures.com/contributor/solomon/) and [Valentina
Karga](https://www.making-futures.com/contributor/karga/).

[A pdf version of this text can be downloaded here.](https://www.making-futures.com/wp-content/uploads/2019/05/Dubravka_Sekulic-On_Knowledge_and_Stealing.pdf)


Notes:

(1) For more on the project, see Herman’s House. Accessed 6 April 2018.


(2) Public Library is a project which has, since 2012, been developing and
publicly supporting scenarios for massive disobedience against the current
regulation of the production and circulation of knowledge and culture in the
digital realm. See: ‘Memory of the World’. Accessed 7 April 2018.


(3) Herman's library can be accessed at
[http://herman.memoryoftheworld.org/](http://herman.memoryoftheworld.org/).
For more on the context of the digitization, see: ‘Herman’s Library’. Memory
of the World (blog), 28 October 2014, and ‘Public Library. Rethinking the
Infrastructures of Knowledge Production’. Memory of the World (blog), 30
October 2014.

(4) For more on shadow libraries and Library Genesis see: Bodo, Balazs.
‘Libraries in the Post-Scarcity Era’. SSRN Scholarly Paper. Rochester, NY:
Social Science Research Network, 10 June 2015.


(5) ‘Sci-Hub Tears Down Academia’s “Illegal” Copyright Paywalls’.
TorrentFreak (blog), 27 June 2015.

(6) For the schizophrenia of the current model of corporate enclosure of
scientific knowledge, see: Mars, Marcell and Tomislav Medak, The System of a
Takedown, forthcoming, 2018.

(7) Aaron Swartz. Guerilla Open Access Manifesto. Accessed 7 April 2018.
[http://archive.org/details/GuerillaOpenAccessManifesto](http://archive.org/details/GuerillaOpenAccessManifesto).

(8) Ibid.

(9) Mars, Marcell and Tomislav Medak, The System of a Takedown, forthcoming,
2018.

(10) See ‘In Solidarity with Library Genesis and Sci-Hub’.
http://custodians.online. Accessed 7 April 2018.




Sollfrank
The Surplus of Copying
2018


## essay #11

The Surplus of Copying
How Shadow Libraries and Pirate Archives Contribute to the
Creation of Cultural Memory and the Commons
By Cornelia Sollfrank

Digital artworks tend to have a problematic relationship with the white
cube—in particular, when they are intended and optimized for online
distribution. While curators and exhibition-makers usually try to avoid
showing such works altogether, or at least aim at enhancing their sculptural
qualities to make them more presentable, the exhibition _Top Tens_ featured an
abundance of web-quality digital artworks, thus placing emphasis on the very
media condition of such digital artifacts. The exhibition took place at the
Onassis Cultural Center in Athens in March 2018 and was part of the larger
festival _Shadow Libraries: UbuWeb in Athens_,1 an event to introduce the
online archive UbuWeb2 to the Greek audience and discuss related cultural,
ethical, technical, and legal issues. This text takes the event—and the
exhibition in particular—as a starting point for a closer look at UbuWeb and
the role an artistic approach can play in building cultural memory within the
neoliberal knowledge economy.

_UbuWeb—The Cultural Memory of the Avant-Garde_

Since Kenneth Goldsmith started Ubu in 1996, the site has become a major point
of reference for anyone interested in exploring twentieth-century avant-garde
art. The online archive provides free and unrestricted access to a remarkable
collection of thousands of artworks—among them almost 700 films and videos,
over 1000 sound art pieces, dozens of filmed dance productions, an
overwhelming amount of visual poetry and conceptual writing, critical
documents, but also musical scores, patents, electronic music resources, plus
an edition of vital new literature, the /ubu editions. Ubu contextualizes the
archived objects within curated sections and also provides framing academic
essays. Although it is a project run by Goldsmith without a budget, it has
built a reputation for making available all the things one would not find
elsewhere. The focus on “avant-garde” may seem a bit pretentious at first, but
when you look closer at the project, its operator and the philosophy behind
it, it becomes obvious how much sense this designation makes. Understanding
the history of the twentieth-century avant-garde as “a history of subversive
takes on creativity, originality, and authorship,”3 such spirit is not only
reflected in terms of the archive’s contents but also in terms of the project
as a whole. Theoretical statements by Goldsmith in which he questions concepts
such as authorship, originality, and creativity support this thesis4—and with
that a conflictual relationship with the notion of intellectual property is
preprogrammed. Therefore it comes as no surprise that the increasing
popularity of the project goes hand-in-hand with a growing discussion about
its ethical justification.

At the heart of Ubu, there is the copy! Every item in the archive is a digital
copy, either of another digital item or the digitized version of an analog
object.5 That is to say, the creation of a digital collection is
inevitably based on copying the desired archive records and storing them on
dedicated media. However, making a copy is in itself a copyright-relevant act,
if the respective item is an original creation and as such protected under
copyright law.6 Hence, “any reproduction of a copyrighted work infringes the
copyright of the author or the corresponding rights of use of the copyright
holder”.7 Whether the existence of an artwork within the Ubu collection is a
case of copyright infringement varies with each individual case and depends on
the legal status of the respective work, but also on the way the rights
holders decide to act. As with all civil law, there is no judge without a
plaintiff, which means even if there is no express consent by the rights
holders, the work can remain in the archive as long as there is no request for
removal.8 Its status, however, is precarious. We find ourselves in the
notorious gray zone of copyright law where nothing is clear and many things
are possible—until somebody decides to challenge this status. Exploring the
borders of this experimental playground involves risk-taking, but, at the same
time, it is the only way to preserve existing freedoms and make a case for
changing cultural needs, which have not been considered in current legal
settings. And as the 20 years of Ubu’s existence demonstrate, the practice may
be experimental and precarious, but with growing cultural relevance and
reputation it is also gaining in stability.

_Fair Use and Public Interest_

At all public appearances and public presentations Goldsmith and his
supporters emphasize the educational character of the project and its non-
commercial orientation.9 Such a characterization is clearly intended to take
the wind out of the sails of its critics from the start and to shift the
attention away from the notion of piracy and toward questions of public
interest and the common good.

From a cultural point of view, the project unquestionably is of inestimable
value; a legal defense, however, would be a difficult undertaking. Copyright
law, in fact, has a built-in opening, the so-called copyright exceptions or
fair use regulations. They vary according to national law and cultural
traditions and allow for the use of copyrighted works under certain, defined
provisions without permission of the owner. The exceptions basically apply to
the areas of research and private study (both non-commercial), education,
review, and criticism and are described through general guidelines. “These
defences exist in order to restore the balance between the rights of the owner
of copyright and the rights of society at large.”10

A very powerful provision in most legislations is the permission to make
“private copies”, digital and analog ones, in small numbers, but they are
limited to non-commercial and non-public use, and passing on to a third party
is also excluded.11 As Ubu is an online archive that makes all of its records
publicly accessible and, not least, also provides templates for further
copying, it exceeds the notion of a “private copy” by far. Regarding further
fair use provisions, the four factors that are considered in a decision-making
process in US copyright provisions, for instance, refer to: 1) the purpose and
character of the use, including whether such use is of a commercial nature or
is for non-profit educational purposes; 2) the nature of the copyrighted work;
3) the amount and substantiality of the portion used in relation to the
copyrighted work as a whole; and 4) the effect of the use upon the potential
market for or value of the copyrighted work (US Copyright Act, 1976, 17 U.S.C.
§107, online, n.pag.). Applying these fair use provisions to Ubu, one might
consider that the main purposes of the archive relate to education and
research, that it is by its very nature non-commercial, and it largely does
not collide with any third party business interests as most of the material is
not commercially available. However, proving this in detail would be quite an
endeavor. And what complicates matters even more is that the archival material
largely consists of original works of art, which are subject to strict
copyright law protection, that all the works have been copied without any
transformative or commenting intention, and last but not least, that the
aspect of the appropriateness of the amount of used material becomes absurd
with reference to an archive whose quality largely depends on
comprehensiveness: the more the merrier. As Simon Stokes points out, legally
binding decisions can only be made on a case-by-case basis, which is why it is
difficult to make a general evaluation of Ubu’s legal situation.12 The ethical
defense tends to invoke the cultural value of the archive as a whole and its
invaluable contribution to cultural memory, while the legal situation does not
consider the value of the project as a whole and necessitates breaking it down
into all the individual items within the collection.

This very brief, not to say abridged, discussion of the possibilities of fair use
already demonstrates how complex it would be to apply them to Ubu. How
pointless it would be to attempt a serious legal discussion for such a
privately run archive becomes even clearer when looking at the problems public
libraries and archives have to face. While in theory such official
institutions may even have a public mission to collect, preserve, and archive
digital material, in practice, copyright law largely prevents the execution of
this task, as Steinhauer explains.13 The legal expert introduces the example
of the German National Library, which since 2006 has been assigned the task
of making back-up copies of all websites published within the .de top-level
domain – a task that turned out to be illegal.14 Identifying a deficient legal situation when
it comes to collecting, archiving, and providing access to digital cultural
goods, Steinhauer even speaks of a “legal obligation to amnesia”.15 And it is
particularly striking that, from a legal perspective, the collecting of
digitalia is more strictly regulated than the collecting of books, for
example, where the property status of the material object comes into play.
Given the imbalance between cultural requirements, copyright law, and the
technical possibilities, it is not surprising that private initiatives are
being founded with the aim to collect and preserve cultural memory. These
initiatives make use of the affordability and availability of digital
technology and its infrastructures, and they take responsibility for the
preservation of cultural goods by simply ignoring copyright induced
restrictions, i.e. opposing the insatiable hunger of the IP regime for
control.

_Shadow Libraries_

Ubu was presented and discussed in Athens at an event titled _Shadow
Libraries: UbuWeb in Athens_, thereby making clear reference to the ecosystem
of shadow libraries. A library, in general, is an institution that collects,
orders, and makes published information available while taking into account
archival, economic, and synoptic aspects. A shadow library does exactly the
same thing, but its mission is not an official one. Usually, the
infrastructure of shadow libraries is conceived, built, and run by a private
initiative, an individual, or a small group of people, who often prefer to
remain anonymous for obvious reasons. In terms of the media content provided,
most shadow libraries are peer-produced in the sense that they are based on
the contributions of a community of supporters, sometimes referred to as
“amateur librarians”. The two key attributes of any proper library, according
to Amsterdam-based media scholar Bodo Balazs, are the catalog and the
community: “The catalogue does not just organize the knowledge stored in the
collection; it is not just a tool of searching and browsing. It is a critical
component in the organisation of the community of librarians who preserve and
nourish the collection.”16 What is specific about shadow libraries, however,
is the fact that they make available anything their contributors consider to
be relevant—regardless of its legal status. That is to say, shadow libraries
also provide unauthorized access to copyrighted publications, and they make
the material available for download without charge and without any other
restrictions. And because there is a whole network of shadow libraries whose
mission is “to remove all barriers in the way of science,”17 experts speak of
an ecosystem fostering free and universal access to knowledge.

The notion of the shadow library enjoyed popularity in the early 2000s when
the wide availability of digital networked media contributed to the emergence
of large-scale repositories of scientific materials, the most famous one
having been Gigapedia, which later transformed into library.nu. This project
was famous for hosting approximately 400,000 (scientific) books and journal
articles but had to be shut down in 2012 as a consequence of a series of
injunctions from powerful publishing houses. The now leading shadow library in
the field, Library Genesis (LibGen), can be considered as its even more
influential successor. As of November 2016 the database contained 25 million
documents (42 terabytes), of which 2.1 million were books, with digital copies
of scientific articles published in 27,134 journals by 1342 publishers.18 The
large majority of the digital material is of scientific and educational nature
(95%), while only 5% serves recreational purposes.19 The repository is based
on various ways of crowd-sourcing, i.e. social and technical forms of
accessing and sharing academic publications. Despite a number of legal cases
and court orders, the site is still available under various and changing
domain names.20

The related project Sci-Hub is an online service that processes requests for
pay-walled articles by providing systematic, automated, but unauthorized
backdoor access to proprietary scholarly journal databases. Users requesting
papers not present in LibGen are advised to download them through Sci-Hub; the
respective PDF files are served to users and automatically added to LibGen (if
not already present). According to _Nature_ magazine, Sci-Hub hosts around 60
million academic papers and was able to serve 75 million downloads in 2016. On
a daily basis 70,000 users access approximately 200,000 articles.
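
To make the mechanics of this flow concrete, here is a minimal sketch in
Python of the read-through pattern just described. It is emphatically not
Sci-Hub's actual code (which is not public); every name in it, from
`serve_paper` to the dictionary standing in for the LibGen repository, is a
hypothetical illustration.

```python
# A minimal sketch of the described request flow: serve from the
# repository if present, otherwise fetch through the backdoor access
# and add the result to the repository. All names are hypothetical.

repository: dict[str, bytes] = {}  # stand-in for the LibGen database

def fetch_via_scihub(doi: str) -> bytes:
    """Stand-in for the automated download of a paywalled PDF."""
    return f"%PDF... contents of {doi}".encode()

def serve_paper(doi: str) -> bytes:
    pdf = repository.get(doi)
    if pdf is None:                  # paper not yet in the repository:
        pdf = fetch_via_scihub(doi)  # fetch it through Sci-Hub ...
        repository[doi] = pdf        # ... and store it for later readers
    return pdf                       # serve the requested PDF either way

# The second request is served from the repository, not the paywall.
assert serve_paper("10.1000/xyz") == serve_paper("10.1000/xyz")
```

The design choice this illustrates is why each request enriches the archive:
every paper fetched once for one reader becomes permanently available to all
subsequent readers.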

The founder of the meta library Sci-Hub is Kazakh programmer Alexandra
Elbakyan, who has been sued by large publishing houses and was twice ordered
to pay almost US$20 million in compensation for the losses her activities
allegedly caused, which is why she had to go underground in Russia. For
illegally leaking millions of documents the _New York Times_ compared her to
Edward Snowden in 2016: “While she didn’t reveal state secrets, she took a
stand for the public’s right to know by providing free online access to just
about every scientific paper ever published, ranging from acoustics to
zymology.” 21 In the same year the prestigious _Nature_ magazine elected her
as one of the ten most influential people in science. 22 Unlike other
persecuted people, she went on the offensive and started to explain her
actions and motives in court documents and blog posts. Sci-Hub encourages new
ways of distributing knowledge, beyond any commercial interests. It provides a
radically open infrastructure thus creating an inviting atmosphere. “It is a
knowledge infrastructure that can be freely accessed, used and built upon by
anyone.”23

As both LibGen and Sci-Hub are based in post-Soviet countries, Balazs has
reconstructed the history and spirit of Russian reading culture and brought
the two into connection.24 Interestingly, the author also establishes a
connection to the Kolhoz (Russian: колхо́з), an early Soviet collective farm
model that was self-governing, community-owned, and a collaborative
enterprise, which he considers to be a major inspiration for the digital
librarians. He also identifies parallels between this Kolhoz model and the
notion of the “commons”—a concept that will be discussed in more detail with
regards to shadow libraries further below.

According to Balazs, these sorts of libraries and collections are part of the
Guerilla Open Access movement (GOA) and thus practical manifestations of Aaron
Swartz’s “Guerilla Open Access Manifesto”.25 In this manifesto the American
hacker and activist pointed out the flaws of open access politics and aimed at
recruiting supporters for the idea of “radical” open access. Radical in this
context means to completely ignore copyright and simply make as much
information available as possible. “Information is power” is how the manifesto
begins. Basically, it addresses what he calls the “privileged” – those who,
as academic staff or librarians, do have access to information – and calls on
them to use their privilege to build a system of freely available information
by downloading and sharing it. Swartz and Elbakyan both have become the “iconic leaders”26 of a
global movement that fights for scientific knowledge to be(come) freely
accessible and whose protagonists usually prefer to operate unrecognized.
While their particular projects may be of a more or less temporary nature, the
discursive value of the work of the “amateur librarians” and their projects
will have a lasting impact on the development of access politics.

_Cultural and Knowledge Commons_

The above discussion illustrates that the phenomenon of shadow libraries
cannot be reduced to its copyright infringing aspects. It needs to be
contextualized within a larger sociopolitical debate that situates the demand
for free and unrestricted access to knowledge within the struggle against the
all-co-opting logic of capital, which currently aims to economize all aspects
of life.

In his analysis of the Russian shadow libraries Balazs has drawn a parallel to
the commons as an alternative mode of ownership and a collective way of
dealing with resources. The growing interest in the discourses around the
commons demonstrates the urgency and timeliness of this concept. The
structural definition of the commons conceived by political economist Massimo
de Angelis allows for its application in diverse fields: “Commons are social
systems in which resources are pooled by a community of people who also govern
these resources to guarantee the latter’s sustainability (if they are natural
resources) and the reproduction of the community. These people engage in
‘commoning,’ that is a form of social labour that bears a direct relation to
the needs of the people, or the commoners”.27 While the model originates in
historical ways of sharing natural resources, it has gained new momentum in
relation to very different resources, thus constituting a third paradigm of
production—beyond state and private—however, with all commoning activities
today still being embedded in the surrounding economic system.

As a reason for the newly aroused interest in the commons, de Angelis provides
the crisis of global capital, which has maneuvered itself into a systemic
impasse. While constantly expanding through its inherent logic of growth and
accumulation, it is the very same logic that destroys the two systems capital
relies on: non-market-shaped social reproduction and the ecological system.
Within this scenario de Angelis describes capital as being in need of the
commons as a “fix” for the most urgent systemic failures: “It needs a ‘commons
fix,’ especially in order to deal with the devastation of the social fabric as
a result of the current crisis of reproduction. Since neoliberalism is not
about to give up its management of the world, it will most likely have to ask
the commons to help manage the devastation it creates. And this means: if the
commons are not there, capital will have to promote them somehow.”28

This rather surprising entanglement of capital and the commons, however, is
not the only perspective. Commons, at the same time, have the potential to
create “a social basis for alternative ways of articulating social production,
independent from capital and its prerogatives. Indeed, today it is difficult
to conceive emancipation from capital—and achieving new solutions to the
demands of _buen vivir_ , social and ecological justice—without at the same
time organizing on the terrain of commons, the non-commodified systems of
social production. Commons are not just a ‘third way’ beyond state and market
failures; they are a vehicle for emerging communities of struggle to claim
ownership to their own conditions of life and reproduction.”29 It is their
purpose to satisfy people’s basic needs and empower them by providing access
to alternative means of subsistence. In that sense, commons can be understood
as an _experimental zone_ in which participants can learn to negotiate
responsibilities, social relations, and peer-based means of production.

_Art and Commons_

Projects such as UbuWeb, Monoskop,30 aaaaarg,31 Memory of the World,32 and
0xdb33 vary in size, they have different forms of organization and foci, but
they all care for specific cultural goods and make sure these goods remain
widely accessible—be it digital copies of artworks and original documents,
books and other text formats, videos, film, or sound and music. Unlike the
large shadow libraries introduced above, which aim to provide access to
hundreds of thousands, if not millions of mainly academic papers and books,
thus trying to fully cover the world of scholarly and academic works, the
smaller artist-run projects are of a different nature. While UbuWeb’s founder,
for instance, also promotes a generally unrestricted access to cultural goods,
his approach with UbuWeb is to build a curated archive with copies of artworks
that he considers to be relevant for his very context.34 The selection is
based on personal assessment and preference and cared for affectionately.
Despite its comprehensiveness, it still can be considered a “personal website”
on which the artist shares things relevant to him. As such, he is in good
company with similar “artist-run shadow libraries”, which all provide a
technical infrastructure with which they share resources, while the resources
are of specific relevance to their providers.

Just like the large pirate libraries, these artistic archiving and library
practices challenge the notion of culture as private property and remind us
that it is not an unquestionable absolute. As Jonathan Lethem contends,
“[culture] rather is a social negotiation, tenuously forged, endlessly
revised, and imperfect in its every incarnation.”35 Shadow libraries, in
general, are symptomatic of the cultural battles and absurdities around access
and copyright within an economic logic that artificially tries to limit the
abundance of digital culture, in which sharing does not mean dividing but
rather multiplying. They have become a cultural force, one that can be
represented in Foucauldian terms, as symptomatic of broader power struggles as
well as systemic failures inherent in the cultural formation. As Marczewska
puts it, “Goldsmith moves away from thinking about models of cultural
production in proprietary terms and toward paradigms of creativity based on a
culture of collecting, organizing, curating, and sharing content.”36 And by
doing so, he produces major contradictions, or rather he allows the already
existing contradictions to come to light. The artistic archives and libraries
are precarious in terms of their legal status, while it is exactly due to
their disregard of copyright that cultural resources could be built that
exceed the relevance of most official archives that are bound to abide by the
law. In fact, there are no comparable official resources, which is why the
function of these projects is at least twofold: education and preservation.37

Maybe UbuWeb and the other, smaller or larger, shadow libraries do not qualify
as commons in the strict sense of involving not only a non-market exchange of
goods but also a community of commoners who negotiate the terms of use among
themselves. This would require collective, formalized, and transparent types
of organization. Furthermore, most of the digital items they circulate are
privately owned and therefore cannot simply be transferred to become commons
resources. These projects, in many respects, are in a preliminary stage by
pointing to the _ideal of culture as a commons_. By providing access to
cultural goods and knowledge that would otherwise not be available at all or
inaccessible for large parts of the general public, they might even fulfill
the function of a “commons fix”, to a certain degree, but at the same time
they are the experimental zone needed to unlearn copyright and relearn new
ways of cultural production and dissemination beyond the property regime. In
any case, they can function as perfect entry points for the discussion and
investigation of the transformative force art can have within the current
global neoliberal knowledge society.

_Top Tens—Showcasing the Copy as an Aesthetic and Political Statement_

The exhibition _Top Tens_ provided an experimental setting to explore the
possibilities of translating the abundance of a digital archive into a “real
space”, by presenting one hundred artworks from the Ubu archive.38 Although
all works were properly attributed in the exhibition, the artists whose works
were shown neither had a say about their participation in the exhibition nor
about the display formats. Tolerating the presence of a work in the archive is
one thing; tolerating its display in such circumstances is something else,
which might even touch upon moral rights and the integrity of the work.
However, the exhibition was not so much about the individual works on display
but the archiving condition they are subject to. So the discussion here has
nothing to do with the abiding art-theory question of original and copy.
Marginally, it is about the question of high-quality versus low-quality
copies. In reproducible media the value of an artwork cannot be based on its
originality any longer—the core criterion for sales and market value. This is
why many artists use the trick of high-resolution and limited edition, a kind
of distributed originality status for several authorized objects, none of
which is 100 percent original, but each still a bit more original than an
arbitrary, unlimited edition. Leaving this whole discussion aside was a clear indication
that something else was at stake. The conceptual statement made by the
exhibition and its makers foregrounded the nature of the shadow library, which
visitors were able to experience when entering the gallery space. Instead of
viewing the artworks in the usual way—online—they had the opportunity to
physically immerse themselves in the cultural condition of proliferated acts
of copying, something that “affords their reconceptualization as a hybrid
creative-critical tool and an influential aesthetic category.”39

Appropriation and copying – longstanding methods of subversive artistic
production in which the reuse of existing material serves as a tool for
commentary, social critique, and political statement – are expanded here to
the art of exhibition-making. The individual works serve to illustrate a
curatorial concept, thus radically shifting the avant-garde gesture which
copying used to be in the twentieth century, to breathe new life into the
“culture of collecting, organizing, curating, and sharing content.”
Organizing this conceptually concise exhibition was a brave and bold statement
by the art institution: The Onassis Cultural Centre, one of Athens’ most
prestigious cultural institutions, dared to adopt a resolutely political
stance for a—at least in juridical terms—questionable project, as Ubu lives
off the persistent denial of copyright. Neglecting the concerns of the
individual authors and artists for a moment was a necessary precondition in
order to make space for rethinking the future of cultural production.

________________
Special thanks to Eric Steinhauer and all the artists and amateur librarians
who are taking care of our cultural memory.

1 Festival program online: Onassis Cultural Centre, “Shadow Libraries: UbuWeb
in Athens,” (accessed on Sept. 30, 2018).
2 _UbuWeb_ is a massive online archive of avant-garde art created over the
last two decades by New York-based artist and writer Kenneth Goldsmith.
Website of the archive: http://ubu.com (accessed on Sept. 30, 2018).
3 Kaja Marczewska, _This Is Not a Copy. Writing at the Iterative Turn_ (New
York: Bloomsbury Academic, 2018), 22.
4 For further reading: Kenneth Goldsmith, _Uncreative Writing: Managing
Language in the Digital Age_ (New York: Columbia University Press, 2011).
5 Many works in the archive stem from the pre-digital era, and there is no
precise knowledge of the sources where Ubu obtains its material, but it is
known that Goldsmith also digitizes a lot of material himself.
6 In German copyright law, for example, §17 and §19a grant the exclusive right
to reproduce, distribute, and make available online to the author. See also:
(accessed on Sept. 30,
2018).
7 Eric Steinhauer, “Rechtspflicht zur Amnesie: Digitale Inhalte, Archive und
Urheberrecht,” _iRightsInfo_ (2013)
(accessed on Sept. 30, 2018).
8 In particularly severe cases of copyright infringement also state
prosecutors can become active, which in practice, however, remains the
exception. The circumstances in which criminal law must be applied are
described in §109 of German copyright law.
9 See, for example, “Shadow Libraries” for a video interview with Kenneth
Goldsmith.
10 Paul Torremans, _Intellectual Property Law_ (Oxford: Oxford University
Press, 2010), 265.
11 See also §53 para. 1–3 of the German Act on Copyright and Related Rights
(UrhG), §42 para. 4 in the Austrian UrhG, and Article 19 of Swiss Copyright
Law.
12 Simon Stokes, _Art & Copyright_ (Oxford: Hart Publishing, 2003).
13 Steinhauer, “Rechtspflicht zur Amnesie”.
14 This discrepancy between a state mandate for cultural preservation and
copyright law has only been fixed in 2018 with the introduction of a special
law, §16a DNBG.
15 Steinhauer, “Rechtspflicht zur Amnesie”.
16 Bodo Balazs, “The Genesis of Library Genesis: The Birth of a Global
Scholarly Shadow Library,” Nov. 4, 2014, _SSRN_ (accessed on
Sept. 30, 2018).
17 Motto of Sci-Hub: “Sci-Hub,” _Wikipedia_, https://en.wikipedia.org/wiki/Sci-Hub (accessed on Sept. 30, 2018).
18 Guillaume Cabanac, “Bibliogifts in LibGen? A study of a text-sharing
platform driven by biblioleaks and crowdsourcing,” _Journal of the Association
for Information Science and Technology_, 67, 4 (2016): 874–884.
19 Ibid.
20 The current address is (accessed on Sept. 30, 2018).
21 Kate Murphy, “Should All Research Papers Be Free?” _New York Times Sunday
Review_, Mar. 12, 2016 (accessed on Sept. 30, 2018).
22 Richard Van Noorden, “Nature’s 10,” _Nature_, Dec. 19, 2016 (accessed on
Sept. 30, 2018).
23 Bodo Balazs, “Pirates in the library – an inquiry into the guerilla open
access movement,” paper for the 8th Annual Workshop of the International
Society for the History and Theory of Intellectual Property, CREATe,
University of Glasgow, UK, July 6–8, 2016. Available online at:
https://adrien-chopin.weebly.com/uploads/2/1/7/6/21765614/2016_bodo_-_pirates.pdf
(accessed on Sept. 30, 2018).
24 Balazs, “The Genesis of Library Genesis”.
25 Aaron Swartz, “Guerilla Open Access Manifesto,” _Internet Archive_, July
2008, http://archive.org/details/GuerillaOpenAccessManifesto (accessed on
Sept. 30, 2018).
26 Balazs, “Pirates in the library”.
27 Massimo De Angelis, “Economy, Capital and the Commons,” in: _Art,
Production and the Subject in the 21st Century_, eds. Angela Dimitrakaki and
Kirsten Lloyd (Liverpool: Liverpool University Press, 2015), 201.
28 Ibid., 211.
29 Ibid.
30 See: https://monoskop.org (accessed on Sept. 30, 2018).
31 Accessible with invitation. See:
[https://aaaaarg.fail/](https://aaaaarg.fail) (accessed on Sept. 30, 2018).
32 See: https://www.memoryoftheworld.org (accessed on Sept. 30, 2018).
33 See: https://0xdb.org (accessed on Sept. 30, 2018).
34 Kenneth Goldsmith in conversation with Cornelia Sollfrank, _The Poetry of
Archiving_, 2013 (accessed on Sept. 30, 2018).
35 Jonathan Lethem, _The Ecstasy of Influence: Nonfictions, etc._ (London:
Vintage, 2012), 101.
36 Marczewska, _This Is Not a Copy_, 2.
37 The research project _Creating Commons_ , based at Zurich University of the
Arts, is dedicated to the potential of art projects for the creation of
commons: “creating commons,” (accessed on
Sept. 30, 2018).
38 One of Ubu’s features online has been the “top ten”, the idea to invite
guests to pick their ten favorite works from the archive and thus introduce a
mix between chance operation and subjectivity in order to reveal hidden
treasures. The curators of the festival in Athens, Ilan Manouach and Kenneth
Goldsmith, decided to elevate this principle to the curatorial concept of the
exhibition and invited ten guests to select their ten favorite works. The
Athens-based curator Elpida Karaba was commissioned to work on an adequate
concept for the realization, which turned out to be a huge black box divided
into ten small cubicles with monitors and seating areas, supplemented by a
large wall projection illuminating the whole space.
39 Marczewska, _This Is Not a Copy_, 7.

This text is under a _Creative Commons_ license: CC BY NC SA 3.0 Austria

Sollfrank & Dockray
Expanded Appropriation
2013


Sean Dockray
Expanded Appropriation

Berlin, 4 January 2013

[00:13]
Public School [00:17]
We decided to give up doing a gallery because… Well, for one, the material
conditions weren’t so great for it. But I think people who open up galleries
do it in really challenging conditions, so there is no reason why we couldn’t
have done a gallery in that basement. [00:37] I think we were actually
disinterested in exhibition as a format. After a few years – I mean, we did
something like 35 things that could easily be called exhibitions, in a span of
5 years leading up to that. [00:55] I think we just wanted to try something
else. And so we already had started a project called The Public School a year
prior, so we decided that we would use our space primarily as a school.
[01:10] At that time those two things happened. We eliminated the gallery and
then ended up with two new galleries and a school instead!

[01:20] What The Public School is… it’s been going now for five years. It
began just as a structure or even a diagram, or an idea or something. [01:43]
And the idea is that people would propose things that they wanted to learn
about, or to teach to other people. And then there would be a kind of process
where we use our space or the Internet to allow people to sign up to say they
are also interested in this idea. And then the School’s job would be to turn
those ideas into real meetings of people, real classes where people got
together. [02:15] So in that sense the curriculum would be developed in
public. It wouldn't be public just simply in the sense that anyone could go to
it, but it’d be public in the sense that anyone could produce the form of it.
[02:32] And again, I need a lot more time, I think, to talk about all the
dimensions to it, but in broad strokes that’s kind of what it is. [02:43]
Although we started in Los Angeles, in the basement of our original gallery
five years ago, it’s now been in around a dozen cities around the world, where
people are operating according to the same process, and then sometimes in
conversation with one another. And there’ve been 500-600 classes, and 2000 or
so proposals made in that time.
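
A rough illustration of the process Dockray describes here (proposals are
made, people sign up, and the school turns sufficiently supported proposals
into real classes) might look like the following sketch. All names, and the
quorum of five, are hypothetical; this is purely illustrative and not The
Public School's real software.

```python
# A minimal data-model sketch, under the stated assumptions, of the
# proposal -> sign-up -> class process. Not The Public School's code.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    title: str                                          # what someone wants to learn or teach
    interested: set[str] = field(default_factory=set)   # people who signed up

    def sign_up(self, person: str) -> None:
        self.interested.add(person)

def schedule_classes(proposals: list[Proposal], quorum: int = 5) -> list[Proposal]:
    """Turn proposals with enough interest into real meetings of people."""
    return [p for p in proposals if len(p.interested) >= quorum]

# Example: a proposal becomes a class once enough people commit to meet.
p = Proposal("Reading group on urban commons")
for person in ["a", "b", "c", "d", "e"]:
    p.sign_up(person)
print([c.title for c in schedule_classes([p])])
```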

[03:18]
Motivation

[03:22]
It was in the air at the time already, so I don’t think it’d be an entirely
independent impulse – number one. But I had actually tried to start a couple
of things that had failed. [03:41] Like Aaaaarg – I tried to set up some
physical reading groups that would complement the online archive. So, in Los
Angeles the idea would be that we’d meet and talk about things that were being
posted to the website. So, yes, reading groups. But they never really went
anywhere. They were always really small, and they kind of ran out of steam
quite quickly because no one was interested. [04:10] So in a way The Public
School was a later iteration of something that I’d already been trying for a
while. But the other thing was that by doing these reading groups,
intuitively, I knew what was wrong. [04:31] Although I like to read, that is
not all of what education is to me. To me learning and education is something
that is more inclusive of a lot more of what we experience in life, than
simply theoretical discussions. The structures didn’t really allow that in a
way. [04:56] The Public School came out of just trying to imagine what kind of
structure would be inclusive to overcome some of those self-imposed
limitations. [05:14] I’m very interested in technology in a hands-on way. I
like to code and electronics – hacking around with electronics. And at the
same time, I like to read and I like to write. And then once you go down that
line then you think, well, I like music a lot and I like to play chess as
well. [05:46] I think about all these things that I like to do, and I just
thought about how a lot of these gestures towards education that I tried to do
previously, in no way embraced me as a whole person. So in that sense, it was
based in personal interest. [06:22] But the other personal interest had to do
with personal motivation, it had to do with running an art space for, at that
point, four years. And actually seeing the way that that happened, because I’m
not a curator. [06:38] And so the act of putting on exhibitions for me was
less about making value judgments, and more about trying to contribute to the
cultural life of my city, and also provide opportunities that didn’t exist in
Los Angeles. [06:57] For example, no one really knew how to show work with
technology, and we were able to, because, for instance I knew how to set up
projectors, fix electronics or get things to start and stop, and that kind of
stuff. [07:13] But over the course of running it, because it is an exhibition
space, I found myself put into the role of being a curator – Fiona and I both
did. And it was kind of an uncomfortable role to be deciding what became
visible and what wouldn’t be. [07:32] And one thing that was never visible was
the sort of mechanisms by which an institution made certain things visible.
[07:40] So the public in The Public School actually in a way is trying to
eliminate that whole apparatus, or at least, put that apparatus as something
that we didn’t want to be solely the ones interacting with. We wanted that
apparatus to be… that our entire community, the community of people who are
participating in the programme – that they were the ones responsible for it.
[08:14] So that would shift programming, but also accountability and all these
things, to the people who are actually participating in the life of the space.

[08:28]
Technical Infrastructure

[08:32]
The technical infrastructure is incredibly important because at the moment
that’s people’s primary experience of the project. They make proposals on the
website, and then the classes are actually organised by people through the
website. So the website, the entire technical infrastructure becomes the
engine for getting events to happen. [09:01] It’s not an essential part. At
the very beginning we did it on paper, and we had the website and the paper
kind of simultaneously. And we’d print things out onto paper that would be
accessible by coming into the space, and vice versa, we'd enter things from
the paper back into the website. [09:26] But at the moment it’s mostly
orchestrated through the website. And there have been three versions of it,
three separate pieces of software, and for the last two it’s been Kayla Waldorf
and myself who have been programming it. And we have… [09:45] Number one,
we’ve organised lots of classes, so we’re very involved in the life of the
school. And in a way we try to programme the site according to (A) what would
make things work, but (B), like you say, in a way that expresses the politics,
as we see them, of the site. [10:14] And so almost at every level, at every
design decision that Kayla might be making, or every kind of code or database
decision, you know, interactive decision that I might be making – those
conversations and those ideas are finding their way into that. [10:45] And
vice versa, you see code, in a certain way, as not determining politics,
but certainly influencing what people see as possible and also choices that
they see available to them, and things like that. [11:09] I guess as users of
the site, as organisers of The Public School and as programmers, this kind of
relationship between the project and the software is quite intertwined.
[11:28] And I don’t think that… I think that typically art institutions use a
website as a kind of publicity vehicle, as a kind of postcard or something
that fits into their broadcasting of a programme, as a kind of glue
between their space and their audience. [11:49] And I think for us the website
is actually integral to the space and to the audience. There is more of a
continuum between the space, programme, website and audience.
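
To make the workflow concrete – this is only an illustrative sketch, not The Public School's actual software, and every name in it is hypothetical – the proposal-to-class mechanism described above could be modelled like this:

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    """Something a person wants to learn about, or to teach to others."""
    title: str
    proposer: str
    interested: list[str] = field(default_factory=list)  # public sign-ups

    def sign_up(self, person: str) -> None:
        """Anyone can register interest; the curriculum is developed in public."""
        if person not in self.interested:
            self.interested.append(person)

    def ready_for_class(self, quorum: int = 5) -> bool:
        """The school's job: turn sufficiently supported ideas into real meetings."""
        return len(self.interested) >= quorum

p = Proposal(title="Reading group on archives", proposer="anyone")
for person in ["ana", "ben", "chloe", "dev", "emre"]:
    p.sign_up(person)
print(p.ready_for_class())  # True – organisers would now schedule a real class
```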

[12:04]
Aaaaarg.org

[12:08]
It started out small. In a way, it was an extension of what I think of as a
practice that all of us are familiar with, which is sharing books that we’ve
read, or sharing articles that we’ve read, especially if your work is somehow
in relationship to things that you might be reading. [12:41] In my
architecture school, for instance, we would read lots and lots, and then we’d
be making work in parallel. It wouldn’t be that either would determine the
other, but in the end, there is a strong relationship between the ideas that
you have and what you see as possible, and the things that you are reading.
[13:07] So as part of the student culture, especially among my friends, the
people that I identified with in school, we’d be discovering different parts
of the library independently. And then when we found something that was quite
moving in whatever way then we would photocopy it to keep it for ourselves
later. [13:34] And we’d also give it to each other as a kind of secret tool,
or something like that, you know, like you have the sense that when you found
something that is really good – and especially if other people aren’t even
interested – then you feel really empowered by having access to that, by being
able to read it and reread it. [14:02] And then you feel more empowered when
there is a community of other people. It may be a small one, but who have read
that thing as well, because then you start building a kind of shared frame of
reference, a shared vocabulary and a shared way of seeing the world, and
seeing what you’re working on. [14:22] And I think out of that comes projects,
like you actually work on projects together, you collaborate, you correspond
with other people or you actually share the work. And that’s what happened.
[14:41] I started Aaaaarg.org after I moved from New York to Los Angeles, so I
was quite far away from some of the people that I was working with – and just
continuing with that very basic activity of sharing reading material in order
to have that shared vocabulary to be able to work together.

[15:08]
Content

[15:12]
It turned out to be architecture at the very beginning. But we all had really
broad understandings of what architecture meant and what it included, so there
was a lot of media theory, art history and philosophy, and occasionally some
architecture too. [15:38] And so that became the initial kind of seed. And as
the site expanded from there, it came to include not just me and some
collaborators, but collaborators of collaborators, and then friends of those
people, and so on. [16:03] It’s kind of a ripple effect outwards.
What happened was something that is quite common to almost any platform, which
is this kind of feedback. Even in an open structure, it's never truly open.
There’re always rules in place, there’s always a past history, and those two
things go a long way to influence what happens in the future. [16:33] I’m sure
a lot of people will come to the site who are interested in one thing, and
then find nothing in the site that speaks to them, and then disappear. Whereas
for other people the site really spoke to them, and so what they contributed
would also fit that sense, that inclination.

[16:59]
Dynamics of growth and community-building

[17:04]
Especially when I’m involved in this kind of project, I don’t like being
alone. Obviously it contributes a lot to the work, not only because there’s
more people, but actually the kind of relationships and negotiations that
happen in that work are interesting in themselves. [17:29] So anyway, it was
never all that interesting for it to be a private library. I mean, we all have
private libraries, but there is this potential as well, which I think wasn’t
part of the project at the beginning, it really was a tool for sharing in a
particular kind of context. [17:56] But I think, obviously, you know, once
people saw it then they saw a sort of potential in it, because you see what
happens on the Internet and you know that in certain cases you can read from
it and you can write to it. [18:18] And you also know that, although there
still [are] various forms of digital exclusion, that it's quite accessible
relative to other forms, other libraries, like university libraries, for
instance.

[18:37]
Cornelia Sollfrank: It’s not just about having access to certain material, but
what is related to it, and what’s really important, is the dynamics of
building a community and the context, and even smaller discourses around
certain issues, which you don’t have necessarily if you just download a text.
Then you have the text but you don’t have somebody to talk to, or you don’t
write your opinion about it to someone. So that’s, I think, what comes with
the project, which makes it very valuable to a lot of people.

[19:13]
Yes. That’s going back to what I was saying about some of the failures before
The Public School, which was... As the site was growing, as Aaaaarg was
growing, all of a sudden there would be things in there that I didn’t know
about before, that someone felt it was important to share. [19:37] And because
someone felt that it was important to share it, I felt it was important to
read it. And I did, but then I wanted to read it with other people. [19:51]
So, some of those reading groups were always attempts to produce some social
context for the theory.
[20:06] Having a library – as if the archive itself were the library – isn't
really that interesting to me in itself. What's interesting is having some
social context that I can feel involved in (not that I ‘have’ to be involved
in it), but having some social context to make use of that reading material.

[20:42]
Copyright

[20:47]
At the beginning it was never a component of the project, because of that sort
of natural extension between what I see as a perfectly… something that I think
that we all do already. And especially in architecture and art, if you are
involved in reading you give books to people. Like you gave me your book…  And
I’ve passed on a number of books. [21:34] If I print out something to read and
I’m done with it, then I’m more likely to pass it on than I am to shred it – do I
have to keep it in my closet forever, and what do I do with it? If I think I’m
truly done with it, even for a moment, then I’m more likely to pass it on.
[22:00] So at the beginning it had nothing to do with piracy, it had
everything to do with wanting to share things with other people. And a lot of
times it's not just in this abstract “I kind of like to share,” but it was
project-based, and I think it became a little bit more abstract. [22:24] But I
think actually over time, when people were sharing things, sometimes they did
it with this sort of abstract recipient of that sharing in mind, and they would
think, “I have access to this and I know that other people want access to it,
and so that’s going to be why I share it.” [22:46] In other cases, I know that
people were trying to organise a reading group, and this is quite common,
which is that people would be organising something and then how are they going
to distribute the reading material. Yes, they could give everyone a link to
Amazon so they all order their own book, maybe that would be better for
Amazon. [23:13] But there are other ways that they would organise the
reading material. A lot of times the stuff they wanted to read was
already on Aaaaarg. Sometimes they had to upload a few new things. [23:26] And
so that’s how a lot of it grew and that’s why people are involved. And I think
sharing was what drove the project. And then for three years there wasn’t
anything even relating to copyright issues. No one complained
for all that time. [23:53] And then when complaints came in, you know, we
responded by taking things down. It was quite simple. [24:05] But then later in
the life of the project, the copyright problems sort of, in a way,
retroactively made the project more about piracy than about sharing.

[24:22]
Attempts to control file-sharing

[24:26]
Either through making activity which used to be legal, or which used
to be in a kind of grey area because there wasn’t a framework in place for it,
illegal – drawing hard lines to say that something is now illegal. [24:46] And
then there are the technological forms of negation, I think, which is to
actually make it impossible for people to do something that they used to be
able to do – signing copies of a file and not allowing it to open if it’s not
opened in the right place, or through the cloud, through these kinds of new
marketing opportunities of centralising a lot of files in one place and then
governing the access through sites like Spotify. [25:29] Amazon does
the same thing, you know, also with their e-books, where they own the device,
the distribution network and the servers. And so by controlling the entire
pipeline, there’s a lot more control over what people do. [25:51] For
instance, you have to jailbreak the Kindle in order to share a book. Again,
something that we used to be able to do, now we actually have to break the law
or break our devices. [26:05] So these two things, I think, are how it gets
dealt with. And of course, there’s always responses to those things. [26:12] I
think the technological one is a big [one] ... to me that’s the more
challenging one, especially now, because what’s been produced is much more
miniaturised and a lot more difficult to...

C.S.: Hack?

[26:30] Yes. And also you can’t hack the server farm that’s located in, you
know, this really remote part of some country that you’ve never been to.
Shouldn’t say never. In fact, I’ll say never, just to see if someone can.
[26:50] Positive things would be to say, if we take a more expansive view of
the economy, look at who is making money, and then make an appeal for that.
Because there are people who are making money, like Apple is making a lot of
money, and other people who aren’t making money. [27:15] And I don’t think you
can blame the readers, for instance, for the fact that writers and publishers
aren’t making money, because the readers are caught up in that too, by
the same forces. [27:28] So you look at who is making the money, and I think
that is a political argument that needs to be made: that this money is
actually being kind of hoarded by some of these companies, because they are
sort of gaming the system and the restructuring of the economy, but also how
we consume entertainment, and all these kinds of things, and the restructuring
of production around the globe.
[27:59] I don’t think sites like Aaaaarg do anything more than point out a
kind of dynamic that is existing in the world – to think that somehow you can
sort of turn that into something positive, you know, in a way that gets
capitalism to stop exploiting people – like it seems silly to me, capitalism
exploits people...
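
The technological control described at [24:46] – signing copies of a file and not allowing them to open in the wrong place – can be pictured as a license token bound to a device. The following is only a minimal sketch of that idea, with hypothetical names throughout; real DRM systems hide the signing key inside the reader hardware or software:

```python
import hashlib
import hmac

VENDOR_KEY = b"secret-held-by-the-vendor"  # hypothetical signing key

def issue_license(file_id: str, device_id: str) -> str:
    """Vendor side: sign a (file, device) pair at purchase time."""
    message = f"{file_id}:{device_id}".encode()
    return hmac.new(VENDOR_KEY, message, hashlib.sha256).hexdigest()

def may_open(file_id: str, device_id: str, token: str) -> bool:
    """Reader side: refuse to open the file unless the token matches this device."""
    expected = issue_license(file_id, device_id)
    return hmac.compare_digest(expected, token)

token = issue_license("ebook-123", "device-A")
print(may_open("ebook-123", "device-A", token))  # True: opens where it was bought
print(may_open("ebook-123", "device-B", token))  # False: passing the file on fails
```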

[28:31]
Publishing landscape

[28:35]
I think that the role of the publishers is already changing, because of the
Internet and because of companies like Amazon, who changed not only the
selling of books and the bookstore, but also the entire
distribution model, which then changes the way publishers work – and more and
more, even the entire life cycle of a book, you know, from the writing to the
sort of organisation and communication, to the distribution, to the
consumption. [29:09] The entire life cycle of a book is happening through
these networks, from the software that we write it on, and where that stuff is
stored, you know – is it Google Docs or some other thing? – and our e-mails
that are circulating, and the accounting software. [29:31] A lot of it is
changing through the entire pipeline anyway, so to me, it’s really difficult
to say how publishing is changing because the entire flow, the entire
apparatus is changing.
[29:48] At the beginning, Aaaaarg was a way of bringing readers together, and
to allow readers to sort of give value to certain things that they were
reading. And I think that’s always been a form of publishing to me. [30:09]
Yes, someone is responsible for having the book edited, having it printed,
distributing it, there’s a huge material expense in all of that. [30:21] But
then you also have the life of the book after it gets to the store. And it
continues to have a life, like sometimes it lives for decades and decades, and
it goes between readers, it goes through sidewalk vendors, and used book
stores, and sits in people’s libraries, and goes to public libraries. [30:44]
And I would say that Aaaaarg is sort of in that part of the life cycle.
[30:54] These platforms become sort of new publishers themselves, but I
haven’t really thought that kind of statement through enough. In a way, if
publishing is to make something public and to create publics, then of course,
that’s something that Aaaaarg has done since the beginning. [31:22] It made
things public for people for whom they maybe didn’t exist before, and it also
produced communities of people around books – I mean, if that’s what a
publication and a publisher does, then, of course, it kind of does that within
the context of the Internet, and it does that by both using and producing
social relations between people.

[31:50]
Reading / books

[31:54]
I have lots of books, and I buy them from anywhere. I buy them, as much as it
pains me to admit it, I buy them from Amazon, I buy them from bookstores, I
buy them from used book stores, I buy them on the street, I find them in
trash, I’ve photocopied so many parts of books at the library, because they
didn’t circulate or something, or because I only had four hours to look at the
book; I’ve gotten things for my friends, I’ve gotten things from classes that
I used to take when I was a student but I still have. [32:37] And then with
the Internet, then I'd see it on a screen, sometimes I print that out, you
know. I’m not a purist in any way about reading or about books, I’m not
particularly sentimental about ‘the book.’ Even though I love books and I see
what’s nice about them, I think that every sort of form a book takes has its
own kind of… there’s something unique about it. [33:11] Honestly, this kind
of, let’s say, increase in EPUBs and PDFs hasn’t really changed my
relationship to books at all. It’s the same as it’s always been, which is,
I’ll read it however I can get it. And maybe there are slightly new forms now, and
sometimes I read on a little… I bought a TouchPad when they had a fire sale a
while ago, so I read on that.

[33:44] And maybe I’m making an obvious argument here, but you see, if you've
ever scanned a book you know that it takes time, and you know that you screw
up quite a lot, and sometimes those screw ups find their way in, and the
labour that goes into making a scan finds its way in. [34:02] And it’s only
through really good scans that you can manage to sort of eliminate a lot of
that, a lot of the traces of that labour. But I know that, in the entire
history of Aaaaarg, the files will always show the labour of the person who is
trying to get something up to share it with other people. It’s not a
frictionless, easy activity; there is work that’s involved in it. [34:31] And I
find some of the scans quite beautiful in that way, even when they
weren’t necessarily so good to read.
[34:41] There’s actually, if we go to scale… Again, I have way more books than
I could possibly read, physical books. And I’m going to continue buying more,
acquiring more through my entire life, I’m sure of it. And I think that’s just
part of loving books and loving to read, you have more than you can possibly
deal with. [35:11] And I think, on a level of scale, maybe, with the Internet
we find ourselves, in orders of magnitude, [with] more than we could possibly
deal with. But in a way, it’s the same kind of anxiety, and the limits are
more or less the same. [35:29] But then there are maybe even new opportunities
for new ways of reading that weren’t available before. I could flip through a
book in a certain way, but maybe now with the possibility of indexing the
whole content of a book, and doing searches, and creating ways of visually
displaying books and relationships between books, and between parts of books,
and these kinds of things, and also making lists, and making lists with other
people – all of these maybe provide new ways of reading which weren't
available. [36:13] And of course it means that other ways of reading then
get sort of buried and, you know, lost. And I’m sure that that's true too,
that slow deep reading maybe isn’t as prevalent as different types of
referencing and stuff. [36:32] Not to say that it’s totally identical, but
certainly an evolution. I don’t think that progression is so linear, that it’s
pure loss, or anything like that.
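
The "new ways of reading" mentioned here – indexing a book's whole content and searching across it – rest on a very old data structure, the inverted index. A minimal sketch of the idea only, not any particular product's implementation:

```python
from collections import defaultdict

def build_index(pages: dict[int, str]) -> dict[str, set[int]]:
    """Map every word in the book to the set of pages it occurs on."""
    index: dict[str, set[int]] = defaultdict(set)
    for page, text in pages.items():
        for word in text.lower().split():
            index[word.strip(".,;:!?")].add(page)
    return index

book = {
    1: "Structure is not simply a container for content.",
    2: "The content begins to affect the structure as well.",
}
index = build_index(book)
print(sorted(index["structure"]))  # [1, 2] – jump to every occurrence at once
```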

[36:44]
Form and content

[36:49] For me what’s interesting is to try and examine how structure and
form, or structure and content, form and content – I mean, that’s kind of
another on-going question, how structure is not divorced from content.
Structure is not simply a container for the content, any more than the mind
and body are distinct entities – but that the structure that something takes
influences the shape that content takes, and also the ways that people might
approach that content, or use it, and this kind of thing. And likewise, the
content begins to affect the structure as well. [37:47] Why I’m interested in
structures is because they aren’t deterministic, they don’t determine what’s
going to happen. And all the projects that you mention are things that I think
of, let’s say, as platforms or something, in the sense that they have… they
involve a lot of people quite often, more than just me, and they also have…
the duration is not specified in advance, and what’s going to happen in them
is not specified in advance. [38:30] So they’re experimental in that way, and
they have that in common. And that is what’s interesting to me, is the
production of situations where we don’t know what’s going to happen. [38:51]
And sometimes when focusing on a work you have a vision for what that work is
going to be, and then all your work goes into realising that, and, of course,
you have surprises along the way, but then you get something that surprisingly
ends up like what you kind of imagined at the beginning – that way of working
doesn’t really interest me. I sort of become bored pretty early on in that
process. [39:23] Whereas the kind of longer term thing where the initial
conditions actually produce a situation that’s a little unstable, and
therefore what happens is also kind of unpredictable and unstable, to me this
is about opening up other possibilities for things as small as being together
for a short time, but also as big as ways of living.

[40:00] On the one level, these are structural projects, but on another level
they are all kind of structural appropriations in a way, or appropriations of
structures, like from a gallery, a library, a school, another gallery. [40:23]
And I was actually thinking that I kind of wish that (and I imagine
soon, maybe in the next decade or two) an art historian will make this kind of
argument for evolving the concept of appropriation, to go beyond objects to…
Because in a way appropriation enters into the discourse when reproduction…
[40:52] I think appropriation has been something, let’s say, that maybe is a
historical concept. So at a certain point in history maybe it even has a
different name, there’s different ways that it happens, there are different
cultural responses to it. [41:09] And I think that in the twentieth century,
especially with mechanical reproduction, appropriation becomes quite clear
what it is, because images or sounds, you know, things became distributed and
available for people to actually materially use. [41:30] And the tools that
people have available to make work as well allow for this type of reuse of
what’s being circulated through the world. [41:45] And I guess what I’m sort
of saying is, if that’s appropriation of objects, then there might even be a
time now, especially as the economy sort of shifted from being simply about
commodities – the production, sale and consumption of commodities – to now,
when, if we try to understand the economy critically, it’s something that’s much
more complicated – it involves financialization, debt and derivative trading,
and all these kinds of things. [42:25] And so, perhaps also if appropriation is
a historical idea, then appropriation also needs to be updated, and this would
mean – for me this would mean appropriation of systems. [42:46] So rather than
the appropriation of what’s been distributed, it’s the appropriation of the
system of distribution. And to me these are also projects that I get excited
about at the moment. [43:04] In a way it also makes sense, because if
photographs were circulating around the world, and that was, you know, a new
thing, to see that sort of imagery circulating in that way, at a certain point
in time a century ago; then now I think we are even having a similar reaction
to something like Facebook, which to me kind of comes out of nowhere, and
suddenly it exists in the world as a structure that is organising a certain
part of the activity of, you know, hundreds of millions of people. [43:47] And
so I think, in a way, that’s the level on which maybe we can start thinking of
appropriation, at a level of this kind of large scale systems. But then that
brings up a whole new set of questions, like what do you call that, number
one. Number two, obviously the legal framework that’s in place will cause
problems.


Sollfrank, Francke & Weinmayr
Piracy Project
2013


Giving What You Don't Have

Andrea Francke, Eva Weinmayr
Piracy Project

Birmingham, 6 December 2013

[00:12]
Eva Weinmayr: When we talk about the word piracy, it causes a lot of problems
for quite a few institutions. So events that we’ve organised
have been announced by Central Saint Martins without using the word piracy.
That’s interesting, the problems it still causes…

Cornelia Sollfrank: And how do you announce the project without “Piracy”? The
Project?

E. W.: It’s a project about intellectual property.

C. S.: The P Project.

Andrea Francke, Eva Weinmayr: [laugh] Yes.

[00:52]
Andrea Francke: The Piracy Project is a knowledge platform, and it is based
around a collection of pirated books, of books that have been copied by
people. And we use it to raise discussion about originality, authorship,
intellectual property questions, and to produce new material, new essays and
new questions.

[01:12]
E. W.: So the Piracy Project includes several aspects. One is that it is an
act of piracy in itself, because it is located in an art school, in a library,
as an officially built-up collection of pirated books. [01:30] And that’s the
second aspect: it’s a collection of books which have been copied,
appropriated, modified, improved, which live in this library. [01:40] And the
third part is that it is a collection of physical books, which is touring. We
create reading rooms and invite people to explore the books and discuss issues
raised by cultural piracy.
[01:58] The Piracy Project started in an art college library, which was
supposed to be closed down. And the Piracy Project is one project of And
Publishing. And Publishing is a publishing activity exploring print-on-demand
and new modes of production and of dissemination, the immediacy of
dissemination. [02:20] And Publishing is a collaboration between myself and
Lynn Harris, and we were hosted by Central Saint Martins College of Art and
Design in London. And the campus where this library was situated was the
campus we were working at. [02:40] So when the library was being closed, we
moved into the library together with other members of staff, and kept the
library open in a self-organised way. But we were aware that there’s no budget
to buy new books, and we wanted to have this as a lively space, so we created
an open call for submissions and we asked people to select a book which is
really important to them and make a copy of it. [03:09] So we weren’t
interested in piling up a collection of second hand books, we were really
interested in this process: what happens when you make a copy of a book, and
how does this copy sit next to the original authoritative copy of the book.
This is how it started.

[03:31]
A. F.: I met Eva at the moment when And Publishing was helping to set up this
new space in the library, and they were trying to think how to make the
library more alive inside that university. [03:44] And I was doing research on
Peruvian book piracy at that time, and I had found this book that was modified
and was in circulation. And it was a very exciting moment for us to think what
happens if we can promote this type of production inside this academic
library.

[04:05]
Piracy Project: Collection / Reading Room / Research

[04:11]
The Collection

[04:15]
E. W.: We asked people to make a copy of a book which is important to them and
send it to us, and with these submissions we started to build up the
collection. Lots of students were getting involved, but also lots of people
who work on these topics and were interested in them. [04:38] So we
received about one hundred books in a couple of months. And then, parallel to
this, we started to do research ourselves. [04:50] We had a residency in
China, so we went to China, to Beijing and Shanghai, to meet illegal
booksellers of pirated architecture books. And we had a residency in Turkey,
in Istanbul, where we did lots of interviews with publishers and artists on
book piracy. [05:09] So the collection is a mix of our own research and cases
from the real book markets, and creative work, artistic work which is produced
in the context of an art college and the wider cultural realm.

[05:29]
A. F.: And it is an ongoing project.

E. W.: The project is ongoing, we still receive submissions. The collection is
growing, and at the moment here we have about 180 books, here at Grand Union
(Birmingham).

[05:42]
A. F.: When we did the open call, something that was really important to us
was to make clear for people that they have a space of creativity when they
are making a copy. So we wrote, please send us a copy of a book, and be aware
that things happen when you copy a book. [05:57] Whether you do it
intentionally or not, a copy is never the same. So you can use that space, take
ownership of that space and make something out of that; or you can take a step
back and allow things to happen without having control. And I think that is
something that is quite important for us in the project. [06:12] And it is
really interesting how people have embraced that in different measures, like
subtle things, or material things, or adding text, taking text out, mixing
things, judging things. Sometimes just saying, I just want it to circulate, I
don’t mind what happens in the space, I just want the subject to be in the
world again.

[06:35]
E. W.: I think this is one which I find interesting in terms of making a copy,
because it’s not so much about my own creativity, it’s more about exploring
how technology edits what you can see. It’s Jan van Toorn’s Critical Practice,
and the contributor is Hester Barnard, a Canadian artist. [07:02] She sent us these
three copies, and we thought, that’s really generous, three copies. But they
are not identical copies, they are very different. Some have a lot of empty
pages in the book. And this book has been screen-captured on a 3.5 inch
iPhone, whereas this book has been screen-captured on a desktop, and this one
has been screen-captured with a laptop. [07:37] So the device you use to
access information online determines what you actually receive. And I find
this really interesting, that she translated this back into a hardcopy, the
online edited material. [07:53] And something similar happens with this book,
standard International Copyright. She went to Google Books, and screen-
captured all the pages Google Books was showing. So we are all familiar with
blurry text pages, but then you start to get the message “Page 38 is not
shown in this preview.” [08:18] And then it goes through the whole book, so
she printed every page basically, omitting the actual information. But the
interesting thing is that we are all aware that this is happening on Google,
on screen online, but the fact that she’s translating this back into an
object, into a printed book, is interesting.

[08:44]
Reading Room

[08:48]
A. F.: We create these reading rooms with the collection as a way to tour the
collection, and meet people and have conversations around the books. And that
is something quite important to us, that we go with the physical books to a
place, either for two or three months, and meet different people that have
different interests in relation to the collection in that locality. We’ve been
doing that for the last two years, I think, three years. [09:12] And it’s
quite interesting because different places have very different experiences of
piracy. So you can go to a country where piracy is something very common, or a
different place where people have a very strong position against piracy, or a
different legal framework. And I feel the type of conversations and the
quality of interactions is quite different from being present in the space and
with the books. [09:36] And that’s why we don’t call these exhibitions,
because we always have places where people can come and they can stay, and
they can come again. Sometimes people come three or four times and they
actually read the books. And a few times they go back to their houses and they
bring books back, and they say, I’m going to contact this friend who has been
to Russia and he told me about this book – so we can add it to the collection.
I think that makes a big difference to how the research in the project
functions.

[10:06]
E. W.: One of the most interesting events we did with the Piracy collection
was at The Showroom, where we had a residency for the last year. There were
three events, and one was A Day At The Courtroom. This was an afternoon where
we invited three copyright lawyers coming from different legal systems: the
US, the UK, and the Continental European, Athens. And we presented ten
selected cases from the collection and the three copyright lawyers had to
assess them in the eyes of the law, and they had to agree where to put each
book on a scale from legal to illegal. [10:51] So we weren’t really interested
in saying, this is legal and this is illegal; we were interested in all the
shades in between. And then they had to discuss where they would place the
book. But then the audience had the last verdict, and then the audience placed
the book. [11:05] And this was an extremely interesting discussion, because it
was interesting to see how different the legal backgrounds are, how blurry the
whole field is, how you can assess the moment when a work becomes a
transformative work, or when it stays a derivative work, and this whole
discussion.
[11:30] When we do these reading rooms – and we had one in New York, for
example, at the New York Art Book Fair – people are coming, and they are
coming to see the physical books in a physical space, so this creates a social
encounter and we have these conversations. [11:47] For example, a woman came
up to us in New York and told us about a piracy project she ran when she
was working in a juvenile detention centre. She produced a whole shadow
library of books, because the incarcerated kids couldn’t take the books into
their cells, so she created these copies, individual chapters, and they could
circulate. [12:20] I’m telling this because the fact that we are having this
reading room and that we are meeting people, and that we are having these
conversations, really furthers our research. We find out about these projects
by sharing knowledge.

[12:38]
Categories

[12:42]
A. F.: Whenever we set up our reading room for the Piracy Project we need to
organise the books in a certain way. What we started to do now is that we’ve
created these different categories, and the first set of categories came from
the legal event. [12:56] So we set up, we organised the books in different
categories that would help us have questions for the lawyers, that would work
for groups of books instead of individual works. [13:07] And the idea is that,
for example, we are going to have our next events with librarians, and a new
set of categories would come. So the categories change as our interest or
research in the project is changing. [13:21] The current categories are:
Pirated Design, books where the look of the book has been copied but not
the content; Recirculation, books that have been copied trying to be
reproduced exactly as they were, because they need to circulate again;
Transformation, books that have been modified; First Sale Doctrine, because we
receive quite a few books where people haven’t actually made a copy but
have cut the book or drawn inside it – and legally you are allowed to do
anything with a book except copy it – so it was quite important that we
didn’t have to discuss those with the lawyers; [14:03] Public
Domain, works that are already out of copyright, so again, whatever
you do with those books is legal; and Collation, books gathered from
different sources, which raises the really interesting question of who owns
the copyright when a book has many authors. Different systems in different
countries have different ways to deal with who owns the copyright and what
the rights of the owners of the different works are.

[14:36]
E. W.: Ahmet Şık is a journalist who wrote a book about the Ergenekon
scandal and the Turkish government, connecting them to these kinds of mafioso
structures. Before the book could be published he was arrested and put in jail
for a whole year without trial, but he had sent the PDF to friends, and the PDF
was circulating on so many different computers that it couldn’t be seized. [15:06]
They published the PDF, and as authors they put over a hundred different
author names, so there was not just one author who could be held
responsible.

[15:22] We have in the collection this book, it’s Teignmouth Electron by
Tacita Dean. This is the original, it’s published by Book Works and Steidl.
And to this round table, to this event, we invited also Jane Rolo, director of
Book Works (and she published this book). [15:41] And we invited her saying,
do you know that your book has been pirated? So she was really interested and
she came along. This is the pirated version, it’s Alias, [by] Damián Ortega in
Mexico. It’s a series of books where he translates texts and theory into
Spanish, which are not available in Spanish. So it’s about access, it’s about
circulation. [16:07] But actually he redesigned the book. The pirated version
looks very different, and it has a small film roll here, from Tacita Dean’s
book. And it was really amazing that Jane Rolo flipped through the pirated book
and said, well, actually this is really very nice.

[16:31] This is kind of a standard academic publishing format, it’s Gilles
Deleuze’s Proust and Signs, and the contributor, the artist who produced the
book is Neil Chapman, a writer based in London. And he made a facsimile of his
copy of this book, including the binding mistakes – so there’s one chapter
printed upside down in the book. [17:04] But the really interesting thing is
that he scanned it on his own scanner
and then printed it on his home inkjet printer. And the feel of it is very
crafty, because the inkjet has a very different typographic appearance than
the official copy. [17:28] And this makes you read the book in quite a
different way, you relate differently to the actual text. So it’s not just
about the information conveyed on this page, it’s really about how I can
relate to it visually. I found this really interesting when we put this book
into our collection in the library and it sat next to the
original: [17:54] it raises really interesting questions about what kind of
authority decides which book can enter the library, because this is
definitely and obviously a self-made copy – so if this self-made copy can
enter the library, any self-made text and self-published copy could enter the
library. So it was raising really interesting questions about gatekeepers of
knowledge, and hierarchies and authorities.

[18:26]
On-line catalogue

[18:30]
E. W.: We created this online catalogue to give an overview of what we have in
the collection. We have a cover photograph and then we have a short text where
we try to frame and to describe the approach taken, like the strategy, what’s
been pirated and what was the strategy. [18:55] And this is quite a lot,
because it’s giving you the framework of it, the conceptual framework. But
it’s not giving you the book, and this is really important because lots of the
books couldn’t be digitised, because it’s exactly their material quality which
is important, and which makes the point. [19:17] So if I have a
project which is about mediation, and then I put another layer of
mediation on top of it by scanning it, it just wouldn’t work anymore.
[19:29] The purpose of the online catalogue isn’t to give you insight into all
the books or to actually make all the information available, it’s more to talk
about the approach taken and the questions which are raised by this specific
book.

[19:47]
Cultures of the copy

[19:51]
A topic of cultural difference became really obvious when we went to Istanbul,
to a copy shop which had many academic titles on the shelves – copied, pirated
titles. The fact is that in London, where I’m based, you can access anything
in any library, and it’s not too expensive to get the original book. [20:27]
But in Istanbul it’s very expensive, and the whole academic community thrives
on pirated, copied academic titles.

[20:39]
A. F.: So this is the original Jaime Bayly [No se lo digas a nadie], and this
is the pirated copy of the Jaime Bayly. This book is from Peru, it was bought
on the street, on a street market. [20:53] And Peru has a very big pirated
book market, most books in Peru are pirated. And we found this because there
was a rumour that books in Peru had been modified, pirated books. And this
version, the pirated version, has two extra chapters that are not in the
original one. [21:13] It’s really hard to understand the motivation behind it.
There’s no credit, so the person is inhabiting this author’s identity in a
sense. They are not getting any cultural capital from it. They are not getting
extra money, because if they are found out, nobody would buy books from this
publisher anymore. [21:33] The chapters are really well written, so you as a
reader would not realise that you are reading something that has been pirated.
And that was really fascinating in terms of what space you create. So when you
have this technology that allows you to have the book open and print it so
easily – how can you take advantage of that, and take ownership or inhabit
these spaces that technology is opening up for you.

[22:01]
E. W.: Book piracy in China is really important when it comes to architecture
books, Western architecture books. Lots of architecture studios, but even
university libraries would buy from pirate book sellers, because it’s just so
much cheaper. [22:26] And we’ve found this Mark magazine with one of the
architecture sellers, and it’s supposed to be a bargain because you have six
magazines in one. [22:41] And we were really interested in the question, what
are the criteria for the editing? How do you edit six issues into one? But
basically everything is in here, from advertisement, to text, to images, it’s
all there. But then a really interesting question arises when it comes to
technology, because in this magazine there are pages in Italian, clearly
taken from other magazines.

[23:14]
A. F.: But it was also really interesting to go there, and actually interview
the distributor and go through the whole experience. We had to meet the
distributor in a neutral place, and he interviewed us to see if he was going
to allow us to go into the shop and buy his books. [23:31] And then going
through the catalogue and realising how Rem Koolhaas is really popular among
the pirates, but actually Chinese architecture is not popular, so there’s only
like three pirated books on Chinese architecture; or that from all the
architecture universities in the world only the AA books are copied – the
Architectural Association books. [23:51] And I think those small things are
really things that are worth spending time and reflecting on.

[23:58]
E. W.: We found this pirate copy of Tintin when we visited Beijing, and
compared to the original it obviously looks different – a different format.
And it’s black and white, but it’s not a photocopy of the full-colour
original. [24:23] It’s redrawn by hand, so all the drawings are redrawn and
obviously translated into Chinese. This is quite a labour of love, which is
really amazing. I can compare the two. The space is slightly differently
interpreted.

[24:50]
A. F.: And it’s really incredible, because at some point in China there were
14 or 15 different publishers publishing Tintin, and they all have their
versions. They are all hand-drawn by different people, and in the back, in
Chinese, there’s the credit. So you can buy it by deciding which person does the
best drawings of Tintin, which I thought was really…
[25:14] It’s such a different cultural way to actually give credit to the
person that is copying it, and recognise the labour, and the intention and the
value of that work.

[25:24]
Why books?

[25:28]
E. W.: Books have always been very important in my practice, in my artistic
practice, because lots of my projects culminated in a book, or led into a
book. And publications are important because they can circulate freely, they
can circulate much easier than artworks in a gallery. [25:50] So this question
of how to make things public and how to create an audience… not how to create
an audience – how to reach a reader and how to create a dialogue. So the book
is the perfect tool for this.

[26:04]
A. F.: My interest in books comes from making art, or thinking about art as a
way to interact with the world, so outside art settings, and I found books
really interesting in that. And that’s how I met Eva, in a sense, because I
was interested in that part of her practice. [26:26] When I found the Jaime
Bayly book, for me that was a real moment of excitement, of this person that
was doing these things in the world without taking any credit, but was having
such a profound effect on so many readers. I’m quite fascinated by that.
[26:44] I'm also really interested in research and using events – research
that works with people. So it kind of creates communities around certain
subjects, and then it uses that to explore different issues and to interact
with different areas of knowledge. And I think books are a privileged space to
do that.

[27:11]
E. W.: The books in the Piracy collection, because they are objects you can
grab, and because they need a place, they are a really important tool to start
a dialogue. When we had this reading room in the New York Art Book Fair, it
was really the book that created this moment when you started a conversation
with somebody else. And I think this is a very important moment in the Piracy
collection as a tool to start this discussion. [27:44] In the Piracy
collection the books are not so important to circulate, because they don’t
circulate. They only travel with us, in a way, or they travel here to Grand
Union to be installed in this reading room. But they are not meant to be
printed in a print run of thousands and circulated in the world.

C. S.: So what is their function?

[28:08]
E. W.: The functions of the books here in the Piracy collection are to create
a dialogue and debate about the issues they are raising, and to be a tool
for a direct encounter, a social encounter. As Andrea said, building a
community which is debating these issues. [28:32] And I
also find it really interesting – when we where in China we also talked with
lots of publishers and artists, and they said that the book, in comparison to
an online file, is a really important tool in China, because it can’t be
controlled as easily as online communication. [28:53] So a book is an
autonomous object which can be passed on from one hand to the other, without
the state or another authority intervening. I think that is an important
aspect when you talk about books in comparison with circulating information
online.

[29:13]
Passion for piracy

[29:17]
A. F.: I’m quite interested in enclosures, and people that jump those
enclosures. I’m kind of interested in these imposed… Maybe because I come from
Peru and we have a different relation to rules, and I’m in Britain where rules
seem to have so much strength. And I’m quite interested in this agency of
taking personal responsibility and saying, I’m going to obey this rule, I’m
not going to obey this one, and what does that mean. [29:42] That makes me
really interested in all these different strategies, and also to find a way to
value them and show them – how when you make this decision to jump a rule, you
actually help bring up questions, modifications, and propose new models or new
ways about thinking things. [30:02] And I think that is something that is part
of all the other projects that I do: stating the rules and the people that
break them.

[30:12]
E. W.: The pirate as a trickster who tries to push the boundaries which are
being set. And I think the interesting, or the complex part of the Piracy
Project is that we are not saying, I’m for piracy or I’m against piracy, I’m
for copyright, I’m against copyright. It’s really about testing out these
decisions and one’s own boundaries, the legal boundaries, the moral limits – to
push them and find them. [30:51] I mean, the Piracy Project as a whole is a
project which is pushing the boundaries because it started in this academic
library, and it’s assessed by copyright lawyers as illegal, so to run such a
project is an act of piracy in itself.

[31:17]
This method of doing or approaching this art project is to create a
collaboration to instigate this discourse, and this discourse is happening on
many different levels. One of them is conversation, debate. But the other one
is this material outcome, and then this material outcome is creating a new
debate.

Sollfrank & Goldsmith
The Poetry of Archiving
2013


Kenneth Goldsmith
The Poetry of Archiving

Berlin, 1 February 2013

[00:12]
Kenneth Goldsmith: The type of writing I do is exactly the same thing that I
do on UbuWeb. And that’s the idea that nothing new needs to be written or
created. In fact, it's the archiving and the gathering and the appropriation
of preexisting materials, that is the new mode of both writing and archiving.
[00:35] So you have a system where writing and archiving have become
identical today.

[00:43]
UbuWeb

[00:47]
It started in 1996, and it began as a site for visual and concrete poetry,
which was a mid-century genre of typographical poetry. I got a scanner, and I
scanned a concrete poem. And I put it up on Ubu, and on those days the images
used to come in as interlaced GIFs, every other line filling in. So really it
was an incredible thing to watch this poem kind of grow organically. [1:21]
And it looked exactly like concrete poetry had always wanted to look – a
little bit of typographical movement. [1:27] And I thought, this is perfect.
And also, because concrete poetry is so flat and modernist, when it was
illuminated from the back by the computer screen it looked beautiful and
graphic and flat and clean. [1:40] And suddenly it was like: this is the
perfect medium for concrete poetry. Which, I would say, is still very much a
part of Ubu. [1:50] And then, a few years later RealAudio came, and I began
to put up sound poetry, you know, little sound files of sound poetry. So you
could look at the concrete poetry and listen to the sound poetry. [2:07] And a
few years later we had a little bit more bandwidth, and we began to put on
videos. So this is the way the site grew. [2:16] But also what happened on Ubu
was an odd thing. Because it was concrete poetry, I put up the poems of
John Cage – the concrete mesostics of John Cage. And then I got a little bit
of sound of John Cage reading some of these things, and suddenly it was Cage
reading a mesostic with an orchestra behind him. [2:40] And I said, wait a
minute, this is no longer sound poetry, this is something else. And I thought,
what is this? And I said, ah, this is avant-garde.
[2:50] And so from there, because of Cage and Cage's practice, the whole thing
became a repository and archive for the avant-garde, which it is today. So
that's how it moved from being specifically concrete poetry in 1996, to today
being all avant-garde.
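
The "growing" effect described here is a property of the format itself: an interlaced GIF stores its rows in four passes (every eighth row, then the rows offset by four, then by two, then the remaining odd rows), so a coarse image appears at once and the detail fills in. A small sketch of that row ordering, following the published GIF89a scheme:

```python
def gif_interlace_order(height: int) -> list[int]:
    """Return the order in which an interlaced GIF's scanlines are stored."""
    passes = [(0, 8), (4, 8), (2, 4), (1, 2)]  # (first row, step) for each pass
    order: list[int] = []
    for first, step in passes:
        order.extend(range(first, height, step))
    return order

# For a 16-row image: coarse rows arrive first, the odd rows fill in last.
print(gif_interlace_order(16))
# [0, 8, 4, 12, 2, 6, 10, 14, 1, 3, 5, 7, 9, 11, 13, 15]
```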

[3:09]
Avantgarde

[3:15]
[3:30] And then something happened in the digital, where it seemed to... All
of that fell off. Because we already knew that. [3:42] So it was an orphan
term. It became detached from its nefarious pre-digital context. And it was an
open term. [3:51] I was like, we can actually use this term again, avant-
garde, and redefine it as a way of, you know, multi-media, impurity,
difference, all sorts of ways that it was never allowed to be used before. So
I've actually inhabited this term, and repurposed it. [4:15] So I don't really
know what avant-garde is, it's always changing. And UbuWeb is an archive that
is not pure avant-garde. You look at it and say, no, things are wrong there.
There's rock musicians, and there's performance artists, and there's
novelists. [4:33] I mean, it doesn't quite look like the avant-garde looked
before the digital. But then, everything looks different after the digital.

[4:41]
Selection / curation

[4:46]
I don't know anything. I am a poet. I'm not a historian, I'm not an academic.
I don't know anything, I've just got a sense: that might be interesting, that
sort of feels avant-garde. I mean, it is ridiculous, it's terrible: I am the
wrong person to do this. But, you know, nobody stopped me, and so I've been
doing it. You know, anybody can do it. [5:11] It's very hard to have something
on Ubu, and that's why it's so good. That's why it's not an archive-type of work,
where everything can go, and there are good things there, but there is no one
working as a gatekeeper to say, actually, this is better than that. [5:26] And
I think one of the problems with net culture, or the web culture, is that
we've decided to suspend judgment. We can't say that one thing is better than
another thing, because everything is equal. There's a part of me that really
likes that idea, and it creates fabulous chaos. But I think it is a sort of a
curatorial job to go in and make sense of some of that chaos. In a very small
way, that's what I try to do on UbuWeb. [5:52] You know, it's the avant-garde,
it's not a big project. It's a rather small slice of culture about which one can
have a point of view. I'm not saying that's right. It's probably very wrong. But
nobody else is doing it, so I figured, you know… [6:12] But by virtue of the
fact that there's only one UbuWeb, it's become institutional. And the reason
that there is only one UbuWeb is that UbuWeb ignores copyright. And everybody
else, of course, is afraid of copyright. There should be hundreds of UbuWebs.
It is ridiculous that there's only one. But everybody else is afraid of
copyright, so nobody would put anything up. [6:41] We just act like
copyright doesn't exist. Copyright, what's that? Never heard of it.

[6:48]
Contents

[6:52]
I think that these artifacts that are on UbuWeb are very valuable historically
and culturally, they are very significant. But economically, I don't think
they have that type of value. And I love small labels that try to put these
things out. But they inevitably lose money by trying to put these things out.
So when somebody does put something out, sometimes things on Ubu get released
from a small label, and I take them off the site, because I want to support
those things. [7:28] But it's hard, and people are not doing it for the money.
Nobody ever got into sound poetry or orchestral avant-garde music for the
money. [7:37] So it's kind of a weird lovely grey area that we've been able to
explore, a utopia, really, that we've been able to enact. Simply because the
economics are so sketchy.

[7:55]
Copyright

[7:59]
I am not free of fear, but over 17 years I've learned to have a very
good understanding of copyright. And I have a very good understanding of the
way that copyright works. So I can anticipate things. I can usually negotiate
something with somebody who, you know… [8:26] There's so many stories when
copyright is being used as a battering tool. It's not real. I had one instance
when a very powerful literary agency in New York… I received a cease-and-
desist DMCA takedown – which is what I require, a proper takedown. It was for William S.
Burroughs, and the list went on for pages and pages and pages. And then, at
the end, it says, "Under the threat of perjury, I state these facts to be
true," signed such and such person. [9:05] Now, what they did, they went into
UbuWeb and they put the words "William S. Burroughs," and they came up with
every instance of William S. Burroughs. If William S. Burroughs is mentioned
in an academic paper: that's our copyright. Nick Currie (Momus) wrote a song "I
Love You William S. Burroughs." Now, Nick gave UbuWeb all of his songs. I know
that Nick owns the copyright to that. [9:30] I said, you know, it's
ridiculous! And even the things that they were claiming… It was the most
ridiculous thing. [9:37] So I wrote them back. I said: Look, I get what you're
trying to do here, but you're really going about it the wrong way. It's very
irresponsible just putting his name in the search engine, cutting and pasting,
and claiming you own the copyright. You don't own the copyright to almost any of
that! And as a matter of fact, under law you perjured yourself. And I could come
right back and sue you, because this is a complete lie. But I said, look, let's
work together. If there's something that you feel that you really do own and
you really don't want there, let's talk about it, but could you please be a
little bit more reasonable. [10:13] And then of course I got a letter back,
and it's an intern, a college student, saying, the estate of William S.
Burroughs just asked me… [10:23] I said, look, I get it but, you know… let's
try to do it the right way and let's see what happens. And then they came back
with another DMCA Takedown, with a much shorter list. But even in that list,
most of the copyrights didn't belong to William S. Burroughs. They belonged to
Giorno Poetry Systems, and many of them were orphan works. [10:45] Because in
media, if you publish in a publication, often the publisher owns the copyright,
not the artist, you know. You have to look and see where the copyright
resides. [10:59] Finally, I said, look, this is getting ridiculous. I said,
please send a note on to the executor of Burroughs' estate, who is James
Grauerholz, and he's a good guy. He's a good guy. And I said, look, Mr.
Grauerholz, William S. Burroughs' poetry wants to be free. You
know, and I quoted from Burroughs. And also it's a great thing that Burroughs
said. I said, you know, we're not making any money here. I'm not going to
pirate Naked Lunch. I know where you are making your money, and I swear I
wouldn't want to touch that. That does well on its own. [11:30] But his cut-
ups, his sound collage cut-ups? I mean, come on, no. This is for education.
This is for, you know, art schools; kindergartens and post-graduates use it.
[11:40] So this was a way in which copyright is often used as a threat that
isn't real. And then, with a little bit of talking, you can actually get back to
some logic. After that it was fine, and all the William S. Burroughs material
that was always there is still there. And everybody seems to be okay.

[11:57]
Opt-out System

[12:02]
Things get taken down all the time. People send an email saying, you know, I
don't want that there. And I try to convince them that we don't touch any
money. Ubu runs on zero money, we don't touch any. I try to tell them that it's
good, it's all good feeling, positive. [12:19] But sometimes people really
don't want their work up. And if they don't want their work up, I take it
down. An opt-out system. Why should I keep their work up if they really don't
want it there? [12:30] So it's an unstable archive. What's there today may not
be there tomorrow. And I kind of like that too.

[12:38]
Permission culture

[12:42]
I understand people get nervous. They would prefer me to ask. But if I had asked, I
couldn't have built this archive. Because if you ask, you start negotiations,
you make a contract, you need lawyers, you need permissions. And if something
has... a film has music in the background by the Rolling Stones, you have to
clear the rights for the Rolling Stones and pay a little bit of money. And
you know, licenses... I couldn't do that. I do this with no money. That would
take millions… [13:14] To do UbuWeb with permission, the right way, correctly,
would take millions and millions of euros. And I built this whole thing from
nothing. Zero money. [13:26] So, you know... I think I'd love to be able to
ask for permission, do things the right way. It is the right way to do things.
But it wouldn't be possible to make an archive like this, that way.

[13:40]
Cornelia Sollfrank: How often does it happen that you are approached by artists
who say, please take my work down?

[13:47]
Almost never, almost never. It's usually the estates, art dealers, the
business people, you know, who are circling around an artist. But it's almost
never artists themselves. Artists, you know... I don't know, I just think
that… [14:07] For example, we have the musique concrète of Jean Dubuffet on
UbuWeb. Fantastic experimental music. And it's so great that many people now
know of a composer named Jean Dubuffet, and later they hear: he's also a
painter. Which is really very beautiful. [14:33] Now, the paintings of Jean
Dubuffet, of course, sell for millions. And the copyright, you know... You can
make a T-shirt with a Jean Dubuffet painting, they're going to want a license
for that. [14:44] But the music of Jean Dubuffet, the estate doesn't quite
understand the value of it, or what to do with it. And this is also what
happened with my Warhol book. [14:56] Before I did my Warhol book, I went to
the Warhol Foundation, because it's big money, and you don't want to get in
trouble with those guys. And I said to them, I want to do a book of Andy's
interviews. I knew that they didn't own the copyright; I just wanted their
blessing from them. And they were really sweet. They laughed at me. They
said, you want Warhol's words? Take them! We are so busy dealing with
forgeries – well, you know, exactly what your piece was about. And they laughed
at me. They were like, have fun, it's all yours, gladly, go away. [15:32] So I
kind of feel, if you could ask Jean Dubuffet, I would assume that Dubuffet
understood that his music production was as serious as his paintings. And this
is the sort of beautiful revisionism of the avant-garde. This is a perfect
example of the revisionism of the avant-garde that I'm talking about. You say,
oh, you know, he was actually as good of a composer as he was a painter.
[15:58] So, you know, this is the kind of weird thing that's happened on
UbuWeb, I think. [16:04] But what's even better, is that UbuWeb, you know... I
care about Jean Dubuffet, or I care about Art Brut, and the history of all
that. [16:14] But usually what happens is, kids come into UbuWeb and they know
nothing about the history. And they’re usually kids that are making dance
music. But they go, oh, all these weird sounds at this place, let's take them.
And so they plunder the archive. So you have Bruce Nauman, you know, "Get out
of my life!" on dance floors in São Paulo, mixed in with the beat. And that to
me is the misuse of the archive that I think is really fantastic.

[16:48]
Technical infrastructure

[16:53]
It's web 1.0. I write everything in HTML, by hand. Hand-coded like I did in
1996, the same BBEdit, the same program.

[17:04]
C.S.: But it's searchable.

[17:06]
Yeah, it's got like a dumb, you know, a little free search engine on it, but I
don't do anything. You see, this is the thing. [17:15] For many, many years
people would always come up to me and say, we'd like to put UbuWeb in a
database. And I said no. It’s working really well as it is. And, you know,
imagine if Ubu had been locked up in some sort of horrible SQL database. And
the administrator of the database walks away, the guy that knows all that
stuff walks away with the keys – which always happens. No… [17:39] This way it
is free, it's open, it's simple, it's backward compatible – it always works.
[17:45] I like the simplicity of it. It's not different than it was 17 years
ago. It's really dumb, but it does what it does very well.

[17:54]
Search engines

[17:58]
I removed it from Google. Because, you know, people would have set a Google
alert. And it was mostly the agents, or the estates that would set a kind of
an alert for their artists. And they didn't understand – they thought we were
selling it. And it creates a lot of correspondence. [18:20] This is a lot of
work for me. I never get paid any money. There's no money. So, there's
nothing, you know... It's my free time that I'm spending corresponding with
people. And once I took it off Google it got much better.

[18:33]
Copyright practice

[18:37]
Nobody seemed to care until I started to put film on, and then the filmmakers
went crazy. And so, that was something. [18:47] There was a big blow-up on the
FrameWorks film list. Do you know FrameWorks? It's the biggest avant-garde
film list – Listserv. And a couple of years ago Ubu got hacked, and went down
for a little while. And there was a big celebration on the FrameWorks list.
They said, the enemy is finally gone! We can return to life as normal. So I
responded to them. [19:14] I wrote an open letter to FrameWorks (which you can
actually find on UbuWeb) challenging them, saying, actually Ubu is a friend of
yours. I'm actually promoting your work for no money. I love what you do. I'm
a fan. There's no way I'm an enemy. [19:31] And I said, by the way, if you are
celebrating Ubu being down, I think it's a perfect time for you to now build
Ubu the way it should have been built. You guys have all the materials. You are
the artists, you have all the knowledge. Go ahead and do it right, that would be
great. You have my blessing, please do it... Shut me down. Nobody ever
responded. Suddenly the thread died. [20:00] Nobody wants to do anything. It's
kind of... they consider it their right to complain, but when asked to act...
They have the tools to do it right. I'm a poet, what do I know about avant-garde
film? They know everything. But when I told them, please, you know, nobody's
going to lift a finger. [20:18] It's easier for people to complain and hate it.
But in fact, making something better is something that people are not going to
do. So life went on. Ubu went back up and we moved on.

[20:32]
Un/stable archives

[20:36]
If you work on something for an hour a day for 17 years – 2 hours, 3 hours –
you come up with something really substantial. [20:45] The web is very
ephemeral, and UbuWeb is just as ephemeral. It’s amazing that it's been there
for as long as it has, but tomorrow it could vanish. I could get sued. I could
get bored. Maybe I just walk away and blow it up, I don't know! Why do I need
to keep doing all this work? [21:03] So if you find something on the
Internet that you loved, don't assume it's going to be there forever. Download
it. Always make your own archive. Don't ever assume that it's waiting there
for you, because it won't be there when you look for it.

C.S.: In the cloud…

Fuck the cloud. I hate the cloud.


Sollfrank & Kleiner
Telekommunisten
2012


Dmytri Kleiner
Telekommunisten

Berlin, 20 November 2012

[00:12]
My name is Dmytri Kleiner. I work with Telekommunisten, which is an art
collective based in Berlin that investigates the social relations embedded in
communication technologies.

[00:24]
Peer-To-Peer Communism

[00:29]
Cornelia Sollfrank: I would like to start with the theory, which I think is
very strong, and which actually informs the practice that you are doing. For
me it's like the background where the practice comes from. And I think the
most important and well-known book or paper you've written is The
Telekommunist Manifesto. This is something that you authored personally,
Dmytri Kleiner. It's not written by the Telekommunisten. And I would like to
ask you what the main ideas and the main principles are that you explain, and
maybe you come up with a few things, and I have some bullet points here, and
then we can discuss.

[01:14]
The book has two sections. The first section is called "Peer-To-Peer Communism
Vs. The Client-Server Capitalist State," and that explains – using
the history of the Internet as a sort of basis – the
relationship between modes of production on one hand, like capitalism and
communism, with network topologies on the other hand, mesh networks and star
networks. [01:39] And it explains why the original design of the Internet,
which was supposed to be a decentralised system where everybody could
communicate with everybody without any kind of mediation, or control or
censorship – why that has been replaced with centralised, privatised
platforms, from an economic basis. [02:00] So the need for capitalist
capture of user data, and user interaction, in order to allow investors to
recoup profits, is the driving force behind centralisation, and so it explains
that.

[02:15]
Copyright Myth

[02:19]
C.S.: The framework of these whole interviews is the relation between cultural
production, artistic production in particular, and copyright, as a regulatory
mechanism. In one of your presentations you made the claim that the idea that
copyright is there to protect, or to foster or enable, artistic cultural
production is a myth. Could you please
elaborate a bit on that?

[02:57]
Sure. That's the second part of the manifesto. The second part of the
manifesto is called "A Contribution to the Critique of Free Culture." And in
that title I don't mean to be critiquing the practice of free culture, which I
actively support and participate in. [03:13] I am critiquing the theory around
free culture, and particularly as it's found in the Creative Commons
community. [03:20] And this is one of the myths that you often see in that
community: that copyright somehow was created in order to empower artists, but
at some point it went wrong. [03:34] It went in the
wrong direction and now it needs to be corrected. This is a kind of
plotline, so to speak, in a lot of Creative Commons-oriented community
discussion about copyright. [03:46] But actually, of course, the history of
copyright is the same as the history of labour and capital and markets in
every other field. So just like the kind of Lockean idea of property
attributes the product of the worker's labour to the worker, so that the
capitalist can appropriate it, so it commodifies the products of labour,
copyright was created for exactly the same reasons, at exactly the same time,
as part of exactly the same process, in order to create a commodity form of
knowledge, so that knowledge could play in markets. [04:21] That's why
copyright was invented. That was the social reason why it needed to exist.
Because as industrial capitalism was manifesting, they required a way to
commodify knowledge work in the same way they commodified other kinds of
labour. [04:37] So the artist was only given the authorship of their work in
exactly the same way as the factory worker supposedly owns the product of
their labour. [04:51] Because the artist doesn't have the means of production,
the artist has to give away that product, and this actually legitimizes the
appropriation of the product of labour from the labourer, whether it's a
cultural labourer or a physical labourer.

[05:07]
(Intellectual) Labour

[05:10]
C.S.: And why do you think that this myth is so persistent? Or, who created
it, and for what reasons?

[05:18]
I think that a lot of kind of liberal criticism sort of starts that way. I
mean, I haven't really researched this, so that's kind of an open question
that you are asking, I don’t really have a specific position. [05:30] But my
impression is always that people that come at things from a liberal critique,
not a critical critique, sort of assume that things were once good and now
they’re bad. That’s kind of a common sort of assumption. [05:42] So instead of
looking at the core structural origin of something, they sort of have an
assumption that at some point this must have served a useful function or it
wouldn’t exist. And so therefore it must have been good and now it’s bad.
[05:57] And also because of the rhetoric, of course, just like the Lockean
rhetoric of property: give the ownership of the product of labour to the
worker. Ideologically speaking, it’s been framed this way since the beginning.
[06:14] But of course, everybody understands that in the market system the
worker is only given the rights to own their labour if they can sell it.

[06:22]
Author Function

[06:26]
C.S.: Based on this assumption, a certain function of the author developed.
Could you please elaborate on this a bit more? The invention of the individual
author.

[06:39]
At a certain point in history, as modern society – capitalist industrial
society – began to emerge, so did the author with it. [06:53] Previous to this,
the concept of the author
was not nearly so ingrained. So the author hasn't always existed in this
static sense, as a unique source of new creativity and new knowledge, creating
work ex nihilo from their imagination. [07:10] Previous to this there was
always a more social understanding of authorship, where authors were in a
continuous cultural dialogue with previous authors, contemporary authors,
later authors. [07:20] And authors would frequently reuse themes, plots,
characters, from other authors. For instance, Goethe's Faust is a good example
of a theme that was used by authors before and after Goethe in their own
stories – just like the Homeric traditions of ancient literature. [07:42] Culture
was always seen to be very much about dialogue, where each generation of authors
would contribute to a common creative stock of characters, plots, ideas. But
that, of course, is not conducive to making knowledge into a commodity that
can be sold in the market. [08:00] So as we got into a market-based society,
in order to create this idea of intellectual property, of copyright, creating
something that can be sold on the market, the artist and the author had to
become individuals all of a sudden. [08:16] Because this kind of iterative
social dialogue doesn’t work well in a commodity form, because how do you
properly buy it and sell it?

[08:28]
Anti-Copyright

[08:33]
C.S.: The next concept I would like to talk about is anti-copyright. Could
you please explain a little bit what it actually is, and where it comes from?

[08:46]
From the very beginning of copyright, many artists and authors rejected it on
ideological grounds. [08:55] Because, of course,
what was now plagiarism, what was now illegal, and a violation of intellectual
property had been in many cases traditional practices that writers took for
granted forever. [09:09] The ability to reuse characters; the ability to take
plots, themes and ideas from other authors and reuse them. [09:16] So many
artists rejected this idea – the idea of copyright – from the beginning. But,
of course, because the dominant system that was emerging – the
market capitalist system – required the commodity form to make a living, this
was always a marginal community. [09:37] So it was radical artists, like the
Situationist International, or artists that had strong political beliefs, like
the American folk musicians – Woody Guthrie is another famous example. [09:47]
And all of these people were not only against intellectual property. They were
not only against the commodification of cultural work. They were against the
commodification of work, period. [09:57] It was a proletarian movement.
They were very much against capitalism as well as intellectual property.

[10:04]
Examples of Anti-Copyright

[10:08]
C.S.: Could you also give some examples of this anti-copyright in the art
world, or in the cultural world?

[10:15]
DK: Well, you know Lautréamont’s famous text, “plagiarism is necessary: it
takes a wrong idea and replaces it with the right idea.” [10:29] And
Lautréamont was a huge influence on a bunch of radical French artists
including, most famously, the Situationist International, who published their
journal with no copyright, denying copyright. [10:44] I guess that Woody
Guthrie has a famous thing that I quote in some article or other, maybe even
in the [Telekommunist] Manifesto, I don’t remember if it made it in – where he
expressly says, he openly supports people performing, copying, modifying his
songs. That was a note that he made in a songbook of his. [11:11] And many
others – the whole practice is associated with communities from Dada to
Neoism. [11:18] Much later, up to the mid-1990s, this was the dominant form.
So from the birth of copyright up to the mid-1990s, intellectual property
was being questioned on the radical fringes of the art world. [11:34] For me
personally, as an artist, I started to become involved with artists like
Negativland and Plunderpalooza – sorry, Plunderpalooza was an act we did;
Plunderphonics is an album by John Oswald – the Neoist movement and the
Festival of Plagiarism. [11:51] This was the area that I personally
experienced in the 1990s, but it has a long history going back to Lautréamont,
if not earlier.

[12:01]
On the Fringe

[12:05]
C.S.: But you already mentioned the term fringe, so this kind of
anti-copyright attitude automatically implied that it could only happen on the
fringe, not in the actual cultural world.

[12:15]
Exactly. It is fundamentally incompatible with capitalism, because it denies
the value-form of culture. [12:22] And without the commodity form, it can’t
make a living; it has nothing to sell on the market. Because it can't sell
on the market, it's necessarily marginal. [12:34] So it's necessarily
people who support themselves through “non-art” income, by other kinds of
work, or the small percentage of artists that can be supported by cultural
funding or universities, which is, you know, a relatively small group compared
to the proper cultural industries that are supported by copyright licensing.
[12:54] That includes the major movie houses, the major record labels, the
major publishing houses. Which is, you know, in orders of magnitude, a larger
number of artists.

[13:05]
Anti-Copyright Attitude

[13:10]
C.S.: So what would you say are the two, three, main characteristics of the
anti-copyright attitude?

[13:16]
Well, it completely rejects copyright as being legitimate. That’s a complete
denial of copyright. And usually it’s a denial of the existence of a unique
author as well. [13:28] So one of the things that is very characteristic is
the blurring of the distinction between producer and consumer. [13:37] So that
art is considered to be a dialogue, an interactive process where every
producer is also a consumer of art. So everybody is an artist in that sense,
everybody potentially can be. And it’s an ongoing process. [13:52] There’s no
distinction between producer and consumer. It’s just a transient role that one
plays in a process.

[13:59]
C.S.: And in that sense it relates back to the earlier ideas of cultural
production.

[14:04]
Exactly, to the pre-commodity form of culture.

[14:11]
Copyleft

[14:15]
C.S.: Could you please explain what copyleft is and where it comes from?

[14:20]
Copyleft comes out of the software community, the hacker community. It doesn’t
come out of artistic practice per se. And it comes out of the need to share
software. [14:30] Famously, Richard Stallman and the Free Software Foundation
started this project called GNU (GNU's Not Unix), which is a very
famous and important project. [14:44] And they published the license called the
GPL, which sort of defined the copyleft idea. And copyleft is a very clever
kind of a hack, as they say in the hacker community. [14:53] What it does is
that it asserts copyright, full copyright, in order to provide a public
license, a free license. And it requires that any derivative work also carries
the same license. That's what makes it different from anti-copyright: rather
than denying copyright outright, copyleft is a copyright license
– it is a copyright claim – but the claim is used in order to publicly make the
work available to anybody that wants it under very open terms. [15:28] The key
requirement, the distinctive requirement, is that any derivative work must
also be licensed under the same terms, under the copyleft terms. [15:38] This
is what we call viral, in that it perpetuates the license. This is very clever,
because it takes copyright law, and it uses copyright law to create
intellectual property freedom, within a certain context. [15:55] But the
difference is, of course, that we are talking about software. And software,
economically speaking, from the point of view of the way software developers
actually make a living, is very different. [16:11] The productive cycle can be
said to have two phases, sometimes called "department one" and "department two"
in Marxian language or in classical political economics: producer's goods and
consumer's goods, or capital goods and consumer goods. [16:17] The idea is that some
goods are produced not for consumers but for producers. And these goods are
called capital. So they are goods that are used in production. And because
they are used in production, it’s not as important for capitalists to make a
profit on their circulation, because they are inputs to production. [16:47] They
make their profits upstream, by actually using those goods in production, and
then creating goods that can be sold to the masses, circulated to the masses.
[16:56] And so because culture – art and culture – is normally a "department
two" good, a consumer's good, it's completely, fundamentally incompatible with
capitalism, because capitalism requires the capture of profits on the
circulation of consumer's goods. But because software is largely a "department
one" good, a producer's good, it has no incompatibility with capitalism at all.
[17:18] In fact, capitalists very much like having their capital costs
reduced, because the vast majority of capitalists do not make and license
commercial software. That's only a very small class of capitalists. For the
vast majority of capitalists, the availability of free software as an input to
their production is a wonderful thing. [17:39] So this creates a sort of a
paradox, where under capitalism, only capital can be free. And because
software is capital, free software – the GNU project, Linux and related
projects – exploded and became huge. [17:39] So, unlike the marginal-by-
necessity anti-copyright, free software became a mass movement, that has a
billion dollar industry, that has conferences all over the world that are
attended by tens of thousands of people. And everybody is for it. It’s this
really great big thing. [18:26] So it's been rather different from
anti-copyright in terms of its place in society. It's become very prominent, very
successful. But, unfortunately – and I guess this is where we have to go next
– the reason why it is successful is because software is a producer’s good,
not a consumer’s good.
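
In practice, the copyleft "hack" lives entirely in a notice attached to the work: the author asserts copyright and then grants the public license. A minimal sketch of what that looks like at the top of a source file, using the FSF's recommended GPL notice and the SPDX identifier convention – the file, author and code here are hypothetical:

```python
# SPDX-License-Identifier: GPL-3.0-or-later
# Copyright (C) 2012 Example Author  (hypothetical author and year)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# The "viral" requirement described above is carried by the license text
# itself: any derivative of this file must keep these same terms.

def greeting() -> str:
    """A trivial placeholder; the point is the notice above, not the code."""
    return "all rights reversed"

if __name__ == "__main__":
    print(greeting())
```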

[18:38]
Copyleft Criticism

[18:42]
C.S.: So what is your basic criticism of copyleft?

[18:47]
I have no criticism of copyleft, except for the fact that some people think
that the model can be expanded into culture. It can’t be, and that’s the
problem. It's that a lot people from the arts community then kind of came back
to this original idea of questioning copyright through free software. [19:12]
So they maybe had some relationship with the original anti-copyright
tradition, or sometimes not at all. They are fresh out of design school, and
they never had any relationship with the radical tradition of anti-copyright.
And they encounter free software – they are like, yeah, that's great. [19:29]
And the spirit of sharing and cooperation inspires them. And they think that
the model can be taken from free software and applied to art and artists as
well, just like that. [19:41] But of course, there is a problem, because in a
capitalist society there has to be some economic sustainability behind the
practice, and free culture modelled on the GPL can't work, because
the artists can't make a living that way. [20:02] While capital will fund free
software, because they need free software – it’s a producer’s good, it’s input
to their production – capital has no need for free art. So they have also no
need to finance free art. [20:15] So if free artists can't be financed by
capital, that automatically gives them a very marginal role in today's society. [20:19]
Because that means that it has to be funded by something other than capital.
And those means are – back to the anti-copyright model – those are either non-
art income, meaning you do some other kind of work to self-finance your
artistic production, or the relatively small amount of public cultural
financing that is available – or now we have new things, like crowd funding –
all these kinds of things that create some opportunities. But they are still
marginally small compared to the size of the capitalist economy. [20:52] So
the only criticism of copyleft is that it is inapplicable to cultural
production.

[21:00]
Copyleft and cultural production

[21:04]
C.S.: Why can this principle of free software production, the GPL principles,
not be applied to cultural production? Just again, to really point this out.

[21:20]
The difference is really the difference between “department one” goods,
producer's goods, and “department two” goods, consumer’s goods. [21:27] It’s
that capitalists obviously control the vast majority of investment in
this economy – the vast majority of money that is spent to allow people to
realise projects of any kind. The source of this money is capital investment.
[21:42] And capital is happy to invest in producer’s goods, even if they are
free. Because they need these goods, they have no requirement to sell these
goods. [21:53] If you are running a company like Amazon, you are not making
any money selling Linux, you are making money selling web services, books and
other kinds of derivative products. You need free software to run your data
centre, to run your computers. [22:08] So software to you is a cost, and
you're happy to have free software and support it. Because it
makes a lot more sense for you to contribute to some project that is also
used by five other companies, [22:21] so that in the end all of you have this
tool that you can run on your computer and run your business with, than actually
buying a license from some company, which can be expensive, inflexible,
and you can't control it, and if it doesn't work the way you want, you cannot
change it. [22:36] So free software has a great utility for producers. That's
why it's a capital good, a producer's good, a "department one" good. [22:45]
But art and culture do not have the same economic role. Capital is not
interested in developing free culture and free art. They don't need it, they
don't do anything with it. And the capitalists that produce art and culture
require it to have a commodity form, which is what copyright is. [23:00] So
they require a form that they can sell on the market, which requires it to
have the exclusive, non-reproducible commodity form that copyright was
developed to provide in order to commodify culture. [23:14] So that is why the copyleft
tradition won't work for free culture – because even though free culture and
anti-copyright predates it, it predates it as a radical fringe. And the
radical fringe isn't supported by capital. It's supported, as we said, by
outside income, non-art income, and other kind of things like small cultural
funds.

[23:38]
Creative Commons

[23:42]
C.S.: In the last ten years we have seen new business models that very much
depend on free content as well. Could you please elaborate on this a bit?

[23:56]
Well, that’s the thing. Now we have the kind of Web 2.0/Facebook world.
[24:00] The entire copyright law – the so-called "good copyright" that
protected artists – was all based on the idea of the mechanical copy. And the
mechanical copy made a lot of sense in the printing press era where, if you
had some intellectual property, you could license it through mechanical
copies. So every time it was copied, somebody owed you a royalty. Very simple.
[24:26] But in a Web 2.0 world, where we have YouTube, Facebook, Twitter and
things like that, this doesn't really work very well. Because if you post
something online and then you need to get paid a royalty every time it gets
copied (and it gets copied millions of times), this becomes very impractical.
[24:44] And so this is where the Creative Commons really comes in. Because the
Creative Commons comes in just exactly at this time – as the Internet is kind
of bursting out of its original military and NGO roots, and really hitting the
general public. At the same time free software is something that is becoming
better known, and inspiring more people – so the ideas of questioning
copyright are becoming more prominent. [25:16] So Creative Commons seizes on
this kind of principled approach that anti-copyright and copyleft take. And
again, one of the single most important things about anti-copyright and
copyleft is that in both cases the freedom that they are talking about – the
free culture that they represent – is the freedom of the consumer to become
the producer. It's the denial of the distinction between consumer and
producer. [25:41] So even though the Creative Commons has a lot of different
licenses, including some that are GPL compatible – they're approved for free
cultural work, or whatever it's called – there is one license in particular
that makes up the vast majority of the works in the Creative Commons, one
license in particular which is like the signature license of the Creative
Commons – it's the non-commercial license. And this is obviously... The
utility of that is very clear because, as we said, artists can't make a living
in a copyleft sense. [26:18] In order for artists to make a living in the
capitalist system, they have to be able to negotiate non-free rights with
their publishers. And if they can't do that, they simply can't make a living.
At least, not in the mainstream community. There is a certain small place for
artists to make a living in the alternative and fringe elements of the
art world. [26:42] But if you are talking about making a movie, a novel, a
record, then you at some point are going to need to negotiate a contract with
the publisher. Which means you're going to have to be able to negotiate non-free
terms. [27:00] So what non-commercial [licensing] does is that it allows
people to share your stuff, making you more famous, getting more people to
know you – building your value, so to speak. But they can't actually do
anything commercial with it. And if they want to do anything commercial with
it, they have to come back to you and they have to negotiate a non-free
license. [27:19] So this is very practical, because it solves a lot of
problems for artists that want to make work available online in order to get
better known, but still want to eventually, at some point in the future,
negotiate non-free terms with a publishing company. [27:34] But while it's
very practical, it fundamentally reinstates what copyleft and
anti-copyright set out to challenge – the distinction between the producer
and the consumer. Because of this, the consumer cannot become the producer.
And that is the criticism of the Creative Commons. [27:52] That's why I want
to talk about this thing as, I often say, a tragedy in three parts. The first
part, anti-copyright, is a tragedy because it has to remain fringe, because of
its complete incompatibility with the dominant capitalism. [28:04] The second part,
copyleft, is a tragedy because while it works great for software, it can't and
it won't work for art. [28:10] And the third part is a tragedy because it
actually undermines the whole idea and brings the author back to the surface,
back from the dead. But the author kind of re-emerges as a sort of useful idiot,
because the "some rights reserved" are basically the rights to sell your
intellectual property to the publisher in exactly the same way as the early
industrial factory worker would have sold their labour to the factory.

[28:36]
C.S.: And that by no means creates a commons.

[28:41]
It by no means creates a commons, right. Because a primary function of a
commons is that it would be available for use by other producers, and the
Creative Commons isn't, because you don't have any right to use the works in
the commons to create your own work and make a living from it – because of the
non-commercial clause that covers a large percentage of the works there.

[29:09]
Peer Production License

[29:13]
C.S.: But you were thinking of an alternative. What is the alternative?

[29:19]
There is no easy alternative. The fact is that, so long as we have a cultural
industry that is dominated by market capitalism, then the majority of artists
working within it will have to work in that form. We can't arbitrarily, as
artists, simply pretend that the industry as it is doesn't exist. [29:41] But
at the same time we can hope that alternatives will develop – that alternative
ways of producing and sharing cultural works will develop. So that the
copyfarleft license... [29:52] I describe the Creative Commons as
copyjustright. It's not copyright, it's copyjustright – you can tune it, you
can tailor it to your specific interests or needs. But it is still copyright,
just a more fine-tuneable copyright that is better for a Web 2.0 distribution
model. [30:12] The alternative is what I call copyfarleft, which also starts
off with the Creative Commons non-commercial model for the simple reason that,
as we discussed, if you are an actually existing artist in the actually
existing cultural industries of today, you are going to have to make a living,
for the most part, by selling non-free works to publishers, non-free licenses
to publishers. That's simply the way the industry works. [30:37] But in order
not to close the door on another kind of industry developing, after blanketly
denying commercial use (so it has a non-commercial clause), it then expressly
allows commercial usage by non-capitalist organisations – independent
cooperatives, non-profits, organisations that are not structured around
investment capital and wage labour, and so forth; that are not for-profit
organisations enriching private individuals and appropriating value from
workers. [31:15] So this
allows you to succeed, at least potentially succeed as a commercial artist in
the commercial world as it is right now. But at the same time it doesn't close
the door on another kind of community developing, another kind of industry
developing. [31:35] And we have to understand that we are not going to be
able to get rid of the cultural industries as they exist today, until we have
another set of institutions that can play those same roles. They're not going
to magically vanish, and be magically replaced. [31:52] We have to, at the
same time as those exist, build up new kinds of institutions. We have to think
of new ways to produce and share cultural works. And only when we've done
that, will the cultural institutions as they are today potentially go away.
[32:09] So the copyfarleft license tries to bridge that gap by allowing the
commons to grow, but at the same time allowing the commons producers to make a
living as they normally would within the regular cultural industry. [32:25]
Some good examples where you can see how something like this might work are
famous novelists like Wu Ming or Cory Doctorow, people that
have done very well by publishing their works under Creative Commons non-
commercial licenses. [32:42] Wu Ming's books, which are published, I believe,
by Random House or some big publisher, are available under a Creative Commons
non-commercial license. So if you want to download them for personal use, you
can. But if you are Random House, and you want to publish them and put them on
bookstores, and manufacture them in huge supply, you have to negotiate non-
free terms with Wu Ming. And this allows Wu Ming to make a living by licensing
their work to Random House. [33:10] But while it does do that, what it doesn't
do is allow that book to be manufactured any other way. So that means that
this capitalist form of production becomes the only form in which you can
commercially produce this book – except for independents, just for their own
personal use. [33:25] Whereas if their book was instead under a copyfarleft
license, what we call the "peer production" licence, then not only could they
continue to work as they do, but also potentially their book could be made
available through other means as well. Like, independent workers cooperatives
could start manufacturing it, selling it and distributing it locally in their
own areas, and make a commercial living out of it. And then perhaps if those
were to actually succeed, then they could grow and start to provide some of
the functions that capitalist institutions do now.

[34:00]
Miscommunication Technology

[34:05]
The artworks that we do are more related to the topologies side of the theory
– the relationship between network topologies, communication topologies, and
the social relations embedded in communication systems with the political
economy and economic ideas, and people's relationships to each other. [34:24]
The Miscommunication Technologies series has been going on for quite a while
now, I guess since 2006 or so. Most of the works were pretty obscure, but the
more recent works are getting more attention and better known. And I guess
that the ones that we're talking about and exhibiting the most are deadSwap,
Thimbl and R15N, and these all attempt to explore some of these ideas.

[35:01]
deadSwap

[35:06]
deadSwap is a file sharing system. It's playing on the kind of
circumventionist technologies that are coming out of the file sharing
community, and this idea that technology can enable us to evade the
legal and economic structures. So deadSwap wants to question this by creating
a very extreme parody of what it would actually mean to really be private.
[35:40] It is a file sharing system that, in order to be private, only
exists on one USB stick. And this USB stick is hidden in public space, and its
users send text messages to an anonymous SMS gateway in order to tell other
users where they've hidden the stick. When you have the stick you can upload
and download files to it – it's a file sharing system. It has a Wiki and file
space, essentially. Then you hide the stick somewhere, and you text the system
and it forwards your message to the next person that is waiting to share data.
And this continues like that: that person shares data on it, hides it
somewhere and sends an SMS to the system, which then gets forwarded
to the next person. [36:28] This work serves a few different functions at
once. First, it starts to get people to understand networks and all the basic
components. The participants in the artwork actually play a network node – you
are passing on information as if you are part of a network. So this gets
people to start thinking about how networks work, because they are playing the
network. [36:52] But on the other hand, it also tries to get across the idea
that the behaviour of the user is much more important than the technology when
it comes to security and privacy. The system is very private, but consider how
difficult it is to actually use it, to not lose the stick, to not
get discovered. [37:11] It's actually very difficult to use. Even
though it seems so simple, normally people lose the USB key within like an
hour or two of starting the system. Despite all the secret agent manuals
that say, be a secret agent spy – it isn't easy. And it tries to get this
across: that actually it's not nearly as easy to evade the economic and
political dimensions of our society as it should be. [37:45] Maybe it's better that we
politically fight so that we don't have to share information only by hiding USB
sticks in public space, sneaking around and acting like spies.
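
The gateway logic described above is, at its core, a queue: whoever currently holds the stick texts in its hiding place, and the system forwards that message to the next participant waiting to share data. A minimal sketch of that forwarding rule in Python – the class, names and rotating-queue policy are illustrative assumptions, not the actual deadSwap implementation:

```python
from collections import deque

class DeadSwapGateway:
    """Illustrative SMS relay: forward each hiding-place message
    to the next participant waiting to share data."""

    def __init__(self, participants):
        self.queue = deque(participants)

    def handle_sms(self, sender: str, text: str):
        """The current stick-holder texts where they hid it; rotate them
        to the back of the queue and forward the message to the next user."""
        if sender in self.queue:
            self.queue.remove(sender)
            self.queue.append(sender)
        recipient = self.queue[0]
        return recipient, f"stick hidden: {text}"

if __name__ == "__main__":
    gateway = DeadSwapGateway(["ana", "ben", "chris"])
    print(gateway.handle_sms("ana", "park bench, third slat"))    # forwarded to ben
    print(gateway.handle_sms("ben", "loose brick by the canal"))  # forwarded to chris
```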

[37:57]
Thimbl

[38:02]
Thimbl is another work, and it is completely online. This work in some ways
has become a signature work for us, even though it doesn't really have any
physical presence. It's a purely conceptual work. [38:15] One of the arguments
that the Manifesto makes is that the Internet was a fully distributed social
media platform – that's what the Internet was, and then it was replaced,
because of capitalism and because of the economic logic of the market, with
centralised communication platforms like Twitter and Facebook. [38:40] And
despite that, within the free software community and the hacker community,
there's the opposite myth, just like the copyright myth. There's this idea
that we are moving towards decentralised software. [38:54] You see people like
Eben Moglen making this point a lot, when he says, now we have Facebook, but
because of FreedomBox, Diaspora and a laundry list of other projects, we're
eventually going to reach decentralised software. [39:07] But this makes two
assumptions that are incorrect. The first is that we are starting with
centralised media and we are going to decentralised media, which actually is
incorrect. We started with a decentralised social media platform and we moved
to a centralised one. [39:20] And the second thing that is incorrect is that
we can move from a centralised platform to a decentralised platform if we just
create the right technology, so the problem is technological. [39:34] With
Thimbl we wanted to make the point that that wasn't true, that the problem was
actually political. The technological problem is trivial. The computer
science has been around forever. The problem is political. [39:43] The
problem is that these systems will not be financed by capital, because capital
requires profit in order to sustain itself. In order to capture profit it
needs to have control of user interaction and user's data. [39:57] To
illustrate this, we created a micro-blogging platform like Twitter, but using
a protocol of the 1970s called Finger. So we've used the protocol that has
been around since the 1970s and made a micro-blogging platform out of it –
a fully, totally distributed micro-blogging platform. And then we promoted it
as if it were a real thing, with videos and a website and stuff like that. But of
course, there is no way to sign up for it, because it's just a concept.
[40:22] And then there are some scripts that other people wrote that actually
made it to a certain degree real. For us it was just a concept, but then
people actually took it and made working implementations of it, and there are
several working implementations of Thimbl. [40:38] But the point remains that
the problem is not technical, the problem is political. So we came up with
this idea of the economic fiction, or the social fiction. [40:47] Because in
science fiction you often have situations where something that eventually
became a real technology was originally introduced in a fictional context, as
science fiction. [40:59] The reason it's fictional is that science at the
time was not able to create the thing, but as science transcends its
limitations, what was once fictional technology became real technology. So we
have this idea of a social or economic fiction. [41:15] Thimbl is not science
fiction. Technologically speaking it demonstrably works – it's a demonstrably
working concept. The problem is economic. [41:23] For Thimbl to become a
reality, society has to transcend its economic limitations – its social and
economic limitations – in order to find ways to create communication systems
that are not simply funded by the capture of user data and information, which
Thimbl can't do because it is a distributed system. You can't control the
users, you can't know who is using it or what they are doing, because it's
fully distributed.
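
Because Thimbl builds on Finger, the mechanics are easy to show concretely. Finger (RFC 1288) is about as simple as protocols get: open a TCP connection to port 79, send a user name followed by CRLF, and read the reply – in Thimbl's case, a feed kept in the user's plan file. A minimal client sketch in Python; the host and user name are hypothetical:

```python
import socket

def finger(user: str, host: str, port: int = 79, timeout: float = 10.0) -> str:
    """Fetch a finger record (RFC 1288): connect to port 79,
    send the user name plus CRLF, and read the reply until EOF."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(user.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

if __name__ == "__main__":
    # Hypothetical Thimbl-style feed: the user's posts live in their plan file.
    print(finger("example", "thimbl.example.net"))
```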

[41:47]
R15N

[41:52]
The R15N has elements of both of those things. We wanted to create a system
that basically drew a little from deadSwap, but I wanted to take out the
secret agent element of it. Because I was really... [42:08] The first place it
was commissioned to be in was actually in Tel Aviv, in Israel, the [Israeli]
Center for Digital Art. And this kind of spy aesthetic that deadSwap had, I
didn't think it would be an appropriate aesthetic in that context. [42:22] The
idea of trying to convince young people in a poor area of Tel Aviv to act
like spies and hide USB sticks in public space didn't seem like a good idea.
[42:34] So I wanted to go the other way, and I wanted to really emphasise the
collaboration, and create a kind of system that is pretty much totally
impossible to use – only if you really cooperate can you make it work.
[42:45] So I took another old approach called the telephone tree. I don't know
if you remember telephone trees. Telephone trees existed for years before the
Internet, when schools and army reserves needed to be quickly dispatched, and
it worked with a very simple tree topology. [43:01] You had a few people that
were the top nodes, that then called the list of two or three people, that
then called the list of two or three people, that then called the list of two
or three people... And a message could be sent through the community very
rapidly through a telephone tree. [43:14] It was often used in Canada for
announcing snow days at school, for instance. If the school was closed, they
would call three parents, who would each call three parents, who would each
call three parents, and so forth. So that all the parents knew that the school
was closed. That's one aspect. [43:30] Another aspect of it is that
telephones, especially mobile phones, are really advertised as a very
freedom-enabling kind of thing. With them you can go anywhere... [43:41] I don't
know if you remember some of the early telephones ads where there are always
businessmen on the beach. I remember this one where a woman's daughter
wants to make an appointment with her, because the mother only has time for her
work appointments, and so it's this whole thing about spending more time
with her daughter – so she takes her daughter to the beach, which she is able
to do because she can still conduct business on her mobile phone. So it's this
freedom kind of thing. [44:04] But in areas like the Jessi Cohen area in Tel
Aviv where we were working, and other areas where the project has been
exhibited, like Johannesburg – other places like that, the telephone has a
very different role, because it's free to receive phone calls, but it costs a
lot to make phone calls – in most parts of the world, and especially in these
poor areas. [44:25] So the telephone is a very asymmetric power relationship based
on your availability of credit. So rather than being a freedom-enabling thing,
it's a control technology. So young people and poor people that carry them
can't actually make any calls, they can't call anybody. They can only receive
calls. [44:40] So it's used as a tether, a control system by their parents,
their teachers, their employers, so they can know where they are at any time
and say, hey, why aren't you at work, or where are you, what are you doing?
It's actually a control technology. [44:54] We wanted to invert that too. So
the way the phone tree system works is that, when you have a message, you
initiate a phone call, so you initiate a new tree, and the system phones you...
[45:05] And you can initiate a new tree in the modern versions by pushing a
button in the gallery. There's a physical button in the gallery, you push the
button, there's a phone beside it, it rings a random person, you tell them
your message, and then it creates an ad hoc telephone tree. It takes all the
subscribers and arranges them in a tree, just like in the old telephone tree,
and each person calls each person, until your message, in theory, gets through
the community. [45:28] But of course in reality nobody answers their phones,
you get voicemail, and then you get voicemail talking to voicemail. Of course,
voice over the Internet is flaky to begin with, so calls fail. So it actually
becomes this really frenetic system where people actually don’t know what's
going on, and the message is constantly lost. [45:44] And of course, you have
all of these missed phone calls, this high pressure of the always-on world.
You are always getting these phone calls, and you're missing phone calls, and
actually nobody ever knows what the message is. So it actually creates this
kind of mass confusion. [46:00] This once again demonstrates that what we
jokingly call, in the R15N literature, the diligence of the users is very much
required for these systems to work; technologically, the system is more or
less secondary. [46:21] But these works serve not only to make
that point, which is a more general point – but also, like in the other
pieces, in R15N you are a node in the network. So when you don't answer a call
you know that a message is dropped. [46:36] So you can imagine how volatile
information is in networks. When you pass your information through a third
party, you realise that they can drop it, they can change it, they can
introduce their own information. [46:50] And that is true in R15N, but it is
also true in Facebook, in Twitter, and any time you send messages through some
third party. That is one of the messages that is core to the series.
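
The ad hoc tree that R15N builds is the classic telephone-tree topology described above: the person at each position calls the next two or three subscribers, so a message fans out to the whole list in a handful of rounds. A minimal sketch of that arrangement in Python, with hypothetical subscriber names:

```python
from typing import Iterator, Sequence, Tuple

def call_tree(subscribers: Sequence[str], fanout: int = 3) -> Iterator[Tuple[str, str]]:
    """Arrange subscribers into a k-ary tree: the person at index i
    calls the people at indices fanout*i + 1 .. fanout*i + fanout."""
    for i, caller in enumerate(subscribers):
        first = fanout * i + 1
        for j in range(first, min(first + fanout, len(subscribers))):
            yield caller, subscribers[j]

if __name__ == "__main__":
    people = ["initiator", "ana", "ben", "chris", "dina", "eli", "fran"]
    for caller, callee in call_tree(people):
        print(f"{caller} calls {callee}")
```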


Sollfrank & Mars
Public Library
2013


Marcell Mars
Public Library

Berlin, 1 February 2013

[00:13]
Public Library is the concept, the idea, of encouraging people to become a
librarian, where a librarian is a person who can allow access to books – and
who also has a catalogue or index, so that it's searchable. [00:32] And the
person, the human being, can communicate, can talk with others who are
interested in that catalogue of books. [00:43] And then when you have a
librarian, and you have a lot of librarians, you have a Public Library,
because we have access to books, we have a catalogue, and we have a librarian.
That's the basic setup. [00:55] And in order for this to really work in
practice, we need to introduce a set of tools which are easy to use – like
Calibre, for example, for book management. [01:07] And then some part of that setup
should also be developed, because at the moment – because of the configuration
of routers, IP addresses and other things – it's not that easy to share
the local library you have on your laptop with the world. [01:30] So we
also provide... When I say ‘we,’ it's a small team, at the moment, of
developers who try to address that problem. [01:38] We don't need to reinvent
the public library. It's invented, and it just needs to be maintained. [01:47]
The old-school public libraries – they are in decline for many reasons.
And when it comes to digital networks and digital books, they are in almost
the worst position. [01:59] For example, public libraries in the US, they are
not allowed to buy digital books, for example from Penguin. So even when they
want to buy, it's not that they aren't getting them, it's that they can't buy
the books at all. [02:16] Under the current legal regulation it's considered
illegal – a million books, or even more, are unavailable – and I think that these books
should really be available. [02:29] And it doesn't really matter how a book got
on the Internet – whether it came from a graphic designer who was preparing it
for print, or it was uploaded somewhere by the author of the book (which is
also very common, especially in the humanities), or it was digitised somewhere.
[02:50] So these are the books which we have, and we can't be blind to that –
they are here. The practice at the moment is almost like trying to find a
prostitute or something: when you want to get a book online, you need to go
onto websites with advertisements for casinos, for porn and things like
that. [03:14] I don't think that the library should be like that.

[03:18]
Book Management

[03:22]
What we are trying to provide is just suggestions: what kind of book management
software people can use, and what kind of new software tools they can
install in order to easily turn a messy directory into a directory of
metadata which Calibre can recognise – and then you can just use Calibre. The
next step is whether you can share your local library with the world. [03:52] You
need something like management software where it's easy to see who the
authors are, what the titles, publishers and all the metadata are – and it's
accessible from the outside.
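
In Calibre itself, the import step is handled by its own tools (the `calibredb add` command, for instance), but the underlying idea – deriving a searchable catalogue from a messy directory of files – fits in a few lines of Python. This sketch is illustrative only; the directory name and metadata fields are assumptions, not how Calibre actually stores a library:

```python
import json
import os

EBOOK_EXTENSIONS = {".epub", ".mobi", ".pdf", ".txt"}

def build_catalogue(root: str) -> list:
    """Walk a messy directory tree and derive a minimal catalogue
    (title guessed from the filename) for every ebook found."""
    catalogue = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            stem, ext = os.path.splitext(name)
            if ext.lower() in EBOOK_EXTENSIONS:
                catalogue.append({
                    "title": stem.replace("_", " ").strip(),
                    "format": ext.lstrip(".").lower(),
                    "path": os.path.join(dirpath, name),
                })
    return catalogue

def search(catalogue: list, query: str) -> list:
    """Case-insensitive substring search over titles."""
    q = query.lower()
    return [entry for entry in catalogue if q in entry["title"].lower()]

if __name__ == "__main__":
    books = build_catalogue("my_books")  # hypothetical directory
    print(json.dumps(search(books, "benjamin"), indent=2))
```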

[04:08]
Calibre

[04:12]
Calibre is a book management software. It's developed by Kovid Goyal, a
software developer. [04:22] It's free software, open source, and it started
like many other free software projects. It started as a small tool to solve
very particular small problems. [04:31] But then, because it was useful, it
got more and more users, and then Kovid started to develop it more into a
proper, big book management software. At the moment it has more that 10
million registered users who are running that. [04:52] It does so many things
for book management. It's really ‘the’ software tool... If you have an
e-reader, for example, it recognises your e-reader, it registers it inside of
Calibre and then you can easily just transfer the books. [05:08] Also for
years there was a big problem of file formats. So for example, Amazon, in
order to keep their monopoly in that area, they wouldn't support EPUB or PDF.
And then if you got your book somewhere – if you bought it or just downloaded
from the Internet, you wouldn't be able to read it on your reader. [05:31]
Then Calibre was just developing the converter tools. And it was all in one
package, so that Calibre just became the tool for book management. [05:43] It
has a web server as a part of it. So in a local area network – if you just
start that web server and you are running a local area network, it can have a
read-only searchable access to your local library, to your books, and it can
search by any of these metadata.
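
As a hedged illustration of that built-in server: Calibre ships a
`calibre-server` command, and the running server exposes an OPDS catalogue
feed (at `/opds` in the versions I know of – worth verifying against your
installation). A minimal Python sketch, with the library path and port as
assumptions:

```python
# Start Calibre's bundled content server for a library and fetch its OPDS
# catalogue over HTTP. Library path and port are placeholder assumptions.
import subprocess
import time
import urllib.request

LIBRARY = "/home/librarian/CalibreLibrary"  # assumed path to a Calibre library
PORT = 8080

server = subprocess.Popen(["calibre-server", "--port", str(PORT), LIBRARY])
try:
    time.sleep(3)  # crude wait for the server to come up
    with urllib.request.urlopen(f"http://localhost:{PORT}/opds") as resp:
        # The OPDS feed is plain XML: readable, searchable, read-only.
        print(resp.read(400).decode("utf-8", "replace"))
finally:
    server.terminate()
```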

[06:05]
Tools Around Calibre

[06:09]
I developed a piece of software which I call Let’s Share Books, and which is
super small compared to Calibre. It just allows you, with one click, to get
your library shared on the Internet. [06:24] That means you get a temporary
public URL, something like www some-number dot memoryoftheworld dot net, and
you can send it to anyone in the world. [06:37] And while you are running your
local web server and sharing books, it just serves these books to the
Internet. [06:45] I also set up a web chat – a kind of room where people can
talk to each other, chat to each other. [06:54] So it’s just trying to develop
tools around Calibre – which is mostly for one person, for one librarian – to
make some kind of ecosystem for a lot of librarians, where they can meet their
readers, or one another, and talk about the books which they love to read and
share. [07:23] It’s mostly like social networking around books, where we use
the idea and tradition of the public library. [07:37] In order to get there I
needed to set up a server which only does routing. So with my software I don’t
know which books are transferred, or anything – it’s just like a router.
[07:56] You could also do that yourself if you had control of your router – or
what we usually call the modem, the device you use to get to the Internet. But
that is quite hard to hack; only hackers know how to do that. [08:13] So I
just made a server on the Internet which you can use with one click, and it
just routes the traffic between you, the librarian, and your users, your
readers. It’s that easy.
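
The routing idea can be sketched in a few lines. This is emphatically not the
Let’s Share Books implementation (the real service hands out temporary public
URLs and works around home routers, which needs reverse tunnelling); it is
just a toy TCP relay, with hypothetical hosts and ports, showing how a
middlebox can pass traffic between reader and librarian without ever parsing
which books flow through it:

```python
# Toy TCP relay: accept a reader's connection and pipe raw bytes to and from
# the librarian's local web server. Hosts and ports are assumptions for
# illustration; the relay only copies opaque bytes and never inspects them.
import socket
import threading

LIBRARIAN = ("127.0.0.1", 8080)  # assumed address of the librarian's server
PUBLIC_PORT = 9000               # assumed port that readers connect to

def pipe(src, dst):
    """Copy bytes one way until either side closes the connection."""
    try:
        while (chunk := src.recv(4096)):
            dst.sendall(chunk)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def serve():
    listener = socket.socket()
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("", PUBLIC_PORT))
    listener.listen()
    while True:
        reader_conn, _ = listener.accept()
        upstream = socket.create_connection(LIBRARIAN)
        # One thread per direction; the relay stays ignorant of the payload.
        threading.Thread(target=pipe, args=(reader_conn, upstream), daemon=True).start()
        threading.Thread(target=pipe, args=(upstream, reader_conn), daemon=True).start()

if __name__ == "__main__":
    serve()
```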

[08:33]
Librarians

[08:38] It’s super easy to become a librarian, and that is what we should
celebrate. It’s not that the only librarians we have should be the ones who
set out to become librarians. [08:54] Lots of people want to be librarians,
and lots of people are librarians whenever they have a chance. [09:00] You
would probably recommend me some books which you like; I’ll recommend you some
books which I like. So I think we should celebrate that now it’s super easy
for anyone to be a librarian. [09:11] And of course, we will still need
professional librarians in order to push the whole field forward. But that
goes, again, in collaboration with software engineers, information architects,
whatever… [09:26] It’s so easy to have that, and the benefits are so great,
that there is no reason not to do it, I would say.

[09:38]
Functioning

[09:43]
If you want to share your collection then at the moment you need to install
Calibre and the Let’s Share Books software, which I wrote. But there are other
ways too – for example, there is a Calibre plugin for Aaaaarg, so from Calibre
you can search Aaaaarg, you can download books from Aaaaarg, and you can also
change the metadata and upload it back to Aaaaarg.

[10:13]
Repositories

[10:17]
At the moment the biggest repository of books, from which to download and make
your catalogue, is Library Genesis. It’s around 900,000 books, at libgen.info
and libgen.org. And it’s a great project. [10:33] It’s done by some Russian
hackers, who also allow anyone to download all of it. It’s 9 terabytes of
books – quite a chunk of hard disk that you need for that. [10:47] And you can
also download the PHP back end of the website and a dump of the MySQL
database, so you can run your own Library Genesis. That’s one of the ways you
can do it. [11:00] You can also go and join Aaaaarg.org, where it is not just
about downloading and uploading books; it’s also about communication and
interpretation, about making different issues and catalogues. [11:14] It’s a
community of book lovers who like to share knowledge, and who add quite a lot
of value around the books by doing that. [11:26] And then you can use Calibre
and Let’s Share Books. It’s just one of these complementary tools – it’s not
that Calibre and Let’s Share Books are the only way you can share books today.

[11:45]
Goal

[11:50]
What we do also has a non-hidden agenda of fighting for the public library. I
would say that most of the people we know, even the authors, all participate
in the huge, massive Public Library – which we don’t call a Public Library;
usually we just try to hide that we are using it, because we are afraid of the
restrictive regime. [12:20] So I don’t see a reason why we should shut down
such a great idea and great implementation – a great resource which we have
all around the world. [12:30] So it’s just an attempt to map all of these
projects and to try to improve them. Because, in order to get it into the
right shape, we need to improve the metadata. [12:47] Open Library, a project
which Aaron Swartz also helped start, has 20 million items, and we use it.
There is basedata.org, which connects the files’ MD5 hashes with the Open
Library ID (see the sketch after this paragraph), and we try to contribute to
Open Library as much as possible. [13:10] So with very few people, around
five, we can improve it so much that it will be a great Public Library for a
billion users, and at the same time we can have millions of librarians, which
we never had before. So that’s the idea. [13:35] The goal is just to keep the
Public Library. If we hadn’t screwed up the whole situation with the Public
Library, we’d probably just be trying to add a little bit of new software, new
ways to read the books. [13:53] But at the moment it’s actually super
important to keep this infrastructure running, because this super important
infrastructure for access to knowledge is now under huge threat.
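
The metadata linkage described here rests on two identifiers: the MD5 hash of
a file (the key Library Genesis and basedata.org use) and the Open Library
ID. A minimal Python sketch of both halves, assuming Open Library’s public
JSON endpoint at `/books/<OLID>.json`; the basedata.org lookup itself is left
out, since its interface isn’t documented in this text, and the OLID below is
a placeholder:

```python
# Hash a local ebook with MD5 (the key into LibGen/basedata.org mappings)
# and fetch a bibliographic record from Open Library's JSON API.
import hashlib
import json
import urllib.request

def md5_of(path, chunk_size=1 << 20):
    """MD5 of a file, read in 1 MiB chunks so large books fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def open_library_record(olid):
    """Fetch the JSON record for an Open Library edition by its OLID."""
    url = f"https://openlibrary.org/books/{olid}.json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(md5_of("some_book.pdf"))          # key for an MD5 -> OLID lookup
    record = open_library_record("OL123M")  # placeholder OLID
    print(record.get("title"))
```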

[14:09]
Copyright

[14:13]
I just think that copyright law is completely inappropriate for the Public
Library. I don’t know about other cases, but in terms of the Public Library
it’s absolutely inappropriate. [14:29] We should find new ways to reward the
ones who are adding value to sharing knowledge. First the authors, then anyone
who is involved in public libraries – librarians, software engineers – so
everyone who is involved in that ecosystem should be rewarded, because it’s a
great thing, it’s a benefit for society. [15:03] If the law which regulates
this blocks that field and doesn’t let it blossom, then something is wrong
with that law. [15:16] It’s getting worse and worse, so I don’t know how long
we should wait, because while we’re waiting it’s getting worse. [15:24] I
don’t care. And I think that I can say that because I’m an artist. Because all
of these laws are made claiming that they represent art, that they represent
the interests of artists. I’m an artist. They don’t really represent my
interests. [15:46] I think that it should be taken over by the artists. And if
there are some artists who disagree – great, let’s have a discussion.

[15:58]
Civil Disobedience

[16:03]
I’m interested in the possibilities of civil disobedience – practised also by
institutions, not just by individuals – and I think that in such clear cases
as the Public Library it’s easy. [16:17] So I think that what I did in this
particular case is nothing really super smart – it’s just reducing this huge
issue to something which is comprehensible, which is understandable for most
people. [16:31] There is really no one who doesn’t understand what a public
library is. And if you say to anyone in the world: hey, no more public
libraries, no books anymore, no books for the poor people – then we are just
giving up on something which we accepted almost consensually throughout the
whole world. [16:55] And I think that in such clear cases, I’m really
interested [in] what institutions could do, like Transmediale. I’m now in
[Akademie] Schloss Solitude, and I also proposed to make a server with a
Public Library. If you invest enough it’s a million books, it’s a great
library. [17:16] And of course they are scared. And I think that the system
will never really move if people are not brave. [17:26] I’m not really trying
to encourage people to do something that no one could really understand, you
know, where you need expertise or whatever. [17:37] In my opinion this is the
big case. And if Transmediale or any other art institution is playing with
that, and showing that – let’s see how far we can go in supporting this kind
of thing. [17:56] The other issue which I am really interested in is what the
infrastructure is, who is running the infrastructures, and what kind of
infrastructures are happening in between these supposedly avant-garde
institutions, or something. [18:12] So I’m really interested in raising these
issues.

[18:17]
Art Project

[18:21]
Public Library is also an art project. I would say that just in the same way
that corporations, by their legal status, can really mess around with
different things – they can’t be held that accountable and responsible – this
is the counterpart. [18:44] So civil disobedience can use art just the same
way that corporations use their legal status. [18:51] When I was invited, as
curator and artist, to curate the HAIP Festival in Ljubljana, I was already
quite into the topic of sharing access to knowledge. And then I came up with
this idea, and everybody liked it, everybody was enthusiastic. It’s one of
these ideas where you can see that it’s great – there is really no one who
would oppose it. [19:28] At the same time there was an exhibition, Dear Art,
curated by WHW, quite established curators. And it immediately became an art
piece for that exhibition. Then I was invited here to Transmediale, and I have
a couple of other invitations. [19:45] I think that also shows that art
institutions are accepting this, that they play with the idea. And this kind
of project, by having that acceptance, becomes the issue, becomes the problem
of the whole arts establishment. [20:10] So I think that if I do this in this
way, and if there is a curator who invites this kind of project – who invites
Public Library into their exhibition – it also shows their readiness to fight
for that issue. [20:27] And if there are a number of art festivals, a number
of art exhibitions, which are supporting this kind of, let’s say, civil
disobedience, that also shows something. [20:38] And I think that that kind of
context should be pushed into the confrontation, so that it’s no longer just
playing “oh, is it OK, is it not? We should deal with all the complexity…”
[20:57] There is no real complexity here. The complexity is somewhere else,
and at some other step we should take care of that. But this is an art piece –
it’s a well-established art piece. [21:11] If you make a Public Library, I’m
fine, I’m sacrificing myself by taking the responsibility. But you shouldn’t
melt down that art piece, I think. [21:26] And I feel super stupid that such a
simple concept should be, in 2013, articulated – to whom? In many ways it’s
like playing dummy; I play dummy. It’s like, why should I? [21:50] When we
started to play in Ljubljana, like software developers, we came up with so
many great ideas of how to use those resources. So it was immediate… just
after a couple of hours we had tools – visualisations, a Wikipedia reader
which can embed any page that is referred to, as a reference, a quote. [22:17]
It was immediately obvious to anyone there and to anyone from the outside what
a huge resource having a Public Library like that is – and what huge harm it
is that we don’t have it. [22:32] But still we need to play dummy; I need to
play the artist’s role, you know.


Sollfrank & Snelting
Performing Graphic Design Practice
2014


Femke Snelting
Performing Graphic Design Practice

Leipzig, 7 April 2014

[00:12]
What is Libre Graphics?

[00:16]
Libre Graphics is quite a large ecosystem of software tools and of people –
people that develop these tools, but also people that use them; and of
practices – how you then work with them, not just how you make things quickly
and in an impressive way, but also how these tools might change your practice
and the cultural artefacts that result from it. So it’s all these elements
coming together that we call Libre Graphics. [00:53] The term “Libre” is
chosen deliberately. It’s slightly more mysterious than the term “free”,
especially when it turns up in the English language. It sort of hints that
there’s something different, that something is done on purpose. [01:16] And it
is a group of people that are inspired by free software culture, by free
culture, by thinking about how to share their tools, their recipes and the
outcomes of all this. [01:31] So Libre Graphics is quite wild, it goes in many
directions, but it’s an interesting context to work in, one that for me has
been quite inspiring for a few years now.

[01:46]
The context of Libre Graphics

[01:50]
The context of Libre Graphics is multiple. I think that’s part of why I’m
excited about it, and also part of why it’s sometimes difficult to describe in
a short sentence. [02:04] The context is design – people that are interested
in design, in creating visuals, in creating animations, videos, typography.
And that is already a multiple context, because each of these disciplines has
its own histories, and its own sorts of people that get touched by it. [02:23]
Then there is software – people that are interested in the digital material,
so, let’s say, excited about raw bits and the way a vector gets produced. That
is a very, almost formal interest in how graphics are made. [02:47] Then there
are people that make software, who are interested in programming, in
programming languages, in thinking about interfaces and about ways software
can become a tool. And then there are people that are interested in free
software – how can you make digital tools that can be shared, but also how can
you produce processes that can be shared. [03:11] So there you have everyone
from free software activists to people interested in developing specific tools
for sharing design and software development processes, like Git or [Apache]
Subversion, or those kinds of things. So I think that multiple context is
really special and rich in Libre Graphics.

[03:34]
Free software culture

[03:38]
Free software culture… And I use the term culture because I’m more interested
in, let’s say, the cultural aspect of it, and this includes software – for me
software is a cultural object. I think it’s important to emphasise this,
because it’s easily turned into a very technocentric approach, which I think
is important to stay away from. [04:01] So free software culture is the
thinking that, when you develop technology – and I’m using technology in a
sense that is cultural as well, to me deeply cultural – you need to take care
of sharing the recipes for how this technology has been developed as well.
[04:28] And this produces many different other tools, ways of working, ways of
speaking, vocabularies, because it radically changes the way we make and the
way we produce hierarchies. [04:49] So it means, for example, if you produce a
graphic design artefact, that you share all the source files that were
necessary to make it. But you also share, as much as you can, descriptions and
narrations of how it came to be – which might include how much was paid for
it, what difficulties there were in negotiating with the printer, what
elements were included (because the graphic design object is usually a
compilation of different elements), what software was used to make it and
where it might have resisted. [05:34] So the consequence of taking free
software culture seriously in a graphic design or design context is that you
care about all these different layers of the work, all the different
conditions that actually make the work happen.

[05:50]
Free culture

[05:54]
The relationship of Libre Graphics to free culture is not always that
explicit. For some people it’s enough to work with tools that are released
under the GPL (GNU General Public License), or under an open content license,
and there it stops – their own work might even be released under proprietary
licenses. [06:18] For others it’s important to make the full circle and to
think about the legal status of the work they release. So that’s the more
general distinction. [06:34] Then, free culture – we can use that very
loosely, as in everything that circulates under conditions that allow it to be
reused and remade; that would be my position. But free culture, of course,
also refers to a very specific idea of how that would work, namely Creative
Commons. [06:56] For myself, Creative Commons is problematic, although I value
the fact that it exists and has really created a broader discussion around
licenses in creative practices – I value that. [07:11] But the distinction
Creative Commons makes, in almost all the licenses they promote, between
commercial and non-commercial work – and, as a consequence, between
professional and amateur work – I find very problematic, because I think one
of the most important elements of free software culture is the possibility for
people from different backgrounds, with different skill sets, to actually
engage the digital artefacts they are surrounded with. [07:47] This quite lazy
separation between commercial and non-commercial – which, especially in the
context of the web as it is right now, is not very easy to hold up – seems
really problematic, because it creates an illusion of clarity that actually
makes more trouble than clarity. [08:15] So I use free culture licenses, I use
licenses that are more explicit about the fact that anyone can use whatever I
produce, in any context, because I think that’s where the real power of free
software culture is. [08:31] What matters to me about free software licenses,
and all the licenses around them – because there are many different types, and
that’s interesting – is that they have a viral power built in. So if you apply
a free software license to, for example, a typeface, it means that someone
else, even someone you don’t know, has permission – and doesn’t have to ask
for permission – to reuse the typeface, to change it, to mix it with something
else, to distribute it and to sell it. [09:08] That part is already very
powerful. But the real secret of such a license is that once this person
re-releases the typeface, they need to keep the same license. So it propagates
across the network, and that is where it’s really powerful.

[09:31]
Free tools

[09:35]
It’s important to have tools that are released under conditions that allow me
to look further than their surface, for many reasons. There is an ethical
reason. It’s very problematic, I think – as a friend explained last week – to
feel like you are renting a room in a hotel, because that is often the way
practitioners nowadays relate to their tools: they have no right to remove the
furniture, no right to invite friends to their hotel room, they have to check
out at 11, etc. It’s a very sterile relationship to your tools. So that’s one
part. [10:24] The other is that there is little way of coming into contact
with the cultural aspects of the tools. Something that I suspected before I
started to use free software tools in my practice, but which has been
continuously exciting for almost ten years now, is everything else around
them: the way people organise themselves in conferences and mailing lists, the
kinds of communication that happen, the vocabularies, the histories, the
connections between different disciplines. [11:07] And all of that is
available to look at, to work with, to come into contact with – even to speak
to the people that make these tools and ask them: why is it like this and not
like that? And so to me it seems obvious that artists want to have that kind
of, let’s say, layered relation with their tools, and not just accept whatever
comes out of the next-door shop. [11:36] I have a very different, almost
physically different experience of these tools, because I can enter on many
levels. And that makes them part of my practice and not just a means to an
end – I really can take them into my practice, and that I find interesting as
an artist and as a designer.

[11:56] Artefacts

[12:00] The outcomes of this type of practice are different – or at least the
kind of work I make, or try to make, and that the people I like to work with
make. There’s obviously also a group of people that would like to do Hollywood
movies with those tools. And, you know, it’s kind of interesting too that that
happens. [12:21] For me, somehow, the technological context or conditions that
made the work possible will always show in the final result. So that’s one
part. [12:38] And the other is that the product, let’s say, is never the end.
Because the source materials are released, are made available in whatever way,
the product is always the beginning of another project or product, either by
me or by other people. [13:02] So I think those are two things that you can
always see in the kind of works we make when we do Libre Graphics – my style.

[13:15] Libre Fonts

[13:18] A very exciting part of Libre Graphics is the Libre Font movement,
which is strong, and has been strong for a long time. Fonts are the basic
building block of how a graphic comes to life. I mean, when you type
something, it’s there. [13:40] And the fact that that part of the work is free
is important on many levels. Something you often don’t think about when we
speak English and stay within a limited character set is that when you live
in, let’s say, India, the language you speak may not be available as a digital
typeface – meaning that when you want to produce a book with the tools that
are available, or publish it online, your language has no way of expressing
itself. [14:26] So it’s important, and it has to do with commercial interests,
laws, and the ways the technical infrastructure has been built. And once you
understand how important it is that you can express yourself in the language
and with the characters you need, it’s also obvious that that part needs to be
free. [14:53] Fonts are also interesting because they exist on many levels.
They exist on your system. They are almost software, because they are quite
complicated objects. They appear on your screen, when you print a document –
they are there all the time. [15:17] But at the same time it’s the alphabet.
We consider it a totally accessible, available and universal right to have the
alphabet at our disposal. [15:29] So I think, politically, and from an
interest in that kind of practice that is very technical but at the same time
also very basic – in the sense that it is about “freeing an A” – that’s quite
a beautiful energy. I think that has made the Libre Font movement very strong.

[15:55] Free artefacts / open standards

[15:59] It took me a while to figure this out myself: to me it was so obvious
that if you use free software you would produce free artefacts – it seems kind
of obvious, but that is not at all the case. [16:12] There is full-fledged
commercial production happening with these tools. But one thing that keeps the
results, the outcomes of these projects, freer than those of most commercial
tools is that there is a real emphasis on open document formats. [16:34] And
that is extremely important because, first of all, in this free software
thinking it’s very obvious that the documents you produce with a tool should
not belong to the software vendor – they are yours. [16:49] And to be able to
own your own documents you need to be able to inspect how they are produced. I
know many tragic stories of designers who, over several upgrades of “their”
tool set, lost documents because they could never open them again. [17:12] So
there’s really an emphasis, and a lot of work, on making sure that the
documents produced by these tools remain inspectable and are documented, so
that you can either open them in another tool, or develop a tool to open them
in – to keep these files available to you. [17:38] It’s really part and parcel
of free software culture that you care about what generates your artefact, but
also about the materiality of your artefact. And there, open standards are
extremely important – or, let’s say, the fact that file formats are documented
and can be understood. [18:04] And what’s interesting to see is that in this
whole Libre Graphics world there is also a very strong group of reverse
engineers – document format activists, I would say. [18:19] And I think that’s
really interesting. They say: documents need to be free, and so we would risk
breaking the law to be able to understand how non-free documents are actually
constructed. [18:37] So they are really working to understand non-free
documents, to be able to read them, and to develop tools for them, so that
they can be reused and remade. [18:54] The difference between a free and a
non-free document is that for an InDesign file, for example, which is the
output of a commercial product, there is no documentation available for how
the file works. [19:10] This means that the only way to open the file is with
that particular program. So there is a binding between what you’ve made and
the software you’ve used to produce it. [19:24] It also means that if the
software updates, or the license runs out, you will not have access to your
own file. It’s fixed: you can never change it, and you can never allow anyone
else to change it. [19:39] An open document format, by contrast, has
documentation. That means that not only is the software that created it
available – so you can understand how it was made – but there is also
independent documentation. [19:55] So whenever a piece of software doesn’t
work anymore, or is too old to be run, or you don’t have it available, you
have other ways of understanding the document and being able to open it, reuse
it and remake it. [20:11] Examples of open document formats are SVG (Scalable
Vector Graphics), ODT (Open Document Text), or OGG, a format for video that
allows you to look at all the elements that are packed into it. [20:31] What’s
important is that, around these open formats, a whole ecosystem of tools
exists to inspect, create, read, change and manipulate them. And I think it’s
very easy to see that around InDesign files this culture does not exist at
all.
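
To make the inspectability point concrete: an SVG file is plain XML, so any
standard library can write one and read it back without the program that
created it. A small Python illustration (the filename is arbitrary):

```python
# Write a minimal SVG (one red circle) and read it back with a generic XML
# parser: no proprietary tool is needed to see exactly what the file contains.
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

svg = ET.Element(f"{{{SVG_NS}}}svg", width="100", height="100")
ET.SubElement(svg, f"{{{SVG_NS}}}circle", cx="50", cy="50", r="40", fill="red")
ET.ElementTree(svg).write("circle.svg", encoding="utf-8", xml_declaration=True)

# Inspect: every element and attribute is visible as ordinary text.
for elem in ET.parse("circle.svg").iter():
    print(elem.tag, elem.attrib)
```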

[20:55] Getting started

[20:59] If you are interested in starting to use Libre Graphics, you can enter
at different levels. There are well-developed tools that look a bit like
commercial photo manipulation or layout tools. [21:19] There’s something
called Gimp, which is a well-developed software for treating photos. There’s
Blender, a fast-developing animation software that is being used by thousands
upon thousands of people, even in commercial productions, Pixar-style stuff.
[21:43] These tools can be installed on any system, so you don’t have to run a
Linux system to be able to use them. You can install them on a Macintosh or on
Windows, for example. Of course, they are usually more powerful when you run
them on a system that recognises that power.

[22:09] Sharing practice / re-learn

[22:14] This way of working changes the way you learn, and therefore also the
way you teach. And since many of us have understood the relation between
learning and practice, we’ve all been somehow involved in education; many of
us teach in formal design or art education. [22:43] And it’s very clear how
those traditional schools are really not fit for the type of learning and
teaching that needs to happen around Libre Graphics. [22:57] One of the
problems we run into is the fact that art academies are traditionally
organised, on many levels, so that the validation systems are geared towards
judging individuals. And our type of practice is always multiple – it’s always
about things that happen with many people. [23:17] And it’s really difficult
to inspire students to work that way while knowing that at the end of the day
they will be judged on their own, on what they produce as individuals. So
that’s one part. [23:31] In traditional education there’s always a separation
between teaching technology and teaching practice. So you have, let’s say, the
studio practice and then you have the workshops. And it’s very difficult to
make conceptual connections between the two – we end up trying to make that
happen, but the structure is clearly not made for it. [24:02] And then there
is the problem of the hierarchies between tutors and students, which are hard
to break in formal education, just because the set-up is – even in a very
informal situation – that someone comes to teach and someone else comes to be
taught. [24:28] And there’s no way to truly break that hierarchy, because
that’s the way the school works. So for a year now... Well, no, for years
we’ve been thinking about how to do teaching differently, or how to do
learning differently. [24:48] And last year for the first time we organised a
summer school, as a kind of experiment to see if we could learn and teach
differently. The name of the school is Relearn, because relearning – for
yourself, but also with others, through teaching-learning – has become a
really good methodology, it seems.

[25:15] Affiliations

[25:19] If I say “we”, that’s always a bit uncomfortable, because I like to be
clear about who that is, and when I’m speaking here there are many “we”s in my
mind. There’s a group of designers called OSP (Open Source Publishing). They
started in 2006 with the simple decision to not use any proprietary software
anymore for their work, and from that this whole set of questions, practices
and methods developed. [25:51] Right now that’s about twelve people working in
Brussels with a design practice. I’m lucky to be an honorary member of this
group, so I’m in close contact with them, but I’m not actively working with
the design group. [26:11] Another, overlapping “we” is Constant, an
association for art and media active in Brussels since 1996, 1997 maybe. Our
interest is more in mixing copyleft thinking, free software thinking and
feminism. In many ways that intersects with OSP, though they might phrase it
differently. [26:42] Another “we” is the Libre Graphics community, which is an
even more uncomfortable “we” because it includes engineers that would like to
conquer the world, small hyper-intelligent developers that creep out of their
corner to talk about the very strange world they are creating, and
typographers that care about universal typefaces. [27:16] I mean, there are
many different people involved in that world. So I think, in this
conversation, the “we” is Constant, OSP and the Libre Graphics community,
whatever that is.

[27:29] Libre Graphics annual meeting, Leipzig 2014

[27:34] We worked on a Code of Conduct – which is something that seems to
appear more and more at free software and tech conferences; it comes a bit
from the U.S. context – where we have started to understand that the fact that
free software is free doesn’t mean that everyone feels welcome. [28:02] There
have long been, and still are, large problems with diversity in this
community. The excitement about freedom has led people to think that those who
were not there would probably not want to be there, and therefore had no role
to play there. [28:26] Think, for example, of the fact that there are not a
lot of women active in free software – a lot less than in proprietary
software, which is quite painful if you think about it. [28:41] That has to do
with cyclical effects: because women are not there, they would probably not be
interested; and because they are not interested, they might not be capable, or
feel capable, of being active; and so they feel they might not belong. So
that’s one part. [29:07] The other part is that there is a very brutal culture
of harassment, of racist and sexist language, of using imagery that is, let’s
say, unacceptable. And that needs to be dealt with. [29:26] Over the last two
years, I think, documents like the Code of Conduct have started to come out of
the feminists active in this world, like Geek Feminism or the Ada Initiative,
as a way to deal with this. What such a document does is describe your values –
which is, let’s say, slightly pompous. [29:56] But it is a way to acknowledge,
first, the fact that these communities have a problem with harassment; to
explicitly say, we want diversity, which is important; and to give very clear
and practical guidelines for what someone who feels harassed can do, whom he
or she can speak to, and what the consequences will be. [30:31] Meaning that
it takes away the burden, at least as much as possible, from someone who is
harassed of having to defend the gravity of the case.

[30:43] Art as integrative concept

[30:47] For me, calling myself an artist is useful, very useful. I’m not so
busy, let’s say, with the institutional art context – that doesn’t help me at
all. [31:03] But what does help me is the figure of the artist, the kinds of
intelligences that I project on myself and that I use from others, from my
colleagues (past and contemporary), because it allows me to define my own
context and concepts without forgetting practice. [31:37] And I think art is
one of the rare places that allows this. Not only does it allow it, it
actually rigorously asks for it. It really wants me to be explicit about my
historical connections, my way of making, my references, my choices, which are
part of the situation I build. [32:11] So the figure of the artist is a very
useful toolbox in itself. And I think I use it more than I would have thought,
because it allows me to make these cross-connections in a productive way.



1. [Preface to the English Edition](#fpref)
2. [Acknowledgments](#ack)
3. [Introduction: After the End of the Gutenberg Galaxy](#cintro)
1. [Notes](#f6-ntgp-9999)
4. [I: Evolution](#c1)
1. [The Expansion of the Social Basis of Culture](#c1-sec-0002)
2. [The Culturalization of the World](#c1-sec-0006)
3. [The Technologization of Culture](#c1-sec-0009)
4. [From the Margins to the Center of Society](#c1-sec-0013)
5. [Notes](#c1-ntgp-9999)
5. [II: Forms](#c2)
1. [Referentiality](#c2-sec-0002)
2. [Communality](#c2-sec-0009)
3. [Algorithmicity](#c2-sec-0018)
4. [Notes](#c2-ntgp-9999)
6. [III: Politics](#c3)
1. [Post-democracy](#c3-sec-0002)
2. [Commons](#c3-sec-0011)
3. [Against a Lack of Alternatives](#c3-sec-0017)
4. [Notes](#c3-ntgp-9999)

[Preface to the English Edition]{.chapterTitle} {#fpref}
  • ::: {.section}
    This book posits that we in the societies of the (transatlantic) West
    find ourselves in a new condition. I call it "the digital condition"
    because it gained its dominance as computer networks became established
    as the key infrastructure for virtually all aspects of life. However,
    the emergence of this condition pre-dates computer networks. In fact, it
    has deep historical roots, some of which go back to the late nineteenth
    century, but it really came into being after the late 1960s. As many of
    the cultural and political institutions shaped by the previous condition
    -- which McLuhan called the Gutenberg Galaxy -- fell into crisis, new
    forms of personal and collective orientation and organization emerged
    which have been shaped by the affordances of this new condition. Both
    the historical processes which unfolded over a very long time and the
    structural transformation which took place in a myriad of contexts have
    been beyond any deliberate influence. Although obviously caused by
    social actors, the magnitude of such changes was simply too great, too
    distributed, and too complex to be attributed to, or molded by, any
    particular (set of) actor(s).

    Yet -- and this is the core of what motivated me to write this book --
    this does not mean that we have somehow moved beyond the political,
    beyond the realm in which identifiable actors and their projects do
    indeed shape our collective existence, or that there are no
    alternatives to future
    development already expressed within contemporary dynamics. On the
    contrary, we can see very clearly that as the center -- the established
    institutions shaped by the affordances of the previous condition -- is
    crumbling, more economic and political projects are rushing in to fill
    that void with new institutions that advance their competing agendas.
    These new institutions are well adapted to the digital condition, with
    its chaotic production of vast amounts of information and innovative
    ways of dealing with that.

    From this, two competing trajectories have emerged which are
    simultaneously transforming the space of the political. First, I used
    the term "post-democracy" because it expands possibilities, and even
    requirements, of (personal) participation, while ever larger aspects of
    (collective) decision-making are moved to arenas that are structurally
    disconnected from those of participation. In effect, these arenas are
    forming an authoritarian reality in which a small elite is vastly
    empowered at the expense of everyone else. The purest incarnation of
    this tendency can be seen in the commercial social mass media, such as
    Facebook, Google, and the others, as they were newly formed in this
    condition and have not (yet) had to deal with the complications of
    transforming their own legacy.

    For the other trajectory, I applied the term "commons" because it
    expands both the possibilities of personal participation and agency, and
    those of collective decision-making. This tendency points to a
    redefinition of democracy beyond the hollowed-out forms of political
    representation characterizing the legacy institutions of liberal
    democracy. The purest incarnation of this tendency can be found in the
    institutions that produce the digital commons, such as Wikipedia and the
    various Free Software communities whose work has been and still is
    absolutely crucial for the infrastructural dimensions of the digital
    networks. They are the most advanced because, again, they have not had
    to deal with institutional legacies. But both tendencies are no longer
    confined to digital networks and are spreading across all aspects of
    social life, creating a reality that is, on the structural level,
    surprisingly coherent and, on the social and political level, full of
    contradictions and thus opportunities.

    I traced some aspects of these developments right up to early 2016, when
    the German version of this book went into production. Since then a lot
    has happened, but I resisted the temptation to update the book for the
    English translation because ideas are always an expression of their
    historical moment and, as such, updating either turns into a completely
    new version or a retrospective adjustment of the historical record.

    What has become increasingly obvious during 2016 and into 2017 is that
    central institutions of liberal democracy are crumbling more quickly and
    dramatically than was expected. The race to replace them has kicked into
    high gear. The main events driving forward an authoritarian renewal of
    politics took place on a national level, in particular the vote by the
    UK to leave the EU (Brexit) and the election of Donald Trump to the
    office of president of the United States of America. The main events
    driving the renewal of democracy took place on a metropolitan level,
    namely the emergence of a network of "rebel cities," led by Barcelona
    and Madrid. There, community-based social movements established their
    candidates in the highest offices. These cities are now putting in place
    practical examples that other cities could emulate and adapt. For the
    concerns of this book, the most important concept put forward is that of
    "technological sovereignty": to bring the technological infrastructure,
    and its developmental potential, back under the control of those who are
    using it and are affected by it; that is, the citizens of the
    metropolis.

    Over the last 18 months, the imbalances between the two trajectories
    have become even more extreme because authoritarian tendencies and
    surveillance capitalism have been strengthened more quickly than the
    commons-oriented practices could establish themselves. But it does not
    change the fact that there are fundamental alternatives embedded in the
    digital condition. Despite structural transformations that affect how we
    do things, there is no inevitability about what we want to do
    individually and, even more importantly, collectively.

    ::: {.poem}
    ::: {.lineGroup}
    Zurich/Vienna, July 2017
    :::
    :::
    :::

    [Acknowledgments]{.chapterTitle} {#ack}
  • ::: {.section}
    While it may be conventional to cite one person as the author of a book,
    writing is a process with many collective elements. This book in
    particular draws upon many sources, most of which I am no longer able to
    acknowledge with any certainty. Far too often, important references came
    to me in parenthetical remarks, in fleeting encounters, during trips, at
    the fringes of conferences, or through discussions of things that,
    though entirely new to me, were so obvious to others as not to warrant
    any explication. Often, too, my thinking was influenced by long
    conversations, and it is impossible for me now to identify the precise
    moments of inspiration. As far as the themes of this book are concerned,
    four settings were especially important. The international discourse
    network "nettime," which has a mailing list of 4,500 members and which I
    have been moderating since the late 1990s, represents an inexhaustible
    source of internet criticism and, as a collaborative filter, has enabled
    me to follow a wide range of developments from a particular point of
    view. I am also indebted to the Zurich University of the Arts, where I
    have taught for more than 10 years and where the students have been
    willing to explain to me, again and again, what is already self-evident
    to them. Throughout my time there, I have been able to observe a
    dramatic shift. For today's students, the "new" is no longer new but
    simply obvious, whereas they have
    experienced many things previously regarded as normal -- such as
    checking out a book from a library (instead of downloading it) -- as
    needlessly complicated. In Vienna, the hub of my life, the World
    Information Institute has for many years provided a platform for
    conferences, publications, and interventions that have repeatedly raised
    the stakes of the discussion and have brought together the most
    interesting range of positions without regard to any disciplinary
    boundaries. Housed in Vienna, too, is the Technopolitics Project, a
    non-institutionalized circle of researchers and artists whose
    discussions of techno-economic paradigms have informed this book in
    fundamental ways and which has offered multiple opportunities for me to
    workshop inchoate ideas.

    Not everything, however, takes place in diffuse conversations and
    networks. I was also able to rely on the generous support of several
    individuals who, at one stage or another, read through, commented upon,
    and made crucial improvements to the manuscript: Leonhard Dobusch,
    Günther Hack, Katja Meier, Florian Cramer, Cornelia Sollfrank, Beat
    Brogle, Volker Grassmuck, Ursula Stalder, Klaus Schönberger, Konrad
    Becker, Armin Medosch, Axel Stockburger, and Gerald Nestler. Special
    thanks are owed to Rebina Erben-Hartig, who edited the original German
    manuscript and greatly improved its readability. I am likewise grateful
    to Heinrich Greiselberger and Christian Heilbronn of the Suhrkamp
    Verlag, whose faith in the book never wavered despite several delays.
    Regarding the English version at hand, it has been a privilege to work
    with a translator as skillful as Valentine Pakis. Over the past few
    years, writing this book might have been the most important project in
    my life had it not been for Andrea Mayr. In this regard, I have been
    especially fortunate.
    :::

    Introduction [After the End of the Gutenberg Galaxy]{.chapterTitle} []{.chapterSubTitle} {#cintro}

    ::: {.section}
    The show had already been going on for more than three hours, but nobody
    was bothered by this. Quite the contrary. The tension in the venue was
    approaching its peak, and the ratings were through the roof. Throughout
    all of Europe, 195 million people were watching the spectacle on
    television, and the social mass media were gaining steam. On Twitter,
    more than 47,000 messages were being sent every minute with the hashtag
    \#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
    decided shortly after midnight: Conchita Wurst, the bearded diva, was
    announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
    as the public celebrated the victor -- but also itself. At long last,
    there was more to the event than just another round of tacky television
    programming ("This is Ljubljana calling!"). Rather, a statement was made
    -- a statement in favor of tolerance and against homophobia, for
    diversity and for the right to define oneself however one pleases. And
    Europe sent this message in the midst of a crisis and despite ongoing
    hostilities, not to mention all of the toxic rumblings that could be
    heard about decadence, cultural decay, and Gayropa. Visibly moved, the
    Austrian singer let out an exclamation -- "We are unity, and we are
    unstoppable!" -- as she returned to the stage with wobbly knees to
    accept the trophy.

    With her aesthetically convincing performance, Conchita succeeded in
    unleashing a strong desire for personal self-discovery, for community,
    and for overcoming stale
    conventions. And she did this through a character that mainstream
    society would have considered paradoxical and deviant not long ago but
    has since come to understand: attractive beyond the dichotomy of man and
    woman, explicitly artificial and yet entirely authentic. This peculiar
    conflation of artificiality and naturalness is equally present in
    Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
    2010) on the cover of this book. Conchita\'s performance was also on a
    formal level seemingly paradoxical: extremely focused and completely
    open. Unlike most of the other acts, she took the stage alone, and
    though she hardly moved at all, she nevertheless incited the audience to
    participate in numerous ways and genuinely to act out the motto of the
    contest ("Join us!"). Throughout the early rounds of the competition,
    the beard, which was at first so provocative, transformed into a
    free-floating symbol that the public began to appropriate in various
    ways. Men and women painted Conchita-like beards on their faces,
    newspapers printed beards to be cut out, and fans crocheted beards. Not
    only did someone Photoshop a beard on to a painting of Empress Sissi of
    Austria, but King Willem-Alexander of the Netherlands even tweeted a
    deceptively realistic portrait of his wife, Queen Máxima, wearing a
    beard. From one of the biggest stages of all, the evening of Wurst\'s
    victory conveyed an impression of how much the culture of Europe had
    changed in recent years, both in terms of its content and its forms.
    That which had long been restricted to subcultural niches -- the
    fluidity of gender identities, appropriation as a cultural technique,
    or the conflation of reception and production, for instance -- was now
    part of the mainstream. Even while sitting in front of the television,
    this mainstream was no longer just a private audience but rather a
    multitude of singular producers whose networked activity -- on location
    or on social mass media -- lent particular significance to the occasion
    as a moment of collective self-perception.

    It is more than half a century since Marshall McLuhan announced the end
    of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
    in honor of the print medium by which it was so influenced. What was
    once just an abstract speculation of media theory, however, now
    describes the concrete reality of
    our everyday life. What\'s more, we have moved well past McLuhan\'s
    diagnosis: the erosion of old cultural forms, institutions, and
    certainties is not just something we affirm, but new ones have already
    formed whose contours are easy to identify not only in niche sectors but
    in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
    expanded the gender-identity options for its billion-plus users from 2
    to 60. In addition to "male" and "female," users of the English version
    of the site can now choose from among the following categories:

    ::: {.extract}
    Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
    Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
    Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
    Female to Male Trans Man, Female to Male Transgender Man, Female to Male
    Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
    Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
    Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
    (MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
    Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
    Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
    Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
    Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
    Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
    Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
    Two-Spirit, Two-Spirit Person.
    :::

    This enormous proliferation of cultural possibilities is an expression
    of what I will refer to below as the digital condition. Far from being
    universally welcomed, its growing presence has also instigated waves of
    nostalgia, diffuse resentments, and intellectual panic. Conservative and
    reactionary movements, which oppose such developments and desire to
    preserve or even re-create previous conditions, have been on the rise.
    Likewise in 2014, for instance, a cultural dispute broke out in normally
    subdued Baden-Württemberg over which forms of sexual partnership should
    be mentioned positively in the sexual education curriculum. Its impetus
    was a working paper released at the end of 2013 by the state's
    Ministry of Culture. Among other
    things, it proposed that adolescents "should confront their own sexual
    identity and orientation \[...\] from a position of acceptance with
    respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
    short period of time, a campaign organized mainly through social mass
    media collected more than 200,000 signatures in opposition to the
    proposal and submitted them to the petitions committee at the state
    parliament. At that point, the government responded by putting the
    initiative on ice. However, according to the analysis presented in this
    book, leaving it on ice creates a precarious situation.

    The rise and spread of the digital condition is the result of a
    wide-ranging and irreversible cultural transformation, the beginnings of
    which can in part be traced back to the nineteenth century. Since the
    1960s, however, this shift has accelerated enormously and has
    encompassed increasingly broader spheres of social life. More and more
    people have been participating in cultural processes; larger and larger
    dimensions of existence have become battlegrounds for cultural disputes;
    and social activity has been intertwined with increasingly complex
    technologies, without which it would hardly be possible to conceive of
    these processes, let alone achieve them. The number of competing
    cultural projects, works, reference points, and reference systems has
    been growing rapidly. This, in turn, has caused an escalating crisis for
    the established forms and institutions of culture, which are poorly
    equipped to deal with such an inundation of new claims to meaning. Since
    roughly the year 2000, many previously independent developments have
    been consolidating, gaining strength and modifying themselves to form a
    new cultural constellation that encompasses broad segments of society --
    a new galaxy, as McLuhan might have
    said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
    easy to recognize the specific forms that characterize it as a whole and
    how these forms have contributed to new, contradictory and
    conflict-laden political dynamics.

    My argument, which is restricted to cultural developments in the
    (transatlantic) West, is divided into three chapters. In the first, I
    will outline the *historical* developments that have given rise to this
    quantitative and qualitative change and have led to the crisis faced by
    the institutions of the late phase of the Gutenberg Galaxy, which
    defined the last third of the
    twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
    the social basis of cultural processes will be traced back to changes in
    the labor market, to the self-empowerment of marginalized groups, and to
    the dissolution of centralized cultural geography. The broadening of
    cultural fields will be discussed in terms of the rise of design as a
    general creative discipline, and the growing significance of complex
    technologies -- as fundamental components of everyday life -- will be
    tracked from the beginnings of independent media up to the development
    of the internet as a mass medium. These processes, which at first
    unfolded on their own and may have been reversible on an individual
    basis, are integrated today and represent a socially dominant component
    of the coherent digital condition. From the perspective of cultural
    studies and media theory, the second chapter will delineate the already
    recognizable features of this new culture. Concerned above all with the
    analysis of forms, its focus is thus on the question of "how" cultural
    practices operate. It is only because specific forms of culture,
    exchange, and expression are prevalent across diverse varieties of
    content, social spheres, and locations that it is even possible to speak
    of the digital condition in the singular. Three examples of such forms
    stand out in particular. *Referentiality* -- that is, the use of
    existing cultural materials for one\'s own production -- is an essential
    feature of many methods for inscribing oneself into cultural processes.
    In the context of unmanageable masses of shifting and semantically open
    reference points, the act of selecting things and combining them has
    become fundamental to the production of meaning and the constitution of
    the self. The second feature that characterizes these processes is
    *communality*. It is only through a collectively shared frame of
    reference that meanings can be stabilized, possible courses of action
    can be determined, and resources can be made available. This has given
    rise to communal formations that generate self-referential worlds, which
    in turn modulate various dimensions of existence -- from aesthetic
    preferences to the methods of biological reproduction and the rhythms of
    space and time. In these worlds, the dynamics of network power have
    reconfigured notions of voluntary and involuntary behavior, autonomy,
    and coercion. The third feature of the new cultural landscape is its
    *algorithmicity*. It is characterized, in other []{#Page_5
    type="pagebreak" title="5"}words, by automated decision-making processes
    that reduce and give shape to the glut of information, by extracting
    information from the volume of data produced by machines. This extracted
    information is then accessible to human perception and can serve as the
    basis of singular and communal activity. Faced with the enormous amount
    of data generated by people and machines, we would be blind were it not
    for algorithms.
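
    The basic operation behind such algorithmic pre-selection can be
    illustrated with a deliberately minimal sketch. Everything in it is
    hypothetical -- the scoring values, the `select_for_attention`
    function, the cut-off of three items -- and it stands in for the far
    more complex, adaptive, and opaque ranking systems actually in use;
    but the underlying gesture is the same: a glut of machine-produced
    data is reduced to the few items that ever reach human perception.

    ```python
    # Minimal, hypothetical sketch of algorithmic pre-selection: reduce an
    # unmanageable stream of machine-produced items to the few a person sees.
    import heapq

    def select_for_attention(items, k=3):
        """Return the k highest-scoring items from a large stream."""
        # A static "score" field stands in for the adaptive, opaque
        # ranking models that real platforms actually use.
        return heapq.nlargest(k, items, key=lambda item: item["score"])

    stream = [
        {"id": "post-1", "score": 0.91},
        {"id": "post-2", "score": 0.12},
        {"id": "post-3", "score": 0.75},
        {"id": "post-4", "score": 0.33},
        {"id": "post-5", "score": 0.88},
    ]

    for item in select_for_attention(stream):
        print(item["id"])  # only these ever reach the user's screen
    ```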

    The third chapter will focus on *political dimensions*. These are the
    factors that enable the formal dimensions described in the preceding
    chapter to manifest themselves in the form of social, political, and
    economic projects. Whereas the first chapter is concerned with long-term
    and irreversible historical processes, and the second outlines the
    general cultural forms that emerged from these changes with a certain
    degree of inevitability, my concentration here will be on open-ended
    dynamics that can still be influenced. A contrast will be made between
    two political tendencies of the digital condition that are already quite
    advanced: *post-democracy* and *commons*. Both take full advantage of
    the possibilities that have arisen on account of structural changes and
    have advanced them even further, though in entirely different
    directions. "Post-democracy" refers to strategies that counteract the
    enormously expanded capacity for social communication by disconnecting
    the possibility to participate in things from the ability to make
    decisions about them. Everyone is allowed to voice his or her opinion,
    but decisions are ultimately made by a select few. Even though growing
    numbers of people can and must take responsibility for their own
    activity, they are unable to influence the social conditions -- the
    social texture -- under which this activity has to take place. Social
    mass media such as Facebook and Google will receive particular attention
    as the most conspicuous manifestations of this tendency. Here, under new
    structural provisions, a new combination of behavior and thought has
    been implemented that promotes the normalization of post-democracy and
    contributes to its otherwise inexplicable acceptance in many areas of
    society. "Commons," on the contrary, denotes approaches for developing
    new and comprehensive institutions that not only directly combine
    participation and decision-making but also integrate economic, social,
    and ethical spheres -- spheres that Modernity has tended to keep
    apart.[]{#Page_6 type="pagebreak" title="6"}

    Post-democracy and commons can be understood as two lines of development
    that point beyond the current crisis of liberal democracy and represent
    new political projects. One can be characterized as an essentially
    authoritarian system, the other as a radical expansion and renewal of
    democracy, from the notion of representation to that of participation.

    Even though I have brought together a number of broad perspectives, I
    have refrained from discussing certain topics that a book entitled *The
    Digital Condition* might be expected to address, notably the matter of
    copyright. This is easy to explain. As regards the new
    forms at the heart of this book, none of these developments requires or
    justifies copyright law in its present form. In any case, my thoughts on
    the matter were published not long ago in another book, so there is no
    need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
    of privacy will also receive little attention. This is not because I
    share the view, held by proponents of "post-privacy," that it would be
    better for all personal information to be made available to everyone. On
    the contrary, this position strikes me as superficial and naïve. That
    said, the political function of privacy -- to safeguard a degree of
    personal autonomy from powerful institutions -- is based on fundamental
    concepts that, in light of the developments to be described below,
    urgently need to be updated. This is a task, however, that would take me
    far beyond the scope of the present
    book.[^6^](#f6-note-0006){#f6-note-0006a}

    Before moving on to the first chapter, I should first briefly explain my
    somewhat unorthodox understanding of the central concepts in the title
    of the book -- "condition" and "digital." In what follows, the term
    "condition" will be used to designate a cultural condition whereby the
    processes of social meaning -- that is, the normative dimension of
    existence -- are explicitly or implicitly negotiated and realized by
    means of singular and collective activity. Meaning, however, does not
    manifest itself in signs and symbols alone; rather, the practices that
    engender it and are inspired by it are consolidated into artifacts,
    institutions, and lifeworlds. In other words, far from being a symbolic
    accessory or mere overlay, culture in fact directs our actions and gives
    shape to society. By means of materialization and repetition, meaning --
    both as claim and as reality -- is made visible, productive, and
    negotiable. People are free to accept it, reject it, or ignore
    []{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
    that is, meaning shared by multiple people -- can only come about
    through processes of exchange within larger or smaller formations.
    Production and reception (to the extent that it makes any sense to
    distinguish between the two) do not proceed linearly here, but rather
    loop back and reciprocally influence one another. In such processes, the
    participants themselves determine, in a more or less binding manner, how
    they stand in relation to themselves, to each other, and to the world,
    and they determine the frame of reference in which their activity is
    oriented. Accordingly, culture is not something static or something that
    is possessed by a person or a group, but rather a field of dispute that
    is subject to multiple ongoing changes, each proceeding
    at its own pace. It is characterized by processes of dissolution and
    constitution that may be collaborative, oppositional, or simply
    operating side by side. The field of culture is pervaded by competing
    claims to power and mechanisms for exerting it. This leads to conflicts
    about which frames of reference should be adopted for different fields
    and within different social groups. In such conflicts,
    self-determination and external determination interact until a point is
    reached at which both sides are mutually constituted. This, in turn,
    changes the conditions that give rise to shared meaning and personal
    identity.

    In what follows, this broadly post-structuralist perspective will inform
    my discussion of the causes and formational conditions of cultural
    orders and their practices. Culture will be conceived throughout as
    something heterogeneous and hybrid. It draws from many sources; it is
    motivated by the widest possible variety of desires, intentions, and
    compulsions; and it mobilizes whatever resources might be necessary for
    the constitution of meaning. This emphasis on the materiality of culture
    is also reflected in the concept of the digital. Media are relational
    technologies, which means that they facilitate certain types of
    connection between humans and
    objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
    set of relations that, on the infrastructural basis of digital networks,
    is realized today in the production, use, and transformation of
    material and immaterial goods, and in the constitution and coordination
    of personal and collective activity. In this regard, the focus is less
    on the dominance of a certain class []{#Page_8 type="pagebreak"
    title="8"}of technological artifacts -- the computer, for instance --
    and even less on distinguishing between "digital" and "analog,"
    "material" and "immaterial." Even in the digital condition, the analog
    has not gone away. Rather, it has been re-evaluated and even partially
    upgraded. The immaterial, moreover, is never entirely without
    materiality. On the contrary, the fleeting impulses of digital
    communication depend on global and unmistakably material infrastructures
    that extend from mines beneath the surface of the earth, from which rare
    earth metals are extracted, all the way into outer space, where
    satellites circle above us. Such things may be ignored
    because they are outside the experience of everyday life, but that does
    not mean that they have disappeared or that they are of any less
    significance. "Digital" thus refers to historically new possibilities
    for constituting and connecting various human and non-human actors,
    which are not limited to digital media but rather appear everywhere as a
    relational paradigm that alters the realm of possibility for numerous
    materials and actors. My understanding of the digital thus approximates
    the concept of the "post-digital," which has been gaining currency over
    the past few years within critical media cultures. Here, too, the
    distinction between "new" and "old" media and all of the ideological
    baggage associated with it -- for instance, that the new represents the
    future while the old represents the past -- have been rejected. The
    aesthetic projects that continue to define the image of the "digital" --
    immateriality, perfection, and virtuality -- have likewise been
    discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
    "post-digital" is a critical response to this techno-utopian aesthetic
    and its attendant economic and political perspectives. According to the
    cultural theorist Florian Cramer, the concept accommodates the fact that
    "new ethical and cultural conventions which became mainstream with
    internet communities and open-source culture are being retroactively
    applied to the making of non-digital and post-digital media
    products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
    that process-based practices oriented toward open interaction, which
    first developed within digital media, have since begun to appear in more
    and more contexts and in an increasing number of
    materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9
    type="pagebreak" title="9"}

    For the historical, cultural-theoretical, and political perspectives
    developed in this book, however, the concept of the post-digital is
    somewhat problematic, for it requires the narrow context of media art
    and its fixation on technology in order to become a viable
    counter-position. Without this context, certain misunderstandings are
    impossible to avoid. The prefix "post-," for instance, is often
    interpreted in the sense that something is over or that we have at least
    grasped the matters at hand and can thus turn to something new. The
    opposite is true. The most enduringly relevant developments are only now
    beginning to adopt a specific form, long after digital infrastructures
    and the practices made popular by them have become part of our everyday
    lives. Or, as the communication theorist and consultant Clay Shirky puts
    it, "Communication tools don\'t get socially interesting until they get
    technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
    only today, now that our fascination for this technology has waned and
    its promises sound hollow, that culture and society are being defined by
    the digital condition in a comprehensive sense. Before, this was the
    case in just a few limited spheres. It is this hybridization and
    solidification of the digital -- the presence of the digital beyond
    digital media -- that lends the digital condition its dominance. As to
    the concrete realities in which these things will materialize, this is
    currently being decided in an open and ongoing process. The aim of this
    book is to contribute to our understanding of this process.[]{#Page_10
    type="pagebreak" title="10"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#f6-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#f6-note-0001a){#f6-note-0001}  Dan Biddle, "Five Million Tweets for
    \#Eurovision 2014," *Twitter UK* (May 11, 2014), online.

    [2](#f6-note-0002a){#f6-note-0002}  Ministerium für Kultus, Jugend und
    Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
    von Leitprinzipien," online \[--trans.\].

    [3](#f6-note-0003a){#f6-note-0003}  As early as 1995, Wolfgang Coy
    suggested that McLuhan\'s metaphor should be supplanted by the concept
    of the "Turing Galaxy," but this never caught on. See his introduction
    to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
    zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
    Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*,
    (Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
    type="pagebreak" title="176"}

    [4](#f6-note-0004a){#f6-note-0004}  According to the analysis of the
    Spanish sociologist Manuel Castells, this crisis began almost
    simultaneously in highly developed capitalist and socialist societies,
    and it did so for the same reason: the paradigm of "industrialism" had
    reached the limits of its productivity. Unlike the capitalist societies,
    which were flexible enough to tame the crisis and reorient their
    economies, the socialism of the 1970s and 1980s experienced stagnation
    until it ultimately, in a belated effort to reform, collapsed. See
    Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
    2010), pp. 5--68.

    [5](#f6-note-0005a){#f6-note-0005}  Felix Stalder, *Der Autor am Ende
    der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).

    [6](#f6-note-0006a){#f6-note-0006}  For my preliminary thoughts on this
    topic, see Felix Stalder, "Autonomy and Control in the Era of
    Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
    78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
    *Surveillance & Society* 1 (2002): 120--4. For a discussion of these
    approaches, see the working paper by Maja van der Velden, "Personal
    Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
    (2011), online.

    [7](#f6-note-0007a){#f6-note-0007}  Accordingly, the "new social" media
    are mass media in the sense that they influence broadly disseminated
    patterns of social relations and thus shape society as much as the
    traditional mass media had done before them.

    [8](#f6-note-0008a){#f6-note-0008}  Kim Cascone, "The Aesthetics of
    Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
    *Computer Music Journal* 24/2 (2000): 12--18.

    [9](#f6-note-0009a){#f6-note-0009}  Florian Cramer, "What Is
    'Post-Digital'?" *Post-Digital Research* 3 (2014), online.

    [10](#f6-note-0010a){#f6-note-0010}  In the field of visual arts,
    similar considerations have been made regarding "post-internet art." See
    Artie Vierkant, "The Image Object Post-Internet,"
    [jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
    Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
    Art Movement," *Artspace* (March 18, 2014), online.

    [11](#f6-note-0011a){#f6-note-0011}  Clay Shirky, *Here Comes Everybody:
    The Power of Organizing without Organizations* (New York: Penguin,
    2008), p. 105.
    :::
    :::

    [I]{.chapterNumber} [Evolution]{.chapterTitle} {#c1}
    ==================================================
    ::: {.section}
    Many authors have interpreted the new cultural realities that
    characterize our daily lives as a direct consequence of technological
    developments: the internet is to blame! This assumption is not only
    empirically untenable; it also leads to a problematic assessment of the
    current situation. Apparatuses are represented as "central actors," and
    this suggests that new technologies have suddenly revolutionized a
    situation that had previously been stable. Depending on one\'s point of
    view, this is then regarded as "a blessing or a
    curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
    however, reveals an entirely different picture. Established cultural
    practices and social institutions had already been witnessing the
    erosion of their self-evident justification and legitimacy, long before
    they were faced with new technologies and the corresponding demands
    these make on individuals. Moreover, the allegedly new types of
    coordination and cooperation are also not so new after all. Many of them
    have existed for a long time. At first most of them were totally
    separate from the technologies for which, later on, they would become
    relevant. It is only in retrospect that these developments can be
    identified as beginnings, and it can be seen that much of what we regard
    today as novel or revolutionary was in fact introduced at the margins of
    society, in cultural niches that were unnoticed by the dominant actors
    and institutions. The new technologies thus evolved against a
    []{#Page_11 type="pagebreak" title="11"}background of processes of
    societal transformation that were already under way. They could only
    have been developed once a vision of their potential had been
    formulated, and they could only have been disseminated where demand for
    them already existed. This demand was created by social, political, and
    economic crises, which were themselves initiated by changes that were
    already under way. The new technologies seemed to provide many differing
    and promising answers to the urgent questions that these crises had
    prompted. It was thus a combination of positive vision and pressure that
    motivated a great variety of actors to change, at times with
    considerable effort, the established processes, mature institutions, and
    their own behavior. They intended to appropriate, for their own
    projects, the various and partly contradictory possibilities that they
    saw in these new technologies. Only then did a new technological
    infrastructure arise.

    This, in turn, created the preconditions for previously independent
    developments to come together, strengthening one another and enabling
    them to spread beyond the contexts in which they had originated. Thus,
    they moved from the margins to the center of culture. And by
    intensifying the crisis of previously established cultural forms and
    institutions, they became dominant and established new forms and
    institutions of their own.
    :::

    ::: {.section}
    The Expansion of the Social Basis of Culture {#c1-sec-0002}
    --------------------------------------------

    Watching television discussions from the 1950s and 1960s today, one is
    struck not only by the billows of cigarette smoke in the studio but also
    by the homogeneous spectrum of participants. Usually, it was a group of
    white and heteronormatively behaving men speaking with one
    another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
    who held the important institutional positions in the centers of the
    West. As a rule, those involved were highly specialized representatives
    from the cultural, economic, scientific, and political spheres. Above
    all, they were legitimized to appear in public to articulate their
    opinions, which were to be regarded by others as relevant and worthy of
    discussion. They presided over the important debates of their time. With
    few exceptions, other actors and their deviant opinions -- there
    []{#Page_12 type="pagebreak" title="12"}has never been a time without
    them -- were either not taken seriously at all or were categorized as
    indecent, incompetent, perverse, irrelevant, backward, exotic, or
    idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
    the social basis of culture was beginning to expand, though the actors
    at the center of the discourse had failed to notice this. Communicative
    and cultural processes were gaining significance in more and more
    places, and excluded social groups were self-consciously developing
    their own language in order to intervene in the discourse. The rise of
    the knowledge economy, the increasingly loud critique of
    heteronormativity, and a fundamental cultural critique posed by
    post-colonialism enabled a greater number of people to participate in
    public discussions. In what follows, I will subject each of these three
    phenomena to closer examination. In order to do justice to their
    complexity, I will treat them on different levels: I will depict the
    rise of the knowledge economy as a structural change in labor; I will
    reconstruct the critique of heteronormativity by outlining the origins
    and transformations of the gay movement in West Germany; and I will
    discuss post-colonialism as a theory that introduced new concepts of
    cultural multiplicity and hybridization -- concepts that are now
    influencing the digital condition far beyond the limits of the
    post-colonial discourse, and often without any reference to this
    discourse at all.

    ::: {.section}
    ### The growth of the knowledge economy {#c1-sec-0003}

    At the beginning of the 1950s, the Austrian-American economist Fritz
    Machlup was immersed in his study of the political economy of
    monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
    concerned with patents and copyright law. In line with the neo-classical
    Austrian School, he considered both to be problematic (because
    state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
    longer he studied the monopoly of the patent system in particular, the
    more far-reaching its consequences seemed to him. He maintained that the
    patent system was intertwined with something that might be called the
    "economy of invention" -- ultimately, patentable insights had to be
    produced in the first place -- and that this was in turn part of a much
    larger economy of knowledge. The latter encompassed government agencies
    as well as institutions of education, research, and development
    []{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
    and certain corporate laboratories), which had been increasing steadily
    in number since Roosevelt\'s New Deal. Yet it also included the
    expanding media sector and those industries that were responsible for
    providing technical infrastructure. Machlup subsumed all of these
    institutions and sectors under the concept of the "knowledge economy," a
    term of his own invention. Their common feature was that essential
    aspects of their activities consisted in communicating things to other
    people ("telling anyone anything," as he put it). Thus, the employees
    were not only recipients of information or instructions; rather, in one
    way or another, they themselves communicated, be it merely as a
    secretary who typed up, edited, and forwarded a piece of shorthand
    dictation. In his book *The Production and Distribution of Knowledge in
    the United States*, published in 1962, Machlup gathered empirical
    material to demonstrate that the American economy had entered a new
    phase that was distinguished by the production, exchange, and
    application of abstract, codified
    knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
    longer entirely novel at the time, but it had never before been
    presented in such an empirically detailed and comprehensive
    manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
    economy surprised Machlup himself: in his book, he concluded that as
    much as 43 percent of all labor activity already took place in this
    sector. This high number came about because, until then, no one had put
    forward the idea of understanding such a variety of activities as a
    single unit.

    Machlup\'s categorization was indeed quite innovative, for the dynamics
    that propelled the sectors he associated with one another were not
    only very different; they had also originated as integral components
    in the development of the industrial production of goods. They were more of
    an extension of such production than a break with it. The production and
    circulation of goods had been expanding and accelerating as early as the
    nineteenth century, though at highly divergent rates from one region or
    sector to another. New markets were created in order to distribute goods
    that were being produced in greater numbers; new infrastructure for
    transportation and communication was established in order to serve these
    large markets, which were mostly in the form of national territories
    (including their colonies). This []{#Page_14 type="pagebreak"
    title="14"}enabled even larger factories to be built in order to
    exploit, to an even greater extent, the cost advantages of mass
    production. In order to control these complex processes, new professions
    arose with different types of competencies and working conditions. The
    office became a workplace for an increasing number of people -- men and
    women alike -- who, in one form or another, had something to do with
    information processing and communication. Yet all of this required not
    only new management techniques. Production and products also became more
    complex, so that entire corporate sectors had to be restructured.
    Whereas the first decisive inventions of the industrial era were still
    made by more or less educated tinkerers, during the last third of the
    nineteenth century, invention itself came to be institutionalized. In
    Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
    Siemens & Halske) exemplifies this transformation. Within 50 years, a
    company that began in a proverbial workshop in a Berlin backyard became
    a multinational high-tech corporation. It was in such corporate
    laboratories, which were established around the year 1900, that the
    "industrialization of invention" or the "scientification of industrial
    production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
    words, even the processes employed in factories and the goods that they
    produced became knowledge-intensive. Their invention, planning, and
    production required a steadily growing expansion of activities, which
    today we would refer to as research and development. The informatization
    of the economy -- the acceleration of mass production, the comprehensive
    application of scientific methods to the organization of labor, and the
    central role of research and development in industry -- was hastened
    enormously by a world war that was waged on an industrial scale to an
    extent that had never been seen before.

    Another important factor for the increasing significance of the
    knowledge economy was the development of the consumer society. Over the
    course of the last third of the nineteenth century, despite dramatic
    regional and social disparities, an increasing number of people profited
    from the economic growth that the Industrial Revolution had instigated.
    Wages increased and basic needs were largely met, so that a new social
    stratum arose, the middle class, which was able to spend part of its
    income on other things. But on what? First, []{#Page_15 type="pagebreak"
    title="15"}new needs had to be created. The more production capacities
    increased, the more they had to be rethought in terms of consumption.
    Thus, in yet another way, the economy became more knowledge-intensive.
    It was now necessary to become familiar with, understand, and stimulate
    the interests and preferences of consumers, in order to entice them to
    purchase products that they did not urgently need. This knowledge did
    little to enhance the material or logistical complexity of goods or
    their production; rather, it was reflected in the increasingly extensive
    communication about and through these goods. The beginnings of this
    development were captured by Émile Zola in his 1883 novel *The Ladies\'
    Paradise*, which was set in the new world of a semi-fictitious
    department store bearing that name. In its opening scene, the young
    protagonist Denise Baudu and her brother Jean, both of whom have just
    moved to Paris from a provincial town, encounter for the first time the
    artfully arranged women\'s clothing -- exhibited with all sorts of
    tricks involving lighting, mirrors, and mannequins -- in the window
    displays of the store. The sensuality of the staged goods is so
    overwhelming that both are struck dumb; Jean even
    "blushes."

    It was the economy of affects that brought blood to Jean\'s cheeks. At
    that time, strategies for attracting the attention of customers did not
    yet have a scientific and systematic basis. Just as the first inventions
    in the age of industrialization were made by amateurs, so too was the
    economy of affects developed intuitively and gradually rather than as a
    planned or conscious paradigm shift. That it was possible to induce and
    direct affects by means of targeted communication was the pioneering
    discovery of the Austrian-American Edward Bernays. During the 1920s, he
    combined the ideas of his uncle Sigmund Freud about unconscious
    motivations with the sociological research methods of opinion surveys to
    form a new discipline: market
    research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
    basis of a new field of activity, which he at first called "propaganda"
    but then later referred to as "public
    relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
    be it for economic or political ends, was now placed on a systematic
    foundation that came to distance itself more and more from the pure
    "conveyance of information." Communication became a strategic field for
    corporate and political disputes, and the mass media []{#Page_16
    type="pagebreak" title="16"}became their locus of negotiation. Between
    1880 and 1917, for instance, spending on commercial advertising in the
    United States increased by more than 800 percent, and the leading advertising
    firms, using the same techniques with which they attracted consumers to
    products, were successful in selling to the American public the idea of
    their nation entering World War I. Thus, a media industry in the modern
    sense was born, and it expanded along with the rapidly growing market
    for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

    In his studies of labor markets conducted at the beginning of the 1960s,
    Machlup brought these previously separate developments together and
    thus explained the existence of an already advanced knowledge economy in
    the United States. His arguments fell on extremely fertile soil, for an
    intellectual transformation had taken place in other areas of science as
    well. A few years earlier, for instance, cybernetics had given the
    concepts "information" and "communication" their first scientifically
    precise (if somewhat idiosyncratic) definitions and had assigned to them
    a position of central importance in all scientific disciplines, not to
    mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
    investigation seemed to confirm this in the case of the economy, given
    that the knowledge economy was primarily concerned with information and
    communication. Since then, numerous analyses, formulas, and slogans have
    repeated, modified, refined, and criticized the idea that the
    knowledge-based activities of the economy have become increasingly
    important. In the 1970s this discussion was associated above all with
    the notion of the "post-industrial
    society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
    idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
    and in the 1990s the debate revolved around the "network
    society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
    popular concepts. What these approaches have in common is that they each
    diagnose a comprehensive societal transformation that, as regards the
    creation of economic value or jobs, has shifted the balance from
    productive to communicative activities. Accordingly, they presuppose
    that we know how to distinguish the former from the latter. This is not
    unproblematic, however, because in practice the two are usually tightly
    intertwined. Moreover, whoever maintains that communicative activities
    have taken the place of industrial production in our society has adopted
    a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
    Factory jobs have not simply disappeared; they have just been partially
    relocated outside of Western economies. The assertion that communicative
    activities are somehow of "greater value" hardly chimes with the reality
    of today\'s new "service jobs," many of which pay no more than the
    minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
    sort, however, have done little to reduce the effectiveness of this
    analysis -- especially its political effectiveness -- for it does more
    than simply describe a condition. It also contains a set of political
    instructions that imply or directly demand that precisely those sectors
    should be promoted that it considers economically promising, and that
    society should be reorganized accordingly. Since the 1970s, there has
    thus been a feedback loop between scientific analysis and political
    agendas. More often than not, it is hardly possible to distinguish
    between the two. Especially in Britain and the United States, the
    economic transformation of the 1980s was imposed insistently and with
    political calculation (not least through the weakening of labor unions).

    There are, however, important differences between the developments of
    the so-called "post-industrial society" of the 1970s and those of the
    so-called "network society" of the 1990s, even if both terms are
    supposed to stress the increased significance of information, knowledge,
    and communication. With regard to the digital condition, the most
    important of these differences are the greater flexibility of economic
    activity in general and employment relations in particular, as well as
    the dismantling of social security systems. Neither phenomenon played
    much of a role in analyses of the early 1970s. The development since
    then can be traced back to two currents that could not seem more
    different from one another. At first, flexibility was demanded in the
    name of a critique of the value system imposed by bureaucratic-bourgeois
    society (including the traditional organization of the workforce). It
    originated in the new social movements that had formed in the late
    1960s. Later on, toward the end of the 1970s, it then became one of the
    central points of the neoliberal critique of the welfare state. With
    completely different motives, both sides sang the praises of autonomy
    and spontaneity while rejecting the disciplinary nature of hierarchical
    organization. They demanded individuality and diversity rather than
    conformity to prescribed roles. Experimentation, openness to []{#Page_18
    type="pagebreak" title="18"}new ideas, flexibility, and change were now
    established as fundamental values with positive connotations. Both
    movements operated with the attractive idea of personal freedom. The new
    social movements understood this in a social sense as the freedom of
    personal development and coexistence, whereas neoliberals understood it
    in an economic sense as the freedom of the market. In the 1980s, the
    neoliberal ideas prevailed in large part because some of the values,
    strategies, and methods propagated by the new social movements were
    removed from their political context and appropriated in order to
    breathe new life -- a "new spirit" -- into capitalism and thus to rescue
    industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
    An army of management consultants, restructuring experts, and new
    companies began to promote flat hierarchies, self-responsibility, and
    innovation; with these aims in mind, they set about reorganizing large
    corporations into small and flexible units. Labor and leisure were no
    longer supposed to be separated, for all aspects of a given person could
    be integrated into his or her work. In order to achieve economic success
    in this new capitalism, it became necessary for every individual to
    identify himself or herself with his or her profession. Large
    corporations were restructured in such a way that entire departments
    found themselves transformed into independent "profit centers." This
    happened in the name of creating more leeway for decision-making and of
    optimizing the entrepreneurial spirit on all levels, the goals being to
    increase value creation and to provide management with more fine-grained
    powers of intervention. These measures, in turn, created the need for
    computers and the need for them to be networked. Large corporations
    reacted in this way to the emergence of highly specialized small
    companies which, by networking and cooperating with other firms,
    succeeded in quickly and flexibly exploiting niches in the expanding
    global markets. In the management literature of the 1980s, the
    catchphrases for this were "company networks" and "flexible
    specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
    the 1990s, the sociologist Manuel Castells was able to conclude that the
    actual productive entity was no longer the individual company but rather
    the network consisting of companies and corporate divisions of various
    sizes. In Castells\'s estimation, the decisive advantage of the network
    is its ability to customize its elements and their configuration
    []{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
    requirements of the "project" at
    hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
    companies in their traditional forms came to function above all as
    strategic control centers and as economic and legal units.

    This economic structural transformation was already well under way when
    the internet emerged as a mass medium around the turn of the millennium.
    As a consequence, change became more radical and penetrated into an
    increasing number of areas of value creation. The political agenda
    oriented itself toward the vision of "creative industries," a concept
    developed in 1997 by the newly elected British government under Tony
    Blair. A Creative Industries Task Force was established right away, and
    its first step was to identify "those activities which have their
    origins in individual creativity, skill and talent and which have the
    potential for wealth and job creation through the generation and
    exploitation of intellectual
    property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
    the beginning of the 1960s, the task force brought together existing
    areas of activity into a new category. Such activities included
    advertising, computer games, architecture, music, arts and antique
    markets, publishing, design, software and computer services, fashion,
    television and radio, and film and video. These activities were elevated to
    matters of political importance on account of their potential to create
    wealth and jobs. Not least because of this clever presentation of
    categories -- no distinction was made between the BBC, an almighty
    public-service provider, and fledgling companies in precarious
    circumstances -- it was possible to proclaim not only that the creative
    industries were contributing a relevant portion of the nation\'s
    economic output, but also that this sector was growing at an especially
    fast rate. It was reported that, in London, the creative industries were
    already responsible for one out of every five new jobs. When compared
    with traditional terms of employment as regards income, benefits, and
    prospects for advancement, however, many of these positions entailed a
    considerable downgrade for the employees in question (who were now
    treated as independent contractors). This fact was either ignored or
    explicitly interpreted as a sign of the sector\'s particular
    dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
    new millennium, the idea that individual creativity plays a central role
    in the economy was given further traction by []{#Page_20
    type="pagebreak" title="20"}the sociologist and consultant Richard
    Florida, who argued that creativity was essential to the future of
    cities and even announced the rise of the "creative class." As to the
    preconditions that have to be met in order to tap into this source of
    wealth, he devised a simple formula that would be easy for municipal
    bureaucrats to understand: "technology, tolerance and talent." Talent,
    as defined by Florida, is based on individual creativity and education
    and manifests itself in the ability to generate new jobs. He was thus
    able to declare talent a central element of economic
    growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
    resources, what we need in addition to technology is, above all,
    tolerance; that is, "an open culture -- one that does not discriminate,
    does not force people into boxes, allows us to be ourselves, and
    validates various forms of family and of human
    identity."[^23^](#c1-note-0023){#c1-note-0023a}

    The idea that a public welfare state should ensure the social security
    of individuals was considered obsolete. Collective institutions, which
    could have provided a degree of stability for people\'s lifestyles, were
    dismissed or regarded as bureaucratic obstacles. The more or less
    directly evoked role model for all of this was the individual artist,
    who was understood as an individual entrepreneur, a sort of genius
    suitable for the masses. For Florida, a central problem was that,
    according to his own calculations, only about a third of the people
    living in North American and European cities were working in the
    "creative sector," while the innate creativity of everyone else was
    going to waste. Even today, the term "creative industry," along with the
    assumption that the internet will provide increased opportunities,
    serves to legitimize the effort to restructure all areas of the economy
    according to the needs of the knowledge economy and to privilege the
    network over the institution. In times of social cutbacks and empty
    public purses, especially in municipalities, this message was warmly
    received. One mayor, who as the first openly gay top politician in
    Germany exemplified tolerance for diverse lifestyles, even adopted the
    slogan "poor but sexy" for his city. Everyone was supposed to exploit
    his or her own creativity to discover new niches and opportunities for
    monetization -- a magic formula that was supposed to bring about a new
    urban revival. Today there is hardly a city in Europe that does not
    issue a report about its creative economy, []{#Page_21 type="pagebreak"
    title="21"}and nearly all of these reports cite, directly or indirectly,
    Richard Florida.

    As already seen in the context of the knowledge economy, so too in the
    case of creative industries do measurable social change, wishful
    thinking, and political agendas blend together in such a way that it is
    impossible to identify a single cause for the developments taking place.
    The consequences, however, are significant. Over the last two
    generations, the demands of the labor market have fundamentally changed.
    Higher education and the ability to acquire new knowledge independently
    are now, to an increasing extent, required and expected as
    qualifications and personal attributes. The desired or enforced ability
    to be flexible at work, the widespread cooperation across institutions,
    the uprooted nature of labor, and the erosion of collective models for
    social security have displaced many activities, which once took place
    within clearly defined institutional or personal limits, into a new
    interstitial space that is neither private nor public in the classical
    sense. This is the space of networks, communities, and informal
    cooperation -- the space of sharing and exchange that has since been
    enabled by the emergence of ubiquitous digital communication. It allows
    an increasing number of people, whether willingly or otherwise, to
    envision themselves as active producers of information, knowledge,
    capability, and meaning. And because it is associated in various ways
    with the space of market-based exchange and with the bourgeois political
    sphere, it has lasting effects on both. This interstitial space becomes
    all the more important as fewer people are willing or able to rely on
    traditional institutions for their economic security. For, within it,
    personal and digital-based networks can and must be developed as
    alternatives, regardless of whether they prove sustainable for the long
    term. As a result, more and more actors, each with their own claims to
    meaning, have been rushing away from the private personal sphere into
    this new interstitial space. By now, this has become such a normal
    practice that whoever is *not* active in this ever-expanding
    interstitial space, which is rapidly becoming the main social sphere --
    whoever, that is, lacks a publicly visible profile on social mass media
    like Facebook, or does not number among those producing information and
    meaning and is thus so inconspicuous online as []{#Page_22
    type="pagebreak" title="22"}to yield no search results -- now stands out
    in a negative light (or, in far fewer cases, acquires a certain prestige
    on account of this very absence).
    :::

    ::: {.section}
    ### The erosion of heteronormativity {#c1-sec-0004}

    In this (sometimes more, sometimes less) public space for the continuous
    production of social meaning (and its exploitation), there is no
    question that the professional middle class is
    over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
    short-sighted, however, to reduce those seeking autonomy and the
    recognition of individuality and social diversity to the role of poster
    children for the new spirit of
    capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
    movements, for instance, initiated a social shift that has allowed an
    increasing number of people to demand, if nothing else, the right to
    participate in social life in a self-determined manner; that is,
    according to their own standards and values.

    Especially effective was the critique of patriarchal and heteronormative
    power relations, modes of conduct, and
    identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
    political upheavals at the end of the 1960s, the new women\'s and gay
    movements developed into influential actors. Their greatest achievement
    was to establish alternative cultural forms, lifestyles, and strategies
    of action in or around the mainstream of society. How this was done can
    be demonstrated by tracing, for example, the development of the gay
    movement in West Germany.

    In the fall of 1969, the liberalization of Paragraph 175 of the German
    Criminal Code came into effect. From then on, sexual activity between
    adult men was no longer punishable by law (women were not mentioned in
    this context). For the first time, a man could now express himself as a
    homosexual outside of semi-private space without immediately being
    exposed to the risk of criminal prosecution. This was a necessary
    precondition for the ability to defend one\'s own rights. As early as
    1971, the struggle for the recognition of gay life experiences reached
    the broader public when Rosa von Praunheim\'s film *It Is Not the
    Homosexual Who Is Perverse, but the Society in Which He Lives* was
    screened at the Berlin International Film Festival and then, shortly
    thereafter, broadcast on public television in North Rhine-Westphalia.
    The film, which is firmly situated in the agitprop tradition,
    []{#Page_23 type="pagebreak" title="23"}follows a young provincial man
    through the various milieus of Berlin\'s gay subcultures: from a
    monogamous relationship to nightclubs and public bathrooms until, at the
    end, he is enlightened by a political group of men who explain that it
    is not possible to lead a free life in a niche, as his own emancipation
    can only be achieved by a transformation of society as a whole. The film
    closes with a not-so-subtle call to action: "Out of the closets, into
    the streets!" Von Praunheim understood this emancipation to be a process
    that encompassed all areas of life and had to be carried out in public;
    it could only achieve success, moreover, in solidarity with other
    freedom movements such as the Black Panthers in the United States and
    the new women\'s movement. The goal, according to this film, is to
    articulate one\'s own identity as a specific and differentiated identity
    with its own experiences, values, and reference systems, and to anchor
    this identity within a society that not only tolerates it but also
    recognizes it as having equal validity.

    At first, however, the film triggered vehement controversies, even
    within the gay scene. The objection was that it attacked the gay
    subculture, which was not yet prepared to defend itself publicly against
    discrimination. Despite or (more likely) because of these controversies,
    more than 50 groups of gay activists soon formed in Germany. Such
    groups, largely composed of left-wing alternative students, included,
    for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
    Zelle Schwul (RotZSchwul) in Frankfurt am
    Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
    was to have Paragraph 175 struck entirely from the legal code (which was
    not achieved until 1994). This cause was framed within a general
    struggle to overcome patriarchy and capitalism. At the earliest gay
    demonstrations in Germany, which took place in Münster in April 1972,
    protesters rallied behind the following slogan: "Brothers and sisters,
    gay or not, it is our duty to fight capitalism." This was understood as
    a necessary subordination to the greater struggle against what was known
    in the terminology of left-wing radical groups as the "main
    contradiction" of capitalism (that between capital and labor), and it
    led to strident differences within the gay movement. The dispute
    escalated during the next year. After the so-called *Tuntenstreit*, or
    "Battle of the Queens," which was []{#Page_24 type="pagebreak"
    title="24"}initiated by activists from Italy and France who had appeared
    in drag at the closing ceremony of the HAW\'s Spring Meeting in West
    Berlin, the gay movement was divided, or at least moving in a new
    direction. At the heart of the matter were the following questions: "Is
    there an inherent (many speak of an autonomous) position that gays hold
    with respect to the issue of homosexuality? Or can a position on
    homosexuality only be derived in association with the traditional
    workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
    words, was discrimination against homosexuality part of the social
    divide caused by capitalism (that is, one of its "ancillary
    contradictions") and thus only to be overcome by overcoming capitalism
    itself, or was it something unrelated to the "essence" of capitalism, an
    independent conflict requiring different strategies and methods? This
    conflict could never be fully resolved, but the second position, which
    was more interested in overcoming legal, social, and cultural
    discrimination than in struggling against economic exploitation, and
    which focused specifically on the social liberation of gays, proved to
    be far more dynamic in the long term. This was not least because both
    the old and new left were themselves not free of homophobia and because
    the entire radical student movement of the 1970s fell into crisis.

    Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
    realized through the efforts of artistic and (increasingly) commercial
    producers of images, texts, and
    sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
    intellectuals developed a language with which they could speak
    assertively in public about topics that had previously been taboo.
    Inspired by the expression "gay pride," which originated in the United
    States, they began to use the term *schwul* ("gay"), which until then
    had possessed negative connotations, with growing confidence. They
    founded numerous gay and lesbian cultural initiatives, theaters,
    publishing houses, magazines, bookstores, meeting places, and other
    associations in order to counter the misleading or (in their eyes)
    outright false representations of the mass media with their own
    multifarious media productions. In doing so, they typically followed a
    dual strategy: on the one hand, they wanted to create a space for the
    members of the movement in which it would be possible to formulate and
    live different identities; on the other hand, they were fighting to be
    accepted by society at large. While []{#Page_25 type="pagebreak"
    title="25"}a broader and broader spectrum of gay positions, experiences,
    and aesthetics was becoming visible to the public, the connection to
    left-wing radical contexts became weaker. Founded as early as 1974, and
    likewise in West Berlin, the General Homosexual Working Group
    (Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
    politics into mainstream society by redefining it -- on the basis
    of bourgeois, individual rights -- as a "politics of
    anti-discrimination." These efforts achieved a milestone in 1980 when,
    in the run-up to the parliamentary election, a podium discussion was
    held with representatives of all major political parties on the topic of
    the law governing sexual offences. The discussion took place in the
    Beethovenhalle in Bonn, which was the largest venue for political events
    in the former capital. Several participants considered the event to be a
    "disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
    of internal conflicts (not least that between revolutionary and
    integrative positions). Yet the fact remains that representatives were
    present from every political party, and this alone was indicative of an
    unprecedented amount of public awareness for those demanding equal
    rights.

    The struggle against discrimination and for social recognition reached
    an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
    the magazine *Der Spiegel* devoted its first cover story to the disease,
    thus bringing it to the awareness of the broader public. In the same
    year, the non-profit organization Deutsche Aids-Hilfe was founded to
    prevent further cases of discrimination, for *Der Spiegel* was not the
    only publication at the time to refer to AIDS as a "homosexual
    epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
    HIV/AIDS required a comprehensive mobilization. Funding had to be raised
    in order to deal with the social repercussions of the epidemic, to teach
    people about safe sexual practices for everyone and to direct research
    toward discovering causes and developing potential cures. The immediate
    threat that AIDS represented, especially while so little was known about
    the illness and its treatment remained a distant hope, created an
    impetus for mobilization that led to alliances between the gay movement,
    the healthcare system, and public authorities. Thus, the AIDS Inquiry
    Committee, sponsored by the conservative Christian Democratic Union,
    concluded in 1988 that, in the fight against the illness, "the
    homosexual subculture is []{#Page_26 type="pagebreak"
    title="26"}especially important. This informal structure should
    therefore neither be impeded nor repressed but rather, on the contrary,
    recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
    crisis proved to be a catalyst for advancing the integration of gays
    into society and for expanding what could be regarded as acceptable
    lifestyles, opinions, and cultural practices. As a consequence,
    homosexuals began to appear more frequently in the media, though their
    presence would never match that of heterosexuals. As of 1985, the
    television show *Lindenstraße* featured an openly gay protagonist, and
    the first kiss between men was aired in 1987. The episode still provoked
    a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
    second time -- but this was already a rearguard action and the
    integration of gays (and lesbians) into the social mainstream continued.
    In 1993, the first gay and lesbian city festival took place in Berlin,
    and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
    Cologne Pride Day involved 1.2 million participants and attendees, thus
    surpassing for the first time the attendance at the traditional Rose
    Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
    was already prepared to maintain: "To be homosexual has become
    increasingly normalized, even if homophobia lives on in the depths of
    the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
    normalization was also reflected in a study published by the Ministry of
    Justice in the year 2000, which stressed "the similarity between
    homosexual and heterosexual relationships" and, on this basis, made an
    argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
    Around the year 2000, however, the classical gay movement had already
    passed its peak. A profound transformation had begun to take place in
    the middle of the 1990s. It lost its character as a new social movement
    (in the style of the 1970s) and began to splinter inwardly and
    outwardly. One could say that it transformed from a mass movement into a
    multitude of variously networked communities. The clearest sign of this
    transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
    transgender), which, since the mid-1990s, has represented the internal
    heterogeneity of the movement as it has shifted toward becoming a
    network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
    radical actors were already speaking against the normalization of
    homosexuality. Queer theory, for example, was calling into question the
    "essentialist" definition of gender []{#Page_27 type="pagebreak"
    title="27"}-- that is, any definition reducing it to an immutable
    essence -- with respect to both its physical dimension (sex) and its
    social and cultural dimension (gender
    proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
    for the articulation of experiences, self-descriptions, and lifestyles
    that, on every level, are located beyond the classical attributions of
    men and women. A new generation of intellectuals, activists, and artists
    took the stage and developed -- yet again through acts of aesthetic
    self-empowerment -- a language that enabled them to import, with
    confidence, different self-definitions into the public sphere. An
    example of this is the adoption of inclusive plural forms in German
    (*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
    attention to the gaps and possibilities between male and female
    identities that are also expressed in the language itself. Just as with
    the terms "gay" or *schwul* some 30 years before, in this case, too, an
    important element was the confident and public adoption and semantic
    conversion of a formerly insulting word ("queer") by the very people and
    communities against whom it used to be
    directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
    these developments was the simultaneity of social (amateur) and
    artistic/scientific (professional) cultural production. The goal,
    however, was less to produce a clear antithesis than it was to oppose
    rigid attributions by underscoring mutability, hybridity, and
    uniqueness. Both the scope of what could be expressed in public and the
    circle of potential speakers expanded yet again. And, at least to some
    extent, the drag queen Conchita Wurst popularized complex gender
    constructions that went beyond the simple woman/man dualism. All of that
    said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
    lives on in the depths of the collective disposition" -- continued to
    hold true.

    If the gay movement is representative of the social liber­ation of the
    1970s and 1980s, then it is possible to regard its transformation into
    the LGBT movement during the 1990s -- with its multiplicity and fluidity
    of identity models and its stress on mutability and hybridity -- as a
    sign of the reinvention of this project within the context of an
    increasingly dominant digital condition. With this transformation,
    however, the diversification and fluidification of cultural practices
    and social roles have not yet come to an end. Ways of life that were
    initially subcultural and facing existential pressure []{#Page_28
    type="pagebreak" title="28"}are gradually entering the mainstream. They
    are expanding the range of readily available models of identity for
    anyone who might be interested, be it with respect to family forms
    (e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
    vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
    other principles of life and belief. All of them are seeking public
    recognition for a new frame of reference for social meaning that has
    originated from their own activity. This is necessarily a process
    characterized by conflicts and various degrees of resistance, including
    right-wing populism that seeks to defend "traditional values," but many
    of these movements will ultimately succeed in providing more people with
    the opportunity to speak in public, thus broadening the palette of
    themes that are considered to be important and legitimate.
    :::

    ::: {.section}
    ### Beyond center and periphery {#c1-sec-0005}

    In order to reach a better understanding of the complexity involved in
    the expanding social basis of cultural production, it is necessary to
    shift yet again to a different level. For, just as it would be myopic to
    examine the multiplication of cultural producers only in terms of
    professional knowledge workers from the middle class, it would likewise
    be insufficient to situate this multiplication exclusively in the
    centers of the West. The entire system of categories that justified the
    differentiation between the cultural "center" and the cultural
    "periphery" has begun to falter. This complex and multilayered process
    has been formulated and analyzed by the theory of "post-colonialism."
    Long before digital media made the challenge of cultural multiplicity a
    quotidian issue in the West, proponents of this theory had developed
    languages and terminologies for negotiating different positions without
    needing to impose a hierarchical order.

    Since the 1970s, the theoretical current of post-colonialism has been
    examining the cultural and epistemic dimensions of colonialism that,
    even after its end as a territorial system, have remained responsible
    for the continuation of dependent relations and power differentials. For
    my purposes -- which are to develop a European perspective on the
    factors ensuring that more and more people are able to participate in
    cultural []{#Page_29 type="pagebreak" title="29"}production -- two
    points are especially relevant because their effects reverberate in
    Europe itself. First is the deconstruction of the categories "West" (in
    the sense of the center) and "East" (in the sense of the periphery). And
    second is the focus on hybridity as a specific way for non-Western
    actors to deal with the dominant cultures of former colonial powers,
    which have continued to determine significant portions of globalized
    culture. The terms "West" and "East," "center" and "periphery," do not
    simply describe existing conditions; rather, they are categories that
    contribute, in an important way, to the creation of the very conditions
    that they presume to describe. This may sound somewhat circular, but it
    is precisely from this circularity that such cultural classifications
    derive their strength. The world that they illuminate is immersed in
    their own light. The category "East" -- or, to use the term of the
    literary theorist Edward Said,
    "orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
    representation that pervades Western thinking. Within this system,
    Europe or the West (as the center) and the East (as the periphery)
    represent asymmetrical and antithetical concepts. This construction
    achieves a dual effect. As a self-description, on the one hand, it
    contributes to the formation of our own identity, for Europeans
    attribute to themselves and to their continent such features as
    "rationality," "order," and "progress," while on the other hand
    identifying the alternative with "superstition," "chaos," or
    "stagnation." The East, moreover, is used as an exotic projection screen
    for our own suppressed desires. According to Said, a representational
    system of this sort can only take effect if it becomes "hegemonic"; that
    is, if it is perceived as self-evident and no longer as an act of
    attribution but rather as one of description, even and precisely by
    those against whom the system discriminates. Said\'s accomplishment is
    to have worked out how far-reaching this system was and, in many areas,
    it remains so today. It extended (and extends) from scientific
    disciplines, whose researchers discussed (until the 1980s) the theory of
    "oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
    and art -- the motif of the harem was especially popular, particularly
    in paintings of the late nineteenth
    century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
    culture, where, as of 1913 in the United States, the cigarette brand
    Camel (introduced to compete with the then-leading brand, Fatima) was
    meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
    sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
    system of representation, however, was more than a means of describing
    oneself and others; it also served to legitimize the allocation of all
    knowledge and agency to one side, that of the West. Such an order was
    not restricted to culture; it also created and legitimized a sense of
    domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
    This cultural legitimation, as Said points out, also persists after the
    end of formal colonial domination and continues to marginalize the
    postcolonial subjects. As before, they are unable to speak for
    themselves and therefore remain in the dependent periphery, which is
    defined by their subordinate position in relation to the center. Said
    directed the focus of critique to this arrangement of center and
    periphery, which he saw as being (re)produced and legitimized on the
    cultural level. From this arose the demand that everyone should have the
    right to speak, to place him- or herself in the center. To achieve this,
    it was necessary first of all to develop a language -- indeed, a
    cultural landscape -- that can manage without a hegemonic center and is
    thus oriented toward multiplicity instead of
    uniformity.[^43^](#c1-note-0043){#c1-note-0043a}

    A somewhat different approach has been taken by the literary theorist
    Homi K. Bhabha. He proceeds from the idea that the colonized never fully
    passively adopt the culture of the colonialists -- the "English book,"
    as he calls it. Their previous culture is never simply wiped out and
    replaced by another. What always and necessarily occurs is rather a
    process of hybridization. This concept, according to Bhabha,

    ::: {.extract}
    suggests that all of culture is constructed around negotiations and
    conflicts. Every cultural practice involves an attempt -- sometimes
    good, sometimes bad -- to establish authority. Even classical works of
    art, such as a painting by Brueghel or a composition by Beethoven, are
    concerned with the establishment of cultural authority. Now, this poses
    the following question: How does one function as a negotiator when
    one\'s own sense of agency is limited, for instance, on account of being
    excluded or oppressed? I think that, even in the role of the underdog,
    there are opportunities to upend the imposed cultural authorities -- to
    accept some aspects while rejecting others. It is in this way that
    symbols of authority are hybridized and made into something of one\'s
    own. For me, hybridization is not simply a mixture but rather a
    []{#Page_31 type="pagebreak" title="31"}strategic and selective
    appropriation of meanings; it is a way to create space for negotiators
    whose freedom and equality are
    endangered.[^44^](#c1-note-0044){#c1-note-0044a}
    :::

    Hybridization is thus a cultural strategy for evading marginality that
    is imposed from the outside: subjects, who from the dominant perspective
    are incapable of doing so, appropriate certain aspects of culture for
    themselves and transform them into something else. What is decisive is
    that this hybrid, created by means of active and unauthorized
    appropriation, opposes the dominant version and the resulting speech is
    thus legitimized from another -- that is, from one\'s own -- position.
    In this way, a cultural engagement is set under way and the superiority
    of one meaning or another is called into question. Who has the right to
    determine how and why a relationship with others should be entered,
    which resources should be appropriated from them, and how these
    resources should be used? At the heart of the matter lie the abilities
    of speech and interpretation; these can be seized in order to create
    space for a "cultural hybridity that entertains difference without an
    assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}

    At issue is thus a strategy for breaking down hegemonic cultural
    conditions, which distribute agency in a highly uneven manner, and for
    turning one\'s own cultural production -- which has been dismissed by
    cultural authorities as flawed, misconceived, or outright ignorant --
    into something negotiable and independently valuable. Bhabha is thus
    interested in fissures, differences, diversity, multiplicity, and
    processes of negotiation that generate something like shared meaning --
    culture, as he defines it -- instead of conceiving of it as something
    that precedes these processes and is threatened by them. Accordingly, he
    proceeds not from the idea of unity, which is threatened whenever
    "others" are empowered to speak and needs to be preserved, but rather
    from the irreducible multiplicity that, through laborious processes, can
    be brought into temporary and limited consensus. Bhabha\'s vision of
    culture is one without immutable authorities, interpretations, and
    truths. In theory, everything can be brought to the table. This is not a
    situation in which anything goes, yet the central meaning of
    negotiation, the contextuality of consensus, and the mutability of every
    frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
    which can be shared equally by everyone -- are always potentially
    negotiable.

    Post-colonialism draws attention to the "disruptive power of the
    excluded-included third," which becomes especially virulent when it
    "emerges in the middle of semantic
    structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
    this power reveals the increasing cultural independence of those
    formerly colonized, and it also transforms the cultural self-perception
    of the West, for, even in Western nations that were not significant
    colonial powers, there are multifaceted tensions between dominant
    cultures and those who are on the defensive against discrimination and
    attributions by others. Instead of relying on the old recipe of
    integration through assimilation (that is, the dissolution of the
    "other"), the right to self-determined difference is being called for
    more emphatically. In such a manner, collective identities, such as
    national identities, are freed from their questionable appeals to
    cultural homogeneity and essentiality, and reconceived in terms of the
    experience of immanent difference. Instead of one binding and
    unnegotiable frame of reference for everyone, which hierarchizes
    individual positions and makes them appear unified, a new order without
    such limitations needs to be established. Ultimately, the aim is to
    provide nothing less than an "alternative reading of
    modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
    the construction of the past and the modalities of the future. For
    European culture in particular, such a project is an immense challenge.

    Of course, these demands do not derive their everyday relevance
    primarily from theory but rather from the experiences of
    (de)colonization, migration, and globalization. Multifaceted as it is,
    however, the theory does provide forms and languages for articulating
    these phenomena, legitimizing new positions in public debates, and
    attacking persistent mechanisms of cultural marginalization. It helps to
    empower broader societal groups to become actively involved in cultural
    processes, namely people, such as migrants and their children, whose
    identity and experience are essentially shaped by non-Western cultures.
    The latter have been giving voice to their experiences more frequently
    and with greater confidence in all areas of public life, be it in
    politics, literature, music, or
    art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
    films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
    to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
    experience of immigration is represented as part of the German
    experience, have reached a wide public audience. In 2002, the group
    Kanak Attak organized a series of conferences with the telling motto *no
    integración*, and these did much to introduce postcolonial positions to
    the debates taking place in German-speaking
    countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
    politicians with "migration backgrounds" were considered to be competent
    in only one area, namely integration policy. This has since changed,
    though not entirely. In 2008, for instance, Cem Özdemir was elected
    co-chair of the Green Party and thus shares responsibility for all of
    its political positions. Developments of this sort have been enabled
    (and strengthened) by a shift in society\'s self-perception. In 2014,
    Cemile Giousouf, the integration commissioner for the conservative
    CDU/CSU alliance in the German Parliament, was able to make the
    following statement without inciting any controversy: "Over the past few
    years, Germany has become a modern land of
    immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
    proclamation. Not ten years earlier, her party colleague Norbert Lammert
    had expressed, in his function as parliamentary president, interest in
    reviving the debate about the term "leading culture." The increasingly
    well-educated migrants of the first, second, or third generation no
    longer accept the choice of being either marginalized as an exotic
    representative of the "other" or entirely assimilated. Rather, they are
    insisting on being able to introduce their specific experience as a
    constitutive contribution to the formation of the present -- in
    association and in conflict with other contributions, but at the same
    level and with the same legitimacy. It is no surprise that various forms
    of discrimination and violence against "foreigners" not only continue
    in everyday life but have also been increasing in reaction to this new
    situation. Ultimately, established claims to power are being called into
    question.

    To summarize, at least three secular historical tendencies or movements,
    some of which can be traced back to the late nineteenth century but each
    of which gained considerable momentum during the last third of the
    twentieth (the spread of the knowledge economy, the erosion of
    heteronormativity, and the focus of post-colonialism on cultural
    hybridity), have greatly expanded the sphere of those who actively
    negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
    large part, the patterns and cultural foundations of these processes
    developed long before the internet. Through the use of the internet, and
    through the experiences of dealing with it, they have encroached upon
    far greater portions of all societies.
    :::
    :::

    ::: {.section}
    The Culturalization of the World {#c1-sec-0006}
    --------------------------------

    The number of participants in cultural processes, however, is not the
    only thing that has increased. Parallel to that development, the field
    of the cultural has expanded as well -- that is, those areas of life
    that are not simply characterized by unalterable necessities, but rather
    contain or generate competing options and thus require conscious
    decisions.

    The term "culturalization of the economy" refers to the central position
    of knowledge-based, meaning-based, and affect-oriented processes in the
    creation of value. With the emergence of consumption as the driving
    force behind the production of goods and the concomitant necessity of
    having not only to satisfy existing demands but also to create new ones,
    the cultural and affective dimensions of the economy began to gain
    significance. I have already discussed the beginnings of product
    staging, advertising, and public relations. In addition to all of the
    continuities that remain with us from that time, it is also possible to
    point out a number of major changes that consumer society has undergone
    since the late 1960s. These changes can be delineated by examining the
    greater role played by design, which has been called the "core
    discipline of the creative
    economy."[^51^](#c1-note-0051){#c1-note-0051a}

    As a field of its own, design originated alongside industrialization,
    when, in collaborative processes, the activities of planning and
    designing were separated from those of carrying out
    production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
    modern era that designers consciously endeavored to seek new forms for
    the logic inherent to mass production. With the aim of economic
    efficiency, they intended their designs to optimize the clearly defined
    functions of anonymous and endlessly reproducible objects. At the end of
    the nineteenth century, the architect Louis Sullivan, whose buildings
    still distinguish the skyline of Chicago, condensed this new attitude
    into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
    follows function." Mies van der Rohe, working as an architect in Chicago
    in the middle of the twentieth century, supplemented this with a pithy
    and famous formulation of his own: "less is more." The rationality of
    design, in the sense of isolating and improving specific functions, and
    the economical use of resources were of chief importance to modern
    (industrial) designers. Even the ten design principles of Dieter Rams,
    who led the design division of the consumer products company Braun from
    1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
    Apple\'s chief design officer -- aimed to make products "usable,"
    "understandable," "honest," and "long-lasting." "Good design," according
    to his guiding principle, "is as little design as
    possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
    the technical and functional promised to solve problems for everyone in
    a long-term and binding manner, for the inherent material and design
    qual­ities of an object were supposed to make it independent from
    changing times and from the tastes of consumers.

    ::: {.section}
    ### Beyond the object {#c1-sec-0007}

    At the end of the 1960s, a new generation of designers rebelled against
    this industrial and instrumental rationality, which was now felt to be
    authoritarian, soulless, and reductionist. In the works associated with
    "anti-design" or "radical design," the objectives of the discipline were
    redefined and a new formal language was developed. In the place of
    technical and functional optimization, recombination -- ecological
    recycling or the postmodern interplay of forms -- emerged as a design
    method and aesthetic strategy. Moreover, the aspiration of design
    shifted from the individual object to its entire social and material
    environment. The processes of design and production, which had been
    closed off from one another and restricted to specialists, were opened
    up precisely to encourage the participation of non-designers, be it
    through interdisciplinary cooperation with other types of professions or
    through the empowerment of laymen. The objectives of design were
    radically expanded: rather than ending with the completion of an
    individual product, it was now supposed to engage with society. In the
    sense of cybernetics, this was regarded as a "system," controlled by
    feedback processes, []{#Page_36 type="pagebreak" title="36"}which
    connected social, technical, and biological dimensions to one
    another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
    new approach, was meant to be a "socially significant
    activity."[^55^](#c1-note-0055){#c1-note-0055a}

    Embedded in the social movements of the 1960s and 1970s, this new
    generation of designers was curious about the social and political
    potential of their discipline, and about possibilities for promoting
    flexibility and autonomy instead of rigid industrial efficiency. Design
    was no longer expected to solve problems once and for all, for such an
    idea did not correspond to the self-perception of an open and mutable
    society. Rather, it was expected to offer better opportunities for
    enabling people to react to continuously changing conditions. A radical
    proposal was developed by the Italian designer Enzo Mari, who in 1974
    published his handbook *Autoprogettazione* (Self-Design). It contained
    19 simple designs with which people could make, on their own,
    aesthetically and functionally sophisticated furniture out of pre-cut
    pieces of wood. In this case, the designs themselves were less important
    than the critique of conventional design as elitist and of consumer
    society as alienated and wasteful. Mari\'s aim was to reconceive the
    relations among designers, the manufacturing industry, and users.
    Increasingly, design came to be understood as a holistic and open
    process. Victor Papanek, the founder of ecological design, took things a
    step further. For him, design was "basic to all human activity. The
    planning and patterning of any act towards a desired, foreseeable end
    constitutes the design process. Any attempt to separate design, to make
    it a thing-by-itself, works counter to the inherent value of design as
    the primary underlying matrix of
    life."[^56^](#c1-note-0056){#c1-note-0056a}

    Potentially all aspects of life could therefore fall under the purview
    of design. This came about from the desire to oppose industrialism,
    which was blind to its catastrophic social and ecological consequences,
    with a new and comprehensive manner of seeing and acting that was
    unrestricted by economics.

    Toward the end of the 1970s, this expanded notion of design owed less
    and less to emancipatory social movements, and its socio-political goals
    began to fall by the wayside. Three fundamental patterns survived,
    however, which go beyond design and remain characteristic of the
    culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
    the discovery of the public as emancipated users and active
    participants; the use of appropriation, transformation, and
    recombination as methods for creating ever-new aesthetic
    differentiations; and, finally, the intention of shaping the lifeworld
    of the user.[^57^](#c1-note-0057){#c1-note-0057a}

    As these patterns became depoliticized and commercialized, the focus of
    designing the "lifeworld" shifted more and more toward designing the
    "experiential world." By the end of the 1990s, this had become so
    normalized that even management consultants could assert that
    "\[e\]xperiences represent an existing but previously unarticulated
    *genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
    possible to define the dimensions of the experiential world in various
    ways. For instance, it could be clearly delimited and product-oriented,
    like the flagship stores introduced by Nike in 1990, which, with their
    elaborate displays, were meant to turn shopping into an experience. This
    experience, as the company\'s executives hoped, radiated outward and
    influenced how the brand was perceived as a whole. The experiential
    world could also, however, be conceived in somewhat broader terms, for
    instance by designing entire institutions around the idea of creating a
    more attractive work environment and thereby increasing the commitment
    of employees. This approach is widespread today in creative industries
    and has become popularized through countless stories about ping-pong
    tables, gourmet cafeterias, and massage rooms in certain offices. In
    this case, the process of creativity is applied back to itself in order
    to systematize and optimize a given workplace\'s basis of operation. The
    development is comparable to the "invention of invention" that
    characterized industrial research around the end of the nineteenth
    century, though now the concept has been relocated to the field of
    knowledge production.

    Yet the "experiential world" can be expanded even further, for instance
    when entire cities attempt to make themselves attractive to
    international clientele and compete with others by building spectacular
    museums or sporting arenas. Displays in cities, as well as a few other
    central locations, are regularly constructed in order to produce a
    particular experience. This also entails, however, that certain forms of
    use that fail to fit the "urban
    script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
    or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
    is hardly a single area of life to []{#Page_38 type="pagebreak"
    title="38"}which the strategies and methods of design do not have
    access, and this access occurs at all levels. For some time, design has
    not been a purely visible matter, restricted to material objects; it
    rather forms and controls all of the senses. Cities, for example, have
    come to be understood increasingly as "sound spaces" and have
    accordingly been reconfigured with the goal of modulating their various
    noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
    just a matter of objects, processes, and experiences. By now, in the
    context of reproductive medicine, it has even been applied to the
    biological foundations of life ("designer babies"). I will revisit this
    topic below.
    :::

    ::: {.section}
    ### Culture everywhere {#c1-sec-0008}

    Of course, design is not the only field of culture that has imposed
    itself over society as a whole. A similar development has occurred in
    the field of advertising, which, since the 1970s, has been integrated
    into many more physical and social spaces and by now has a broad range
    of methods at its disposal. Advertising is no longer found simply on
    billboards or in display windows. In the form of "guerrilla marketing" or
    "product placement," it has penetrated every space and occupied every
    discourse -- by blending with political messages, for instance -- and
    can now even be spread, as "viral marketing," by the addressees of the
    advertisements themselves. Similar processes can be observed in the
    fields of art, fashion, music, theater, and sports. This has taken place
    perhaps most radically in the field of "gaming," which has drawn upon
    technical progress in the most direct possible manner and, with the
    spread of powerful computers and mobile applications, has left behind
    the confines of the traditional playing field. In alternate reality
    games, the realm of the virtual and fictitious has also been
    transcended, as physical spaces have been overlaid with their various
    scripts.[^62^](#c1-note-0062){#c1-note-0062a}

    This list could be extended, but the basic trend is clear enough,
    especially as the individual fields overlap and mutually influence one
    another. They are blending into a single interdependent field for
    generating social meaning in the form of economic activity. Moreover,
    through digitalization and networking, many new opportunities have
    arisen for large-scale involvement by the public in design processes.
    Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
    technologies and flexible production processes, today\'s users can
    personalize and create products to suit their wishes. Here, the spectrum
    extends from tiny batches of creative-industrial products all the way to
    global processes of "mass customization," in which factory-based mass
    production is combined with personalization. One of the first
    applications of this was introduced in 1999 when, through its website, a
    sporting-goods company allowed customers to design certain elements of a
    shoe by altering it within a set of guidelines. This was taken a step
    further by the idea of "user-centered innovation," which relies on the
    specific knowledge of users to enhance a product, with the additional
    hope of discovering unintended applications and transforming these into
    new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
    become possible for end users to take over the design process from the
    beginning, which has become considerably easier with the advent of
    specialized platforms for exchanging knowledge, alongside semi-automated
    production tools such as mechanical mills and 3D printers.
    Digitalization, which has allowed all content to be processed, and
    networking, which has created an endless amount of content ("raw
    material"), have turned appropriation and recombination into general
    methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
    This phenomenon will be examined more closely in the next chapter.

    Both the involvement of users in the production process and the methods
    of appropriation and recombination are extremely information-intensive
    and communication-intensive. Without the corresponding technological
    infrastructure, neither could be achieved efficiently or on a large
    scale. This was evident in the 1970s, when such approaches never made it
    beyond subcultures and conceptual studies. With today\'s search engines,
    every single user can trawl through an amount of information that, just
    a generation ago, would have been unmanageable even by professional
    archivists. A broad array of communication platforms (together with
    flexible production capacities and efficient logistics) not only weakens
    the contradiction between mass fabrication and personalization; it also
    allows users to network directly with one another in order to develop
    specialized knowledge together and thus to enable themselves to
    intervene directly in design processes, both as []{#Page_40
    type="pagebreak" title="40"}willing participants in and as critics of
    flexible global production processes.
    :::
    :::

    ::: {.section}
    The Technologization of Culture {#c1-sec-0009}
    -------------------------------

    That society is dependent on complex information technologies in order
    to organize its constitutive processes is, in itself, nothing new.
    Rather, this began as early as the late nineteenth century. It is
    directly correlated with the expansion and acceleration of the
    circulation of goods, which came about through industrialization. As the
    historian and sociologist James Beniger has noted, this led to a
    "control crisis," for administrative control centers were faced with the
    problem of losing sight of what was happening in their own factories,
    with their suppliers, and in the important markets of the time.
    Management was in a bind: decisions had to be made either on the basis
    of insufficient information or too late. The existing administrative and
    control mechanisms could no longer deal with the rapidly increasing
    complexity and time-sensitive nature of extensively organized production
    and distribution. The office became more important, and ever more people
    were needed there to fulfill a growing number of functions. Yet this was
    not enough for the crisis to subside. The old administrative methods,
    which involved manual information processing, simply could no longer
    keep up. The crisis reached its first dramatic peak in 1889 in the
    United States, with the realization that the census data from the year
    1880 had not yet been analyzed when the next census was already
    scheduled to take place during the subsequent year. In the same year,
    the Secretary of the Interior organized a conference to investigate
    faster methods of data processing. Two methods were tested: one sought
    to make manual processing more efficient, while the other promised
    greater efficiency by means of novel data-processing machines. The
    latter system emerged as the clear victor; developed by an engineer
    named Herman Hollerith, it mechanically processed and stored data on
    punch cards. The idea was based on Hollerith\'s observations of the
    coupling and decoupling of railroad cars, which he interpreted as
    modular units that could be combined in any desired order. The punch
    card transferred this approach to information []{#Page_41
    type="pagebreak" title="41"}management. Data were no longer stored in
    fixed, linear arrangements (tables and lists) but rather in small units
    (the punch cards) that, like railroad cars, could be combined in any
    given way. The increase in efficiency -- with respect to speed *and*
    flexibility -- was enormous, and nearly a hundred of Hollerith\'s
    machines were used by the Census
    Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
    in the history of information processing, with technical means no longer
    being used exclusively to store data, but to process data as well. This
    was the only way to avoid the impending crisis, ensuring that
    bureaucratic management could maintain centralized control. Hollerith\'s
    machines proved to be a resounding success and were implemented in many
    more branches of government and corporate administration, where
    data-intensive processes had increased so rapidly they could not have
    been managed without such machines. This growth was accompanied by that
    of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
    which, after a number of mergers, was renamed in 1924 as the
    International Business Machines Corporation (IBM). Throughout the
    following decades, dependence on information-processing machines only
    deepened. The growing number of social, commercial, and military
    processes could only be managed by means of information technology. This
    largely took place, however, outside of public view, namely in the
    specialized divisions of large government and private organizations.
    These were the only institutions in command of the necessary resources
    for operating the complex technical infrastructure -- so-called
    mainframe computers -- that was essential to automatic information
    processing.
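
    The passage describes, in effect, a shift in data structures: from a
    fixed, linear arrangement (the table or list) to modular records that
    can be recombined at will. A minimal sketch of this idea in Python --
    with invented sample data; the records, field names, and counts below
    are illustrations of mine, not figures from the census -- might look
    as follows:

    ``` {.python}
    # Each "punch card" is a self-contained record, so the same stock of
    # cards can be re-sorted, filtered, and tabulated in any order --
    # unlike a fixed, linear table, which supports only the queries it
    # was laid out for.
    from collections import Counter

    # Hypothetical census cards, one record per person.
    cards = [
        {"state": "NY", "age": 34, "occupation": "clerk"},
        {"state": "IL", "age": 51, "occupation": "machinist"},
        {"state": "NY", "age": 23, "occupation": "machinist"},
    ]

    # The same cards answer different questions without being re-recorded:
    # regroup them by state ...
    by_state = Counter(card["state"] for card in cards)

    # ... or filter and count them by occupation.
    machinists = sum(1 for card in cards if card["occupation"] == "machinist")

    print(by_state)    # Counter({'NY': 2, 'IL': 1})
    print(machinists)  # 2
    ```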

    ::: {.section}
    ### The independent media {#c1-sec-0010}

    As with so much else, this situation began to change in the 1960s. Mass
    media and information-processing technologies began to attract
    criticism, even though all of the involved subcultures, media activists,
    and hackers continued to act independently from one another until the
    1990s. The freedom-oriented social movements of the 1960s began to view
    the mass media as part of the political system against which they were
    struggling. The connections among the economy, politics, and the media
    were becoming more apparent, not []{#Page_42 type="pagebreak"
    title="42"}least because many mass media companies, especially those in
    Germany related to the Springer publishing house, were openly inimical
    to these social movements. Critical theories arose that, borrowing
    Louis Althusser\'s influential term, regarded the media as part of the
    "ideological state apparatus"; that is, as one of the authorities whose
    task is to influence people to accept social relations to such a degree
    that the "repressive state apparatuses" (the police, the military, etc.)
    form a constant background in everyday
    life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
    Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
    condition in which the governed are manipulated to form a cultural
    consensus with the ruling class; they accept the latter\'s
    presuppositions (and the politics which are thus justified) even though,
    by doing so, they are forced to suffer economic
    disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
    Situationists attributed to the media a central role in the new form of
    rule known as "the spectacle," the glittery surfaces and superficial
    manifestations of which served to conceal society\'s true
    relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
    aligned themselves with the critique of the "culture industry," which
    had been formulated by Max Horkheimer and Theodor W. Adorno at the
    beginning of the 1940s and had become a widely discussed key text by the
    1960s.

    Their differences aside, these perspectives were united in that they no
    longer understood the "public" as a neutral sphere, in which citizens
    could inform themselves freely and form their opinions, but rather as
    something that was created with specific intentions and consequences.
    From this grew an interest in "counter-publics"; that is, in forums
    where other actors could appear and negotiate theories of their own. The
    mass media thus became an important instrument for organizing the
    bourgeois--capitalist public, but they were also responsible for the
    development of alternatives. Media, according to one of the core ideas
    of these new approaches, are less a sphere in which an external reality
    is depicted; rather, they are themselves a constitutive element of
    reality.
    :::

    ::: {.section}
    ### Media as lifeworlds {#c1-sec-0011}

    Another branch of new media theories, that of Marshall McLuhan and the
    Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
    []{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
    different grounds. In 1964, McLuhan aroused a great deal of attention
    with his slogan "the medium is the message." He maintained that every
    medium of communication, by means of its media-specific characteristics,
    directly affected the consciousness, self-perception, and worldview of
    every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
    believed, happens independently of and in addition to whatever specific
    message a medium might be conveying. From this perspective, reality does
    not exist outside of media, given that media codetermine our personal
    relation to and behavior in the world. For McLuhan and the Toronto
    School, media were thus not channels for transporting content but rather
    the all-encompassing environments -- galaxies -- in which we live.

    Such ideas were circulating much earlier and were intensively developed
    by artists, many of whom were beginning to experiment with new
    electronic media. An important starting point in this regard was the
    1963 exhibit *Exposition of Music -- Electronic Television* by the
    Korean artist Nam June Paik, who was then collaborating with Karlheinz
    Stockhausen in Cologne. Among other things, Paik presented 12
    television sets, the screens of which were "distorted" by magnets. Here,
    however, "distorted" is a problematic term, for, as Paik explicitly
    noted, the electronic images were "a beautiful slap in the face of
    classic dualism in philosophy since the time of Plato. \[...\] Essence
    AND existence, essentia AND existentia. In the case of the electron,
    however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
    Paik no longer understood the electronic image on the television screen
    as a portrayal or representation of anything. Rather, it engendered in
    the moment of its appearance an autonomous reality beyond and
    independent of its representational function. A whole generation of
    artists began to explore forms of existence in electronic media, which
    they no longer understood as pure media of information. In his work
    *Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
    end of a corridor that was approximately 10 meters long but only 50
    centimeters wide. On the lower monitor ran a video showing the empty
    hallway. The upper monitor displayed an image captured by a camera
    installed at the entrance of the hall, about 3 meters high. If the
    viewer moved down the corridor toward the two []{#Page_44
    type="pagebreak" title="44"}monitors, he or she would thus be recorded
    by the latter camera. Yet the closer one came to the monitor, the
    farther one would be from the camera, so that one\'s image on the
    monitor would become smaller and smaller. Recorded from behind, viewers
    would thus watch themselves walking away from themselves. Surveillance
    by others, self-surveillance, recording, and disappearance were directly
    and intuitively connected with one another and thematized as fundamental
    issues of electronic media.

    Toward the end of the 1960s, the easier availability and mobility of
    analog electronic production technologies promoted the search for
    counter-publics and the exploration of media as comprehensive
    lifeworlds. In 1967, Sony introduced its first Portapak system: a
    battery-powered, self-contained recording system -- consisting of a
    camera, a cord, and a recorder -- with which it was possible to make
    (black-and-white) video recordings outside of a studio. Although the
    recording apparatus, which required additional devices for editing and
    projection, was offered at the relatively expensive price of \$1,500
    (which corresponds to about €8,000 today), it was still affordable for
    interested groups. Compared with the situation of traditional film
    cameras, these new cameras considerably lowered the initial hurdle for
    media production, for video tapes were not only much cheaper than film
    reels (and could be used for multiple recordings); they also made it
    possible to view recorded material immediately and on location. This
    enabled the production of works that were far more intuitive and
    spontaneous than earlier ones. The 1970s saw the formation of many video
    groups, media workshops, and other initiatives for the independent
    production of electronic media. Through their own distribution,
    festivals, and other channels, such groups created alternative public
    spheres. The latter became especially prominent in the United States
    where, at the end of the 1960s, the providers of cable networks were
    legally obligated to establish public-access channels, on which citizens
    were able to operate self-organized and non-commercial television
    programs. This gave rise to a considerable public-access movement there,
    which at one point extended across 4,000 cities and was responsible for
    producing programs from and for these different
    communities.[^72^](#c1-note-0072){#c1-note-0072a}[]{#Page_45
    type="pagebreak" title="45"}

    What these initiatives had in common, in Western Europe and the
    United States, was their attempt to close the gap between the
    consumption and production of media, to activate the public, and at
    least in part to experiment with the media themselves. Non-professional
    producers were empowered with the ability to control who told their
    stories and how this happened. Groups that previously had no access to
    the medial public sphere now had opportunities to represent themselves
    and their own interests. By working together on their own productions,
    such groups demystified the medium of television and simultaneously
    equipped it with a critical consciousness.

    Especially well received in Germany was the work of Hans Magnus
    Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
    radio theory) in favor of distinguishing between "repressive" and
    "emancipatory" uses of media. For him, the emancipatory potential of
    media lay in the fact that "every receiver is \[...\] a potential
    transmitter" that can participate "interactively" in "collective
    production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
    first German video group, Telewissen, debuted in public with a
    demonstration in downtown Darmstadt. In 1980, at the peak of the
    movement for independent video production, there were approximately a
    hundred such groups throughout (West) Germany. The lack of distribution
    channels, however, represented a nearly insuperable obstacle and ensured
    that many independent productions were seldom viewed outside of
    small-scale settings. Tapes had to be exchanged between groups through
    the mail, and they were mainly shown at gatherings and events, and in
    bars. The dynamic of alternative media shifted toward a small subculture
    (though one networked throughout all of Europe) of pirate radio and
    television broadcasters. At the beginning of the 1980s and in the space
    of Radio Dreyeckland in Freiburg, which had been founded in 1977 as
    Radio Verte Fessenheim, operations began at Germany\'s first pirate or
    citizens\' radio station, which regularly broadcast information about
    the political protest movements that had arisen against the use of
    nuclear power in Fessenheim (France), Wyhl (Germany), and Kaiseraugst
    (Switzerland). The epicenter of the scene, however, was located in
    Amsterdam, where the group known as Rabotnik TV, which was an offshoot
    []{#Page_46 type="pagebreak" title="46"}of the squatter scene there,
    would illegally feed its signal through official television stations
    after their programming had ended at night (many stations then stopped
    broadcasting at midnight). In 1988, the group acquired legal
    broadcasting slots on the cable network and reached up to 50,000 viewers
    with their weekly experimental shows, which largely consisted of footage
    appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
    Early in 1990, the pirate television station Kanal X was created in
    Leipzig; it produced its own citizens\' television programming in the
    quasi-lawless milieu of the GDR before
    reunification.[^75^](#c1-note-0075){#c1-note-0075a}

    These illegal, independent, or public-access stations only managed to
    establish themselves as real mass media to a very limited extent.
    Nevertheless, they played an important role in sensitizing an entire
    generation of media activists, whose opportunities expanded as the means
    of production became both better and cheaper. In the name of "tactical
    media," a new generation of artistic and political media activists came
    together in the middle of the
    1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
    revolution," which in the late 1980s had made video equipment available
    to broader swaths of society, stirring visions of democratic media
    production, with the newly arrived medium of the internet. Despite still
    struggling with numerous technical difficulties, they remained constant
    in their belief that the internet would solve the hitherto intractable
    problem of distributing content. The transition from analog to digital
    media lowered the production hurdle yet again, not least through the
    ongoing development of improved software. Now, many stages of production
    that had previously required professional or semi-professional expertise
    and equipment could also be carried out by engaged laymen. As a
    consequence, the focus of interest broadened to include not only the
    development of alternative production groups but also the possibility of
    a flexible means of rapid intervention in existing structures. Media --
    both television and the internet -- were understood as environments in
    which one could act without directly representing a reality outside of
    the media. Television was analyzed in terms of its own inherent laws,
    which could then be manipulated to affect things beyond the medium.
    Increasingly, culture jamming and the campaigns of so-called
    communication guerrillas were blurring the difference between media and
    political activity.[^77[]{#Page_47 type="pagebreak"
    title="47"}^](#c1-note-0077){#c1-note-0077a}

    This difference was dissolved entirely by a new generation of
    politically motivated artists, activists, and hackers, who transferred
    the tactics of civil disobedience -- blockading a building with a
    sit-in, for instance -- to the
    internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
    Zapatista Army of National Liberation rose up in the south of Mexico,
    several media projects were created to support its mostly peaceful
    opposition and to make the movement known in Europe and North America.
    As part of this loose network, in 1998 the American artist collective
    Electronic Disturbance Theater developed a relatively simple computer
    program called FloodNet that enabled networked sympathizers to shut down
    websites, such as those of the Mexican government, in a targeted and
    temporary manner. The principle was easy enough: the program would
    automatically reload a certain website over and over again in order to
    exhaust the capacities of its network
    servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
    destroy data but rather to disturb the normal functioning of an
    institution in order to draw attention to the activities and interests
    of the protesters.
    :::

    ::: {.section}
    ### Networks as places of action {#c1-sec-0012}

    What this new generation of media activists shared with the hackers
    and pioneers of computer networks was the idea that communication
    media are spaces for agency. During the 1960s, these programmers had
    likewise been searching for alternatives; the difference is that they
    pursued them not in counter-publics, but rather in alternative ways
    of living and communicating.
    The rejection of bureaucracy as a form of social organization played a
    significant role in the critique of industrial society formulated by
    freedom-oriented social movements. At the beginning of the previous
    century, Max Weber had still regarded bureaucracy as a clear sign of
    progress toward a rational and methodical
    organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
    assessment on processes that were impersonal, rule-bound, and
    transparent (in the sense that they were documented with files). But
    now, in the 1960s, bureaucracy was being criticized as soulless,
    alienated, oppressive, non-transparent, and unfit for an increasingly
    complex society. Whereas the first four of these points are in basic
    agreement with Weber\'s thesis about "disenchanting" []{#Page_48
    type="pagebreak" title="48"}the world, the last point represents a
    radical departure from his analysis. Bureaucracies were no longer
    regarded as hyper-efficient but rather as inefficient, and their size
    and rule-bound nature were no longer seen as strengths but rather as
    decisive weaknesses. The social bargain of offering prosperity and
    security in exchange for subordination to hierarchical relations struck
    many as being anything but attractive, and what blossomed instead was a
    broad interest in alternative forms of coexistence. New institutions
    were expected to be more flexible and more open. The desire to step away
    from the system was widespread, and many (mostly young) people set about
    doing exactly that. Alternative ways of life -- communes, shared
    apartments, and cooperatives -- were explored in the country and in
    cities. They were meant to provide the individual with greater autonomy
    and the opportunity to develop his or her own unique potential. Despite
    all of the differences between these concepts of life, they nevertheless
    shared something of a common denominator: the promise of
    reconceptualizing social institutions and the fundamentals of
    coexistence, with the aim of reformulating them in such a way as to
    allow everyone\'s personal potential to develop fully in the here and
    now.

    According to critics of such alternatives, bureaucracy was necessary
    in order to organize social life because it radically reduced the
    world\'s complexity by forcing it through the bottleneck of official
    procedures. The price paid for this efficiency, however, was the
    atrophying of human relationships, which had to be subordinated to
    rigid processes that could neither register unique characteristics
    and differences nor react in a timely manner to changing
    circumstances.

    In the 1960s, many countercultural attempts to find new forms of
    organization placed personal and open communication at the center of
    their efforts. Each individual was understood as a singular person with
    untapped potential rather than as a carrier of abstract and clearly defined
    functions. It was soon realized, however, that every common activity and
    every common decision entailed processes that were time-intensive and
    communication-intensive. As soon as a group exceeded a certain size, it
    became practically impossible for it to reach any consensus. As a result
    of these experiences, an entire worldview emerged that propagated
    "smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
    ("small is beautiful"). It was thought that in this way society might
    escape from bureaucracy with its ostensibly disastrous consequences for
    humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
    this belief did not last for long. For, unlike the majority of European
    alternative movements, the counterculture in the United States was not
    overwhelmingly critical of technology. On the contrary, many actors
    there sought suitable technologies for solving the practical problems of
    social organization. At the end of the 1960s, a considerable amount of
    attention was devoted to the field of basic technological research. This
    field brought together the interests of the military, academics,
    businesses, and activists from the counterculture. The common ground for
    all of them was a cybernetic vision of institutions, or, in the words of
    the historian Fred Turner:

    ::: {.extract}
    a picture of humans and machines as dynamic, collaborating elements in a
    single, highly fluid, socio-technical system. Within that system,
    control emerged not from the mind of a commanding officer, but from the
    complex, probabilistic interactions of humans, machines and events
    around them. Moreover, the mechanical elements of the system in question
    -- in this case, the predictor -- enabled the human elements to achieve
    what all Americans would agree was a worthwhile goal. \[...\] Over the
    coming decades, this second vision of benevolent man-machine systems, of
    circular flows of information, would emerge as a driving force in the
    establishment of the military--industrial--academic complex and as a
    model of an alternative to that
    complex.[^82^](#c1-note-0082){#c1-note-0082a}
    :::

    This combination was possible because, as a theory, cybernetics was
    formulated in extraordinarily abstract terms, so much so that a whole
    variety of competing visions could be associated with
    it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
    meta-science, it was possible to investigate the common features of
    technical, social, and biological
    processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
    open, interactive, and information-processing systems. It was especially
    consequential that cybernetics defined control and communication as the
    same thing, namely as activities oriented toward informational
    feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
    of cybernetics and its synonymous treatment of the terms "communication"
    and "control" continue to influence information technology and the
    internet today.[]{#Page_50 type="pagebreak" title="50"}
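
    To see what this equation of control and communication means in
    practice, consider the classic cybernetic example of a feedback loop.
    The following sketch is purely illustrative (the thermostat, the room
    model, and all constants are invented for this purpose): the
    controller does nothing but exchange messages -- it reads the
    system\'s report and sends back a corrective command.

    ```python
    # A minimal, illustrative sketch of a cybernetic feedback loop:
    # control and communication coincide, since the controller only
    # receives a message (the measured temperature) and sends a message
    # (the heater setting) in a circular flow of information.

    def thermostat(read_temperature, set_heater, target=21.0, gain=5.0):
        """One step of a simple proportional feedback controller."""
        measured = read_temperature()       # incoming message: system state
        error = target - measured           # compare feedback with the goal
        set_heater(max(0.0, gain * error))  # outgoing message: control signal

    # Invented toy environment: a room that warms with heater power and
    # constantly leaks heat toward an outside temperature of 10 degrees.
    state = {"temp": 15.0, "power": 0.0}

    def read_temp():
        return state["temp"]

    def set_power(power):
        state["power"] = power

    for _ in range(50):
        thermostat(read_temp, set_power)
        state["temp"] += 0.3 * state["power"] - 0.05 * (state["temp"] - 10.0)

    print(round(state["temp"], 1))  # hovers near the 21-degree target
    ```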

    The various actors who contributed to the development of the internet
    shared a common interest in forms of organization based on the
    comprehensive, dynamic, and open exchange of information. Both on the
    micro and macro level (and this is decisive at this point),
    decentralized and flexible communication technologies were meant to
    become the foundation of new organizational models. Militaries feared
    attacks on their command and communication centers; academics wanted to
    broaden their culture of autonomy, collaboration among peers, and the
    free exchange of information; businesses were looking for new areas of
    activity; and countercultural activists were longing for new forms of
    peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
    rejected the bureaucratic model, and the counterculture provided them
    with the central catchword for their alternative vision: community.
    Though rather difficult to define, it was a powerful and positive term
    that somehow promised the opposite of bureaucracy: humanity,
    cooperation, horizontality, mutual trust, and consensus. Now, however,
    humanity was expected to be reconfigured as a community in cooperation
    with and inseparable from machines. What they yearned for was a
    liberating symbiosis of man and machine, an idea that the author
    Richard Brautigan was quick to mock in his poem "All Watched Over by
    Machines of Loving Grace" from 1967:

    ::: {.poem}
    ::: {.lineGroup}
    I like to think (and

    the sooner the better!)

    of a cybernetic meadow

    where mammals and computers

    live together in mutually

    programming harmony

    like pure water

    touching clear sky.[^87^](#c1-note-0087){#c1-note-0087a}
    :::
    :::

    Here, Brautigan is ridiculing both the impatience (*the sooner the
    better!*) and the naïve optimism (*harmony, clear sky*) of the
    countercultural activists. Above all, he regarded the underlying
    vision as an innocent and amusing fantasy rather than as a potential
    threat against which something had to be done. And there were also reasons to believe
    that, ultimately, the new communities would be free from the coercive
    nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
    characterized the downside of community experiences. It was thought that
    the autonomy and freedom of the individual could be regained in and by
    means of the community. The conditions for this were that participation
    in the community had to be voluntary and that the rules of participation
    had to be self-imposed. I will return to this topic in greater detail
    below.

    In line with their solution-oriented engineering culture and the
    results-focused military funders who by and large set the agenda, a
    relatively small group of computer scientists now took it upon
    themselves to establish the technological foundations for new
    institutions. This was not an abstract goal for the distant future;
    rather, they wanted to change everyday practices as soon as possible. It
    was around this time that advanced technology became the basis of social
    communication, which now adopted forms that would have been
    inconceivable (not to mention impracticable) without these
    preconditions. Of course, effective communication technologies already
    existed at the time. Large corporations had begun long before then to
    operate their own computing centers. In contrast to the latter, however,
    the new infrastructure could also be used by individuals outside of
    established institutions and could be implemented for all forms of
    communication and exchange. This idea gave rise to a pragmatic culture
    of horizontal, voluntary cooperation. The clearest summary of this early
    ethos -- which originated at the unusual intersection of military,
    academic, and countercultural interests -- was offered by David D.
    Clark, a computer scientist who for some time coordinated the
    development of technical standards for the internet: "We reject: kings,
    presidents and voting. We believe in: rough consensus and running
    code."[^88^](#c1-note-0088){#c1-note-0088a}

    All forms of classical, formal hierarchies and their methods for
    resolving conflicts -- commands (by kings and presidents) and votes --
    were dismissed. Implemented in their place was a pragmatics of open
    cooperation that was oriented around two guiding principles. The first
    was that different views should be discussed without a single individual
    being able to block any final decisions. Such was the meaning of the
    expression "rough consensus." The second was that, in accordance with
    the classical engineering tradition, the focus should remain on concrete
    solutions that had to be measured against one []{#Page_52
    type="pagebreak" title="52"}another on the basis of transparent
    criteria. Such was the meaning of the expression "running code." In
    large part, this method was possible because the group oriented around
    these principles was, internally, relatively homogeneous: it consisted
    of top-notch computer scientists -- all of them men -- at respected
    American universities and research centers. For this very reason, many
    potential and fundamental conflicts were avoided, at least at first.
    This internal homogeneity lends rather dark undertones to their sunny
    vision, but this was hardly recognized at the time. Today these
    undertones are far more apparent, and I will return to them below.

    Not only were technical protocols developed on the basis of these
    principles, but organizational forms as well. In the Internet
    Engineering Task Force, whose work Clark coordinated for a time, the
    so-called Request-for-Comments documents allowed ideas to be
    presented to interested members of the community and feedback to be
    collected simultaneously, so that the ideas in question could be
    worked through and a rough consensus reached. If such a consensus
    could not be reached -- if, for
    instance, an idea failed to resonate with anyone or was too
    controversial -- then the matter would be dropped. The feedback was
    organized as a form of many-to-many communication through email lists,
    newsgroups, and online chat systems. This proved to be so effective that
    horizontal communication within large groups or between multiple groups
    could take place without resulting in chaos. This invalidated the
    traditional assumption that social units, once they reach a certain
    size, must necessarily introduce hierarchical structures in order to
    reduce complexity and the costs of communication. In other words, the foundations
    were laid for larger numbers of (changing) people to organize flexibly
    and with the aim of building an open consensus. For Manuel Castells,
    this combination of organizational flexibility and scalability in size
    is the decisive innovation that enabled the rise of the network
    society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
    this meant that forms of organization spread that were possible only
    on the basis of technologies that have formed (and continue to form)
    part of the infrastructure of the internet. Digital technology and the
    social activity of individual users were linked together to an
    unprecedented extent. Social and cultural agendas were now directly
    related []{#Page_53 type="pagebreak" title="53"}to and entangled with
    technical design. Each of the four original interest groups -- the
    military, scientists, businesses, and the counterculture -- implemented
    new technologies to pursue their own projects, which partly complemented
    and partly contradicted one another. As we know today, the first three
    groups still cooperate closely with each other. To a great extent, this
    has allowed the military and corporations, which are willingly supported
    by researchers in need of funding, to determine the technology and thus
    aspects of the social and cultural agendas that depend on it.

    The software developers\' immediate environment experienced its first
    major change in the late 1970s. Software, which for many had been a mere
    supplement to more expensive and highly specialized hardware, became a
    marketable good with stringent licensing restrictions. A new generation
    of businesses, led by Bill Gates, suddenly began to label cooperation
    among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
    Previously it had been par for the course, and above all necessary, for
    programmers to share software with one another. The former culture of
    horizontal cooperation between developers transformed into a
    hierarchical and commercially oriented relation between developers and
    users (many of whom, at least at the beginning, had developed programs
    of their own). For the first time, copyright came to play an important
    role in digital culture. In order to survive in this environment, the
    practice of open cooperation had to be placed on a new legal foundation.
    Copyright law, which served to separate programmers (producers) from
    users (consumers), had to be neutralized or circumvented. The first step
    in this direction was taken in 1984 by the activist and programmer
    Richard Stallman. Composed by Stallman, the GNU General Public License
    was and remains a brilliant hack that uses the letter of copyright law
    against its own spirit. This happens in the form of a license that
    defines "four freedoms":

    1. The freedom to run the program as you wish, for any purpose (freedom
    0).
    2. The freedom to study how the program works and change it so it does
    your computing as you wish (freedom 1).
    3. The freedom to redistribute copies so you can help your neighbor
    (freedom 2).[]{#Page_54 type="pagebreak" title="54"}
    4. The freedom to distribute copies of your modified versions to others
    (freedom 3). By doing this you can give the whole community a chance
    to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}
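
    In practice, applying the license is simple: the developer attaches a
    copyright line and a short reference to the GPL at the top of each
    source file. The following sketch shows this convention (the file
    name, author, and year are hypothetical; the wording follows the
    standard notice recommended by the GNU project for version 3 of the
    license):

    ```python
    # frobnicate.py -- a hypothetical example program
    # Copyright (C) 1991 Jane Hacker (name and year invented for illustration)
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program.  If not, see <https://www.gnu.org/licenses/>.

    def frobnicate(text):
        """Trivial placeholder functionality; the license above travels with it."""
        return text[::-1]
    ```

    Because this notice travels with every copy and every modified
    version, each recipient automatically receives the same four
    freedoms -- the legal mechanism by which the "hack" perpetuates
    itself.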

    Thanks to this license, people who were personally unacquainted and did
    not share a common social environment could now cooperate (freedoms 2
    and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
    and 1). For many, the tension between the need to develop complex
    software in large teams and the desire to maintain one\'s own autonomy
    represented an incentive to try out new forms of
    cooperation.[^92^](#c1-note-0092){#c1-note-0092a}

    Stallman\'s influence was at first limited to a small circle of
    programmers. In the middle of the 1980s, the goal of developing a
    completely free operating system seemed a distant one. Communication
    between those interested in doing so was often slow and complicated. In
    part, program code still had to be sent through the post. It was not until the
    beginning of the 1990s that students in technical departments at many
    universities could access the
    internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
    these new opportunities in an innovative way was a Finnish student named
    Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
    which, as the most important module of an operating system, governs the
    interaction between hardware and software. He published the first free
    version of this in 1991 and encouraged anyone interested to give him
    feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
    Torvalds reacted promptly and issued new versions of his software in
    quick succession. Instead of understanding his software as a finished
    product, he treated it like an open-ended process. This, in turn,
    motivated even more developers to participate, because they saw that
    their contributions were being adopted swiftly, which led to the
    formation of an open community of interested programmers who swapped
    ideas over the internet and continued writing software. In order to
    maintain an overview of the different versions of the program, which
    appeared in parallel with one another, it soon became necessary to
    employ specialized platforms. The fusion of social processes --
    horizontal and voluntary cooperation among developers -- with
    technological platforms, which enabled this form of cooperation
    []{#Page_55 type="pagebreak" title="55"}by providing the archives,
    filter functions, and search capabilities needed to organize large
    amounts of data, was thus advanced even further. The programmers
    were no longer primarily working on the development of the internet
    itself, which by then was functioning quite reliably, but were rather
    using the internet to apply their cooperative principles to other
    arenas. By the end of the 1990s, the free-software movement had
    established a new, internet-based form of organization and had
    demonstrated its efficiency in practice: horizontal, informal
    communities of actors -- voluntary, autonomous, and focused on a common
    interest -- that, on the basis of high-tech infrastructure, could
    include thousands of people without having to create formal hierarchies.
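
    The elementary currency of this cooperation was the patch: a
    line-by-line description of the difference between the official
    version of a file and a contributor\'s modified copy, compact enough
    to be sent by email and precise enough to be applied automatically.
    A minimal sketch of the idea (the file contents are invented; kernel
    developers used the Unix tools diff and patch, approximated here with
    Python\'s standard difflib module):

    ```python
    import difflib

    # Invented example: the maintainer's version of a file ...
    official = [
        "def greet():\n",
        "    print('hello')\n",
    ]
    # ... and a contributor's locally modified copy.
    modified = [
        "def greet(name):\n",
        "    print('hello, ' + name)\n",
    ]

    # The unified diff encodes only what changed. Sent by email, it lets
    # the maintainer reproduce the contributor's change without
    # exchanging the whole file -- the basic currency of distributed
    # free-software work.
    patch = difflib.unified_diff(official, modified,
                                 fromfile="greet.py (official)",
                                 tofile="greet.py (modified)")
    print("".join(patch))
    ```
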
    :::
    :::

    ::: {.section}
    From the Margins to the Center of Society {#c1-sec-0013}
    -----------------------------------------

    It was around this same time that the technologies in question, which
    were already no longer very new, entered mainstream society. Within a
    few years, the internet became part of everyday life. Three years before
    the turn of the millennium, only about 6 percent of the entire German
    population used the internet, often only occasionally. Three years after
    the millennium, the number of users already exceeded 53 percent. Since
    then, this share has increased even further. In 2014, it was more than
    97 percent for people under the age of
    40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
    data transfer rates increased considerably, broadband connections
    displaced dial-up modems, and the internet was suddenly "here" and no
    longer "there." With the spread of mobile devices, especially since the
    year 2007 when the first iPhone was introduced, digital communication
    became available both extensively and continuously. Since then, the
    internet has been ubiquitous. The amount of time that users spend online
    has increased and, with the rapid ascent of social mass media such as
    Facebook, people have been online in almost every situation and
    circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
    like water or electricity, has become for many people a utility that is
    simply taken for granted.

    In a BBC survey from 2010, 80 percent of those polled believed that
    internet access -- a precondition for participating []{#Page_56
    type="pagebreak" title="56"}in the now dominant digital condition --
    should be regarded as a fundamental human right. This idea was most
    popular in South Korea (96 percent) and Mexico (94 percent), while in
    Germany at least 72 percent were of the same
    opinion.[^97^](#c1-note-0097){#c1-note-0097a}

    On the basis of this new infrastructure, which is now relevant in all
    areas of life, the cultural developments described above have been
    severed from the specific historical conditions from which they emerged
    and have permeated society as a whole. Expressivity -- the ability to
    communicate something "unique" -- is no longer a trait of artists and
    knowledge workers alone, but rather something that is required by an
    increasingly broader stratum of society and is already being taught in
    schools. Users of social mass media must produce (themselves). The
    development of specific, differentiated identities and the demand that
    each be treated equally are no longer promoted exclusively by groups who
    have to struggle against repression, existential threats, and
    marginalization, but have penetrated deeply into the former mainstream,
    not least because the present forms of capitalism have learned to profit
    from the spread of niches and segmentation. When even conservative
    parties have abandoned the idea of a "leading culture," then cultural
    differences can no longer be classified by enforcing an absolute and
    indisputable hierarchy, the top of which is occupied by specific
    (geographical and cultural) centers. Rather, a space has been opened up
    for endless negotiations, a space in which -- at least in principle --
    everything can be called into question. This is not, of course, a
    peaceful and egalitarian process. In addition to the practical hurdles
    that exist in polarizing societies, there are also violent backlashes
    and new forms of fundamentalism that are attempting once again to remove
    certain religious, social, cultural, or political dimensions of
    existence from the discussion. Yet these can only be understood in light
    of a sweeping cultural transformation that has already reached
    mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
    the digital condition has become quotidian and dominant. It forms a
    cultural constellation that determines all areas of life, and its
    characteristic features are clearly recognizable. These will be the
    focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c1-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
    *Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

    [2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
    behaving" is used here to mean that, while in the public eye, the
    behavior of the people []{#Page_177 type="pagebreak" title="177"}in
    question conformed to heterosexual norms regardless of their personal
    sexual orientations.

    [3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
    off. In this case, too, there was also room for exceptions and for
    collective moments of greater cultural multiplicity. That said, the
    social openness of the end of the 1920s, for instance, was restricted to
    particular milieus within large cities and was accordingly short-lived.

    [4](#c1-note-0004a){#c1-note-0004}  Fritz Machlup, *The Political
    Economy of Monopoly: Business, Labor and Government Policies*
    (Baltimore, MD: The Johns Hopkins University Press, 1952).

    [5](#c1-note-0005a){#c1-note-0005}  Machlup was a student of Ludwig von
    Mises, the most influential representative of this radically
    individualist school. See Hans-Hermann Hoppe, "Die Österreichische
    Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
    Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
    Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
    Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.

    [6](#c1-note-0006a){#c1-note-0006}  Fritz Machlup, *The Production and
    Distribution of Knowledge in the United States* (Princeton, NJ:
    Princeton University Press, 1962).

    [7](#c1-note-0007a){#c1-note-0007}  The term "knowledge worker" had
    already been introduced to the discussion a few years before; see Peter
    Drucker, *Landmarks of Tomorrow: A Report on the New "Post-Modern"
    World* (New York: Harper, 1959).

    [8](#c1-note-0008a){#c1-note-0008}  Peter Ecker, "Die
    Verwissenschaftlichung der Industrie: Zur Geschichte der
    Industrieforschung in den europäischen und amerikanischen
    Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
    35 (1990): 73--94.

    [9](#c1-note-0009a){#c1-note-0009}  Edward Bernays was the son of
    Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
    wife, Martha Bernays.

    [10](#c1-note-0010a){#c1-note-0010}  Edward L. Bernays, *Propaganda*
    (New York: Horace Liveright, 1928).

    [11](#c1-note-0011a){#c1-note-0011}  James Beniger, *The Control
    Revolution: Technological and Economic Origins of the Information
    Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.

    [12](#c1-note-0012a){#c1-note-0012}  Norbert Wiener, *Cybernetics: Or
    Control and Communication in the Animal and the Machine* (New York: J.
    Wiley, 1948).

    [13](#c1-note-0013a){#c1-note-0013}  Daniel Bell, *The Coming of
    Post-Industrial Society: A Venture in Social Forecasting* (New York:
    Basic Books, 1973).

    [14](#c1-note-0014a){#c1-note-0014}  Simon Nora and Alain Minc, *The
    Computerization of Society: A Report to the President of France*
    (Cambridge, MA: MIT Press, 1980).

    [15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
    Network Society* (Oxford: Blackwell, 1996).

    [16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
    Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
    Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
    Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

    [17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
    *The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
    2005).

    [18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
    *The Second Industrial Divide: Possibilities of Prosperity* (New York:
    Basic Books, 1984).

    [19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
    Society*. For a critical evaluation of Castells\'s work, see Felix
    Stalder, *Manuel Castells and the Theory of the Network Society*
    (Cambridge: Polity, 2006).

    [20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
    Documents" (1998); quoted from Terry Flew, *The Creative Industries:
    Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.

    [21](#c1-note-0021a){#c1-note-0021}  The rise of the creative
    industries, and the hope that they inspired among politicians, did not
    escape criticism. Among the first works to draw attention to the
    precarious nature of working in such industries was Angela McRobbie\'s
    *British Fashion Design: Rag Trade or Image Industry?* (New York:
    Routledge, 1998).

    [22](#c1-note-0022a){#c1-note-0022}  This definition is not without a
    degree of tautology, given that economic growth is based on talent,
    which itself is defined by its ability to create new jobs; that is,
    economic growth. At the same time, he employs the term "talent" in an
    extremely narrow sense. Apparently, if something has nothing to do with
    job creation, it also has nothing to do with talent or creativity. All
    forms of creativity are thus measured and compared according to a common
    criterion.

    [23](#c1-note-0023a){#c1-note-0023}  Richard Florida, *Cities and the
    Creative Class* (New York: Routledge, 2005), p. 5.

    [24](#c1-note-0024a){#c1-note-0024}  One study has reached the
    conclusion that, despite mass participation, "a new form of
    communicative elite has developed, namely digitally and technically
    versed actors who inform themselves in this way, exchange ideas and thus
    gain influence. For them, the possibilities of platforms mainly
    represent an expansion of useful tools. Above all, the dissemination of
    digital technology makes it easier for versed and highly networked
    individuals to convey their news more simply -- and, for these groups of
    people, it lowers the threshold for active participation." Michael
    Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
    al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
    (Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
    \[--trans.\].

    [25](#c1-note-0025a){#c1-note-0025}  Boltanski and Chiapello, *The New
    Spirit of Capitalism*.

    [26](#c1-note-0026a){#c1-note-0026}  According to Wikipedia,
    "Heteronormativity is the belief that people fall into distinct and
    complementary genders (man and woman) with natural roles in life. It
    assumes that heterosexuality is the only sexual orientation or only
    norm, and states that sexual and marital relations are most (or only)
    fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
    title="179"}

    [27](#c1-note-0027a){#c1-note-0027}  Jannis Plastargias, *RotZSchwul:
    Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).

    [28](#c1-note-0028a){#c1-note-0028}  Helmut Ahrens et al. (eds),
    *Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
    (Berlin: Rosa Winkel, 1975), p. 4.

    [29](#c1-note-0029a){#c1-note-0029}  Susanne Regener and Katrin Köppert
    (eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
    (Vienna: Turia + Kant, 2013).

    [30](#c1-note-0030a){#c1-note-0030}  Such, for instance, was the
    assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
    Association in Germany, in his text "Schwulenpolitik früher" (link no
    longer active). From today\'s perspective, however, the main problem
    with this event was the unclear position of the Green Party with respect
    to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
    Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
    & Ruprecht, 2014).

    [31](#c1-note-0031a){#c1-note-0031}  "AIDS: Tödliche Seuche," *Der
    Spiegel* 23 (1983) \[--trans.\].

    [32](#c1-note-0032a){#c1-note-0032}  Quoted from Frank Niggemeier, "Gay
    Pride: Schwules Selbstbewußtsein aus dem Village," in Bernd Polster
    (ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
    1995), pp. 179--87, at 184 \[--trans.\].

    [33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
    *Privat/öffentlich*, p. 7 \[--trans.\].

    [34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
    Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
    und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
    Bundesanzeiger, 2001).

    [35](#c1-note-0035a){#c1-note-0035}  This process of internal
    differentiation has not yet reached its conclusion, and thus the
    acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
    "lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
    questioning, intersex, intergender, asexual, ally."

    [36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
    Feminism and the Subversion of Identity* (New York: Routledge, 1990).

    [37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
    Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
    Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

    [38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
    York: Vintage Books, 1978).

    [39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
    Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
    University Press, 1957).

    [40](#c1-note-0040a){#c1-note-0040}  Silke Förschler, *Bilder des
    Harems: Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).

    [41](#c1-note-0041a){#c1-note-0041}  The selection and effectiveness of
    these images is not a coincidence. Camel was one of the first brands of
    cigarettes for []{#Page_180 type="pagebreak" title="180"}which
    advertising, in the sense described above, was used in a systematic
    manner.

    [42](#c1-note-0042a){#c1-note-0042}  This would not exclude feelings of
    regret about the loss of an exotic and romantic way of life, such as
    those of T. E. Lawrence, whose activities in the Near East during the
    First World War were memorialized in the film *Lawrence of Arabia*
    (1962).

    [43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
    however, for portraying orientalism so dominantly that there seems to be
    no way out of the existing dependent relations. For an overview of the
    debates that Said has instigated, see María do Mar Castro Varela and
    Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
    (Bielefeld: Transcript, 2005), pp. 37--46.

    [44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
    Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
    (November 9, 2007), online \[--trans.\].

    [45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
    Culture* (New York: Routledge, 1994), p. 4.

    [46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
    Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
    Multikulturismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
    (Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].

    [47](#c1-note-0047a){#c1-note-0047}  "What Is Postcolonial Thinking? An
    Interview with Achille Mbembe," *Eurozine* (December 2006), online.

    [48](#c1-note-0048a){#c1-note-0048}  Migrants have always created their
    own culture, which deals in various ways with the experience of
    migration itself, but non-migrant populations have long tended to ignore
    this. Things have now begun to change in this regard, for instance
    through Imran Ayata and Bülent Kullukcu\'s compilation of songs by the
    Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
    (Munich: Trikont, 2013).

    [49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
    found at: \<\>.

    [50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
    einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
    press release by the CDU/CSU Alliance in the German Parliament (June 4,
    2014), online \[--trans.\].

    [51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
    der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
    Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
    book is forthcoming: *The Invention of Creativity: Modern Society and
    the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

    [52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
    in Deutschland* (Frankfurt am Main: Campus, 2007).

    [53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
    Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
    type="pagebreak" title="181"}

    [54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
    introduced to the field of design primarily by Buckminster Fuller. See
    Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
    and the Disappearance of the Outside* (Berlin: Sternberg, 2013).

    [55](#c1-note-0055a){#c1-note-0055}  Clive Dilnot, "Design as a Socially
    Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
    139--46.

    [56](#c1-note-0056a){#c1-note-0056}  Victor J. Papanek, *Design for the
    Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
    p. 2.

    [57](#c1-note-0057a){#c1-note-0057}  Reckwitz, *Die Erfindung der
    Kreativität*.

    [58](#c1-note-0058a){#c1-note-0058}  B. Joseph Pine and James H.
    Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
    a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
    emphasis is original).

    [59](#c1-note-0059a){#c1-note-0059}  Mona El Khafif, *Inszenierter
    Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
    Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

    [60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
    (eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

    [61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
    *Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
    2009).

    [62](#c1-note-0062a){#c1-note-0062}  "An alternate reality game (ARG),"
    according to Wikipedia, "is an interactive networked narrative that uses
    the real world as a platform and employs transmedia storytelling to
    deliver a story that may be altered by players\' ideas or actions."

    [63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
    Innovation* (Cambridge, MA: MIT Press, 2005).

    [64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
    involvement of users simply serves to increase the efficiency of
    production processes and customer service. Many activities that were
    once undertaken at the expense of businesses now have to be carried out
    by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
    Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
    Campus, 2005).

    [65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
    pp. 411--16.

    [66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
    Ideological State Apparatuses (Notes towards an Investigation)," in
    Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
    (New York: Monthly Review Press, 1971), pp. 127--86.

    [67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
    *Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
    2013), pp. 20--35.

    [68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
    Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
    1977).

    [69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
    the Toronto School of Communication," *Canadian Journal of
    Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
    title="182"}

    [70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
    Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

    [71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
    -- Electronic Television" (leaflet accompanying the exhibition). Quoted
    from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
    (March 3, 2016), online.

    [72](#c1-note-0072a){#c1-note-0072}  Laura R. Linder, *Public Access
    Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
    1999).

    [73](#c1-note-0073a){#c1-note-0073}  Hans Magnus Enzensberger,
    "Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
    Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
    pp. 259--75.

    [74](#c1-note-0074a){#c1-note-0074}  Paul Groot, "Rabotnik TV,"
    *Mediamatic* 2/3 (1988), online.

    [75](#c1-note-0075a){#c1-note-0075}  Inke Arns, "Social Technologies:
    Deconstruction, Subversion and the Utopia of Democratic Communication,"
    *Medien Kunst Netz* (2004), online.

    [76](#c1-note-0076a){#c1-note-0076}  The term was coined at a series of
    conferences titled The Next Five Minutes (N5M), which were held in
    Amsterdam from 1993 to 2003. See \<\>.

    [77](#c1-note-0077a){#c1-note-0077}  Mark Dery, *Culture Jamming:
    Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
    Media, 1993); Luther Blissett et al., *Handbuch der
    Kommunikationsguerilla*, 5th edn (Berlin: Assoziation A, 2012).

    [78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
    Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
    1996).

    [79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
    "distributed denial of service attack" (DDOS).

    [80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
    Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
    Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

    [81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
    Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
    Harper Perennial, 2014).

    [82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
    to Cyberculture: Stewart Brand, the Whole Earth Network and the Rise of
    Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
    21. In this regard, see also the documentary films *Das Netz* by Lutz
    Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
    Adam Curtis (2011).

    [83](#c1-note-0083a){#c1-note-0083}  It was possible to understand
    cybernetics as a language of free markets or also as one of centralized
    planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
    History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
    great interest of Soviet scientists in cybernetics rendered the term
    rather suspicious in the West, where it was superseded by "artificial
    intelligence."[]{#Page_183 type="pagebreak" title="183"}

    [84](#c1-note-0084a){#c1-note-0084}  Claus Pias, "The Age of
    Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
    1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.

    [85](#c1-note-0085a){#c1-note-0085}  Norbert Wiener, one of the
    cofounders of cybernetics, explained this as follows in 1950: "In giving
    the definition of Cybernetics in the original book, I classed
    communication and control together. Why did I do this? When I
    communicate with another person, I impart a message to him, and when he
    communicates back with me he returns a related message which contains
    information primarily accessible to him and not to me. When I control
    the actions of another person, I communicate a message to him, and
    although this message is in the imperative mood, the technique of
    communication does not differ from that of a message of fact.
    Furthermore, if my control is to be effective I must take cognizance of
    any messages from him which may indicate that the order is understood
    and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
    Cybernetics and Society*, 2nd edn (London: Free Association Books,
    1989), p. 16.

    [86](#c1-note-0086a){#c1-note-0086}  Though presented here as distinct,
    these interests could in fact be held by one and the same person. In
    *From Counterculture to Cyberculture*, for instance, Turner discusses
    "countercultural entrepreneurs."

    [87](#c1-note-0087a){#c1-note-0087}  Richard Brautigan, "All Watched
    Over by Machines of Loving Grace," in *All Watched Over by Machines of
    Loving Grace*, by Brautigan (San Francisco: The Communication Company,
    1967).

    [88](#c1-note-0088a){#c1-note-0088}  David D. Clark, "A Cloudy Crystal
    Ball: Visions of the Future," *Internet Engineering Taskforce* (July
    1992), online.

    [89](#c1-note-0089a){#c1-note-0089}  Castells, *The Rise of the Network
    Society*.

    [90](#c1-note-0090a){#c1-note-0090}  Bill Gates, "An Open Letter to
    Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.

    [91](#c1-note-0091a){#c1-note-0091}  Richard Stallman, "What Is Free
    Software?", *GNU Operating System*, online.

    [92](#c1-note-0092a){#c1-note-0092}  The fundamentally cooperative
    nature of programming was recognized early on. See Gerald M. Weinberg,
    *The Psychology of Computer Programming*, rev. edn (New York: Dorset
    House, 1998 \[originally published in 1971\]).

    [93](#c1-note-0093a){#c1-note-0093}  On the history of free software,
    see Volker Grassmuck, *Freie Software: Zwischen Privat- und
    Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).

    [94](#c1-note-0094a){#c1-note-0094}  In his first email on the topic, he
    wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
    system (just a hobby, won\'t be big and professional like gnu) \[...\].
    This has been brewing since April, and is starting to get ready. I\'d
    like any feedback on things people like/dislike." Linus Torvalds, "What
    []{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
    Minix," *Usenet Group* (August 1991), online.

    [95](#c1-note-0095a){#c1-note-0095}  ARD/ZDF, "Onlinestudie" (2015),
    online.

    [96](#c1-note-0096a){#c1-note-0096}  From 1997 to 2003, the average use
    of online media in Germany climbed from 76 to 138 minutes per day, and
    by 2013 it reached 169 minutes. Over the same span of time, the average
    frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
    was 5.8. From 2007 to 2013, the percentage of people who were members of
    private social networks like Facebook grew from 15 percent to 46
    percent. Of these, nearly 60 percent -- around 19 million people -- used
    such services on a daily basis. The source of this information is the
    article cited in the previous note.

    [97](#c1-note-0097a){#c1-note-0097}  "Internet Access Is 'a Fundamental
    Right'," *BBC News* (8 March 2010), online.

    [98](#c1-note-0098a){#c1-note-0098}  Manuel Castells, *The Power of
    Identity* (Oxford: Blackwell, 1997), pp. 7--22.
    :::
    :::

    [II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}

    ::: {.section}
    With the emergence of the internet around the turn of the millennium as
    an omnipresent infrastructure for communication and coordination,
    previously independent cultural developments began to spread beyond
    their specific original contexts, mutually influencing and enhancing one
    another, and becoming increasingly intertwined. Out of a disconnected
    conglomeration of more or less marginalized practices, a new and
    specific cultural environment thus took shape, usurping or marginalizing
    an ever greater variety of cultural constellations. The following
    discussion will focus on three *forms* of the digital condition; that
    is, on those formal qualities that (notwithstanding all of its internal
    conflicts and contradictions) lend a particular shape to this cultural
    environment as a whole: *referentiality*, *communality*, and
    *algorithmicity*. It is only because most of the cultural processes
    operating under the digital condition are characterized by common formal
    features such as these that it is reasonable to speak of the digital
    condition in the singular.

    "Referentiality" is a method with which individuals can inscribe
    themselves into cultural processes and constitute themselves as
    producers. Because culture is understood here as shared social
    meaning, such an undertaking cannot be limited to the individual.
    Rather, it takes place within a larger framework whose existence and
    development depend on []{#Page_58 type="pagebreak" title="58"}communal
    formations. "Algorithmicity" denotes those aspects of cultural processes
    that are (pre-)arranged by the activities of machines. Algorithms
    transform the vast quantities of data and information that characterize
    so many facets of present-day life into dimensions and formats that can
    be registered by human perception. It is impossible to read the content
    of billions of websites. Therefore we turn to services such as Google\'s
    search algorithm, which reduces the data flood ("big data") to a
    manageable amount and translates it into a format that humans can
    understand ("small data"). Without them, human beings could not
    comprehend or do anything within a culture built around digital
    technologies, but they influence our understanding and activity in an
    ambivalent way. They create new dependencies by pre-sorting and making
    the (informational) world available to us, yet simultaneously ensure our
    autonomy by providing the preconditions that enable us to act.
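
    A toy example can make this reduction from "big data" to "small
    data" concrete. The following sketch is emphatically not Google\'s
    algorithm -- its actual ranking criteria are far more complex and
    largely secret -- but a hypothetical stand-in that shows the basic
    operation: scoring a collection of documents against a query and
    returning only a short, ordered list.

    ```python
    # A toy illustration (not any real search engine's method) of how an
    # algorithm reduces "big data" to "small data": score documents by
    # how often they mention the query terms, then return only the top few.

    def rank(documents, query, top_n=3):
        terms = query.lower().split()
        scored = []
        for title, text in documents.items():
            words = text.lower().split()
            score = sum(words.count(t) for t in terms)
            if score > 0:
                scored.append((score, title))
        # Sorting and truncating is the decisive reduction: the reader
        # sees a handful of results instead of the entire collection.
        return [title for _, title in sorted(scored, reverse=True)[:top_n]]

    docs = {
        "a": "cybernetics links control and communication",
        "b": "communication media are spaces for agency",
        "c": "gardening tips for the spring",
    }
    print(rank(docs, "communication control"))  # ['a', 'b']
    ```

    Everything depends on what the scoring function counts and what the
    cut-off discards; this is precisely where the ambivalence described
    above -- dependency and autonomy at once -- enters.
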
    :::

    ::: {.section}
    Referentiality {#c2-sec-0002}
    --------------

    In the digital condition, one of the methods (if not *the* most
    fundamental method) enabling humans to participate -- alone or in groups
    -- in the collective negotiation of meaning is the system of creating
    references. In a number of arenas, referential processes play an
    important role in the assignment of both meaning and form. According to
    the art historian André Rottmann, for instance, "one might claim that
    working with references has in recent years become the dominant
    production-aesthetic model in contemporary
    art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
    with references, however, is hardly restricted to the world of
    contemporary art. Referentiality is a feature of many processes that
    encompass the operations of various genres of professional and everyday
    culture. In its essence, it is the use of materials that are already
    equipped with meaning -- as opposed to so-called raw material -- to
    create new meanings. The referential techniques used to achieve this are
    extremely diverse, a fact reflected in the numerous terms that exist to
    describe them: re-mix, re-make, re-enactment, appropriation, sampling,
    meme, imitation, homage, tropicália, parody, quotation, post-production,
    re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
    (non-academic) research, re-creativity, mashup, transformative use, and
    so on.

    These processes have two important aspects in common: the
    recognizability of the sources and the freedom to deal with them however
    one likes. The first creates an internal system of references from which
    meaning and aesthetics are derived in an essential
    manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
    precondition enabling the creation of something that is both new and on
    the same level as the re-used material. This represents a clear
    departure from the historical--critical method, which endeavors to embed
    a source in its original context in order to re-determine its meaning,
    but also a departure from classical forms of rendition such as
    translations, adaptations (for instance, adapting a book for a film), or
    cover versions, which, though they translate a work into another
    language or medium, still attempt to preserve its original meaning.
    Re-mixes produced by DJs are one example of the referential treatment of
    source material. In his book on the history of DJ culture, the
    journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
    salvaging authenticity, but with creating a new
    authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
    distancing themselves from the past, which would follow the (Western)
    logic of progress or the spirit of the avant-garde, these processes
    refer explicitly to precursors and to existing material. In one and the
    same gesture, both one\'s own new position and the context and cultural
    tradition that is being carried on in one\'s own work are constituted
    performatively; that is, through one\'s own activity in the moment. I
    will discuss this phenomenon in greater depth below.

    To work with existing cultural material is, in itself, nothing new. In
    modern montages, artists likewise drew upon available texts, images, and
    treated materials. Yet there is an important difference: montages were
    concerned with bringing together seemingly incongruous but stable
    "finished pieces" in a more or less unmediated and fragmentary manner.
    This is especially clear in the collages by the Dadaists or in
    Expressionist literature such as Alfred Döblin\'s *Berlin
    Alexanderplatz*. In these works, the experience of Modernity\'s many
    fractures -- its fragmentation and turmoil -- was given a new aesthetic
    form. In his reference to montages, Adorno thus observed that the
    "negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
    title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
    brief moment, he considered them an adequate expression for the
    impossibility of reconciling the contradictions of capitalist culture.
    Influenced by Adorno, the literary theorist Peter Bürger went so far as
    to call the montage the true "paradigm of
    modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
    processes, on the contrary, pieces are not brought together as much as
    they are integrated into one another by being altered, adapted, and
    transformed. Unlike the older arrangement, it is not the fissures
    between elements that are foregrounded but rather their synthesis in the
    present. Conchita Wurst, the bearded diva, is not torn between two
    conflicting poles. Rather, she represents a successful synthesis --
    something new and harmonious that distinguishes itself by showcasing
    elements of the old order (man/woman) and simultaneously transcending
    them.

    This synthesis, however, is usually just temporary, for at any time it
    can itself serve as material for yet another rendering. Of course, this
    is far easier to pull off with digital objects than with analog objects,
    though these categories have become increasingly porous and thus
    increasingly problematic as opposites. More and more objects exist both
    in an analog and in a digital form. Think of photographs and slides,
    which have become so easy to digitalize. Even three-dimensional objects
    can now be scanned and printed. In the future, programmable materials
    with controllable and reversible features will cause the difference
    between the two domains to vanish: analog is becoming more and more
    digital.

    Montages and referential processes can only become widespread methods
    if, in a given society, cultural objects are available in three
    different respects. The first is economic and organizational: they must
    be affordable and easily accessible. Whoever is unable to afford books
    or get hold of them by some other means will not be able to reconfigure
    any texts. The second is cultural: working with cultural objects --
    which can always create deviations from the source in unpredictable ways
    -- must not be treated as taboo or illegal, but rather as an everyday
    activity without any special preconditions. It is much easier to
    manipulate a text from a secular newspaper than one from a religious
    canon. The third is material: it must be possible to use the material
    and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61
    type="pagebreak" title="61"}

    In terms of this third form of availability, montages differ from
    referential processes, for cultural objects can be integrated into one
    another -- instead of simply being placed side by side -- far more
    readily when they are digitally coded. Information is digitally coded
    when it is stored by means of a limited system of discrete (that is,
    separated by finite intervals or distances) signs that are meaningless
    in themselves. This allows information to be copied from one carrier to
    another without any loss and it allows the respective signs, whether
    individually or in groups, to be arranged freely. Seen in this way,
    digital coding is not necessarily bound to computers but can rather be
    realized with all materials: a mosaic is a digital process in which
    information is coded by means of variously colored tiles, just as a
    digital image consists of pixels. In the case of the mosaic, of course,
    the resolution is far lower. Alphabetic writing is a form of coding
    linguistic information by means of discrete signs that are, in
    themselves, meaningless. Consequently, Florian Cramer has argued that
    "every form of literature that is recorded alphabetically and not based
    on analog parameters such as ideograms or orality is already digital in
    that it is stored in discrete
    signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
    features of the alphabet, as Marshall McLuhan repeatedly underscored,
    did not fully develop until the advent of the printing
    press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
    other words, that first abstracted written signs from analog handwriting
    and transformed them into standardized symbols that could be repeated
    without any loss of information. In this practical sense, the printing
    press made writing digital, with the result that dealing with texts soon
    became radically different.
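
    To make this notion of digital coding concrete, here is a minimal
    sketch in Python (my own illustration, not part of the original
    argument): discrete signs survive any number of copies unchanged,
    whereas an analog value drifts a little further with every generation.

    ```python
    # Contrast the two modes of storage described above: discrete signs
    # copy without loss; analog values accumulate error with each copy.
    import random

    def copy_discrete(signs: str) -> str:
        # Discrete coding: each sign is reproduced exactly, nothing is lost.
        return "".join(signs)

    def copy_analog(value: float, noise: float = 0.01) -> float:
        # Analog coding: every copy introduces a small, cumulative error.
        return value + random.uniform(-noise, noise)

    text = "ABBA"         # four discrete signs, meaningless in themselves
    signal = 0.5          # one continuous analog value

    for _ in range(100):  # a hundred generations of copying
        text = copy_discrete(text)
        signal = copy_analog(signal)

    print(text)    # still exactly "ABBA"
    print(signal)  # has drifted away from 0.5
    ```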

    ::: {.section}
    ### Information overload 1.0 {#c2-sec-0003}

    The printing press made texts available in the three respects mentioned
    above. For one thing, their number increased rapidly, while their price
    sank significantly. During the first two generations after Gutenberg\'s
    invention -- that is, between 1450 and 1500 -- more books were produced
    than during the thousand years
    before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
    beginning. Dealing with books and their content changed from the ground
    up. In manuscript culture, every new copy represented a potential
    degradation of the original, and therefore []{#Page_62 type="pagebreak"
    title="62"}the oldest sources (those that had undergone as little
    corruption as possible) were valued above all. With the advent of print
    culture, the idea took hold that texts could be improved by the process
    of editing, not least because the availability of old sources, through
    reprints and facsimiles, had also improved dramatically. Pure
    reproduction was mechanized and overcome as a cultural challenge.

    According to the historian Elizabeth Eisenstein, one of the first
    consequences of the greatly increased availability of the printed book
    was that it overcame the "tyranny of major authorities, which was common
    in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
    were now able to compare texts with one another and critique them to an
    unprecedented extent. Their general orientation turned around: instead
    of looking back in order to preserve what they knew, they were now
    looking ahead toward what they might not (yet) know.

    In order to organize this flood of rapidly accumulating texts,
    it was necessary to create new conventions: books were now specified by
    their author, publisher, and date of publication, not to mention
    furnished with page numbers. This enabled large numbers of texts to be
    catalogued and every individual text -- indeed, every single passage --
    to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
    legitimize the pursuit of new knowledge by drawing attention to specific
    mistakes or gaps in existing texts. In the scientific culture that was
    developing at the time, the close connection between old and new
    material was not simply regarded as something positive; it was also
    urgently prescribed as a method of argumentation. Every text had to
    contain an internal system of references, and this was the basis for the
    development of schools, disciplines, and specific discourses.

    The digital character of printed writing also made texts available in
    the third respect mentioned above. Because discrete signs could be
    reproduced without any loss of information, it was possible not only to
    make perfect copies but also to remove content from one carrier and
    transfer it to another. Materials were no longer simply arranged
    sequentially, as in medieval compilations and almanacs, but manipulated
    to give rise to a new and independent fluid text. A set of conventions
    was developed -- one that remains in use today -- for modifying embedded
    or quoted material in order for it []{#Page_63 type="pagebreak"
    title="63"}to fit into its new environment. In this manner, quotations
    could be altered in such a way that they could be integrated seamlessly
    into a new text while remaining recognizable as direct citations.
    Several of these conventions, for instance the use of square brackets to
    indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
    are also used in this very book. At the same time, the conventions for
    making explicit references led to the creation of an internal reference
    system that made the singular position of the new text legible within a
    collective field of work. "Printing," to quote Elizabeth Eisenstein once
    again, "encouraged forms of combinatory activity which were social as
    well as intellectual. It changed relationships between men of learning
    as well as between systems of
    ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
    in the form of letters and visits, intensified. The seventeenth century
    saw the formation of the *respublica literaria* or the "Republic of
    Letters," a loose network of scholars devoted to promoting the ideas of
    the Enlightenment. Beginning in the eighteenth century, the rapidly
    growing number of scientific fields was arranged and institutionalized
    into clearly distinct disciplines. In the nineteenth and twentieth
    centuries, diverse media-technical innovations made images, sounds, and
    moving images available, though at first only in analog formats. These
    created the preconditions that enabled the montage in all of its forms
    -- film cuts, collages, readymades, *musique concrète*, found-footage
    films, literary cut-ups, and artistic assemblages (to name only the
    best-known genres) -- to become the paradigm of Modernity.
    :::

    ::: {.section}
    ### Information overload 2.0 {#c2-sec-0004}

    It was not until new technical possibilities for recording, storing,
    processing, and reproduction appeared over the course of the 1990s that
    it also became increasingly possible to code and edit images, audio, and
    video digitally. Through the networking that followed close behind,
    society was flooded with an unprecedented amount of digitally
    coded information *of every sort*, and the circulation of this
    information accelerated. This was not, however, simply a quantitative
    change but also and above all a qualitative one. Cultural materials
    became available in a comprehensive []{#Page_64 type="pagebreak"
    title="64"}sense -- economically and organizationally, culturally
    (despite legal problems), and materially (because digitalized). Today it
    would not be bold to predict that nearly every text, image, or sound
    will soon exist in a digital form. Most of the new reproducible works
    are already "born digital" and digitally distributed, or they are
    physically produced according to digital instructions. Many initiatives
    are working to digitalize older, analog works. We are now anchored in
    the digital.

    Among the numerous digitalization projects currently under way, the most
    ambitious is that of Google Books, which, since its launch in 2004, has
    digitalized around 20 million books from the collections of large
    libraries and prepared them for full-text searches. Right from the
    start, a fierce debate arose about the legal and cultural acceptability
    of this project. One concern was whether Google\'s process infringed
    upon the rights of the authors and publishers of the scanned books or
    whether, according to American law, it qualified as "fair use," in which
    case there would be no obligation for the company to seek authorization
    or offer compensation. The second main concern was whether it would be
    culturally or politically appropriate for a private corporation to hold
    a de facto monopoly over the digital heritage of book culture. The first
    issue incited a complex legal battle that, in 2013, was decided in
    Google\'s favor by a judge on the United States District Court in New
    York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
    issue was the question of how a public library should look in the
    twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
    of 2008, the European Commission and the culture ministers of the
    European Union launched the virtual Europeana library, after a number
    of European countries had already invested hundreds of
    millions of euros in various digitalization
    initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
    serves as a common access point to the online archives of around 2,500
    European cultural institutions. By the end of 2015, its digital holdings
    had grown to include more than 40 million objects. This is still,
    however, a relatively small number, for it has been estimated that
    European archives and museums contain more than 220 million
    natural-historical and more than 260 million cultural-historical
    objects. In the United States, discussions about the future of libraries
    []{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
    Digital Public Library of America (DPLA), which, like Europeana,
    provides common access to the digitalized holdings of archives, museums,
    and libraries. By now, more than 14 million items can be viewed there.

    In one way or another, however, both the private and the public projects
    of this sort have been limited by binding copyright laws. The librarian
    and book historian Robert Darnton, one of the most prominent advocates
    of the Digital Public Library of America, has accordingly stated: "The
    main impediment to the DPLA\'s growth is legal, not financial. Copyright
    laws could exclude everything published after 1964, most works published
    after 1923, and some that go back as far as
    1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
    Europe is similar to that in the United States. It, too, massively
    obstructs the work of public
    institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
    has had the absurd consequence that certain materials, though they have
    been fully digitalized, may only be accessed in part or exclusively
    inside the facilities of a particular institution. Whereas companies
    such as Google can afford to wage long legal battles, and in the
    meantime create precedents, public institutions must proceed with great
    caution, not least to avoid the accusation of using public funds to
    violate copyright laws. Thus, they tend to fade into the background and
    leave users, who are unfamiliar with the complex legal situation, with
    the impression that such institutions are even more out-of-date than they often are.

    Informal actors, who explicitly operate beyond the realm of copyright
    law, are not faced with such restrictions. UbuWeb, for instance, which
    is the largest online archive devoted to the history of
    twentieth-century avant-garde art, was not created by an art museum but
    rather by the initiative of an individual artist, Kenneth Goldsmith.
    Since 1996, he has been collecting historically relevant materials that
    were no longer in distribution and placing them online for free and
    without any stipulations. He forgoes the process of obtaining the rights
    to certain works of art because, as he remarks on the website, "Let\'s
    face it, if we had to get permission from everyone on UbuWeb, there
    would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
    simply be too demanding to do so. Because he pursues the project without
    any financial interest and has saved so much []{#Page_66
    type="pagebreak" title="66"}from oblivion, his efforts have provoked
    hardly any legal difficulties. On the contrary, UbuWeb has become so
    important that Goldsmith has begun to receive more and more material
    directly from artists and their heirs, who would like certain works not
    to be forgotten. Nevertheless, or perhaps for this very reason,
    Goldsmith repeatedly stresses the instability of his archive, which
    could disappear at any moment if he loses interest in maintaining it or
    if something else happens. Users are therefore able to download works
    from UbuWeb and archive, on their own, whatever items they find most
    important. Of course, this fragility contradicts the idea of an archive
    as a place for long-term preservation. Yet such a task could only be
    undertaken by an institution that is oriented toward the long term.
    Because of the existing legal conditions, however, it is hardly likely
    that such an institution will come about.

    Whereas Goldsmith is highly adept at operating within a niche that not
    only tolerates but also accepts the violation of formal copyright
    claims, large websites responsible for the uncontrolled dissemination of
    digital content do not bother with such niceties. Their purpose is
    rather to ensure that all popular content is made available digitally
    and for free, whether legally or not. These sites, too, have experienced
    uninterrupted growth. By the end of 2015, tens of millions of people
    were simultaneously using the BitTorrent tracker The Pirate Bay -- the
    largest nodal point for file-sharing networks during the last decade --
    to exchange several million digital files with one
    another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
    despite protracted attempts to block or close down the file-sharing site
    by legal means and despite a variety of competing services. Even when
    the founders of the website were sentenced in Sweden to pay large fines
    (around €3 million) and to serve time in prison, the site still did not
    disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
    same time, new providers have entered the market of free access; their
    method is not to facilitate distributed downloads but rather to offer,
    on account of the drastically reduced cost of data transfers, direct
    streaming. Although some of these services are relatively easy to locate
    and some have been legally banned -- the best-known case in Germany
    being that of the popular site kino.to -- more of them continue to
    appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
    []{#Page_67 type="pagebreak" title="67"}is not limited to music and
    films, but encompasses all media formats. For instance, it is
    foreseeable that the number of freely available plans for 3D objects
    will increase along with the popularity of 3D printing. It has almost
    escaped notice, however, that so-called "shadow libraries" have been
    popping up everywhere; these are not accessible to the general public
    but rather to members, for instance, of closed exchange platforms or of
    university intranets. Few seminars take place any more without a corpus
    of scanned texts, regardless of whether this practice is legal or
    not.[^22^](#c2-note-0022){#c2-note-0022a}

    The lines between these different mechanisms of access are highly
    permeable. Content acquired legally can make its way to file-sharing
    networks as an illegal copy; content available for free can be sold in
    special editions; content from shadow libraries can make its way to
    publicly accessible sites; and, conversely, content that was once freely
    available can disappear into shadow libraries. As regards free access,
    the details of this rapidly changing landscape are almost
    inconsequential, for the general trend that has emerged from these
    various dynamics -- legal and illegal, public and private -- is
    unambiguous: in a comprehensive and practical sense, cultural works of
    all sorts will become freely available despite whatever legal and
    technical restrictions might be in place. Whether absolutely all
    material will be made available in this way is not the decisive factor,
    at least not for the individual, for, as the German Library Association
    has stated, "it is foreseeable that non-digitalized material will
    increasingly escape the awareness of users, who have understandably come
    to appreciate the ubiquitous availability and more convenient
    processability of the digital versions of analog
    objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
    information, it is difficult to determine whether a particular work or a
    crucial reference is missing, given that a multitude of other works and
    references can be found in their place.

    At the same time, prodigious amounts of new material are being produced
    that, before the era of digitalization and networks, never could have
    existed at all or never would have left the private sphere. An example
    of this is amateur photography. This is nothing new in itself; as early
    as 1899, Kodak was marketing its films and apparatus with the slogan
    "You press the button, we do the rest," and ever since, []{#Page_68
    type="pagebreak" title="68"}drawers and albums have been overflowing
    with photographs. With the advent of digitalization, however, certain
    economic and material limitations ceased to exist that, until then, had
    caused most private photographers to think twice about how many shots
    they wanted to take. After all, they had to pay for the film to be
    developed and then store the pictures somewhere. Cameras also became
    increasingly "intelligent," which improved the technical quality of
    photographs. Even complex procedures such as increasing the level of
    detail or the contrast ratio -- the difference between an image\'s
    brightest and darkest points -- no longer require any specialized
    knowledge of photochemical processes in the darkroom. Today, such
    features are often pre-installed in many cameras as an option (high
    dynamic range). Ever since the introduction of built-in digital cameras
    for smartphones, anyone with such a device can take pictures everywhere
    and at any time and then store them digitally. Images can then be posted
    on online platforms and shared with others. By the middle of 2015,
    Flickr -- the largest but certainly not the only specialized platform of
    this sort -- had more than 112 million registered users participating in
    more than 2 million groups. Every user has access to free storage space
    for about half a million of his or her own pictures. At that point, in
    other words, the platform was equipped to manage more than 55 billion
    photographs. Around 3.5 million images were being uploaded every day,
    many of which could be accessed by anyone. This may seem like a lot, but
    in reality it is just a small portion of the pictures that are posted
    online on a daily basis. Around that same time -- again, the middle of
    2015 -- approximately 350 million pictures were being posted on Facebook
    *every day*. The total number of photographs saved there has been
    estimated to be 250 billion. In addition, there are also large platforms
    for professional "stock photos" (supplies of pre-produced images that
    are supposed to depict generic situations) and the databanks of
    professional agencies such as Getty Images or Corbis. All of these images
    can be found easily and acquired quickly (though not always for free).
    Yet photography is not unique in this regard. In all fields, the number
    of cultural artifacts available to the public on specialized platforms
    has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
    title="69"}
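
    The contrast ratio mentioned here is simple arithmetic, and the merging
    of exposures can be sketched in a few lines of Python (a toy
    illustration of my own, with invented pixel values; real
    high-dynamic-range processing weights and calibrates the exposures
    rather than merely averaging them):

    ```python
    # Toy sketch: merge three exposures of one scene and measure the
    # contrast ratio (brightest point divided by darkest point).
    underexposed = [0.02, 0.10, 0.45]  # shadows crushed toward black
    normal       = [0.05, 0.40, 0.95]  # highlights close to clipping
    overexposed  = [0.20, 0.80, 1.00]  # highlights fully clipped

    def merge(*frames):
        # The simplest possible fusion: average the frames pixel by pixel.
        return [sum(values) / len(values) for values in zip(*frames)]

    hdr = merge(underexposed, normal, overexposed)
    contrast_ratio = max(hdr) / min(hdr)
    print(hdr)                       # detail retained at both ends
    print(round(contrast_ratio, 1))  # brightest over darkest point
    ```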
    :::

    ::: {.section}
    ### The great disorder {#c2-sec-0005}

    The old orders that had been responsible for filtering, organizing, and
    publishing cultural material -- culture industries, mass media,
    libraries, museums, archives, etc. -- are incapable of managing almost
    any aspect of this deluge. They can barely function as gatekeepers any
    more between those realms that, with their help, were once defined as
    "private" and "public." Their decisions about what is or is not
    important matter less and less. Moreover, having already been subjected
    to a decades-long critique, their rules, which had been relatively
    binding and formative over long periods of time, are rapidly losing
    practical significance.

    Even Europeana, a relatively small project based on traditional museums
    and archives and with a mandate to make the European cultural heritage
    available online, has contributed to the disintegration of established
    orders: it indiscriminately brings together 2,500 previously separated
    institutions. The specific semantic contexts that formerly shaped the
    history and orientation of institutions have been dissolved or reduced
    to dry meta-data, and millions upon millions of cultural artifacts are
    now equidistant from one another. Instead of certain artifacts being
    firmly anchored in a location, for instance in an ethnographic
    collection devoted to the colonial history of France, it is now possible
    for everything to exist side by side. Europeana is not an archive in the
    traditional sense, or even a museum with a fixed and meaningful order;
    rather, it is just a standard database. Everything in it is just one
    search request away, and every search generates a unique order in the
    form of a sequence of visible artifacts. As a result, individual objects
    are freed from those meta-narratives, created by the museums and
    archives that preserve them, which situate them within broader contexts
    and assign more or less clear meanings to them. They consequently become
    more open to interpretation. A search result does not articulate an
    interpretive field of reference but merely a connection, created by
    constantly changing search algorithms, between a request and the corpus
    of material, which is likewise constantly changing.
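
    What it means for a collection to become "just a standard database" can
    be made concrete with a minimal sketch (my own illustration, using
    Python\'s built-in sqlite3 module and invented records): the metadata
    table has no privileged arrangement, and each query produces its own
    transient sequence of artifacts.

    ```python
    # A flat metadata table: every object is equidistant from every other,
    # and order exists only as the by-product of a particular query.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE artifacts (title TEXT, institution TEXT, year INTEGER)")
    con.executemany(
        "INSERT INTO artifacts VALUES (?, ?, ?)",
        [
            ("Mask", "Ethnographic Museum", 1890),
            ("Etching", "Print Cabinet", 1642),
            ("Photograph", "City Archive", 1923),
        ],
    )

    # Two requests, two different orders; neither is the "real"
    # arrangement of the collection.
    print(con.execute("SELECT title FROM artifacts ORDER BY year").fetchall())
    print(con.execute("SELECT title FROM artifacts WHERE year > 1800 ORDER BY title").fetchall())
    ```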

    Precisely because it offers so many different approaches to more or less
    freely combinable elements of information, []{#Page_70 type="pagebreak"
    title="70"}the order of the database no longer really provides a
    framework for interpreting search results in a meaningful way.
    Altogether, the meaning of many objects and signs is becoming even more
    uncertain. On the one hand, this is because the connection to their
    original context is becoming fragile; on the other hand, it is because
    they can appear in every possible combination and in the greatest
    variety of reception contexts. In less official archives and in less
    specialized search engines, the dissolution of context is far more
    pronounced than it is in the case of the Europeana project. For the sake
    of orienting its users, for instance, YouTube provides the date when a
    video has been posted, but there is no indication of when a video was
    actually produced. Further information provided about a video, for
    example in the comments section, is essentially unreliable. It might be
    true -- or it might not. The internet researcher David Weinberger has
    called this the "new digital disorder," which, at least for many users,
    is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
    individuals, this disorder has created both the freedom to establish
    their own orders and the obligation of doing so, regardless of whether
    or not they are ready for the task.

    This tension between freedom and obligation is at its strongest online,
    where the excess of culture and its more or less free availability are
    immediate and omnipresent. In fact, everything that can be retrieved
    online is culture in the sense that everything -- from the deepest layer
    of hardware to the most superficial tweet -- has been made by someone
    with a particular intention, and everything has been made to fit a
    particular order. And it is precisely this excess of often contradictory
    meanings and limited, regional, and incompatible orders that leads to
    disorder and meaninglessness. This is not limited to the online world,
    however, because the latter is not self-contained. In an essential way,
    digital media also serve to organize the material world. On the basis of
    extremely complex and opaque yet highly efficient logistical and
    production processes, people are also confronted with constantly
    changing material things about whose origins and meanings they have
    little idea. Even something as simple to produce as yoghurt has
    usually traveled a thousand kilometers before it ends up on a
    supermarket shelf. The logistics that enable this are oriented toward
    flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
    together as efficiently as possible. It is nearly impossible for final
    customers to find out anything about the ingredients. Customers are
    merely supposed to be oriented by signs and notices such as "new" or "as
    before," "natural," and "healthy," which are written by specialists and
    meant to manipulate shoppers as much as the law allows. Even here, in
    corporeal everyday life, every individual has to deal with a surge of
    excess and disorder that threatens to erode the original meaning
    conferred on every object -- even where such meaning was once entirely
    unproblematic, as in the case of
    yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
    :::

    ::: {.section}
    ### Selecting and organizing {#c2-sec-0006}

    In this situation, the creation of one\'s own system of references has
    become a ubiquitous and generally accessible method for organizing all
    of the ambivalent things that one encounters on a given day. Such things
    are thus arranged within a specific context of meaning that also
    (co)determines one\'s own relation to the world and subjective position
    in it. Referentiality takes place through three types of activity, the
    first being simply to attract attention to certain things, which affirms
    (at least implicitly) that they are important. With every single picture
    posted on Flickr, every tweet, every blog post, every forum post, and
    every status update, the user is doing exactly that; he or she is
    communicating to others: "Look over here! I think this is important!" Of
    course, there is nothing new to filtering and allocating meaning. What
    is new, however, is that these processes are no longer being carried out
    primarily by specialists at editorial offices, museums, or archives, but
    have become daily requirements for a large portion of the population,
    regardless of whether they possess the material and cultural resources
    that are necessary for the task.
    :::

    ::: {.section}
    ### The loop through the body {#c2-sec-0007}

    Given the flood of information that perpetually surrounds everyone, the
    act of focusing attention and reducing vast numbers of possibilities
    into something concrete has become a productive achievement, however
    banal each of these micro-activities might seem on its own, and even if,
    at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
    be to focus the attention of the person doing it. The value of this
    (often very brief) activity is that it singles out elements from the
    uniform sludge of unmanageable complexity. Something plucked out in this
    way gains value because it has required the use of a resource that
    cannot be reproduced, that exists outside of the world of information
    and that is invariably limited for every individual: our own lifetime.
    Every status update that is not machine-generated means that someone has
    invested time, be it only a second, in order to point to this and not to
    something else. Thus, a process of validating what exists in the excess
    takes place in connection with the ultimate scarcity -- our own
    lifetimes, our own bodies. Even if the value generated by this act is
    minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
    famous definition of information -- a difference that makes a difference
    in this stream of equivalencies and
    meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
    -- this use of one\'s own body to generate meaning -- does not, however,
    take place by means of mere micro-activities throughout the day; it is
    also a defining aspect of complex cultural strategies. In recent years,
    re-enactment (that is, the re-staging of historical situations and
    events) has established itself as a common practice in contemporary art.
    Unlike traditional re-enactments, such as those of historically
    significant battles, which attempt to represent the past as faithfully
    as possible, "artistic re-enactments," according to the curator Inke
    Arns, "are not an affirmative confirmation of the past; rather, they are
    *questionings* of the present through reaching back to historical
    events," especially as they are represented in images and other forms of
    documentation. Thanks to search engines and databases, such
    representations are more or less always present, though in the form of
    indeterminate images, ambivalent documents, and contentious
    interpretations. Artists in this situation, as Arns explains,

    ::: {.extract}
    do not ask the naïve question about what really happened outside of the
    history represented in the media -- the "authenticity" beyond the images
    -- instead, they ask what the images we see might mean concretely to us,
    if we were to experience these situations personally. In this way the
    artistic reenactment confronts the general feeling of insecurity about
    the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
    paradoxical approach: through erasing distance to the images and at the
    same time distancing itself from the
    images.[^27^](#c2-note-0027){#c2-note-0027a}
    :::

    This paradox manifests itself in that the images are appropriated and
    sublated through the use of one\'s own body in the re-enactments. They
    simultaneously refer to the past and create a new reality in the
    present. In perhaps the best-known re-enactment of this type, the artist
    Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
    central episodes of the British miners\' strike of 1984 and 1985. This
    historical event is regarded as a turning point in the protracted
    conflict between Margaret Thatcher\'s government and the labor unions --
    a key moment in the implementation of Great Britain\'s neoliberal
    regime, which is still in effect today. In Deller\'s re-enactment, the
    heart of the matter is not historical accuracy, which is always
    controversial in such epoch-changing events. Rather, he focuses on the
    former participants -- the miners and police officers alike, who, along
    with non-professional actors, lived through the situation again -- in
    order to explore both the distance from the events and their
    representation in the media, as well as their ongoing biographical and
    societal presence.[^28^](#c2-note-0028){#c2-note-0028a}

    Elaborate practices of embodying medial images through processes of
    appropriation and distancing have also found their way into popular
    culture, for instance in so-called "cosplay." The term, which is a
    contraction of the words "costume" and "play," was coined by a Japanese
    man named Nobuyuki Takahashi. In 1984, while attending the World Science
    Fiction Convention in Los Angeles, he used the word to describe the
    practice of certain attendees of dressing up as their favorite characters.
    Participants in cosplay embody fictitious figures -- mostly from the
    worlds of science fiction, comics/manga, or computer games -- by donning
    home-made costumes and striking characteristic
    poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
    effort that goes into this is mostly reflected in the costumes, not in
    the choreography or dramaturgy of the performance. What is significant
    is that these costumes are usually not exact replicas but are rather
    freely adapted by each player to represent the character as he or she
    interprets it to be. Accordingly, "Cosplay is a form of appropriation
    []{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
    performs an existing story in close connection to the fan\'s own
    identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
    admittedly, goes back quite far in the history of fan culture, but it
    has experienced a striking surge through the opportunity for fans to
    network with one another around the world, to produce costumes and
    images of professional quality, and to place themselves on the same
    level as their (fictitious) idols. By now it has become a global
    subculture whose members are active not only online but also at hundreds
    of conventions throughout the world. In Germany, an annual cosplay
    competition has been held since 2007 (it is organized by the Frankfurt
    Book Fair and Animexx, the country\'s largest manga and anime
    community). The scene, which has grown and branched out considerably
    over the past few years, has slowly begun to professionalize, with
    shops, books, and players who make paid appearances. Even in fan
    culture, stars are born. As soon as the subculture has exceeded a
    certain size, this gradual onset of commercialization will undoubtedly
    lead to tensions within the community. For now, however, two of its
    noteworthy features remain: the power of the desire to appropriate, in a
    bodily manner, characters from vast cultural universes, and the
    widespread combination of free interpretation and meticulous attention
    to detail.
    :::

    ::: {.section}
    ### Lineages and transformations {#c2-sec-0008}

    Because of the great effort that they require, re-enactment and cosplay
    are somewhat extreme examples of singling out, appropriating, and
    referencing. As everyday activities that take place almost incidentally,
    however, these three practices usually do not make any significant or
    lasting difference. Yet they do not happen just once, but over and over
    again. They accumulate and thus constitute referentiality\'s second type
    of activity: the creation of connections between the many things that
    have attracted attention. In such a way, paths are forged through the
    vast complexity. These paths, which can be formed, for instance, by
    referring to different things one after another, likewise serve to
    produce and filter meaning. Things that can potentially belong in
    multiple contexts are brought into a single, specific context. For the
    individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
    fields of attention, reference systems, and contexts of meaning are
    first established. In the third step, the things that have been selected
    and brought together are changed. Perhaps something is removed to modify
    the meaning, or perhaps something is added that was previously absent or
    unavailable. Either way, referential culture is always producing
    something new.

    These processes are applied both within individual works (referentiality
    in a strict sense) and within currents of communication that consist of
    numerous molecular acts (referentiality in a broader sense). This latter
    sort of compilation is far more widespread than the creation of new
    re-mix works. Consider, for example, the billionfold sequences of status
    updates, which sometimes involve a link to an interesting video,
    sometimes a post of a photograph, then a short list of favorite songs, a
    top 10 chart from one\'s own feed, or anything else. Such methods of
    inscribing oneself into the world by means of references, combinations,
    or alterations are used to create meaning through one\'s own activity in
    the world and to constitute oneself in it, both for one\'s self and for
    others. In a culture that manifests itself to a great extent through
    mediatized communication, people have to constitute themselves through
    such acts, if only by posting
    "selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
    risk invisibility and being forgotten.

    On this basis, a genuine digital folk culture of re-mixing and mashups
    has formed in recent years on online platforms, in game worlds, but also
    through cultural-economic productions of individual pieces or short
    series. It is generated and maintained by innumerable people with
    varying degrees of intensity and ambition. Its common feature with
    traditional folk culture, in choirs or elsewhere, is that production
    and reception (but also reproduction and creation) largely coincide.
    Active participation admittedly requires a certain degree of
    proficiency, interest, and engagement, but usually not any extraordinary
    talent. Many classical institutions such as museums and archives have
    been attempting to take part in this folk culture by setting up their
    own re-mix services. They know that the "public" is no longer able or
    willing to limit its engagement with works of art and cultural history
    to one of quiet contemplation. At the end of 2013, even []{#Page_76
    type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
    initiated a re-mix competition. A year earlier, the Rijksmuseum in
    Amsterdam launched its so-called "Rijksstudio". Since then, the museum has
    made available on its website more than 200,000 high-resolution images
    from its collection. Users are free to use these to create their own
    re-mixes online and share them with others. Interestingly, the
    Rijksmuseum does not distinguish between the work involved in
    transforming existing pieces and that involved in curating its own
    online gallery.

    Referential processes have no beginning and no end. Any material that is
    used to make something new has a pre-history of its own, even if its
    traces are lost in clouds of uncertainty. Upon closer inspection, this
    cloud might clear a little bit, but it is extremely uncommon for a
    genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
    raises the question of whether there can really be something like
    originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
    Regardless of the answer to this question, the fact that by now many
    people select, combine, and alter objects on a daily basis has led to a
    slow shift in our perception and sensibilities. In light of the
    experiences that so many people are creating, the formerly exotic
    theories of deconstruction suddenly seem anything but outlandish. Nearly
    half a century ago, Roland Barthes defined the text as a fabric of
    quotations, and this incited vehement
    opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
    would be inclined to say today, "that can be statistically proven
    through software analysis!" Amazon identifies books by means of their
    "statistically improbable phrases"; that is, by means of textual
    elements that are highly unlikely to occur elsewhere. This implies, of
    course, that books contain many textual elements that are highly likely
    to be found in other texts, without suggesting that such elements would
    have to be regarded as plagiarism.
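
    The logic behind such "statistically improbable phrases" can be
    sketched in a few lines of Python (an illustrative approximation of my
    own, with invented toy texts; Amazon\'s actual procedure is not
    public): phrases are scored by how much more frequent they are in one
    text than in a background corpus.

    ```python
    # Find phrases that are frequent in one text but rare elsewhere --
    # the intuition behind "statistically improbable phrases."
    from collections import Counter

    def bigrams(text: str):
        words = text.lower().split()
        return zip(words, words[1:])

    book = "the bearded diva sings and the bearded diva smiles"
    background = "the diva sings the song and the crowd smiles"

    book_counts = Counter(bigrams(book))
    background_counts = Counter(bigrams(background))

    # Score each phrase by how much more often it occurs in the book than
    # in the background corpus (add-one smoothing in the denominator).
    scores = {
        phrase: count / (1 + background_counts[phrase])
        for phrase, count in book_counts.items()
    }
    for phrase, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(" ".join(phrase), round(score, 2))
    ```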

    In the Gutenberg Galaxy, with its fixation on writing, the earliest
    textual document is usually understood to represent a beginning. If no
    references to anything before can be identified, the text is then
    interpreted as a closed entity, as a new text. Thus, fairy tales and
    sagas, which are typical elements of oral culture, are still more
    strongly associated with the names of those who recorded them than with
    the names of those who narrated them. This does not seem very convincing
    today. In recent years, literary historians have made strong []{#Page_77
    type="pagebreak" title="77"}efforts to shift the focus of attention to
    the people (mostly women) who actually told certain fairy tales. In
    doing so, they have been able to work out to what extent the respective
    narrators gave shape to specific stories, which were written down as
    common versions, and to what extent these stories reflect their
    narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}

    Today, after more than 40 years of deconstructionist theory and a change
    in our everyday practices, it is no longer controversial to read works
    -- even by canonical figures like Wagner or Mozart -- in such a way as
    to highlight the other works, either by the artists in question or by
    other artists, that are contained within
    them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
    decreased appreciation but rather an indication that, as Zygmunt Bauman
    has stressed, "The way human beings understand the world tends to be at
    all times *praxeomorphic*: it is always shaped by the know-how of the
    day, by what people can do and how they usually go about doing
    it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
    today is one of singling out, bringing together, altering, and adding.
    Accordingly, not only has our view of current cultural production
    shifted; our view of cultural history has shifted as well. As always,
    the past is made to suit the sensibilities of the present.

    As a rule, however, things that have no beginning also have no end. This
    is not only because they can in turn serve as elements for other new
    contexts of meaning, but also because the attention paid to the context
    in which they take on specific meaning is sensitive to the work that has
    to be done to maintain the context itself. Even timelessness is an
    elaborate everyday business. The attempt to rescue works of art from the
    ravages of time -- to preserve them forever -- means that they regularly
    need to be restored. Every restoration inevitably stirs a debate about
    whether the planned interventions are appropriate and about how to deal
    with the traces of previous interventions, which, from the current
    perspective, often seem to be highly problematic. Whereas, just a
    generation ago, preservationists ensured that such interventions
    remained visible (as articulations of the historical fissures that are
    typical of Modernity), today greater emphasis is placed on reducing
    their visibility and re-creating the illusion of an "original condition"
    (without, however, impeding any new functionality that a piece might
    have in the present). []{#Page_78 type="pagebreak" title="78"}The
    historically faithful restoration of the Berlin City Palace, combined
    with its repurposing as a museum and meeting place, is typical of this
    new attitude in dealing with our historical heritage.

    In everyday activity, too, the never-ending necessity of this work can
    be felt at all times. Here the issue is not timelessness, but rather
    that the established contexts of meaning quickly become obsolete and
    therefore have to be continuously affirmed, expanded, and changed in
    order to maintain the relevance of the field that they define. This
    lends referentiality a performative character that combines productive
    and reproductive dimensions. That which is not constantly used and
    renewed simply disappears. Often, however, this only means that it will
    sink into an endless archive and become unrealized potential until
    someone reactivates it, breathes new life into it, rouses it from its
    slumber, and incorporates it into a newly relevant context of meaning.
    "To be relevant," according to the artist Eran Schaerf, "things must be
    recyclable."[^37^](#c2-note-0037){#c2-note-0037a}

    Alone, everyone is overwhelmed by the task of having to generate meaning
    against this backdrop of all-encompassing meaninglessness. First, the
    challenge is too great for any individual to overcome; second, meaning
    itself is only created intersubjectively. While it can admittedly be
    asserted by a single person, others have to confirm it before it can
    become a part of culture. For this reason, the actual subject of
    cultural production under the digital condition is not the individual
    but rather the next-largest unit.
    :::
    :::

    ::: {.section}
    Communality {#c2-sec-0009}
    -----------

    For an individual, it is impossible to orient oneself within a complex
    environment. Meaning -- as well as the ability to act -- can only be
    created, reinforced, and altered in exchange with others. This is
    nothing noteworthy; biologically and culturally, people are social
    beings. What has changed historically is how people are integrated into
    larger contexts, how processes of exchange are organized, and what every
    individual is expected to do in order to become a fully fledged
    participant in these processes. For nearly 50 years, traditional
    []{#Page_79 type="pagebreak" title="79"}institutions -- that is,
    hierarchically and bureaucratically organized civic institutions such
    as established churches, labor unions, and political parties -- have
    continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
    In tandem with this, the overall commitment to the identities, family
    values, and lifestyles promoted by these institutions has likewise been
    in decline. The great mechanisms of socialization from the late stages
    of the Gutenberg Galaxy have been losing more and more of their
    influence, though at different speeds and to different extents. All
    told, however, explicitly and collectively normative impulses are
    decreasing, while others (implicitly economic, above all) are on the
    rise. According to mainstream sociology, a cause or consequence of this
    is the individualization and atomization of society. As early as the
    middle of the 1980s, Ulrich Beck claimed: "In the individualized society
    the individual must therefore learn, on pain of permanent disadvantage,
    to conceive of himself or herself as the center of action, as the
    planning office with respect to his/her own biography, abilities,
    orientations, relationships and so
    on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
    the dominant neoliberal political orientation, with its strong stress on
    the freedom of the individual -- to realize oneself as an individual
    actor in the allegedly open market and in opposition to allegedly
    domineering collective mechanisms -- has radicalized these tendencies
    even further. The ability to act, however, is not only a question of
    one\'s personal attitude but also of material resources. And it is this
    same neoliberal politics that deprives so many people of the resources
    needed to take advantage of these new freedoms in their own lives. As a
    result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

    Under the digital condition, this process has permeated the finest
    structures of social life. Individualization, commercialization, and the
    production of differences (through design, for instance) are ubiquitous.
    Established civic institutions are not alone in being hollowed out;
    relatively new collectives are also becoming more differentiated, a
    development that I outlined above with reference to the transformation
    of the gay movement into the LGBT community. Yet nevertheless, or
    perhaps for this very reason, new forms of communality are being formed
    in these offshoots -- in the small activities of everyday life. And
    these new communal formations -- rather []{#Page_80 type="pagebreak"
    title="80"}than individual people -- are the actual subjects who create
    the shared meaning that we call culture.

    ::: {.section}
    ### The problem of the "community" {#c2-sec-0010}

    I have chosen the rather cumbersome expression "communal formation" in
    order to avoid the term "community" (*Gemeinschaft*), although the
    latter is used increasingly often in discussions of digital cultures and
    has played an important role, from the beginning, in conceptions of
    networking. Viewed analytically, however, "community" is a problematic
    term because it is almost hopelessly overloaded. Particularly in the
    German-speaking tradition, Ferdinand Tönnies\'s polar distinction
    between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
    which he introduced in 1887, remains
    influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted two
    fundamentally different and exclusive types of social relations. Whereas
    community is characterized by the overlapping multidimensional nature of
    social relationships, society is defined by the functional separation of
    its sectors and spheres. Community embeds every individual into complex
    social relationships, all of which tend to be simultaneously present. In
    the traditional village community ("communities of place," in Tönnies\'s
    terms), neighbors are involved with one another, for better or for
    worse, both on a familiar basis and economically or religiously. Every
    activity takes place on several different levels at the same time.
    Communities are comprehensive social institutions that penetrate all
    areas of life, endowing them with meaning. Through mutual dependency,
    they create stability and security, but they also obstruct change and
    hinder social mobility. Because everyone is connected with each other,
    no one can leave his or her place without calling into question the
    arrangement as a whole. Communities are thus structurally conservative.
    Because every human activity is embedded in multifaceted social
    relationships, every change requires adjustments across the entire
    interrelational web -- a task that is not easy to accomplish.
    Accordingly, the traditional communities of the eighteenth and
    nineteenth centuries fiercely opposed the establishment of capitalist
    society. In order to impose the latter, the old community structures
    were broken apart with considerable violence. This is what Marx
    []{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
    that famous passage from *The Communist Manifesto*: "All the settled,
    age-old relations with their train of time-honoured preconceptions and
    viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
    smoke, everything sacred is
    profaned."[^41^](#c2-note-0041){#c2-note-0041a}

    The defining feature of society, on the contrary, is that it frees the
    individual from such multifarious relationships. Society, according to
    Tönnies, separates its members from one another. Although they
    coordinate their activity with others, they do so in order to pursue
    partial, short-term, and personal goals. Not only are people separated,
    but so too are different areas of life. In a market-oriented society,
    for instance, the economy is conceptualized as an independent sphere. It
    can therefore break away from social connections to be organized simply
    by limited formal or legal obligations between actors who, beyond these
    obligations, have nothing else to do with one another. Costs or benefits
    that inadvertently affect people who are uninvolved in a given market
    transaction are referred to by economists as "externalities," and market
    participants do not need to care about these because they are strictly
    pursuing their own private interests. One of the consequences of this
    form of social relationship is a heightened social dynamic, for now it
    is possible to introduce changes into one area of life without
    considering its effects on other areas. In the end, the dissolution of
    mutual obligations, increased uncertainty, and the reduction of many
    social connections go hand in hand with what Marx and Engels referred to
    in *The Communist Manifesto* as "unfeeling hard cash."

    From this perspective, the historical development looks like an
    ambivalent process of modernization in which society (dynamic, but cold)
    is erected over the ruins of community (static, but warm). This is an
    unusual combination of romanticism and progress-oriented thinking, and
    the problems with this influential perspective are numerous. There is,
    first, the matter of its dichotomy; that is, its assumption that there
    can only be these two types of arrangement, community and society. Or
    there is the notion that the one form can be completely ousted by the
    other, even though aspects of community and aspects of society exist at
    the same time in specific historical situations, be it in harmony or in
    conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
    type="pagebreak" title="82"}These impressions, however, which are so
    firmly associated with the German concept of *Gemeinschaft*, make it
    rather difficult to comprehend the new forms of communality that have
    developed in the offshoots of networked life. This is because, at least
    for now, these latter forms do not represent a genuine alternative to
    societal types of social
    connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
    "community" is somewhat more open. The opposition between community and
    society resonates with it as well, although the dichotomy is not as
    clear-cut. American communitarianism, for instance, considers the
    difference between community and society to be gradual and not
    categorical. Its primary aim is to strengthen civic institutions and
    mechanisms, and it regards community as an intermediary level between
    the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
    there is a related English term, which seems even more productive for my
    purposes, namely "community of practice," a concept that is more firmly
    grounded in the empirical observation of concrete social relationships.
    The term was introduced at the beginning of the 1990s by the social
    researchers Jean Lave and Étienne Wenger. They observed that, in most
    cases, professional learning (for instance, in their case study of
    midwives) does not take place as a one-sided transfer of knowledge or
    proficiency, but rather as an open exchange, often outside of the formal
    learning environment, between people with different levels of knowledge
    and experience. In this sense, learning is an activity that, though
    distinguishable, cannot easily be separated from other "normal"
    activities of everyday life. As Lave and Wenger stress, however, the
    community of practice is not only a social space of exchange; it is
    rather, and much more fundamentally, "an intrinsic condition for the
    existence of knowledge, not least because it provides the interpretive
    support necessary for making sense of its
    heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
    are thus always epistemic communities that form around certain ways of
    looking at the world and one\'s own activity in it. What constitutes a
    community of practice is thus the joint acquisition, development, and
    preservation of a specific field of practice that contains abstract
    knowledge, concrete proficiencies, the necessary material and social
    resources, guidelines, expectations, and room to interpret one\'s own
    activity. All members are active participants in the constitution of
    this field, and this reinforces the stress on []{#Page_83
    type="pagebreak" title="83"}practice. Each of them, however, brings
    along different presuppositions and experiences, for they are embedded
    within numerous and specific situations of life or work.
    The processes within the community are mostly informal, and yet they are
    thoroughly structured, for authority is distributed unequally and is
    based on the extent to which the members value each other\'s (and their
    own) levels of knowledge and experience. At first glance, then, the term
    "community of practice" seems apt to describe the meaning-generating
    communal formations that are at issue here. It is also somewhat
    problematic, however, because, having since been subordinated to
    management strategies, its use is now narrowly applied to professional
    learning and managing knowledge.[^46^](#c2-note-0046){#c2-note-0046a}

    From these various notions of community, it is possible to develop the
    following way of looking at new types of communality: they are formed in
    a field of practice, characterized by informal yet structured exchange,
    focused on the generation of new ways of knowing and acting, and
    maintained through the reflexive interpretation of their own activity.
    This last point in particular -- the communal creation, preservation,
    and alteration of the interpretive framework in which actions,
    processes, and objects acquire a firm meaning and connection -- can be
    seen as the central role of communal formations.

    Communication is especially significant to them. Individuals must
    continuously communicate in order to constitute themselves within the
    fields and practices, or else they will remain invisible. The mass of
    tweets, updates, emails, blogs, shared pictures, texts, posts on
    collaborative platforms, and databases (etc.) that are necessary for
    this can only be produced and processed by means of digital
    technologies. In this act of incessant communication, which is a
    constitutive element of social existence, the personal desire for
    self-constitution and orientation becomes enmeshed with the outward
    pressure of having to be present and available to form a new and binding
    set of requirements. This relation between inward motivation and outward
    pressure can vary highly, depending on the character of the communal
    formation and the position of the individual within it (although it is
    not the individual who determines what successful communication is, what
    represents a contribution to the communal formation, or in which form
    one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
    decisions are made by other members of the formation in the form of
    positive or negative feedback (or none at all), and they are made with
    recourse to the interpretive framework that has been developed in
    common. These communal and continuous acts of learning, practicing, and
    orientation -- the exchange, that is, between "novices" and "experts" on
    the same field, be it concerned with internet politics, illegal street
    racing, extreme right-wing music, body modification, or a free
    encyclopedia -- serve to maintain the framework of shared meaning,
    expand the constituted field, recruit new members, and adapt the
    framework of interpretation and activity to changing conditions. Such
    communal formations constitute themselves; they preserve and modify
    themselves by constantly working out the foundations of their
    constitution. This may sound circular, for the process of reflexive
    self-constitution -- "autopoiesis" in the language of systems theory --
    is circular in the sense that control is maintained through continuous,
    self-generating feedback. Self-referentiality is a structural feature of
    these formations.
    :::

    ::: {.section}
    ### Singularity and communality {#c2-sec-0011}

    The new communal formations are informal forms of organization that are
    based on voluntary action. No one is born into them, and no one
    possesses the authority to force anyone else to join or remain against
    his or her will, or to assign anyone with tasks that he or she might be
    unwilling to do. Such a formation is not an enclosed disciplinary
    institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
    and, within it, power is not exercised through commands, as in the
    classical sense formulated by Max
    Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
    locked up and not being subordinated can, at least at first, represent
    for the individual a gain in freedom. Under a given set of conditions,
    everyone can (and must) choose which formations to participate in, and
    he or she, in doing so, will have a better or worse chance to influence
    the communal field of reference.

    On the everyday level of communicative self-constitution and creating a
    personal cognitive horizon -- in innumerable streams, updates, and
    timelines on social mass media -- the most important resource is the
    attention of others; that is, their feedback and the mutual recognition
    that results from it. []{#Page_85 type="pagebreak" title="85"}And this
    recognition may simply be in the form of a quickly clicked "like," which
    is the smallest unit that can assure the sender that, somewhere out
    there, there is a receiver. Without the latter, communication has no
    meaning. The situation is somewhat menacing if no one clicks the "like"
    button beneath a post or a photo. It is a sign that communication has
    broken down, and the result is the dissolution of one\'s own communicatively
    constituted social existence. In this context, the boundaries are
    blurred between the categories of information, communication, and
    activity. Making information available always involves the active --
    that is, communicating -- person, and not only in the case of ubiquitous
    selfies, for in an overwhelming and chaotic environment, as discussed
    above, selection itself is of such central importance that the
    differences between the selected and the selecting become fluid,
    particularly when the goal of the latter is to experience confirmation
    from others. In this back-and-forth between one\'s own presence and the
    validation of others, one\'s own motives and those of the community are
    not in opposition but rather mutually depend on one another. Condensed
    to simple norms and to a basic set of guidelines within the context of
    an image-oriented social mass media service, the rule (or better:
    friendly tip) that one need not but probably ought to follow is this:

    ::: {.extract}
    Be an active member of the Instagram community to receive likes and
    comments. Take time to comment on a friend\'s photo, or to like photos.
    If you do this, others will reciprocate. If you never acknowledge your
    followers\' photos, then they won\'t acknowledge
    you.[^49^](#c2-note-0049){#c2-note-0049a}
    :::

    The context of this widespread and highly conventional piece of advice
    is not, for instance, a professional marketing campaign; it is simply
    about personally positioning oneself within a social network. The goal
    is to establish one\'s own, singular, identity. The process required to
    do so is not primarily inward-oriented; it is not based on questions
    such as: "Who am I really, apart from external influences?" It is rather
    outward-oriented. It takes place through making connections with others
    and is concerned with questions such as: "Who is in my network, and what
    is my position within it?" It is []{#Page_86 type="pagebreak"
    title="86"}revealing that none of the tips in the collection cited above
    offers advice about achieving success within a community of
    photographers; there are no suggestions, for instance, about how to
    take high-quality photographs. With smart cameras and built-in filters
    for post-production, this is not especially challenging any more,
    especially because individual pictures, to be examined closely and on
    their own terms, have become less important gauges of value than streams
    of images that are meant to be quickly scrolled through. Moreover, the
    function of the critic, who once monopolized the right to interpret and
    evaluate an image for everyone, is no longer of much significance.
    Instead, the quality of a picture is primarily judged according to
    whether "others like it"; that is, according to its performance in the
    ongoing popularity contest within a specific niche. But users do not
    rely on communal formations and the feedback they provide just for the
    sharing and evaluation of pictures. Rather, this dynamic has come to
    determine more and more facets of life. Users experience the
    constitution of singularity and communality, in which a person can be
    perceived as such, as simultaneous and reciprocal processes. A million
    times over and nearly subconsciously (because it is so commonplace),
    they engage in a relationship between the individual and others that no
    longer really corresponds to the liberal opposition between
    individuality and society, between personal and group identity. Instead
    of viewing themselves as exclusive entities (either in terms of the
    emphatic affirmation of individuality or its dissolution within a
    homogeneous group), the new formations require that the production of
    difference and commonality takes place
    simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
    :::

    ::: {.section}
    ### Authenticity and subjectivity {#c2-sec-0012}

    Because members have decided to participate voluntarily in the
    community, their expressions and actions are regarded as authentic, for
    it is implicitly assumed that, in making these gestures, they are not
    following anyone else\'s instructions but rather their own motivations.
    The individual does not act as a representative or functionary of an
    organization but rather as a private and singular (that is, unique)
    person. At a gathering of the Occupy movement, a sure way to be
    kicked out is to stick stubbornly to a party line, even if this way
    []{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
    with that of the movement. Not only at Occupy gatherings, however, but
    in all new communal formations it is expected that everyone there is
    representing his or her own interests. As most people are aware, this
    assumption is theoretically naïve and often proves to be false in
    practice. Even spontaneity can be calculated, and in many cases it is.
    Nevertheless, the expectation of authenticity is relevant because it
    creates a minimum of trust. As the basis of social trust, such
    contra-factual expectations exist elsewhere as well. Critical readers of
    newspapers, for instance, must assume that what they are reading has
    been well researched and is presented as objectively as possible, even
    though they know that objectivity is theoretically a highly problematic
    concept -- to this extent, postmodern theory has become common knowledge
    -- and that newspapers often pursue (hidden) interests or lead
    campaigns. Yet without such contra-factual assumptions, the respective
    orders of knowledge and communication would not function, for they
    provide the normative framework within which deviations can be
    perceived, criticized, and sanctioned.

    In a seemingly traditional manner, the "authentic self" is formulated
    with reference to one\'s inner world, for instance to personal
    knowledge, interests, or desires. As the core of personality, however,
    this inner world no longer represents an immutable and essential
    characteristic but rather a temporary position. Today, even someone\'s
    radical reinvention can be regarded as authentic. This is the central
    difference from the classical, bourgeois conception of the subject. The
    self is no longer understood in essentialist terms but rather
    performatively. Accordingly, the main demand on the individual who
    voluntarily opts to participate in a communal formation is no longer to
    be self-aware but rather to be
    self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
    any more for one\'s core self to be coherent. It is not a contradiction
    to appear in various communal formations, each different from the next,
    as a different "I myself," for every formation is comprehensive, in that
    it appeals to the whole person, and simultaneously partial, in that it
    is oriented toward a particular goal and not toward all areas of life.
    As in the case of re-mixes and other referential processes, the concern
    here is not to preserve authenticity but rather to create it in the
    moment. The success or failure []{#Page_88 type="pagebreak"
    title="88"}of these efforts is determined by the continuous feedback of
    others -- one like after another.

    These practices have led to a modified form of subject constitution for
    which some sociologists, engaged in empirical research, have introduced
    the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
    The idea is based on the observation that people in Western societies
    (the case studies were mostly in North America) are defining their
    identity less and less by their family, profession, or other stable
    collective, but rather increasingly in terms of their personal social
    networks; that is, according to the communal formations in which they
    are active as individuals and in which they are perceived as singular
    people. In this regard, individualization and atomization no longer
    necessarily go hand in hand. On the contrary, the intertwined nature of
    personal identity and communality can be experienced on an everyday
    level, given that both are continuously created, adapted, and affirmed
    by means of personal communication. This makes the networks in question
    simultaneously fragile and stable. Fragile because they require the
    ongoing presence of every individual and because communication can break
    down quickly. Stable because the networks of relationships that can
    support a single person -- as regards the number of those included,
    their geographical distribution, and the duration of their cohesion --
    have expanded enormously by means of digital communication technologies.

    Here the issue is not that of close friendships, whose number remains
    relatively constant for most people and over long periods of
    time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
    ties"; that is, more or less loose acquaintances that can be tapped for
    new information and resources that do not exist within one\'s close
    circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
    are expanded, the more sustainable and valuable these networks become,
    for they bring together a large number of people and thus multiply the
    material and organizational resources that are (potentially) accessible
    to the individual. It is impossible to make a sweeping statement as to
    whether these formations actually represent communities in a
    comprehensive sense and how stable they really are, especially in times
    of crisis, for this is something that can only be found out on a
    case-by-case basis. It is relevant that the development of personal
    networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
    a vacuum. The disintegration of institutions that were formerly
    influential in the formation of identity and meaning began long before
    the large-scale spread of networks. For most people, there is no other
    choice but to attempt to orient and organize themselves, regardless of how
    provisional or uncertain this may be. Or, as Manuel Castells somewhat
    melodramatically put it, "At the turn of the millennium, the king and
    the queen, the state and civil society, are both naked, and their
    children-citizens are wandering around a variety of foster
    homes."[^55^](#c2-note-0055){#c2-note-0055a}
    :::

    ::: {.section}
    ### Space and time as a communal practice {#c2-sec-0013}

    Although participation in a communal formation is voluntary, it is not
    unselfish. Quite the contrary: an important motivation is to gain access
    to a formation\'s constitutive field of practice and to the resources
    associated with it. A communal formation ultimately does more than
    simply steer the attention of its members toward one another. Through
    the common production of culture, it also structures how the members
    perceive the world and how they are able to design themselves and their
    potential actions in it. It is thus a cooperative mechanism of
    filtering, interpretation, and constitution. Through the everyday
    referential work of its members, the community selects a manageable
    amount of information from the excess of potentially available
    information and brings it into a meaningful context, whereby it
    validates the selection itself and orients the activity of each of its
    members.

    The new communal formations consist of self-referential worlds whose
    constructive common practice affects the foundations of social activity
    itself -- the constitution of space and time. How? The spatio-temporal
    horizon of digital communication is a global (that is, placeless) and
    ongoing present. The technical vision of digital communication is always
    the here and now. With the instant transmission of information,
    everything that is not "here" is inaccessible and everything that is not
    "now" has disappeared. Powerful infrastructure has been built to achieve
    these effects: data centers, intercontinental networks of cables,
    satellites, high-performance nodes, and much more. Through globalized
    high-frequency trading, actors in the financial markets have realized
    this []{#Page_90 type="pagebreak" title="90"}technical vision to its
    broadest extent by creating a never-ending global present whose expanse
    is confined to milliseconds. This process is far from coming to an end,
    for massive amounts of investment are allocated to accomplish even the
    smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
    300-million-dollar transatlantic telecommunications cable (Hibernia
    Express) was put into operation between London and New York -- the first
    in more than 10 years -- with the single goal of accelerating automated
    trading between the two places by 5.2 milliseconds.
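
    To see why a shorter cable is worth hundreds of millions of dollars, a
    back-of-the-envelope propagation estimate helps. The sketch below is my
    own illustration: the refractive index of fiber and the length of the
    older comparison route are assumptions chosen for the arithmetic; only
    the 4,600-kilometer figure and the 5.2-millisecond saving come from the
    paragraph above.

    ```python
    # Rough sketch: propagation delay of light in optical fiber.
    # Light in fiber travels at about c/1.47, i.e. roughly 204,000 km/s.
    C_FIBER_KM_PER_S = 300_000 / 1.47  # assumed effective speed in fiber

    def one_way_delay_ms(distance_km: float) -> float:
        """Propagation delay over a fiber route, in milliseconds."""
        return distance_km / C_FIBER_KM_PER_S * 1000

    new_route = one_way_delay_ms(4_600)   # Hibernia Express length
    old_route = one_way_delay_ms(5_660)   # hypothetical older route
    print(f"new: {new_route:.1f} ms, old: {old_route:.1f} ms, "
          f"saving: {old_route - new_route:.1f} ms")  # saving: 5.2 ms
    ```

    On these assumptions, every kilometer of fiber costs about 4.9
    microseconds, which is why route length, rather than bandwidth, is the
    decisive variable for automated trading.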

    For social and biological processes, this technical horizon of space and
    time is neither achievable nor desirable. Such processes, on the
    contrary, are existentially dependent on other spatial and temporal
    orders. Yet because of the existence of this non-geographical and
    atemporal horizon, the need -- as well as the possibility -- has arisen
    to redefine the parameters of space and time themselves in order to
    counteract the mire of technically defined spacelessness and
    timelessness. If space and time are not simply to vanish in this
    spaceless, ongoing present, how then should they be defined? Communal
    formations create spaces for action not least by determining their own
    geographies and temporal rhythms. They negotiate what is near and far
    and also which places are disregarded (that is, not even perceived). If
    every place is communicatively (and physically) reachable, every person
    must decide which place he or she would like to reach in practice. This,
    however, is not an individual decision but rather a task that can only
    be approached collectively. Those places which are important and thus
    near are determined by communal formations. This takes place in the form
    of a rough consensus through the blogs that "one" has to read, the
    exhibits that "one" has to see, the events and conferences that "one"
    has to attend, the places that "one" has to visit before they are
    overrun by tourists, the crises in which "the West" has to intervene,
    the targets that "lend themselves" to a terrorist attack, and so on. On
    its own, however, selection is not enough. Communal formations are
    especially powerful when they generate the material and organizational
    resources that are necessary for their members to implement their shared
    worldview through actions -- to visit, for instance, the places that
    have been chosen as important. This can happen if they enable access
    []{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
    reductions, ride shares, places to stay, tips, links, insider knowledge,
    public funds, airlifts, explosives, and so on. It is in this way that
    each formation creates its respective spatial constructs, which define
    distances in a great variety of ways. At the same time that war-torn
    Syria is unreachably distant even for seasoned reporters and their
    staff, veritable travel agencies are being set up in order to bring
    Western jihadists there in large numbers.

    Things are similar for the temporal dimensions of social and biological
    processes. Permanent presence is a temporality that is inimical to life
    but, under its influence, temporal rhythms have to be redefined as well.
    What counts as fast? What counts as slow? In what order should things
    proceed? On the everyday level, for instance, the matter can be as
    simple as how quickly to respond to an email. Because the transmission
    of information hardly takes any time, every delay is a purely social
    creation. But how much is acceptable? There can be no uniform answer to
    this. The members of each communal formation have to negotiate their own
    rules with one another, even in areas of life that are otherwise highly
    formalized. In an interview with the magazine *Zeit*, for instance, a
    lawyer with expertise in labor law was asked whether a boss may require
    employees to be reachable at all times. Instead of answering by
    referring to any binding legal standards, the lawyer casually advised
    that this was a matter of flexible negotiation: "Express your misgivings
    openly and honestly about having to be reachable after hours and,
    together with your boss, come up with an agreeable rule to
    follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.

    Temporalities that, in many areas, were once simply taken for granted by
    everyone on account of the factuality of things now have to be
    culturally determined -- that is, explicitly negotiated -- in a greater
    number of contexts. Under the conditions of capitalism, which is always
    creating new competitions and incentives, one consequence is the
    often-lamented "acceleration of time." We are asked to produce, consume,
    or accomplish more and more in less and less
    time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
    structuring of time is not limited to linear acceleration. It reaches
    deep into the foundations of life and has even reconfigured biological
    processes themselves. Today there is an entire industry that specializes
    in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
    newborns in liquid nitrogen -- that is, in suspending cellular
    biological time -- in case they might be needed later on in life for a
    transplant or for the creation of artificial organs. Children can be
    born even if their physical mothers are already dead. Or they can be
    "produced" from ova that have been stored for many years at minus 196
    degrees.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
    questions now have to be addressed every day whose grand temporal
    dimensions were once the matter of myth. In the case of atomic energy,
    for instance, there is the issue of permanent disposal. Where can we
    deposit nuclear waste for the next hundred thousand years without it
    causing catastrophic damage? How can the radioactive material even be
    transported there, wherever that is, within the framework of everyday
    traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}

    The construction of temporal dimensions and sequences has thus become an
    everyday cultural question. Whereas throughout Europe, for example,
    committees of experts and ethicists still meet to discuss reproductive
    medicine and offer their various recommendations, many couples are
    concerned with the specific question of whether or how they can fulfill
    their wish to have children. Without a coherent set of rules, questions
    such as these have to be answered by each individual with recourse to
    his or her personally relevant communal formation. If there is no
    cultural framework that at least claims to be binding for everyone, then
    the individual must negotiate independently within each communal
    formation with the goal of acquiring the resources necessary to act
    according to communal values and objectives.
    :::

    ::: {.section}
    ### Self-generating orders {#c2-sec-0014}

    These three functions -- selection, interpretation, and the constitutive
    ability to act -- make communal formations the true subject of the
    digital condition. In principle, these functions are nothing new;
    rather, they are typical of fields that are organized without reference
    to external or irrefutable authorities. The state of scholarship, for
    instance, is determined by what is circulated in refereed publications.
    In this case, "refereed" means that scientists at the same professional
    rank mutually evaluate each other\'s work. The scientific community (or
    better: the sub-community of a specialized discourse) []{#Page_93
    type="pagebreak" title="93"}evaluates the contributions of individual
    scholars. They decide what should be considered valuable, and this
    consensus can theoretically be revised at any time. It is based on a
    particular catalog of criteria, on an interpretive framework that
    provides lines of inquiry, methods, appraisals, and conventions of
    presentation. With every article, this framework is confirmed and
    reconstituted. If the framework changes, this can lead in the most
    extreme case to a paradigm shift, which overturns fundamental
    orientations, assumptions, and
    certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
    not only a change in how scientific contributions are evaluated but also
    a change in how the external world is perceived and what activities are
    possible in it. Precisely because the sciences claim to define
    themselves, they have the ability to revise their own foundations.

    The sciences were the first large sphere of society to achieve
    comprehensive cultural autonomy; that is, the ability to determine its
    own binding meaning. Art was the second that began to organize itself on
    the basis of internal feedback. It was during the era of Romanticism
    that artists first laid claim to autonomy. They demanded "to absolve art
    from all conditions, to represent it as a realm -- indeed as the only
    realm -- in which truth and beauty are expressed in their pure form, a
    realm in which everything truly human is
    transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
    photography in the second half of the nineteenth century, art also
    liberated itself from its final task, which was foisted upon it from the
    outside, namely the need to represent external reality. Instead of
    having to represent the external world, artists could now focus on their
    own subjectivity. This gave rise to a radical individualism, which found
    its clearest summation in Marcel Duchamp\'s assertion that only the
    artist could determine what is art. This he claimed in 1917 by way of
    explaining how an industrially produced urinal, exhibited as a signed
    piece with the title "Fountain," could be considered a work of art.

    With the rise of the knowledge economy and the expansion of cultural
    fields, including the field of art and the artists active within it,
    this individualism quickly swelled to unmanageable levels. As a
    consequence, the task of defining what should be regarded as art shifted
    from the individual artist to the curator. It now fell upon the latter
    to select a few works from the surplus of competing scenes and thus
    bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
    constantly diversifying and changing world of contemporary art. This
    order was then given expression in the form of exhibits, which were
    intended to be more than the sum of their parts. The beginning of this
    practice can be traced to the 1969 exhibition *When Attitudes Become
    Form*, which was curated by Harald Szeemann for the Kunsthalle Bern (it
    was also sponsored by Philip Morris). The works were not neatly
    separated from one another and presented without reference to their
    environment, but were connected with each other both spatially and in
    terms of their content. The effect of the exhibition could be felt at
    least as much through the collection of works as a whole as it could
    through the individual pieces, many of which had been specially
    commissioned for the exhibition itself. It not only cemented Szeemann\'s
    reputation as one of the most significant curators of the twentieth
    century; it also completely redefined the function of the curator as a
    central figure within the art system.

    This was more than 40 years ago and in a system that functioned
    differently from that of today. The distance from this exhibition, but
    also its ongoing relevance, was negotiated, significantly, in a
    re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
    the Kunsthalle Bern were reconstructed in the space of the Fondazione
    Prada in such a way that both could be seen simultaneously. As is
    typical with such re-enactments, the curators of the project described
    its goals in terms of appropriation and distancing: "This was the
    challenge: how could we find and communicate a limit to a non-limit,
    creating a place that would reflect exactly the architectural structures
    of the Kunsthalle, but also an asymmetrical space with respect to our
    time and imbued with an energy and tension equivalent to that felt at
    Bern?"[^62^](#c2-note-0062){#c2-note-0062a}

    Curation -- that is, selecting works and associating them with one
    another -- has become an omnipresent practice in the art system. No
    exhibition takes place any more without a curator. Nevertheless,
    curators have lost their extraordinary
    position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
    more of this work themselves, not only because the boundaries between
    artistic and curatorial activities have become fluid but also because
    many artists explicitly co-produce the context of their work by
    incorporating a multitude of references into their pieces. It is with
    precisely this in mind that André Rottmann, in the []{#Page_95
    type="pagebreak" title="95"}quotation cited at the beginning of this
    chapter, can assert that referentiality has become the dominant
    production-aesthetic model in contemporary art. This practice enables
    artists to objectify themselves by explicitly placing themselves into a
    historical and social context. At the same time, it also enables them to
    subjectify the historical and social context by taking the liberty to
    select and arrange the references
    themselves.[^64^](#c2-note-0064){#c2-note-0064a}

    Such strategies are no longer specific to art. Self-generated spaces of
    reference and agency are now deeply embedded in everyday life. The
    reason for this is that a growing number of questions can no longer be
    answered in a generally binding way (such as those about what
    constitutes fine art), while the enormous expansion of the cultural
    requires explicit decisions to be made in more aspects of life. The
    reaction to this dilemma has been radical subjectivation. This has not,
    however, been taking place at the level of the individual but rather at
    that of communal formations. There is now a patchwork of answers to
    large questions and a multitude of reactions to large challenges, all of
    which are limited in terms of their reliability and scope.
    :::

    ::: {.section}
    ### Ambivalent voluntariness {#c2-sec-0015}

    Even though participation in new formations is voluntary and serves the
    interests of their members, it is not without preconditions. The most
    important of these is acceptance, the willing adoption of the
    interpretive framework that is generated by the communal formation. The
    latter is formed from the social, cultural, legal, and technical
    protocols that lend to each of these formations its concrete
    constitution and specific character. Protocols are common sets of rules;
    they establish, according to the network theorist Alexander Galloway,
    "the essential points necessary to enact an agreed-upon standard of
    action." They provide, he goes on, "etiquette for autonomous
    agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
    simultaneously voluntary and binding; they allow actors to meet
    eye-to-eye instead of entering into hierarchical relations with one
    another. If everyone voluntarily complies with the protocols, then it is
    not necessary for one actor to give instructions to another. Whoever
    accepts the relevant protocols can interact with others who do the same;
    whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
    will remain on the outside. Protocols establish, for example, common
    languages, technical standards, or social conventions. The fundamental
    protocol for the internet is the Transmission Control Protocol/Internet
    Protocol (TCP/IP). This suite of protocols defines the common language
    for exchanging data. Every device that exchanges information over the
    internet -- be it a smartphone, a supercomputer in a data center, or a
    networked thermostat -- has to use these protocols. In a growing number
    of social contexts, the common language is English. Whoever wishes to
    belong has to speak it increasingly often. In the natural sciences,
    communication now takes place almost exclusively in English. Non-native
    speakers who accept this norm may pay a high price: they have to learn a
    new language and continually improve their command of it or else resign
    themselves to being unable to articulate things as they would like --
    not to mention losing the possibility of expressing something for which
    another language would perhaps be more suitable, or forfeiting
    traditions that cannot be expressed in English. But those who refuse to
    go along with these norms pay an even higher price, risking
    self-marginalization. Those who "voluntarily" accept conventions gain
    access to a field of practice, even though within this field they may be
    structurally disadvantaged. But unwillingness to accept such
    conventions, with subsequent denial of access to this field, might have
    even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}
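
    The double-sided nature of protocols -- voluntary yet binding -- can be
    made concrete in a few lines of code. The sketch below is a minimal
    illustration using the standard socket module of Python (the host and
    the HTTP request are arbitrary examples): nothing forces a program to
    open a TCP connection, but a program that declines the convention
    simply cannot exchange data with anyone.

    ```python
    # Minimal sketch: participating in the network means speaking TCP/IP.
    # The host and the request here are placeholder examples.
    import socket

    with socket.create_connection(("example.org", 80), timeout=5) as sock:
        # This single call already presupposes the whole protocol stack:
        # IP addressing and routing below, TCP's handshake and ordering
        # here, and HTTP as a further convention layered on top.
        sock.sendall(b"HEAD / HTTP/1.1\r\nHost: example.org\r\n\r\n")
        reply = sock.recv(1024)
        print(reply.decode("ascii", errors="replace").splitlines()[0])
    ```

    Whoever complies can talk to every other compliant device; whoever
    does not remains, in Galloway's terms, outside the sociability that
    the protocol structures.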

    In everyday life, the factors involved with this trade-off are often
    presented in the form of subtle cultural codes. For instance, in order
    to participate in a project devoted to the development of free software,
    it is not enough for someone to possess the necessary technical
    knowledge; he or she must also be able to fit into a wide-ranging
    informal culture with a characteristic style of expression, humor, and
    preferences. Ultimately, software developers do not form a professional
    corps in the traditional sense -- in which functionaries meet one
    another in the narrow and regulated domain of their profession -- but
    rather a communal formation in which the engagement of the whole person,
    both one\'s professional and social self, is scrutinized. The
    abolishment of the separation between different spheres of life,
    requiring interaction of a more holistic nature, is in fact a key
    attraction of []{#Page_97 type="pagebreak" title="97"}these communal
    formations and is experienced by some as a genuine gain in freedom. In
    this situation, one is no longer subjected to rules imposed from above
    but rather one is allowed to -- and indeed ought to -- be authentically
    pursuing his or her own interests.

    But for others the experience can be quite the opposite because the
    informality of the communal formation also allows forms of exclusion and
    discrimination that are no longer acceptable in formally organized
    realms of society. Discrimination is more difficult to identify when it
    takes place within the framework of voluntary togetherness, for no one
    is forced to participate. If you feel uncomfortable or unwelcome, you
    are free to leave at any time. But this is a specious argument. The
    areas of free software or Wikipedia are difficult places for women. In
    these clubby atmospheres of informality, they are often faced with
    blatant sexism, and this is one of the reasons why many women choose to
    stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
    2007, according to estimates by the American National Center for Women &
    Information Technology, whereas approximately 27 percent of all jobs
    related to computer science were held by women, their representation at
    the same time was far lower in the field of free software -- on average
    less than 2 percent. And for years, the proportion of women who edit
    texts on Wikipedia has hovered at around 10
    percent.[^68^](#c2-note-0068){#c2-note-0068a}

    The consequences of such widespread, informal, and elusive
    discrimination are not limited to the fact that certain values and
    prejudices of the shared culture are included in these products, while
    different viewpoints and areas of knowledge are
    excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
    are excluded or do not wish to expose themselves to discrimination (and
    thus do not even bother to participate in any communal formations) do
    not receive access to the resources that circulate there (attention and
    support, valuable and timely knowledge, or job offers). Many people are
    thus faced with the choice of either enduring the discrimination within
    a community or remaining on the outside and thus invisible. That this
    decision is made on a voluntary basis and on one\'s own responsibility
    hardly mitigates the coercive nature of the situation. There may be a
    choice, but it would be misleading to call it a free one.[]{#Page_98
    type="pagebreak" title="98"}
    :::

    ::: {.section}
    ### The power of sociability {#c2-sec-0016}

    In order to explain the peculiar coercive nature of the (nominally)
    voluntary acceptance of protocols, rules, and norms, the political
    scientist David Singh Grewal, drawing on the work of Max Weber and
    Michel Foucault, has distinguished between the "power of sovereignty"
    and the "power of sociabil­ity."[^70^](#c2-note-0070){#c2-note-0070a}
    The former develops on the basis of dominance and subordination, as
    imposed by authorities, police officers, judges, or other figures within
    formal hierarchies. Their power is anchored in disciplinary
    institutions, and the dictum of this sort of power is: "You must!" The
    power of sociability, on the contrary, functions by prescribing the
    conditions or protocols under which people are able to enter into an
    exchange with one another. The dictum of this sort of power is: "You
    can!" The more people accept certain protocols and standards, the more
    powerful these become. Accordingly, the sociability that they structure
    also becomes more comprehensive, and those not yet involved have to ask
    themselves all the more urgently whether they can afford not to accept
    these protocols and standards. Whereas the first type of power is
    ultimately based on the monopoly of violence and on repression, the
    second is founded on voluntary submission. When the entire internet
    speaks TCP/IP, then an individual\'s decision to use it may be voluntary
    in nominal terms, but at the same time it is an indispensable
    precondition for existing within the network at all. Protocols exert
    power without there having to be anyone present to possess the power in
    question. Whereas the sovereign can be located, the effects of
    sociability\'s power are diffuse and omnipresent. They are not
    repressive but rather constitutive. No one forces a scientist to publish
    in English or a woman editor to tolerate disparaging remarks on
    Wikipedia. People accept these often implicit behavioral norms (sexist
    comments are permitted, for instance) out of their own interests in
    order to acquire access to the resources circulating within the networks
    and to constitute themselves within them. In this regard, Grewal
    distinguishes between the "intrinsic" and "extrinsic" reasons for
    abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
    the first case, the motivation is based on a new protocol being better
    suited than existing protocols for carrying out []{#Page_99
    type="pagebreak" title="99"}a specific objective. People thus submit
    themselves to certain rules because they are especially efficient,
    transparent, or easy to use. In the second case, a protocol is accepted
    not because but in spite of its features. It is simply a precondition
    for gaining access to a space of agency in which resources and
    opportunities are available that cannot be found anywhere else. In the
    first case, it is possible to speak subjectively of voluntariness,
    whereas the second involves some experience of impersonal compulsion.
    One is forced to do something that might potentially entail grave
    disadvantages in order to have access, at least, to another level of
    opportunities or to create other advantages for oneself.
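
    Grewal's argument that protocols become more powerful as more people
    accept them describes a positive feedback loop, and a stylized
    threshold model can make the mechanism visible. Everything in the
    sketch below -- the threshold distribution, the 5 percent seed of
    intrinsically motivated adopters, the population size -- is an
    illustrative assumption of mine, not part of Grewal's account.

    ```python
    # Toy model of the "power of sociability" as positive feedback:
    # each agent adopts a standard once the share of existing adopters
    # exceeds a personal threshold, so early voluntary choices harden
    # into de facto compulsion for latecomers. All parameters assumed.
    import random

    random.seed(1)
    N = 1_000
    # Skewed thresholds: many agents join early, a few hold out long.
    thresholds = [random.betavariate(1, 3) for _ in range(N)]

    adoption = 0.05  # small seed group adopts for intrinsic reasons
    for step in range(10):
        # Extrinsic adoption: join because enough others already have.
        adoption = sum(t <= adoption for t in thresholds) / N
        print(f"step {step}: {adoption:.0%} adoption")
    ```

    Run it and adoption climbs from 5 percent to near-total within a
    handful of steps; the late joiners accept the standard not because it
    suits them but because staying outside has become too costly.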
    :::

    ::: {.section}
    ### Homogeneity, difference and authority {#c2-sec-0017}

    Protocols are present on more than a technical level; as interpretive
    frameworks, they structure viewpoints, rules, and patterns of behavior
    on all levels. Thus, they provide a degree of cultural homogeneity, a
    set of commonalities that lend these new formations their communal
    nature. Viewed from the outside, these formations therefore seem
    inclined toward consensus and uniformity, for their members have already
    accepted and internalized certain aspects in common -- the protocols
    that enable exchange itself -- whereas everyone on the outside has not
    done so. When everyone is speaking in English, the conversation sounds
    quite monotonous to someone who does not speak the language.

    Viewed from the inside, the experience is something different: in order
    to constitute oneself within a communal formation, not only does one
    have to accept its rules voluntarily and in a self-motivated manner; one
    also has to make contributions to the reproduction and development of
    the field. Everyone is urged to contribute something; that is, to
    produce, on the basis of commonalities, differences that simultaneously
    affirm, modify, and enhance these commonalities. This leads to a
    pronounced and occasionally highly competitive internal differentiation
    that can only be understood, however, by someone who has accepted the
    commonalities. To an outsider, this differentiation will seem
    irrelevant. Whoever is not well versed in the universe of *Star Wars*
    will not understand why the various character interpretations at
    []{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
    discussed above, might be brilliant or even controversial. To such a
    person, they will all seem equally boring and superficial.

    These formations structure themselves internally through the production
    of differences; that is, by constantly changing their common ground.
    Those who are able to add many novel aspects to the common resources
    gain a degree of authority. They assume central positions and they
    influence, through their behavior, the development of the field more
    than others do. However, their authority, influence, and de facto power
    are not based on any means of coercion. As Niklas Luhmann noted, "In the
    end, one participant\'s achievements in making selections \[...\] are
    accepted by another participant \[...\] as a limitation of the latter\'s
    potential experiences and activities without him having to make the
    selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
    a voluntary and self-interested act: the members of the formation
    recognize that this person has contributed more to the common field and
    to the resources within it. This, in turn, is to everyone\'s advantage,
    for each member would ultimately like to make use of the field\'s
    resources to achieve his or her own goals. This arrangement, which can
    certainly take on hierarchical qualities, is experienced as something
    meritocratically legitimized and voluntarily
    accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
    software, there has therefore been some discussion of "benevolent
    dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
    "dictators" is raised because projects are often led by charismatic
    figures without a formal mandate. They are "benevolent" because their
    position of authority is based on the fact that a critical mass of
    participating producers has voluntarily subordinated itself for its own
    self-interest. If the consensus breaks down over whose contributions have
    been carrying the most weight, then the formation will be at risk of
    losing its internal structure and splitting apart ("forking," in the
    jargon of free software).
    :::
    :::

    ::: {.section}
    Algorithmicity {#c2-sec-0018}
    --------------

    Through personal communication, referential processes in communal
    formations create cultural zones of various sizes and scopes. They
    expand into the empty spaces that have been created by the erosion of
    established institutions and []{#Page_101 type="pagebreak"
    title="101"}processes, and once these new processes have been
    established the process of erosion intensifies. Multiple processes of
    exchange take place alongside one another, creating a patchwork of
    interconnected, competing, or entirely unrelated spheres of meaning,
    each with specific goals and resources and its own preconditions and
    potentials. The structures of knowledge, order, and activity that are
    generated by this are holistic as well as partial and limited. The
    participants in such structures are simultaneously addressed on many
    levels that were once functionally separated; previously independent
    spheres, such as work and leisure, are now mixed together, but usually
    only with respect to the subdivisions of one\'s own life. And, at first,
    the structures established in this way are binding only for active
    participants.

    ::: {.section}
    ### Exiting the "Library of Babel" {#c2-sec-0019}

    For one person alone, however, these new processes would not be able to
    generate more than a local island of meaning from the enormous clamor of
    chaotic spheres of information. In his 1941 story "The Library of
    Babel," Jorge Luis Borges fashioned a fitting image for such a
    situation. He depicts the world as a library of unfathomable and
    possibly infinite magnitude. The characters in the story do not know
    whether there is a world outside of the library. There are reasons to
    believe that there is, and reasons that suggest otherwise. The library
    houses the complete collection of all possible books that can be written
    on exactly 410 pages. Contained in these volumes is the promise that
    there is "no personal or universal problem whose eloquent solution
    \[does\] not exist," for every possible combination of letters, and thus
    also every possible pronouncement, is recorded in one book or another.
    No catalog has yet been found for the library (though it must exist
    somewhere), and it is impossible to identify any order in its
    arrangement of books. The "men of the library," according to Borges,
    wander round in search of the one book that explains everything, but
    their actual discoveries are far more modest. Only once in a while are
    books found that contain more than haphazard combinations of signs. Even
    small regularities within excerpts of texts are heralded as sensational
    discoveries, and it is around these discoveries that competing
    []{#Page_102 type="pagebreak" title="102"}schools of interpretation
    develop. Despite much labor and effort, however, the knowledge gained is
    minimal and fragmentary, so the prevailing attitude in the library is
    bleak. By the time of the narrator\'s generation, "nobody expects to
    discover anything."[^75^](#c2-note-0075){#c2-note-0075a}
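
    The scale of the library can actually be computed from the parameters
    Borges supplies: 410 pages per book, 40 lines per page, roughly 80
    characters per line, and an alphabet of 25 orthographic symbols. The
    short calculation below (the code is merely a convenience) shows why
    the library is unsurveyable and yet strictly finite.

    ```python
    # Magnitude of Borges's library, from the parameters in the story.
    from math import log10

    pages, lines, chars, symbols = 410, 40, 80, 25
    positions = pages * lines * chars        # 1,312,000 character slots
    digits = positions * log10(symbols)      # log10 of the number of books
    print(f"{positions:,} positions -> about 10^{digits:,.0f} books")
    # ~10^1,834,097 distinct books: finite, but beyond any survey.
    ```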

    Although this vision has now been achieved from a quantitative
    perspective -- no one can survey the "library" of digital information,
    which in practical terms is infinitely large, and all of the growth
    curves continue to climb steeply -- today\'s cultural reality is
    nevertheless entirely different from that described by Borges. Our
    ability to deal with massive amounts of data has radically improved, and
    thus our faith in the utility of information is not only unbroken but
    rather gaining strength. What is new is precisely such large quantities
    of data ("big data"), which, as we are promised or forewarned, will lead
    to new knowledge, to a comprehensive understanding of the world, indeed
    even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
    in data is based above all on the fact that the two processes described
    above -- referentiality and communality -- are not the only new
    mechanisms for filtering, sorting, aggregating, and evaluating things.
    Beneath or ahead of the social mechanisms of decentralized and networked
    cultural production, there are algorithmic processes that pre-sort the
    immeasurably large volumes of data and convert them into a format that
    can be apprehended by individuals, evaluated by communities, and
    invested with meaning.

    Strictly speaking, it is impossible to maintain a categorical
    distinction between social processes that take place in and by means of
    technological infrastructures and technical processes that are socially
    constructed. In both cases, social actors attempt to realize their own
    interests with the resources at their disposal. The methods of
    (attempted) realization, the available resources, and the formulation of
    interests mutually influence one another. The technological resources
    are inscribed in the formulation of goals. These open up fields of
    imagination and desire, which in turn inspire technical
    development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
    impossible to draw clear theoretical lines, the attempt to make such a
    distinction can nevertheless be productive in practice, for in this way
    it is possible to gain different perspectives about the same object of
    investigation.[]{#Page_103 type="pagebreak" title="103"}
    :::

    ::: {.section}
    ### The rise of algorithms {#c2-sec-0020}

    An algorithm is a set of instructions for converting a given input into
    a desired output by means of a finite number of steps: algorithms are
    used to solve predefined problems. For a set of instructions to become
    an algorithm, it has to be determined in three different respects.
    First, the necessary steps -- individually and as a whole -- have to be
    described unambiguously and completely. To do this, it is usually
    necessary to use a formal language, such as mathematics, or a
    programming language, in order to avoid the characteristic imprecision
    and ambiguity of natural language and to ensure instructions can be
    followed without interpretation. Second, it must be possible in practice
    to execute the individual steps together. For this reason, every
    algorithm is tied to the context of its realization. If the context
    changes, so do the operating processes that can be formalized as
    algorithms and thus also the ways in which algorithms can partake in the
    constitution of the world. Third, it must be possible to execute an
    operating instruction mechanically so that, under fixed conditions, it
    always produces the same result.
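
    These three conditions can be illustrated with a minimal sketch in
    Python (an illustration, not part of the original argument):
    Euclid\'s method for finding the greatest common divisor is stated
    completely in a formal language, every step can be executed in
    practice, and the same input always yields the same result.

    ```python
    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: a finite set of unambiguous steps that
        converts a given input (two positive integers) into the desired
        output (their greatest common divisor)."""
        while b != 0:
            a, b = b, a % b   # each step is mechanically executable
        return a

    print(gcd(48, 36))  # always 12, on every execution
    ```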

    Defined in such general terms, it would also be possible to understand
    the instruction manual for a typical piece of Ikea furniture as an
    algorithm. It is a set of instructions for creating, with a finite
    number of steps, a specific and predefined piece of furniture (output)
    from a box full of individual components (input). The instructions are
    composed in a formal language, pictograms, which define each step as
    unambiguously as possible, and they can be executed by a single person
    with simple tools. The process can be repeated, for the final result is
    always the same: a Billy box will always yield a Billy shelf. In this
    case, a person takes over the role of a machine, which (unambiguous
    pictograms aside) can lead to problems, be it that scratches and other
    traces on the finished piece of furniture testify to the unique nature
    of the (unsuccessful) execution, or that, inspired by the micro-trend of
    "Ikea hacking," the official instructions are intentionally ignored.

    Because such imprecision is supposed to be avoided, the most important
    domain of algorithms in practice is mathematics and its implementation
    on the computer. The term []{#Page_104 type="pagebreak"
    title="104"}"algorithm" derives from the Persian mathematician,
    astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
    the Calculation with Hindu Numerals*, which was written in Baghdad in
    825, was known widely in the Western Middle Ages through a Latin
    translation and made the essential contribution of introducing
    Indo-Arabic numerals and the number zero to Europe. The work begins
    with the formula *dixit algorizmi* ... ("Algorismi said ..."). During
    the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
    for advanced methods of
    calculation.[^78^](#c2-note-0078){#c2-note-0078a}

    The modern effort to build machines that could mechanically carry out
    instructions achieved its first breakthrough with Gottfried Wilhelm
    Leibniz. He has often been credited with making the following remark:
    "It is unworthy of excellent men to lose hours like slaves in the labour
    of calculation which could be done by any peasant with the aid of a
    machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
    contains a distinction between higher cognitive and interpretive
    activities, which are regarded as being truly human, and lower processes
    that involve pure execution and can therefore be mechanized. To this
    end, Leibniz himself developed the first calculating machine, which
    could carry out all four of the basic types of arithmetic. He was not
    motivated to do this by the practical necessities of production and
    business (although conceptually groundbreaking, Leibniz\'s calculating
    machine remained, on account of its mechanical complexity, a unique item
    and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
    estimation of the philosopher Sybille Krämer, calculating machines "were
    rather speculative masterpieces of a century that, like none before it,
    was infatuated by the idea of mechanizing 'intellectual'
    processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
    were implemented on a large scale to increase the efficiency of material
    production, Leibniz had already speculated about using them to enhance
    intellectual labor. And this vision has never since disappeared. Around
    a century and a half later, the English polymath Charles Babbage
    formulated it anew, now in direct connection with industrial
    mechanization and its imperative of time-saving
    efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
    overcome the problem of practically realizing such a machine.

    The decisive step that turned the vision of calculating machines into
    reality was made by Alan Turing in 1937. With []{#Page_105
    type="pagebreak" title="105"}a theoretical model, he demonstrated that
    every algorithm could be executed by a machine as long as it could read
    discrete signs one at a time, manipulate them according to established
    rules, and then write them out again. The validity of his model did not
    depend on whether the machine would be analog or digital, mechanical or
    electronic, for the rules of manipulation were not at first conceived as
    being a fixed component of the machine itself (that is, as being
    implemented in its hardware). The electronic and digital approach came
    to be preferred because it was hoped that even the instructions could be
    read by the machine itself, so that the machine would be able to execute
    not only one but (theoretically) every written algorithm. The
    Hungarian-born mathematician John von Neumann made it his goal to
    implement this idea. In 1945, he published a model in which the program
    (the algorithm) and the data (the input and output) were housed in a
    common storage device. Thus, both could be manipulated simultaneously
    without having to change the hardware. In this way, he converted the
    "Turing machine" into the "universal Turing machine"; that is, the
    modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
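
    What such a machine amounts to can be suggested by a toy simulation
    (a hedged sketch; the rule table is invented for illustration): the
    "machine" does nothing but read one sign at a time, rewrite it
    according to fixed rules, and move its head. In von Neumann\'s
    spirit, moreover, the rule table is itself just data, held in the
    same storage as the tape.

    ```python
    def run(tape: str, rules: dict, state: str = "start", head: int = 0) -> str:
        """Execute a rule table of the form
        (state, sign) -> (next state, new sign, move)."""
        cells = list(tape)
        while state != "halt":
            state, new_sign, move = rules[(state, cells[head])]
            cells[head] = new_sign               # write the manipulated sign
            head += 1 if move == "R" else -1     # move the read/write head
        return "".join(cells)

    # An invented rule table that inverts a binary string, then halts.
    rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt", "_", "L"),      # the blank sign ends the run
    }
    print(run("0110_", rules))  # -> "1001_"
    ```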

    Gordon Moore, the co-founder of the chip manufacturer Intel,
    prognosticated 20 years later that the complexity of integrated circuits
    and thus the processing power of computer chips would double every 18 to
    24 months. Since the 1970s, his prediction has been known as Moore\'s
    Law and has essentially been correct. This technical development has
    indeed taken place exponentially, not least because the semi-conductor
    industry has been oriented around
    it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
    computer, which was one of the first of its kind to be produced on a
    large scale, could make approximately 40,000 calculations per second and
    its cost, when it was introduced to the market in 1965, was \$1.5
    million per unit. Just 40 years later, a standard server (with a
    quad-core Intel processor) could make more than 40 billion calculations
    per second, and this at a price of little more than \$1,500. This
    amounts to an increase in performance by a factor of a million and a
    corresponding price reduction by a factor of a thousand; that is, an
    improvement in the price-to-performance ratio by a factor of a billion.
    With inflation taken into consideration, this factor would be even
    higher. No less dramatic were the increases in performance -- or rather
    []{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
    area of data storage. In 1980, it cost more than \$400,000 to store a
    gigabyte of data, whereas 30 years later it would cost just 10 cents to
    do the same -- a price reduction by a factor of 4 million. And in both
    areas, this development has continued without pause.
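
    The factors cited here can be checked with a few lines of arithmetic
    (using the figures given above):

    ```python
    ops_1965, price_1965 = 40_000, 1_500_000        # IBM 360/40 (1965)
    ops_2005, price_2005 = 40_000_000_000, 1_500    # quad-core server (2005)

    performance_gain = ops_2005 / ops_1965          # a factor of a million
    price_drop = price_1965 / price_2005            # a factor of a thousand
    print(performance_gain * price_drop)            # 1e9: price-to-performance

    print(400_000 / 0.10)                           # storage: a factor of 4 million
    ```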

    These increases in performance have formed the material basis for the
    rapidly growing number of activities carried out by means of algorithms.
    We have now reached a point where Leibniz\'s distinction between
    creative mental functions and "simple calculations" is becoming
    increasingly fuzzy. Recent discussions about the allegedly threatening
    "domination of the computer" have been kindled less by the increased use
    of algorithms as such than by the gradual blurring of this distinction,
    as new possibilities emerge to formalize and mechanize ever more areas
    of creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
    not long ago were reserved for human intelligence, such as composing
    texts or analyzing the content of images, are now frequently done by
    machines. As early as 2010, a program called Stats Monkey was introduced
    to produce short reports about baseball games. All that the program
    needs for this is comprehensive data about the games, which can be
    accumulated mechanically and which have since become more detailed due
    to improved image recognition and sensors. From these data, the program
    extracts the decisive moments and players of a game, recognizes
    characteristic patterns throughout the course of play (such as
    "extending an early lead," "a dramatic comeback," etc.), and on this
    basis generates its own report. Regarding the reports themselves, a
    number of variables can be determined in advance, for instance whether
    the story should be written from the perspective of a neutral observer
    or from the standpoint of one of the two teams. If writing about little
    league games, the program can be instructed to ignore the errors made by
    children -- because no parent wants to read about those -- and simply
    focus on their heroics. The algorithm was soon patented, and a start-up
    business was created from the original interdisciplinary research
    project: Narrative Science. In addition to sports reports, it now offers
    texts of all sorts, but above all financial reports -- another field for
    which there is a great deal of available data. These texts have been
    published by reputable media outlets such as the business magazine
    *Forbes*, in which their authorship []{#Page_107 type="pagebreak"
    title="107"}is credited to "Narrative Science." Although these
    contributions are still limited to relatively simple topics, this will
    not remain the case for long. When asked about the percentage of news
    that would be written by computers 15 years from now, Narrative
    Science\'s chief technology officer and co-founder Kristian Hammond
    confidently predicted "\[m\]ore than 90 percent." He added that, within
    the next five years, an algorithm could even win a Pulitzer
    Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
    self-promotion but, as a general estimation, Hammond\'s assertion is not
    entirely beyond belief. It remains to be seen whether algorithms will
    replace or simply supplement traditional journalism. Yet because media
    companies are now under strong financial pressure, it is certainly
    reasonable to predict that many journalistic texts will be automated in
    the future. Entirely different applications, however, have also been
    conceived. Alexander Pschera, for instance, foresees a new age in the
    relationship between humans and nature, for, as soon as animals are
    equipped with transmitters and sensors and are thus able to tell their
    own stories through the appropriate software, they will be regarded as
    individuals and not merely as generic members of a
    species.[^87^](#c2-note-0087){#c2-note-0087a}

    We have not yet reached this point. However, given that the CIA has also
    expressed interest in Narrative Science and has invested in it through
    its venture-capital firm In-Q-Tel, there are indications that
    applications are being developed beyond the field of journalism. For the
    purpose of spreading propaganda, for instance, algorithms can easily be
    used to create a flood of entries on online forums and social mass
    media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
    one of many companies offering automated text analysis and production.
    As implemented by IBM and other firms, so-called E-discovery software
    promises to reduce dramatically the amount of time and effort required
    to analyze the constantly growing numbers of files that are relevant to
    complex legal cases. Without such software, it would be impossible in
    practice for lawyers to deal with so many documents. Numerous bots
    (automated editing programs) are active in the production of Wikipedia
    as well. Whereas, in the German edition, bots are forbidden from writing
    their own articles, this is not the case in the Swedish version.
    Measured by the number of entries, the latter is now the second-largest
    edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
    title="108"}world, for, in the summer of 2013, a single bot contributed
    more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
    Since 2013, moreover, the company Epagogix has offered software that
    uses historical data to evaluate the market potential of film scripts.
    At least one major Hollywood studio uses this software behind the backs
    of scriptwriters and directors, for, according to the company\'s CEO,
    the latter would be "nervous" to learn that their creative work was
    being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
    Think, too, of the typical statement that is made at the beginning of a
    call to a telephone hotline -- "This call may be recorded for training
    purposes." Increasingly, this training is not intended for the employees
    of the call center but rather for algorithms. The latter are expected to
    learn how to recognize the personality type of the caller and, on that
    basis, to produce an appropriate script to be read by their poorly
    educated and part-time human
    co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
    use of algorithms to grade student
    essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
    to expand this list any further. Even without additional references to
    comparable developments in the fields of image, sound, language, and
    film analysis, it is clear by now that, on many fronts, the borders
    between the creative and the mechanical have
    shifted.[^93^](#c2-note-0093){#c2-note-0093a}
    :::

    ::: {.section}
    ### Dynamic algorithms {#c2-sec-0021}

    The algorithms used for such tasks, however, are no longer simple
    sequences of static instructions. They are no longer repeated unchanged,
    over and over again, but are dynamic and adaptive to a high degree. The
    computing power available today is used to write programs that modify
    and improve themselves semi-automatically and in response to feedback.

    What this means can be illustrated by the example of evolutionary and
    self-learning algorithms. An evolutionary algorithm is developed in an
    iterative process that continues to run until the desired result has
    been achieved. In most cases, the values of the variables of the first
    generation of algorithms are chosen at random in order to diminish the
    influence of the programmer\'s presuppositions on the results. These
    cannot be avoided entirely, however, because the type of variables
    (independent of their value) has to be determined in the first place. I
    will return to this problem later on. This is []{#Page_109
    type="pagebreak" title="109"}followed by a phase of evaluation: the
    output of every tested algorithm is evaluated according to how close it
    is to the desired solution. The best are then chosen and combined with
    one another. In addition, mutations (that is, random changes) are
    introduced. These steps are then repeated as often as necessary until,
    according to the specifications in question, the algorithm is
    "sufficient" or cannot be improved any further. By means of intensive
    computational processes, algorithms are thus "cultivated"; that is,
    large numbers of these are tested instead of a single one being designed
    analytically and then implemented. At the heart of this pursuit is a
    functional solution that proves itself experimentally and in practice,
    but about which it might no longer be possible to know why it functions
    or whether it actually is the best possible solution. The fundamental
    methods behind this process largely derive from the 1970s (the first
    stage of artificial intelligence), the difference being that today they
    can be carried out far more effectively. One of the best-known examples
    of an evolutionary algorithm is that of Google Flu Trends. In order to
    predict which regions will be especially struck by the flu in a given
    year, it evaluates the geographic distribution of internet searches for
    particular terms ("cold remedies," for instance). To develop the
    program, Google tested 450 million different models until one emerged
    that could reliably identify local flu epidemics one to two weeks ahead
    of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
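
    The loop described here -- random initialization, evaluation,
    selection, recombination, mutation -- can be compressed into a toy
    example (a hedged sketch; the target string and all parameters are
    invented, and a system on the scale of Google Flu Trends is
    incomparably more elaborate):

    ```python
    import random

    TARGET = "flu trends"                  # the desired solution
    SIGNS = "abcdefghijklmnopqrstuvwxyz "

    def fitness(candidate: str) -> int:
        # Evaluation: how close is the output to the desired solution?
        return sum(a == b for a, b in zip(candidate, TARGET))

    def crossover(a: str, b: str) -> str:
        # Recombination: the best candidates are combined with one another.
        cut = random.randrange(len(TARGET))
        return a[:cut] + b[cut:]

    def mutate(c: str, rate: float = 0.05) -> str:
        # Mutation: random changes diminish the programmer's influence.
        return "".join(random.choice(SIGNS) if random.random() < rate else s
                       for s in c)

    # First generation: values chosen at random.
    population = ["".join(random.choice(SIGNS) for _ in TARGET)
                  for _ in range(100)]

    # Repeat until a candidate is "sufficient."
    while max(map(fitness, population)) < len(TARGET):
        best = sorted(population, key=fitness, reverse=True)[:20]
        population = [mutate(crossover(random.choice(best), random.choice(best)))
                      for _ in range(100)]

    print(max(population, key=fitness))    # -> "flu trends"
    ```

    Characteristically, the result is not designed but selected -- which
    is why such a solution can prove itself in practice without anyone
    being able to say why it works.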

    In pursuits of this magnitude, the necessary processes can only be
    administered by computer programs. The series of tests are no longer
    conducted by programmers but rather by algorithms. In short, algorithms
    are implemented in order to write new algorithms or determine their
    variables. If this reflexive process, in turn, is built into an
    algorithm, then the latter becomes "self-learning": the programmers do
    not set the rules for its execution but rather the rules according to
    which the algorithm is supposed to know how to accomplish a particular
    goal. In many cases, the solution strategies are so complex that they
    are incomprehensible in retrospect. They can no longer be tested
    logically, only experimentally. Such algorithms are essentially black
    boxes -- objects that can only be understood by their outer behavior but
    whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
    title="110"}

    Automatic facial recognition, as used in surveillance technologies and
    for authorizing access to certain things, is based on the fact that
    computers can evaluate large numbers of facial images, first to produce
    a general model for a face, then to identify the variables that make a
    face unique and therefore recognizable. With so-called "unsupervised" or
    "deep-learning" algorithms, some developers and companies have even
    taken this a step further: computers are expected to extract faces from
    unstructured images -- that is, from collections that contain images
    both with and without faces -- and to do so without
    possessing in advance any model of the face in question. So far, the
    extraction and evaluation of unknown patterns from unstructured material
    has only been achieved in the case of very simple patterns -- with edges
    or surfaces in images, for instance -- for it is extremely complex and
    computationally intensive to program such learning processes. In recent
    years, however, there have been enormous leaps in available computing
    power, and both the data inputs and the complexity of the learning
    models have increased exponentially. Today, on the basis of simple
    patterns, algorithms are developing improved recognition of the complex
    content of images. They are refining themselves on their own. The term
    "deep learning" is meant to denote this very complexity. In 2012, Google
    was able to demonstrate the performance capacity of its new programs in
    an impressive manner: from a collection of randomly chosen YouTube
    videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
    it was possible to create a model in just three days that increased
    facial recognition in unstructured images by 70
    percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
    does not "know" what a face is, but it reliably recognizes a class of
    forms that humans refer to as a face. One advantage of a model that is
    not created on the basis of prescribed parameters is that it can also
    identify faces in non-standard situations (for instance if a person is
    in the background, if a face is half-concealed, or if it has been
    recorded at a sharp angle). Thanks to this technique, it is possible to
    search the content of images directly and not, as before, primarily by
    searching their descriptions. Such algorithms are also being used to
    identify people in images and to connect them in social networks with
    the profiles of the people in question, and this []{#Page_111
    type="pagebreak" title="111"}without any cooperation from the users
    themselves. Such algorithms are also expected to assist in directly
    controlling activity in "unstructured" reality, for instance in
    self-driving cars or other autonomous mobile applications that are of
    great interest to the military in particular.
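
    What counts as a "very simple pattern" can be made tangible: an edge
    is nothing more than a place where neighboring values change
    sharply, which even a one-line difference filter can extract (an
    illustration, not a description of Google\'s system):

    ```python
    # A tiny grayscale "image": a dark region beside a bright one.
    image = [
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
        [0, 0, 0, 9, 9, 9],
    ]

    # Horizontal difference filter: large values mark a vertical edge.
    edges = [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
             for row in image]
    for row in edges:
        print(row)  # -> [0, 0, 9, 0, 0] in every row
    ```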

    Algorithms of this sort can react and adjust themselves directly to
    changes in the environment. This feedback, however, also shortens the
    timeframe within which they are able to generate repetitive and
    therefore predictable results. Thus, algorithms and their predictive
    powers can themselves become unpredictable. Stock markets have
    frequently experienced so-called "sub-second extreme events"; that is,
    price fluctuations that happen in less than a
    second.[^96^](#c2-note-0096){#c2-note-0096a} Dramatic "flash crashes,"
    however, such as that which occurred on May 6, 2010, when the Dow Jones
    Index dropped almost a thousand points in a few minutes (and was thus
    perceptible to humans), have not been terribly
    uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
    voice commands on mobile phones (Apple\'s Siri, for example, which came
    out in 2011), programs based on self-learning algorithms have now
    reached the public at large and have infiltrated increased areas of
    everyday life.
    :::

    ::: {.section}
    ### Sorting, ordering, extracting {#c2-sec-0022}

    Orders generated by algorithms are a constitutive element of the digital
    condition. On the one hand, the mechanical pre-sorting of the
    (informational) world is a precondition for managing immense and
    unstructured amounts of data. On the other hand, these large amounts of
    data and the computing centers in which they are stored and processed
    provide the material precondition for developing increasingly complex
    algorithms. Necessities and possibilities are mutually motivating one
    another.[^98^](#c2-note-0098){#c2-note-0098a}

    Perhaps the best-known algorithms that sort the digital infosphere and
    make it usable in its present form are those of search engines, above
    all Google\'s PageRank. Thanks to these, we can find our way around in a
    world of unstructured information and transfer ever larger parts
    of the (informational) world into the order of unstructuredness without
    giving rise to the "Library of Babel." Here, "unstructured" means that
    there is no prescribed order such as (to stick []{#Page_112
    type="pagebreak" title="112"}with the image of the library) a cataloging
    system that assigns to each book a specific place on a shelf. Rather,
    the books are spread all over the place and are dynamically arranged,
    each according to a search, so that the appropriate books for each
    visitor are always standing ready at the entrance. Yet the metaphor of
    books being strewn all about is problematic, for "unstructuredness" does
    not simply mean the absence of any structure but rather the presence of
    another type of order -- a meta-structure, a potential for order -- out
    of which innumerable specific arrangements can be generated on an ad hoc
    basis. This meta-structure is created by algorithms. They subsequently
    derive from it an actual order, which the user encounters, for instance,
    when he or she scrolls through a list of hits produced by a search
    engine. What the user does not see are the complex preconditions for
    assembling the search results. By the middle of 2014, according to the
    company\'s own information, the Google index alone included more than a
    hundred million gigabytes of data.

    Originally (that is, in the second half of the 1990s), PageRank
    functioned in such a way that the algorithm analyzed the structure of
    links on the World Wide Web, first by noting the number of links that
    referred to a given document, and second by evaluating the "relevance"
    of the site that linked to the document in question. The relevance of a
    site, in turn, was determined by the number of links that led to it.
    From these two variables, every document registered by the search engine
    was assigned a value, the PageRank. The latter served to present the
    documents found with a given search term as a hierarchical list (search
    results), whereby the document with the highest value was listed
    first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
    successful because it reduced the unfathomable chaos of the World Wide
    Web to a task that could be managed without difficulty by an individual
    user: inputting a search term and selecting from one of the presented
    "hits." The simplicity of the user\'s final choice, together with the
    quality of the algorithmic pre-selection, quickly pushed Google past its
    competition.
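
    The recursive logic of the early algorithm -- a document counts for
    more when it is linked to by documents that are themselves
    frequently linked to -- can be sketched in a few lines (a
    simplification; the four-page "web" is invented, and the damping
    factor of 0.85 is the value used in the original 1998 paper):

    ```python
    links = {                  # a toy web: page -> pages it links to
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}

    for _ in range(50):        # iterate until the values settle
        rank = {
            p: (1 - 0.85) / len(pages)
               + 0.85 * sum(rank[q] / len(links[q])
                            for q in pages if p in links[q])
            for p in pages
        }

    # The hierarchical list of "search results": highest value first.
    print(sorted(pages, key=rank.get, reverse=True))  # -> ['c', 'a', 'b', 'd']
    ```

    Note that nothing in this computation refers to what any page
    contains; the order is derived entirely from the structure of
    references.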

    Underlying this process is the assumption that every link is an
    indication of relevance, and that links from frequently linked (that is,
    popular) sources are more important than those from less frequently
    linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
    title="113"}The advantage of this assumption is that it can be
    understood in terms of purely quantitative variables and it is not
    necessary to have any direct understanding of a document\'s content or
    of the context in which it exists.

    In the middle of the 1990s, when the first version of the PageRank
    algorithm was developed, the problem of judging the relevance of
    documents whose content could only partially be evaluated was not a new
    one. Science administrators at universities and funding agencies had
    been facing this difficulty since the 1950s. During the rise of the
    knowledge economy, the number of scientific publications increased
    rapidly. Scientific fields, perspectives, and methods also multiplied
    and diversified during this time, so that even experts could not survey
    all of the work being done in their own areas of
    research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
    and evaluating the content of countless new publications, they shifted
    their analysis to a higher level of abstraction. They began to count how
    often an article or book was cited and applied this information to
    assess the value of a given author or
    publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
    assumption was (and remains) that only important things are referenced,
    and therefore every citation and every reference can be regarded as an
    indirect vote for something\'s relevance.

    In both cases -- classifying a chaotic sphere of information and
    administering an expanding industry of knowledge -- the challenge is to
    develop dynamic orders for rapidly changing fields, enabling the
    evaluation of the importance of individual documents without knowledge
    of their content. Because the analysis of citations or links operates on
    a purely quantitative basis, large amounts of data can be quickly
    structured with them, and especially relevant positions can be
    determined. The second advantage of this approach is that it does not
    require any assumptions about the contours of different fields or their
    relationships to one another. This enables the organization of
    disordered or dynamic content. In both cases, references made by the
    actors themselves are used: citations in a scientific text, links on
    websites. Their value for establishing the order of a field as a whole,
    however, is only visible in the aggregate, for instance in the frequency
    with which a given article is
    cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
    from analyzing "data" (the content of documents in the traditional
    sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
    "meta-data" (describing documents in light of their relationships to one
    another) is a precondition for being able to make any use at all of
    growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
    This shift introduced a new level of abstraction. Information is no
    longer understood as a representation of external reality; its
    significance is not evaluated with regard to the relation between
    "information" and "the world," for instance with a qualitative criterion
    such as "true"/"false." Rather, the sphere of information is treated as
    a self-referential, closed world, and documents are accordingly only
    evaluated in terms of their position within this world, though with
    quantitative criteria such as "central"/"peripheral."

    Even though the PageRank algorithm was highly effective and assisted
    Google\'s rapid ascent to a market-leading position, at the beginning it
    was still relatively simple and its mode of operation was at least
    partially transparent. It followed the classical statistical model of an
    algorithm. A document or site referred to by many links was considered
    more important than one to which fewer links
    referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
    the given structural order of information and determined the position of
    every document therein, and this was largely done independently of the
    context of the search and without making any assumptions about it. This
    approach functioned relatively well as long as the volume of information
    did not exceed a certain size, and as long as the users and their
    searches were somewhat similar to one another. In both respects, this is
    no longer the case. The amount of information to be pre-sorted is
    increasing, and users are searching in all possible situations and
    places for everything under the sun. At the time Google was founded, no
    one would have thought to check the internet, quickly and while on
    one\'s way, for today\'s menu at the restaurant round the corner. Now,
    thanks to smartphones, this is an obvious thing to do.
    :::

    ::: {.section}
    ### Algorithm clouds {#c2-sec-0023}

    In order to react to such changes in user behavior -- and simultaneously
    to advance it further -- Google\'s search algorithm is constantly being
    modified. It has become increasingly complex and has assimilated a
    greater amount of contextual []{#Page_115 type="pagebreak"
    title="115"}information, which influences the value of a site within
    PageRank and thus the order of search results. The algorithm is no
    longer a fixed object or unchanging recipe but is transforming into a
    dynamic process, an opaque cloud composed of multiple interacting
    algorithms that are continuously refined (between 500 and 600 times a
    year, according to some estimates). These ongoing developments are so
    extensive that, since 2003, several new versions of the algorithm cloud
    have appeared each year with their own names. In 2014 alone, Google
    carried out 13 large updates, more than ever
    before.[^105^](#c2-note-0105){#c2-note-0105a}

    These changes continue to bring about new levels of abstraction, so that
    the algorithm takes into account additional variables such as the time
    and place of a search, alongside a person\'s previously recorded
    behavior -- but also his or her involvement in social environments, and
    much more. Personalization and contextualization were made part of
    Google\'s search algorithm in 2005. At first it was possible to choose
    whether or not to use these. Since 2009, however, they have been a fixed
    and binding component for everyone who conducts a search through
    Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
    search algorithm had grown to include at least 200
    variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
    that the algorithm no longer determines the position of a document
    within a dynamic informational world that exists for everyone
    externally. Instead, it now assigns content a rank within a
    dynamic and singular universe of information that is tailored to every
    individual user. For every person, an entirely different order is
    created instead of just an excerpt from a previously existing order. The
    world is no longer being represented; it is generated uniquely for every
    user and then presented. Google is not the only company that has gone
    down this path. Orders produced by algorithms have become increasingly
    oriented toward creating, for each user, his or her own singular world.
    Facebook, dating services, and other social mass media have been
    pursuing this approach even more radically than Google.
    :::

    ::: {.section}
    ### From the data shadow to the synthetic profile {#c2-sec-0024}

    This form of generating the world requires not only detailed information
    about the external world (that is, the reality []{#Page_116
    type="pagebreak" title="116"}shared by everyone) but also information
    about every individual\'s own relation to the
    latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
    established for every user, and the more extensive they are, the better
    they are for the algorithms. A profile created by Google, for instance,
    identifies the user on three levels: as a "knowledgeable person" who is
    informed about the world (this is established, for example, by recording
    a person\'s searches, browsing behavior, etc.), as a "physical person"
    who is located and mobile in the world (a component established, for
    example, by tracking someone\'s location through a smartphone, sensors
    in a smart home, or body signals), and as a "social person" who
    interacts with other people (a facet that can be determined, for
    instance, by following someone\'s activity on social mass
    media).[^109^](#c2-note-0109){#c2-note-0109a}

    Unlike the situation in the 1990s, however, these profiles are no longer
    simply representations of singular people -- they are not "digital
    personas" or "data shadows." They no longer represent what is
    conventionally referred to as "individuality," in the sense of a
    spatially and temporally uniform identity. On the one hand, profiles
    rather consist of sub-individual elements -- of fragments of recorded
    behavior that can be evaluated on the basis of a particular search
    without promising to represent a person as a whole -- and they consist,
    on the other hand, of clusters of multiple people, so that the person
    being modeled can simultaneously occupy different positions in time.
    This temporal differentiation enables predictions of the following sort
    to be made: a person who has already done *x* will, with a probability
    of *y*, go on to engage in activity *z*. It is in this way that Amazon
    assembles its book recommendations, for the company knows that, within
    the cluster of people that constitutes part of every person\'s profile,
    a certain percentage of them have already gone through this sequence of
    activity. Or, as the data-mining company Science Rockstars (!) once
    pointedly expressed on its website, "Your next activity is a function of
    the behavior of others and your own past."
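
    A toy version of this inference (with invented "profiles") shows how
    little the logic depends on understanding any individual person: the
    probability *y* is simply read off from the cluster of recorded
    behavior:

    ```python
    # Recorded behavior of other users, fragment by fragment.
    histories = [
        {"book_x", "book_z"},
        {"book_x", "book_z"},
        {"book_x"},
        {"book_y", "book_z"},
    ]

    def probability(done: str, nxt: str) -> float:
        """Estimate P(next activity | past activity) from the cluster
        of profiles that share the same recorded fragment."""
        cluster = [h for h in histories if done in h]
        return sum(nxt in h for h in cluster) / len(cluster)

    # "A person who has already done x will, with a probability of y,
    # go on to engage in activity z."
    print(probability("book_x", "book_z"))  # -> 0.666...
    ```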

    Google and other providers of algorithmically generated orders have been
    devoting increased resources to the prognostic capabilities of their
    programs in order to make the confusing and potentially time-consuming
    step of the search obsolete. The goal is to minimize a rift that comes
    to light []{#Page_117 type="pagebreak" title="117"}in the act of
    searching, namely that between the world as everyone experiences it --
    plagued by uncertainty, for searching implies "not knowing something" --
    and the world of algorithmically generated order, in which certainty
    prevails, for everything has been well arranged in advance. Ideally,
    questions should be answered before they are asked. The first attempt by
    Google to eliminate this rift is called Google Now, and its slogan is
    "The right information at just the right time." The program, which was
    originally developed as an app but has since been made available on
    Chrome, Google\'s own web browser, attempts to anticipate, on the basis
    of existing data, a user\'s next step, and to provide the necessary
    information before it is searched for in order that such steps take
    place efficiently. Thus, for instance, it draws upon information from a
    user\'s calendar in order to figure out where he or she will have to go
    next. On the basis of real-time traffic data, it will then suggest the
    optimal way to get there. For those driving cars, the amount of traffic
    on the road will be part of the equation. This is ascertained by
    analyzing the motion profiles of other drivers, which will allow the
    program to determine whether the traffic is flowing or stuck in a jam.
    If enough historical data is taken into account, the hope is that it
    will be possible to redirect cars in such a way that traffic jams should
    no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
    public transport, Google Now evaluates real-time data about the
    locations of various transport services. With this information, it will
    suggest the optimal route and, depending on the calculated travel time,
    it will send a reminder (sometimes earlier, sometimes later) when it is
    time to go. That which Google is just experimenting with and testing in
    a limited and unambiguous context is already part of Facebook\'s
    everyday operations. With its EdgeRank algorithm, Facebook already
    organizes everyone\'s newsfeed, entirely in the background and without
    any explicit user interaction. On the basis of three variables -- user
    affinity (previous interactions between two users), content weight (the
    rate of interaction between all users and a specific piece of content),
    and currency (the age of a post) -- the algorithm selects content from
    the status updates made by one\'s friends to be displayed on one\'s own
    page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
    ensures that the stream of updates remains easy to scroll through, while
    also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
    -- leaving enough room for advertising. This potential for manipulation,
    which algorithms possess as they work away in the background, will be
    the topic of my next section.
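
    Before moving on, the three-variable logic can be made concrete with
    a hedged sketch (the actual formula is proprietary; the
    multiplicative form and all numbers below are assumptions based on
    the commonly cited description):

    ```python
    def edge_score(affinity: float, weight: float, age_hours: float) -> float:
        """Toy EdgeRank-style score: affinity x content weight x currency."""
        currency = 1 / (1 + 0.1 * age_hours)   # newer posts count for more
        return affinity * weight * currency

    posts = [
        {"id": "close friend, new photo", "affinity": 0.9, "weight": 0.5, "age": 2},
        {"id": "acquaintance, new video", "affinity": 0.2, "weight": 0.8, "age": 1},
        {"id": "close friend, old post", "affinity": 0.9, "weight": 0.5, "age": 48},
    ]

    # The newsfeed, assembled in the background without user interaction.
    for q in sorted(posts, key=lambda p: edge_score(p["affinity"], p["weight"],
                                                    p["age"]), reverse=True):
        print(q["id"])
    ```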
    :::

    ::: {.section}
    ### Variables and correlations {#c2-sec-0025}

    Every complex algorithm contains a multitude of variables and usually an
    even greater number of ways to make connections between them. Every
    variable and every relation, even if they are expressed in technical or
    mathematical terms, codifies assumptions that express a specific
    position in the world. There can be no purely descriptive variables,
    just as there can be no such thing as "raw
    data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
    -- are always already "cooked"; that is, they are engendered through
    cultural operations and formed within cultural
    categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
    produced data and with every execution of an algorithm, the assumptions
    embedded in them are activated, and the positions contained within them
    have effects on the world that the algorithm generates and presents.

    As already mentioned, the early version of the PageRank algorithm was
    essentially based on the rather simple assumption that frequently linked
    content is more relevant than content that is only seldom linked to, and
    that links to sites that are themselves frequently linked to should be
    given more weight than those found on sites with fewer links to them.
    Replacing the qualitative criterion of "relevance" with the quantitative
    criterion of "popularity" not only proved to be tremendously practical
    but also extremely consequential, for search engines not only describe
    the world; they create it as well. That which search engines put at the
    top of their lists is not just already popular but will remain so. A third
    of all users click on the first search result, and around 95 percent do
    not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
    the earliest version of the PageRank algorithm did not represent
    existing reality but rather (co-)constituted it.

    Popularity, however, is not the only element with which algorithms
    actively give shape to the user\'s world. A search engine can only sort,
    weigh, and make available that portion of information which has already
    been incorporated into its index. Everything else remains invisible. The
    relation between []{#Page_119 type="pagebreak" title="119"}the recorded
    part of the internet (the "surface web") and the unrecorded part (the
    "deep web") is difficult to determine. Estimates have varied between
    ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
    many reasons why content might be inaccessible to search engines.
    Perhaps the information has been saved in formats that search engines
    cannot read or can only poorly read, or perhaps it has been hidden
    behind proprietary barriers such as paywalls. In order to expand the
    realm of things that can be exploited by their algorithms, the operators
    of search engines offer extensive guidance about how providers should
    design their sites so that search tools can find them in an optimal
    manner. It is not necessary to follow this guidance, but given the
    central role of search engines in sorting and filtering information, it
    is clear that they exercise a great deal of power by setting the
    standards.[^116^](#c2-note-0116){#c2-note-0116a}

    That the individual must "voluntarily" submit to this authority is
    typical of the power of networks, which do not give instructions but
    rather constitute preconditions. Yet it is in the interest of (almost)
    every producer of information to optimize its position in a search
    engine\'s index, and thus there is a strong incentive to accept the
    preconditions in question. Considering, moreover, the nearly
    monopolistic character of many providers of algorithmically generated
    orders and the high price that one would have to pay if one\'s own site
    were barely (or not at all) visible to others, the term "voluntary"
    begins to take on a rather foul taste. This is a more or less subtle way
    of pre-formatting the world so that it can be optimally recorded by
    algorithms.[^117^](#c2-note-0117){#c2-note-0117a}

    The providers of search engines usually justify such methods in the name
    of offering "more efficient" services and "more relevant" results.
    Ostensibly technical and neutral terms such as "efficiency" and
    "relevance" do little, however, to conceal the political nature of
    defining variables. Efficient with respect to what? Relevant for whom?
    These are issues that are decided without much discussion by the
    developers and institutions that regard the algorithms as their own
    property. Every now and again such questions incite public debates,
    mostly when the interests of one provider happen to collide with those
    of its competition. Thus, for instance, the initiative known as
    FairSearch has argued that Google abuses its market power as a search
    engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
    content and thus to showcase it prominently in search
    results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
    representatives alleged, for example, that Google favors its own map
    service in the case of address searches and its own price comparison
    service in the case of product searches. The argument had an effect. In
    November of 2010, the European Commission initiated an antitrust
    investigation against Google. In 2014, a settlement was proposed that
    would have required the American internet giant to pay certain
    concessions, but the members of the Commission, the EU Parliament, and
    consumer protection agencies were not satisfied with the agreement. In
    April 2015, the anti-trust proceedings were recommenced by a newly
    appointed Commission, its reasoning being that "Google does not apply to
    its own comparison shopping service the system of penalties which it
    applies to other comparison shopping services on the basis of defined
    parameters, and which can lead to the lowering of the rank in which they
    appear in Google\'s general search results
    pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
    Commission accused the company of manipulating search results to its own
    advantage and the disadvantage of users.

    This is not the only instance in which the political side of search
    algorithms has come under public scrutiny. In the summer of 2012, Google
    announced that sites with higher numbers of copyright removal notices
    would henceforth appear lower in its
    rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
    introduced explicitly political and economic criteria in order to
    influence what, according to the standards of certain powerful players
    (such as film studios), users were able to
    view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
    be possible to speak of the personalization of searching, except that
    the heart of the situation was not the natural person of the user but
    rather the juridical person of the copyright holder. It was according to
    the latter\'s interests and preferences that searching was being
    reoriented. Amazon has employed similar tactics. In 2014, the online
    merchant changed its celebrated recommendation algorithm with the goal
    of reducing the presence of books released by irritating publishers that
    dared to enter into price negotiations with the
    company.[^122^](#c2-note-0122){#c2-note-0122a}

    Controversies over the methods of Amazon or Google, however, are the
    exception rather than the rule. Necessary (but never neutral) decisions
    about recording and evaluating data []{#Page_121 type="pagebreak"
    title="121"}with algorithms are being made almost all the time without
    any discussion whatsoever. The logic of the original PageRank algorithm
    was criticized as early as the year 2000 for essentially representing
    the commercial logic of mass media, systematically disadvantaging
    less-popular though perhaps otherwise relevant information, and thus
    undermining the "substantive vision of the web as an inclusive
    democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
    the search algorithm that have been adopted since then may have modified
    this tendency, but they have certainly not weakened it. In addition to
    concentrating on what is popular, the new variables privilege recently
    uploaded and constantly updated content. The selection of search results
    is now contingent upon the location of the user, and it takes into
    account his or her social networking. It is oriented toward the average
    of a dynamically modeled group. In other words, Google\'s new algorithm
    favors that which is gaining popularity within a user\'s social network.
    The global village is thus becoming more and more
    provincial.[^124^](#c2-note-0124){#c2-note-0124a}
    :::

    ::: {.section}
    ### Data behaviorism {#c2-sec-0026}

    Algorithms such as Google\'s thus reiterate and reinforce a tendency
    that has already been apparent on both the level of individual users and
    that of communal formations: in order to deal with the vast amounts and
    complexity of information, they direct their gaze inward, which is not
    to say toward the inner being of individual people. As a level of
    reference, the individual person -- with an interior world and with
    ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
    black boxes that can only be understood in terms of their reactions to
    stimuli. Consciousness, perception, and intention do not play any role
    for them. In this regard, the legal philosopher Antoinette Rouvroy has
    written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
    With this, she is referring to the gradual return of a long-discredited
    approach to behavioral psychology that postulated that human behavior
    could be explained, predicted, and controlled purely through outwardly
    observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
    Psychological dimensions were ignored (and are ignored in this new
    version of behaviorism) because it is difficult to observe them
    empirically. Accordingly, this approach also did away with the need
    []{#Page_122 type="pagebreak" title="122"}to question people directly or
    take into account their subjective experiences, thoughts, and feelings.
    People were regarded (and are so again today) as unreliable, as poor
    judges of themselves, and as only partly honest when disclosing
    information. Any strictly empirical science, or so the thinking went,
    required its practitioners to disregard everything that did not result
    in physical and observable action. From this perspective, it was
    possible to break down even complex behavior into units of stimulus and
    reaction. This led to the conviction that someone observing another\'s
    activity always knows more than the latter does about himself or herself
    for, unlike the person being observed, whose impressions can be
    inaccurate, the observer is in command of objective and complete
    information. Even early on, this approach faced a wave of critique. It
    was held to be mechanistic, reductionist, and authoritarian because it
    privileged the observing scientist over the subject. In practice, it
    quickly ran into its own limitations: it was simply too expensive and
    complicated to gather data about human behavior.

    Yet that has changed radically in recent years. It is now possible to
    measure ever more activities, conditions, and contexts empirically.
    Algorithms like Google\'s or Amazon\'s form the technical backdrop for
    the revival of a mechanistic, reductionist, and authoritarian approach
    that has resurrected the long-lost dream of an objective view -- the
    view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
    of this positivistic perspective -- that every measurement result, for
    instance, reflects not only the measured but also the measurer -- is
    brushed aside with reference to the sheer amounts of data that are now
    at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
    substantiates the claim of those in possession of these new and
    comprehensive powers of observation (which, in addition to Google and
    Facebook, also includes the intelligence services of Western nations),
    namely that they know more about individuals than individuals know about
    themselves, and are thus able to answer our questions before we ask
    them. As mentioned above, this is a goal that Google expressly hopes to
    achieve.

    At issue with this "inward turn" is thus the space of communal
    formations, which is constituted by the sum of all of the activities of
    their interacting participants. In this case, however, a communal
    formation is not consciously created []{#Page_123 type="pagebreak"
    title="123"}and maintained in a horizontal process, but rather
    synthetically constructed as a computational function. Depending on the
    context and the need, individuals can either be assigned to this
    function or removed from it. All of this happens behind the user\'s back
    and in accordance with the goals and positions that are relevant to the
    developers of a given algorithm, be it to optimize profit or
    surveillance, create social norms, improve services, or whatever else.
    The results generated in this way are sold to users as a personalized
    and efficient service that provides a quasi-magical product. Out of the
    enormous haystack of searchable information, results are generated that
    are made to seem like the very needle that we have been looking for. At
    best, it is only partially transparent how these results came about and
    which positions in the world are strengthened or weakened by them. Yet,
    as long as the needle is somewhat functional, most users are content,
    and the algorithm registers this contentedness to validate itself. In
    this dynamic world of unmanageable complexity, users are guided by a
    sort of radical, short-term pragmatism. They are happy to have the world
    pre-sorted for them in order to improve their activity in it. Regarding
    the matter of whether the information being provided represents the
    world accurately or not, they are unable to formulate an adequate
    assessment for themselves, for it is ultimately impossible to answer
    this question without certain resources. Outside of rapidly shrinking
    domains of specialized or everyday knowledge, it is becoming
    increasingly difficult to gain an overview of the world without
    mechanisms that pre-sort it. Users are only able to evaluate search
    results pragmatically; that is, in light of whether or not they are
    helpful in solving a concrete problem. In this regard, it is not
    paramount that they find the best solution or the correct answer but
    rather one that is available and sufficient. This reality lends an
    enormous amount of influence to the institutions and processes that
    provide the solutions and answers.[]{#Page_124 type="pagebreak"
    title="124"}
    :::
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c2-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c2-note-0001a){#c2-note-0001}  André Rottmann, "Reflexive Systems
    of Reference: Approximations to 'Referentialism' in Contemporary Art,"
    trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
    The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
    99.

    [2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
    distinguishes these processes from plagiarism. The latter operates with
    the complete opposite aim, namely that of borrowing sources without
    acknowledging them.

    [3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
    Quartet Books, 1998), p. 34.

    [4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
    Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
    Minnesota Press, 1997), p. 151.

    [5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
    Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
    Minnesota Press, 1984).

    [6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
    Remix-Kultur," *i-rights.info* (May 25, 2009), online.

    [7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
    Statements: Poetische Kalküle und Phantasmen des selbstausführenden
    Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]

[8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
the alphabet, every manuscript is unique, because it depends not only on
the sequence of letters but also on the individual ability of a given
scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
particular shape. With the rise of the printing press, the alphabet shed
these last elements of calligraphy and became typography.

    [9](#c2-note-0009a){#c2-note-0009}  Elisabeth L. Eisenstein, *The
    Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
    University Press, 1983), p. 15.

    [10](#c2-note-0010a){#c2-note-0010}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 204.

    [11](#c2-note-0011a){#c2-note-0011}  The fundamental aspects of these
    conventions were formulated as early as the beginning of the sixteenth
    century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
    Eine historische Fallstudie über die Durchsetzung neuer Informations-
    und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
    420--40.

    [12](#c2-note-0012a){#c2-note-0012}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 49.

    [13](#c2-note-0013a){#c2-note-0013}  In April 2014, the Authors Guild --
    the association of American writers that had sued Google -- filed an
    appeal to overturn the decision and made a public statement demanding
    that a new organization be established to license the digital rights of
    out-of-print books. See "Authors Guild: Amazon was Google's Target,"
    *The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
    In October 2015, however, the next-highest authority -- the United
    States Court of Appeals for the Second Circuit -- likewise decided in
    Google\'s favor. The Authors Guild promptly announced its intention to
    take the case to the Supreme Court.

    [14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
    the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
    Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

    [15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
    for the Future project (2007--14), the Netherlands alone invested more
    than €170 million to digitize the collections of the most important
    audiovisual archives. Over 10 years, the cost of digitizing the entire
    cultural heritage of Europe has been estimated to be around €100
    billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
    Heritage: A Report for the Comité des Sages of the European Commission*
    (November 2010), online.

[16](#c2-note-0016a){#c2-note-0016}  Robert Darnton, "The National
    Digital Public Library Is Launched!", *New York Review of Books* (April
    25, 2013), online.

    [17](#c2-note-0017a){#c2-note-0017}  According to estimates by the
    British Library, so-called "orphan works" alone -- that is, works still
    legally protected but whose right holders are unknown -- make up around
    40 percent of the books in its collection that still fall under
    copyright law. In an effort to alleviate this problem, the European
    Parliament and the European Commission issued a directive []{#Page_186
    type="pagebreak" title="186"}in 2012 concerned with "certain permitted
    uses of orphan works." This has allowed libraries and archives to make
    works available online without permission if, "after carrying out
    diligent searches," the copyright holders cannot be found. What
    qualifies as a "diligent search," however, is so strictly formulated
    that the German Library Association has called the directive
    "impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
    bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
    2012), online.

    [18](#c2-note-0018a){#c2-note-0018}  UbuWeb, "Frequently Asked
    Questions," online.

    [19](#c2-note-0019a){#c2-note-0019}  The numbers in this area of
    activity are notoriously unreliable, and therefore only rough estimates
    are possible. It seems credible, however, that the Pirate Bay was
    attracting around a billion page views per month by the end of 2013.
    That would make it the seventy-fourth most popular internet destination.
    See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
    2014), online.

    [20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
    The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

    [21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
    any difference between a "stream" and a "download." In both cases, a
    complete file is transferred to the user\'s computer and played.

    [22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
    but illegal in Austria, though digitized texts are routinely made
    available there in seminars. See Seyavash Amini Khanimani and Nikolaus
    Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
    Werknutzung im österreichischen Urheberrecht zur Privilegierung
    elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
    2011), online.

    [23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
    "Digitalisierung" (2015), online \[--trans\].

    [24](#c2-note-0024a){#c2-note-0024}  David Weinberger, *Everything Is
    Miscellaneous: The Power of the New Digital Disorder* (New York: Times
    Books, 2007).

    [25](#c2-note-0025a){#c2-note-0025}  This is not a question of material
    wealth. Those who are economically or socially marginalized are
    confronted with the same phenomenon. Their primary experience of this
    excess is with cheap goods and junk.

    [26](#c2-note-0026a){#c2-note-0026}  See Gregory Bateson, "Form,
    Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
    Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
    "\[I\]n fact, what we mean by information -- the elementary unit of
    information -- is *a difference which makes a difference*" (the emphasis
    is original).

    [27](#c2-note-0027a){#c2-note-0027}  Inke Arns and Gabriele Horn,
    *History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
    42.[]{#Page_187 type="pagebreak" title="187"}

    [28](#c2-note-0028a){#c2-note-0028}  See the film *The Battle of
    Orgreave* (2001), directed by Mike Figgis.

    [29](#c2-note-0029a){#c2-note-0029}  Theresa Winge, "Costuming the
    Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
    pp. 65--76.

    [30](#c2-note-0030a){#c2-note-0030}  Nicolle Lamerichs, "Stranger than
    Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
    (2011), online.

    [31](#c2-note-0031a){#c2-note-0031}  The *Oxford English Dictionary*
    defines "selfie" as a "photographic self-portrait; *esp*. one taken with
    a smartphone or webcam and shared via social media."

    [32](#c2-note-0032a){#c2-note-0032}  Odin Kroeger et al. (eds),
    *Geistiges Eigentum und Originalität: Zur Politik der Wissens- und
    Kulturproduktion* (Vienna: Turia + Kant, 2011).

    [33](#c2-note-0033a){#c2-note-0033}  Roland Barthes, "The Death of the
    Author," in Barthes, *Image -- Music -- Text*, trans. Stephen Heath
    (London: Fontana Press, 1977), pp. 142--8.

    [34](#c2-note-0034a){#c2-note-0034}  Heinz Rölleke and Albert
    Schindehütte, *Es war einmal: Die wahren Märchen der Brüder Grimm und
    wer sie ihnen erzählte* (Frankfurt am Main: Eichborn, 2011); and Heiner
    Boehncke, *Marie Hassenpflug: Eine Märchenerzählerin der Brüder Grimm*
    (Darmstadt: Von Zabern, 2013).

    [35](#c2-note-0035a){#c2-note-0035}  Hansjörg Ewert, "Alles nur
    geklaut?", *Zeit Online* (February 26, 2013), online. This is not a new
    realization but has long been a special area of research for
    musicologists. What is new, however, is that it is no longer
    controversial outside of this narrow disciplinary discourse. See Peter
    J. Burkholder, "The Uses of Existing Music: Musical Borrowing as a
    Field," *Notes* 50 (1994), pp. 851--70.

    [36](#c2-note-0036a){#c2-note-0036}  Zygmunt Bauman, *Liquid Modernity*
    (Cambridge: Polity, 2000), p. 56.

    [37](#c2-note-0037a){#c2-note-0037}  Quoted from Eran Schaerf\'s audio
    installation *FM-Scenario: Reality Race* (2013), online.

[38](#c2-note-0038a){#c2-note-0038}  The number of members, for
instance, of the two large political parties in Germany, the Social
Democratic Party and the Christian Democratic Union, reached its peak at
the end of the 1970s or the beginning of the 1980s. Both were able to
increase their absolute numbers for a brief time at the beginning of the
1990s, when the Christian Democratic Union even reached its absolute
high point, but this can be explained by a surge in new members after
reunification. By 2010, both parties already had fewer members than
Greenpeace, whose 580,000 members make it Germany's largest NGO.
Parallel to this, between 1970 and 2010, the proportion of people
without any religious affiliation grew to approximately 37 percent.
That there are more churches and political parties today is indicative
of how difficult []{#Page_188 type="pagebreak" title="188"}it has become
for any single organization to attract broad strata of society.

    [39](#c2-note-0039a){#c2-note-0039}  Ulrich Beck, *Risk Society: Towards
    a New Modernity*, trans. Mark Ritter (London: SAGE, 1992), p. 135.

    [40](#c2-note-0040a){#c2-note-0040}  Ferdinand Tönnies, *Community and
    Society*, trans. Charles P. Loomis (East Lansing: Michigan State
    University Press, 1957).

    [41](#c2-note-0041a){#c2-note-0041}  Karl Marx and Friedrich Engels,
    "The Manifesto of the Communist Party (1848)," trans. Terrell Carver, in
    *The Cambridge Companion to the Communist Manifesto*, ed. Carver and
    James Farr (Cambridge: Cambridge University Press, 2015), pp. 237--60,
    at 239. For Marx and Engels, this was -- like everything pertaining to
    the dynamics of capitalism -- a thoroughly ambivalent development. For,
    in this case, it finally forced people "to take a down-to-earth view of
    their circumstances, their multifarious relationships" (ibid.).

    [42](#c2-note-0042a){#c2-note-0042}  As early as the 1940s, Karl Polanyi
    demonstrated in *The Great Transformation* (New York: Farrar & Rinehart,
1944) that the idea of strictly separated spheres, which are supposed to
be so typical of modern society, is in fact highly ideological. He argued above
    all that the attempt to implement this separation fully and consistently
    in the form of the free market would destroy the foundations of society
    because both the life of workers and the environment of the market
    itself would be regarded as externalities. For a recent adaptation of
    this argument, see David Graeber, *Debt: The First 5000 Years* (New
    York: Melville House, 2011).

[43](#c2-note-0043a){#c2-note-0043}  Tönnies's persistent influence can
be felt, for instance, in Zygmunt Bauman's negative assessment of the
compulsion to strive for community in his *Community: Seeking Safety in
an Insecure World* (Malden, MA: Blackwell, 2001).

    [44](#c2-note-0044a){#c2-note-0044}  See, for example, Amitai Etzioni,
    *The Third Way to a Good Society* (London: Demos, 2000).

    [45](#c2-note-0045a){#c2-note-0045}  Jean Lave and Étienne Wenger,
    *Situated Learning: Legitimate Peripheral Participation* (Cambridge:
    Cambridge University Press, 1991), p. 98.

    [46](#c2-note-0046a){#c2-note-0046}  Étienne Wenger, *Cultivating
    Communities of Practice: A Guide to Managing Knowledge* (Boston, MA:
    Harvard Business School Press, 2000).

[47](#c2-note-0047a){#c2-note-0047}  The institutions of the
disciplinary society -- schools, factories, prisons and hospitals, for
instance -- were closed spaces: whoever was inside could not get out.
    Participation was obligatory, and instructions had to be followed. See
    Michel Foucault, *Discipline and Punish: The Birth of the Prison*,
    trans. Alan Sheridan (New York: Pantheon Books, 1977).[]{#Page_189
    type="pagebreak" title="189"}

    [48](#c2-note-0048a){#c2-note-0048}  Weber famously defined power as
    follows: "Power is the probability that one actor within a social
    relationship will be in a position to carry out his own will despite
    resistance, regardless of the basis on which this probability rests."
    Max Weber, *Economy and Society: An Outline of Interpretive Sociology*,
    trans. Guenther Roth and Claus Wittich (Berkeley, CA: University of
    California Press, 1978), p. 53.

    [49](#c2-note-0049a){#c2-note-0049}  For those in complete despair, the
    following tip is provided: "To get more likes, start liking the photos
    of random people." Such a strategy, it seems, is more likely to increase
    than decrease one's hopelessness. The quotations are from "How to Get
    More Likes on Your Instagram Photos," *WikiHow* (2016), online.

    [50](#c2-note-0050a){#c2-note-0050}  Jeremy Gilbert, *Democracy and
    Collectivity in an Age of Individualism* (London: Pluto Books, 2013).

    [51](#c2-note-0051a){#c2-note-0051}  Diedrich Diederichsen,
    *Eigenblutdoping: Selbstverwertung, Künstlerromantik, Partizipation*
    (Cologne: Kiepenheuer & Witsch, 2008).

    [52](#c2-note-0052a){#c2-note-0052}  Harrison Rainie and Barry Wellman,
    *Networked: The New Social Operating System* (Cambridge, MA: MIT Press,
2012). The term is useful because it is easy to understand, but it is
also conceptually contradictory. An individual (an indivisible entity)
    cannot be defined in terms of a distributed network. With a nod toward
    Gilles Deleuze, the cumbersome but theoretically more precise term
    "dividual" (the divisible) has also been used. See Gerald Raunig,
    "Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung," in
    Oliver Leistert and Theo Röhle (eds), *Generation Facebook: Über das
    Leben im Social Net* (Bielefeld: Transcript, 2011), pp. 145--59.

[53](#c2-note-0053a){#c2-note-0053}  Jari Saramäki et al., "Persistence
    of Social Signatures in Human Communication," *Proceedings of the
    National Academy of Sciences of the United States of America* 111
    (2014): 942--7.

[54](#c2-note-0054a){#c2-note-0054}  The term "weak ties" derives from a
study of how people find out about new jobs. As the study
shows, this information does not usually come from close friends, whose
    level of knowledge often does not differ much from that of the person
    looking for a job, but rather from loose acquaintances, whose living
    environments do not overlap much with one\'s own and who can therefore
    make information available from outside of one\'s own network. See Mark
    Granovetter, "The Strength of Weak Ties," *American Journal of
    Sociology* 78 (1973): 1360--80.

[55](#c2-note-0055a){#c2-note-0055}  Castells, *The Power of Identity*,
p. 420.

    [56](#c2-note-0056a){#c2-note-0056}  Ulf Weigelt, "Darf der Chef
ständige Erreichbarkeit verlangen?" *Zeit Online* (June 13, 2012),
    online \[--trans.\].[]{#Page_190 type="pagebreak" title="190"}

    [57](#c2-note-0057a){#c2-note-0057}  Hartmut Rosa, *Social Acceleration:
    A New Theory of Modernity*, trans. Jonathan Trejo-Mathys (New York:
    Columbia University Press, 2013).

    [58](#c2-note-0058a){#c2-note-0058}  This technique -- "social freezing"
-- has already become so standard that it is now regarded as a way to
help women achieve a better balance between work and family life. See
Kolja Rudzio, "Social Freezing: Ein Kind von Apple," *Zeit Online* (November 6,
    2014), online.

    [59](#c2-note-0059a){#c2-note-0059}  See the film *Into Eternity*
    (2009), directed by Michael Madsen.

    [60](#c2-note-0060a){#c2-note-0060}  Thomas S. Kuhn, *The Structure of
    Scientific Revolutions*, 3rd edn (Chicago, IL: University of Chicago
    Press, 1996).

    [61](#c2-note-0061a){#c2-note-0061}  Werner Busch and Peter Schmoock,
    *Kunst: Die Geschichte ihrer Funktionen* (Weinheim: Quadriga/Beltz,
    1987), p. 179 \[--trans.\].

    [62](#c2-note-0062a){#c2-note-0062}  "'When Attitude Becomes Form' at
    the Fondazione Prada," *Contemporary Art Daily* (September 18, 2013),
    online.

    [63](#c2-note-0063a){#c2-note-0063}  Owing to the hyper-capitalization
    of the art market, which has been going on since the 1990s, this role
    has shifted somewhat from curators to collectors, who, though validating
    their choices more on financial than on argumentative grounds, are
essentially engaged in the same activity. Today, leading curators
    usually work closely together with collectors and thus deal with more
    money than the first generation of curators ever could have imagined.

    [64](#c2-note-0064a){#c2-note-0064}  Diedrich Diederichsen, "Showfreaks
    und Monster," *Texte zur Kunst* 71 (2008): 69--77.

    [65](#c2-note-0065a){#c2-note-0065}  Alexander R. Galloway, *Protocol:
    How Control Exists after Decentralization* (Cambridge, MA: MIT Press,
    2004), pp. 7, 75.

    [66](#c2-note-0066a){#c2-note-0066}  Even the *Frankfurter Allgemeine
    Zeitung* -- at least in its online edition -- has begun to publish more
    and more articles in English. The newspaper has accepted the
    disadvantage of higher editorial costs in order to remain relevant in
    the increasingly globalized debate.

    [67](#c2-note-0067a){#c2-note-0067}  Joseph Reagle, "'Free as in
    Sexist?' Free Culture and the Gender Gap," *First Monday* 18 (2013),
    online.

[68](#c2-note-0068a){#c2-note-0068}  Wikipedia\'s own "Editor Survey"
from 2011 reports that 9 percent of its editors are women. Other
studies have arrived at slightly higher figures. See Benjamin Mako Hill
and Aaron Shaw, "The Wikipedia Gender Gap Revisited: Characterizing
Survey Response Bias with Propensity Score Estimation," *PLOS ONE* 8
(July 26, 2013), online. The problem is well known, and the Wikimedia
Foundation has been making efforts to correct matters. In 2011, its
goal was to increase the participation of women to 25 percent by 2015.
This has not been achieved.[]{#Page_191 type="pagebreak" title="191"}

[69](#c2-note-0069a){#c2-note-0069}  Shyong (Tony) K. Lam et al.,
"WP: Clubhouse? An Exploration of Wikipedia's Gender Imbalance,"
*WikiSym \'11* (2011), online.

    [70](#c2-note-0070a){#c2-note-0070}  David Singh Grewal, *Network Power:
    The Social Dynamics of Globalization* (New Haven, CT: Yale University
    Press, 2008).

    [71](#c2-note-0071a){#c2-note-0071}  Ibid., p. 29.

    [72](#c2-note-0072a){#c2-note-0072}  Niklas Luhmann, *Macht im System*
    (Berlin: Suhrkamp, 2013), p. 52 \[--trans.\].

    [73](#c2-note-0073a){#c2-note-0073}  Mathieu O\'Neil, *Cyberchiefs:
    Autonomy and Authority in Online Tribes* (London: Pluto Press, 2009).

    [74](#c2-note-0074a){#c2-note-0074}  Eric Steven Raymond, "The Cathedral
    and the Bazaar," *First Monday* 3 (1998), online.

    [75](#c2-note-0075a){#c2-note-0075}  Jorge Luis Borges, "The Library of
    Babel," trans. Anthony Kerrigan, in Borges, *Ficciones* (New York: Grove
    Weidenfeld, 1962), pp. 79--88.

    [76](#c2-note-0076a){#c2-note-0076}  Heinrich Geiselberger and Tobias
    Moorstedt (eds), *Big Data: Das neue Versprechen der Allwissenheit*
    (Berlin: Suhrkamp, 2013).

    [77](#c2-note-0077a){#c2-note-0077}  This is one of the central tenets
    of science and technology studies. See, for instance, Geoffrey C. Bowker
    and Susan Leigh Star, *Sorting Things Out: Classification and Its
    Consequences* (Cambridge, MA: MIT Press, 1999).

[78](#c2-note-0078a){#c2-note-0078}  Sybille Krämer, *Symbolische
Maschinen: Die Idee der Formalisierung in geschichtlichem Abriß*
(Darmstadt: Wissenschaftliche Buchgesellschaft, 1988), pp. 50--69.

    [79](#c2-note-0079a){#c2-note-0079}  Quoted from Doron Swade, "The
    'Unerring Certainty of Mechanical Agency': Machines and Table Making in
    the Nineteenth Century," in Martin Campbell-Kelly et al. (eds), *The
    History of Mathematical Tables: From Sumer to Spreadsheets* (Oxford:
    Oxford University Press, 2003), pp. 145--76, at 150.

    [80](#c2-note-0080a){#c2-note-0080}  The mechanical construction
    suggested by Leibniz was not to be realized as a practically usable (and
    therefore patentable) calculating machine until 1820, by which point it
    was referred to as an "arithmometer."

[81](#c2-note-0081a){#c2-note-0081}  Krämer, *Symbolische Maschinen*,
p. 98 \[--trans.\].

    [82](#c2-note-0082a){#c2-note-0082}  Charles Babbage, *On the Economy of
    Machinery and Manufactures* (London: Charles Knight, 1832), p. 153: "We
    have already mentioned what may, perhaps, appear paradoxical to some of
    our readers -- that the division of labour can be applied with equal
    success to mental operations, and that it ensures, by its adoption, the
    same economy of time."

    [83](#c2-note-0083a){#c2-note-0083}  This structure, which is known as
    "Von Neumann architecture," continues to form the basis of almost all
    computers.

    [84](#c2-note-0084a){#c2-note-0084}  "Gordon Moore Says Aloha to
    Moore\'s Law," *The Inquirer* (April 13, 2005), online.[]{#Page_192
    type="pagebreak" title="192"}

    [85](#c2-note-0085a){#c2-note-0085}  Miriam Meckel, *Next: Erinnerungen
an eine Zukunft ohne uns* (Reinbek bei Hamburg: Rowohlt, 2011). One
    could also say that this anxiety has been caused by the fact that the
    automation of labor has begun to affect middle-class jobs as well.

    [86](#c2-note-0086a){#c2-note-0086}  Steven Levy, "Can an Algorithm
    Write a Better News Story than a Human Reporter?" *Wired* (April 24,
    2012), online.

    [87](#c2-note-0087a){#c2-note-0087}  Alexander Pschera, *Animal
    Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
    (New York: New Vessel Press, 2016).

    [88](#c2-note-0088a){#c2-note-0088}  The American intelligence services
    are not unique in this regard. *Spiegel* has reported that, in Russia,
    entire "bot armies" have been mobilized for the "propaganda battle."
    Benjamin Bidder, "Nemzow-Mord: Die Propaganda der russischen Hardliner,"
    *Spiegel Online* (February 28, 2015), online.

    [89](#c2-note-0089a){#c2-note-0089}  Lennart Guldbrandsson, "Swedish
    Wikipedia Surpasses 1 Million Articles with Aid of Article Creation
    Bot," [blog.wikimedia.org](http://blog.wikimedia.org) (June 17, 2013),
    online.

    [90](#c2-note-0090a){#c2-note-0090}  Thomas Bunnell, "The Mathematics of
    Film," *Boom Magazine* (November 2007): 48--51.

    [91](#c2-note-0091a){#c2-note-0091}  Christopher Steiner, "Automatons
    Get Creative," *Wall Street Journal* (August 17, 2012), online.

    [92](#c2-note-0092a){#c2-note-0092}  "The Hewlett Foundation: Automated
    Essay Scoring," [kaggle.com](http://kaggle.com) (February 10, 2012),
    online.

    [93](#c2-note-0093a){#c2-note-0093}  Ian Ayres, *Super Crunchers: How
    Anything Can Be Predicted* (London: Bookpoint, 2007).

    [94](#c2-note-0094a){#c2-note-0094}  Each of these models was tested on
    the basis of the 50 million most common search terms from the years
    2003--8 and classified according to the time and place of the search.
    The results were compared with data from the health authorities. See
    Jeremy Ginsberg et al., "Detecting Influenza Epidemics Using Search
    Engine Query Data," *Nature* 457 (2009): 1012--4.

[95](#c2-note-0095a){#c2-note-0095}  In absolute terms, the rate of
correct hits, at 15.8 percent, was still relatively low. With the same
dataset, however, random guessing would only have an accuracy of 0.005
percent; the result is thus more than 3,000 times better than chance.
See Quoc V. Le et al., "Building High-Level Features Using
    Large-Scale Unsupervised Learning,"
    [research.google.com](http://research.google.com) (2012), online.

    [96](#c2-note-0096a){#c2-note-0096}  Neil Johnson et al., "Abrupt Rise
    of New Machine Ecology beyond Human Response Time," *Nature: Scientific
    Reports* 3 (2013), online. The authors counted 18,520 of these events
    between January 2006 and February 2011; that is, about 15 per day on
    average.

    [97](#c2-note-0097a){#c2-note-0097}  Gerald Nestler, "Mayhem in Mahwah:
    The Case of the Flash Crash; or, Forensic Re-performance in Deep Time,"
    in Anselm []{#Page_193 type="pagebreak" title="193"}Franke et al. (eds),
    *Forensis: The Architecture of Public Truth* (Berlin: Sternberg Press,
    2014), pp. 125--46.

[98](#c2-note-0098a){#c2-note-0098}  Another image-recognition
algorithm by Google provides a good impression of the rate of progress.
As early as 2011, it was able to identify dogs in images with 80
percent accuracy. Three years later, this rate had not only increased to
    93.5 percent (which corresponds to human capabilities), but the
    algorithm could also identify more than 200 different types of dog,
    something that hardly any person can do. See Robert McMillan, "This Guy
    Beat Google\'s Super-Smart AI -- But It Wasn\'t Easy," *Wired* (January
    15, 2015), online.

    [99](#c2-note-0099a){#c2-note-0099}  Sergey Brin and Lawrence Page, "The
    Anatomy of a Large-Scale Hypertextual Web Search Engine," *Computer
    Networks and ISDN Systems* 30 (1998): 107--17.

    [100](#c2-note-0100a){#c2-note-0100}  Eugene Garfield, "Citation Indexes
    for Science: A New Dimension in Documentation through Association of
    Ideas," *Science* 122 (1955): 108--11.

    [101](#c2-note-0101a){#c2-note-0101}  Since 1964, the data necessary for
    this has been published as the Science Citation Index (SCI).

    [102](#c2-note-0102a){#c2-note-0102}  The assumption that the subjects
    produce these structures indirectly and without any strategic intention
    has proven to be problematic in both contexts. In the world of science,
    there are so-called citation cartels -- groups of scientists who
    frequently refer to one another\'s work in order to improve their
    respective position in the SCI. Search engines have likewise given rise
    to search engine optimizers, which attempt by various means to optimize
    a website\'s evaluation by search engines.

    [103](#c2-note-0103a){#c2-note-0103}  Regarding the history of the SCI
    and its influence on the early version of Google\'s PageRank, see Katja
    Mayer, "Zur Soziometrik der Suchmaschinen: Ein historischer Überblick
    der Methodik," in Konrad Becker and Felix Stalder (eds), *Deep Search:
    Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
    2009), pp. 64--83.

    [104](#c2-note-0104a){#c2-note-0104}  A site with zero links to it could
    not be registered by the algorithm at all, for the search engine indexed
    the web by having its "crawler" follow the links itself.
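
    To make the mechanism concrete, here is a minimal sketch of
    link-following indexing -- the link graph and page names are invented
    for illustration and do not describe any real crawler -- which shows
    why a page with no inbound links never enters the index:

    ``` python
    # Minimal sketch of crawler-based indexing as described in the note
    # above. The link graph is hypothetical; a real crawler fetches and
    # parses pages instead of reading a dictionary.
    from collections import deque

    links = {                 # page -> pages it links to
        "seed": ["a", "b"],
        "a": ["b"],
        "b": ["a"],
        "orphan": ["a"],      # has outgoing links, but nothing links *to* it
    }

    def crawl(start):
        """Index every page reachable by following links from `start`."""
        index, queue = set(), deque([start])
        while queue:
            page = queue.popleft()
            if page not in index:
                index.add(page)
                queue.extend(links.get(page, []))
        return index

    print(crawl("seed"))  # {'seed', 'a', 'b'} -- "orphan" is never registered
    ```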

    [105](#c2-note-0105a){#c2-note-0105}  "Google Algorithm Change History,"
    [moz.com](http://moz.com) (2016), online.

    [106](#c2-note-0106a){#c2-note-0106}  Martin Feuz et al., "Personal Web
    Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
    of Personalisation," *First Monday* 17 (2011), online.

    [107](#c2-note-0107a){#c2-note-0107}  Brian Dean, "Google\'s 200 Ranking
    Factors," *Search Engine Journal* (May 31, 2013), online.

    [108](#c2-note-0108a){#c2-note-0108}  Thus, it is not only the world of
    advertising that motivates the collection of personal information. Such
    information is also needed for the development of personalized
    algorithms that []{#Page_194 type="pagebreak" title="194"}give order to
    the flood of data. It can therefore be assumed that the rampant
    collection of personal information will not cease or slow down even if
    commercial demands happen to change, for instance to a business model
    that is not based on advertising.

    [109](#c2-note-0109a){#c2-note-0109}  For a detailed discussion of how
    these three levels are recorded, see Felix Stalder and Christine Mayer,
    "Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    112--31.

[110](#c2-note-0110a){#c2-note-0110}  This raises the question of which
drivers should be sent on a detour, so that no traffic jam comes about,
and which should be shown the most direct route, now free of traffic.

    [111](#c2-note-0111a){#c2-note-0111}  Pamela Vaughan, "Demystifying How
    Facebook\'s EdgeRank Algorithm Works," *HubSpot* (April 23, 2013),
    online.

    [112](#c2-note-0112a){#c2-note-0112}  Lisa Gitelman (ed.), *"Raw Data"
    Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).

    [113](#c2-note-0113a){#c2-note-0113}  The terms "raw," in the sense of
    unprocessed, and "cooked," in the sense of processed, derive from the
    anthropologist Claude Lévi-Strauss, who introduced them to clarify the
    difference between nature and culture. See Claude Lévi-Strauss, *The Raw
    and the Cooked*, trans. John Weightman and Doreen Weightman (Chicago,
    IL: University of Chicago Press, 1983).

    [114](#c2-note-0114a){#c2-note-0114}  Jessica Lee, "No. 1 Position in
    Google Gets 33% of Search Traffic," *Search Engine Watch* (June 20,
    2013), online.

    [115](#c2-note-0115a){#c2-note-0115}  One estimate that continues to be
    cited quite often is already obsolete: Michael K. Bergman, "White Paper
    -- The Deep Web: Surfacing Hidden Value," *Journal of Electronic
    Publishing* 7 (2001), online. The more content is dynamically generated
    by databases, the more questionable such estimates become. It is
    uncontested, however, that only a small portion of online information is
    registered by search engines.

    [116](#c2-note-0116a){#c2-note-0116}  Theo Röhle, "Die Demontage der
    Gatekeeper: Relationale Perspektiven zur Macht der Suchmaschinen," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    133--48.

    [117](#c2-note-0117a){#c2-note-0117}  The phenomenon of preparing the
    world to be recorded by algorithms is not restricted to digital
    networks. As early as 1994 in Germany, for instance, a new sort of
    typeface was introduced (the *Fälschungserschwerende Schrift*,
    "forgery-impeding typeface") on license plates for the sake of machine
    readability and facilitating automatic traffic control. To the human
    eye, however, it appears somewhat misshapen and
    disproportionate.[]{#Page_195 type="pagebreak" title="195"}

    [118](#c2-note-0118a){#c2-note-0118}  [Fairsearch.org](http://Fairsearch.org)
    was officially supported by several of Google\'s competitors, including
    Microsoft, TripAdvisor, and Oracle.

    [119](#c2-note-0119a){#c2-note-0119}  "Antitrust: Commission Sends
    Statement of Objections to Google on Comparison Shopping Service,"
    *European Commission: Press Release Database* (April 15, 2015), online.

    [120](#c2-note-0120a){#c2-note-0120}  Amit Singhal, "An Update to Our
    Search Algorithms," *Google Inside Search* (August 10, 2012), online. By
    the middle of 2014, according to some sources, Google had received
    around 20 million requests to remove links from its index on account of
    copyright violations.

    [121](#c2-note-0121a){#c2-note-0121}  Alexander Wragge, "Google-Ranking:
    Herabstufung ist 'Zensur light'," *iRights.info* (August 23, 2012),
    online.

[122](#c2-note-0122a){#c2-note-0122}  Farhad Manjoo, "Amazon\'s Tactics
    Confirm Its Critics\' Worst Suspicions," *New York Times: Bits Blog*
    (May 23, 2014), online.

    [123](#c2-note-0123a){#c2-note-0123}  Lucas D. Introna and Helen
    Nissenbaum, "Shaping the Web: Why the Politics of Search Engines
    Matters," *Information Society* 16 (2000): 169--85, at 181.

    [124](#c2-note-0124a){#c2-note-0124}  Eli Pariser, *The Filter Bubble:
    How the New Personalized Web Is Changing What We Read and How We Think*
    (New York: Penguin, 2012).

    [125](#c2-note-0125a){#c2-note-0125}  Antoinette Rouvroy, "The End(s) of
    Critique: Data-Behaviourism vs. Due-Process," in Katja de Vries and
Mireille Hildebrandt (eds), *Privacy, Due Process and the Computational
    Turn: The Philosophy of Law Meets the Philosophy of Technology* (New
    York: Routledge, 2013), pp. 143--65.

    [126](#c2-note-0126a){#c2-note-0126}  See B. F. Skinner, *Science and
    Human Behavior* (New York: The Free Press, 1953), p. 35: "We undertake
    to predict and control the behavior of the individual organism. This is
    our 'dependent variable' -- the effect for which we are to find the
    cause. Our 'independent variables' -- the causes of behavior -- are the
    external conditions of which behavior is a function."

    [127](#c2-note-0127a){#c2-note-0127}  Nathan Jurgenson, "View from
    Nowhere: On the Cultural Ideology of Big Data," *New Inquiry* (October
    9, 2014), online.

    [128](#c2-note-0128a){#c2-note-0128}  danah boyd and Kate Crawford,
    "Critical Questions for Big Data: Provocations for a Cultural,
    Technological and Scholarly Phenomenon," *Information, Communication &
    Society* 15 (2012): 662--79.
    :::
    :::

    [III]{.chapterNumber} [Politics]{.chapterTitle} {#c3}
    ::: {.section}
    The show had already been going on for more than three hours, but nobody
    was bothered by this. Quite the contrary. The tension in the venue was
    approaching its peak, and the ratings were through the roof. Throughout
    all of Europe, 195 million people were watching the spectacle on
    television, and the social mass media were gaining steam. On Twitter,
    more than 47,000 messages were being sent every minute with the hashtag
    \#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
    decided shortly after midnight: Conchita Wurst, the bearded diva, was
    announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
    as the public celebrated the victor -- but also itself. At long last,
    there was more to the event than just another round of tacky television
    programming ("This is Ljubljana calling!"). Rather, a statement was made
    -- a statement in favor of tolerance and against homophobia, for
    diversity and for the right to define oneself however one pleases. And
    Europe sent this message in the midst of a crisis and despite ongoing
    hostilities, not to mention all of the toxic rumblings that could be
    heard about decadence, cultural decay, and Gayropa. Visibly moved, the
    Austrian singer let out an exclamation -- "We are unity, and we are
    unstoppable!" -- as she returned to the stage with wobbly knees to
    accept the trophy.

    With her aesthetically convincing performance, Conchita succeeded in
    unleashing a strong desire for personal []{#Page_1 type="pagebreak"
    title="1"}self-discovery, for community, and for overcoming stale
    conventions. And she did this through a character that mainstream
    society would have considered paradoxical and deviant not long ago but
    has since come to understand: attractive beyond the dichotomy of man and
    woman, explicitly artificial and yet entirely authentic. This peculiar
    conflation of artificiality and naturalness is equally present in
    Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
2010) on the cover of this book. Conchita\'s performance was seemingly
paradoxical on a formal level as well: extremely focused and completely
open. Unlike most of the other acts, she took the stage alone, and
    though she hardly moved at all, she nevertheless incited the audience to
    participate in numerous ways and genuinely to act out the motto of the
    contest ("Join us!"). Throughout the early rounds of the competition,
    the beard, which was at first so provocative, transformed into a
    free-floating symbol that the public began to appropriate in various
    ways. Men and women painted Conchita-like beards on their faces,
    newspapers printed beards to be cut out, and fans crocheted beards. Not
    only did someone Photoshop a beard on to a painting of Empress Sissi of
    Austria, but King Willem-Alexander of the Netherlands even tweeted a
    deceptively realistic portrait of his wife, Queen Máxima, wearing a
    beard. From one of the biggest stages of all, the evening of Wurst\'s
    victory conveyed an impression of how much the culture of Europe had
    changed in recent years, both in terms of its content and its forms.
    That which had long been restricted to subcultural niches -- the
fluidity of gender identities, appropriation as a cultural technique,
    or the conflation of reception and production, for instance -- was now
    part of the mainstream. Even while sitting in front of the television,
    this mainstream was no longer just a private audience but rather a
    multitude of singular producers whose networked activity -- on location
    or on social mass media -- lent particular significance to the occasion
    as a moment of collective self-perception.

    It is more than half a century since Marshall McLuhan announced the end
    of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
    in honor of the print medium by which it was so influenced. What was
    once just an abstract speculation of media theory, however, now
    describes []{#Page_2 type="pagebreak" title="2"}the concrete reality of
    our everyday life. What\'s more, we have moved well past McLuhan\'s
    diagnosis: the erosion of old cultural forms, institutions, and
    certainties is not just something we affirm, but new ones have already
    formed whose contours are easy to identify not only in niche sectors but
    in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
    expanded the gender-identity options for its billion-plus users from 2
    to 60. In addition to "male" and "female," users of the English version
    of the site can now choose from among the following categories:

    ::: {.extract}
    Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
    Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
    Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
    Female to Male Trans Man, Female to Male Transgender Man, Female to Male
    Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
    Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
    Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
    (MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
    Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
    Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
    Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
    Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
    Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
    Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
    Two-Spirit, Two-Spirit Person.
    :::

    This enormous proliferation of cultural possibilities is an expression
    of what I will refer to below as the digital condition. Far from being
    universally welcomed, its growing presence has also instigated waves of
    nostalgia, diffuse resentments, and intellectual panic. Conservative and
    reactionary movements, which oppose such developments and desire to
    preserve or even re-create previous conditions, have been on the rise.
    Likewise in 2014, for instance, a cultural dispute broke out in normally
subdued Baden-Württemberg over which forms of sexual partnership should
    be mentioned positively in the sexual education curriculum. Its impetus
    was a working paper released at the end of 2013 by the state\'s
    []{#Page_3 type="pagebreak" title="3"}Ministry of Culture. Among other
    things, it proposed that adolescents "should confront their own sexual
    identity and orientation \[...\] from a position of acceptance with
    respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
    short period of time, a campaign organized mainly through social mass
    media collected more than 200,000 signatures in opposition to the
    proposal and submitted them to the petitions committee at the state
    parliament. At that point, the government responded by putting the
    initiative on ice. However, according to the analysis presented in this
    book, leaving it on ice creates a precarious situation.

    The rise and spread of the digital condition is the result of a
    wide-ranging and irreversible cultural transformation, the beginnings of
    which can in part be traced back to the nineteenth century. Since the
    1960s, however, this shift has accelerated enormously and has
    encompassed increasingly broader spheres of social life. More and more
    people have been participating in cultural processes; larger and larger
    dimensions of existence have become battlegrounds for cultural disputes;
    and social activity has been intertwined with increasingly complex
    technologies, without which it would hardly be possible to conceive of
    these processes, let alone achieve them. The number of competing
    cultural projects, works, reference points, and reference systems has
    been growing rapidly. This, in turn, has caused an escalating crisis for
    the established forms and institutions of culture, which are poorly
    equipped to deal with such an inundation of new claims to meaning. Since
    roughly the year 2000, many previously independent developments have
    been consolidating, gaining strength and modifying themselves to form a
    new cultural constellation that encompasses broad segments of society --
    a new galaxy, as McLuhan might have
    said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
    easy to recognize the specific forms that characterize it as a whole and
    how these forms have contributed to new, contradictory and
    conflict-laden political dynamics.

    My argument, which is restricted to cultural developments in the
    (transatlantic) West, is divided into three chapters. In the first, I
    will outline the *historical* developments that have given rise to this
    quantitative and qualitative change and have led to the crisis faced by
    the institutions of the late phase of the Gutenberg Galaxy, which
    defined the last third []{#Page_4 type="pagebreak" title="4"}of the
    twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
    the social basis of cultural processes will be traced back to changes in
    the labor market, to the self-empowerment of marginalized groups, and to
    the dissolution of centralized cultural geography. The broadening of
    cultural fields will be discussed in terms of the rise of design as a
    general creative discipline, and the growing significance of complex
    technologies -- as fundamental components of everyday life -- will be
    tracked from the beginnings of independent media up to the development
    of the internet as a mass medium. These processes, which at first
    unfolded on their own and may have been reversible on an individual
basis, are integrated today and represent a socially dominant component
    of the coherent digital condition. From the perspective of cultural
    studies and media theory, the second chapter will delineate the already
    recognizable features of this new culture. Concerned above all with the
    analysis of forms, its focus is thus on the question of "how" cultural
    practices operate. It is only because specific forms of culture,
exchange, and expression are prevalent across diverse varieties of
    content, social spheres, and locations that it is even possible to speak
    of the digital condition in the singular. Three examples of such forms
    stand out in particular. *Referentiality* -- that is, the use of
    existing cultural materials for one\'s own production -- is an essential
    feature of many methods for inscribing oneself into cultural processes.
    In the context of unmanageable masses of shifting and semantically open
    reference points, the act of selecting things and combining them has
    become fundamental to the production of meaning and the constitution of
    the self. The second feature that characterizes these processes is
    *communality*. It is only through a collectively shared frame of
    reference that meanings can be stabilized, possible courses of action
    can be determined, and resources can be made available. This has given
    rise to communal formations that generate self-referential worlds, which
    in turn modulate various dimensions of existence -- from aesthetic
    preferences to the methods of biological reproduction and the rhythms of
    space and time. In these worlds, the dynamics of network power have
    reconfigured notions of voluntary and involuntary behavior, autonomy,
    and coercion. The third feature of the new cultural landscape is its
    *algorithmicity*. It is characterized, in other []{#Page_5
    type="pagebreak" title="5"}words, by automated decision-making processes
    that reduce and give shape to the glut of information, by extracting
    information from the volume of data produced by machines. This extracted
    information is then accessible to human perception and can serve as the
    basis of singular and communal activity. Faced with the enormous amount
    of data generated by people and machines, we would be blind were it not
    for algorithms.

    The third chapter will focus on *political dimensions*. These are the
    factors that enable the formal dimensions described in the preceding
    chapter to manifest themselves in the form of social, political, and
    economic projects. Whereas the first chapter is concerned with long-term
and irreversible historical processes, and the second outlines the
    general cultural forms that emerged from these changes with a certain
    degree of inevitability, my concentration here will be on open-ended
    dynamics that can still be influenced. A contrast will be made between
    two political tendencies of the digital condition that are already quite
    advanced: *post-democracy* and *commons*. Both take full advantage of
    the possibilities that have arisen on account of structural changes and
    have advanced them even further, though in entirely different
    directions. "Post-democracy" refers to strategies that counteract the
    enormously expanded capacity for social communication by disconnecting
    the possibility to participate in things from the ability to make
    decisions about them. Everyone is allowed to voice his or her opinion,
    but decisions are ultimately made by a select few. Even though growing
    numbers of people can and must take responsibility for their own
    activity, they are unable to influence the social conditions -- the
    social texture -- under which this activity has to take place. Social
    mass media such as Facebook and Google will receive particular attention
    as the most conspicuous manifestations of this tendency. Here, under new
    structural provisions, a new combination of behavior and thought has
    been implemented that promotes the normalization of post-democracy and
    contributes to its otherwise inexplicable acceptance in many areas of
    society. "Commons," on the contrary, denotes approaches for developing
    new and comprehensive institutions that not only directly combine
    participation and decision-making but also integrate economic, social,
    and ethical spheres -- spheres that Modernity has tended to keep
    apart.[]{#Page_6 type="pagebreak" title="6"}

    Post-democracy and commons can be understood as two lines of development
    that point beyond the current crisis of liberal democracy and represent
    new political projects. One can be characterized as an essentially
    authoritarian system, the other as a radical expansion and renewal of
    democracy, from the notion of representation to that of participation.

    Even though I have brought together a number of broad perspectives, I
    have refrained from discussing certain topics that a book entitled *The
    Digital Condition* might be expected to address, notably the matter of
    copyright, for one example. This is easy to explain. As regards the new
    forms at the heart of this book, none of these developments requires or
    justifies copyright law in its present form. In any case, my thoughts on
    the matter were published not long ago in another book, so there is no
    need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
    of privacy will also receive little attention. This is not because I
    share the view, held by proponents of "post-privacy," that it would be
    better for all personal information to be made available to everyone. On
    the contrary, this position strikes me as superficial and naïve. That
    said, the political function of privacy -- to safeguard a degree of
    personal autonomy from powerful institutions -- is based on fundamental
    concepts that, in light of the developments to be described below,
    urgently need to be updated. This is a task, however, that would take me
    far beyond the scope of the present
    book.[^6^](#f6-note-0006){#f6-note-0006a}

    Before moving on to the first chapter, I should first briefly explain my
    somewhat unorthodox understanding of the central concepts in the title
    of the book -- "condition" and "digital." In what follows, the term
    "condition" will be used to designate a cultural condition whereby the
    processes of social meaning -- that is, the normative dimension of
    existence -- are explicitly or implicitly negotiated and realized by
    means of singular and collective activity. Meaning, however, does not
    manifest itself in signs and symbols alone; rather, the practices that
    engender it and are inspired by it are consolidated into artifacts,
    institutions, and lifeworlds. In other words, far from being a symbolic
    accessory or mere overlay, culture in fact directs our actions and gives
    shape to society. By means of materialization and repetition, meaning --
    both as claim and as reality -- is made visible, productive, and
    negotiable. People are free to accept it, reject it, or ignore
    []{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
    that is, meaning shared by multiple people -- can only come about
    through processes of exchange within larger or smaller formations.
    Production and reception (to the extent that it makes any sense to
    distinguish between the two) do not proceed linearly here, but rather
    loop back and reciprocally influence one another. In such processes, the
    participants themselves determine, in a more or less binding manner, how
    they stand in relation to themselves, to each other, and to the world,
    and they determine the frame of reference in which their activity is
    oriented. Accordingly, culture is not something static or something that
    is possessed by a person or a group, but rather a field of dispute that
    is subject to the activities of multiple ongoing changes, each happening
    at its own pace. It is characterized by processes of dissolution and
    constitution that may be collaborative, oppositional, or simply
    operating side by side. The field of culture is pervaded by competing
    claims to power and mechanisms for exerting it. This leads to conflicts
    about which frames of reference should be adopted for different fields
    and within different social groups. In such conflicts,
    self-determination and external determination interact until a point is
    reached at which both sides are mutually constituted. This, in turn,
    changes the conditions that give rise to shared meaning and personal
    identity.

    In what follows, this broadly post-structuralist perspective will inform
    my discussion of the causes and formational conditions of cultural
    orders and their practices. Culture will be conceived throughout as
    something heterogeneous and hybrid. It draws from many sources; it is
    motivated by the widest possible variety of desires, intentions, and
    compulsions; and it mobilizes whatever resources might be necessary for
    the constitution of meaning. This emphasis on the materiality of culture
    is also reflected in the concept of the digital. Media are relational
    technologies, which means that they facilitate certain types of
    connection between humans and
    objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
    set of relations that, on the infrastructural basis of digital networks,
is realized today in the production, use, and transformation of
material and immaterial goods, and in the constitution and coordination
    of personal and collective activity. In this regard, the focus is less
    on the dominance of a certain class []{#Page_8 type="pagebreak"
    title="8"}of technological artifacts -- the computer, for instance --
    and even less on distinguishing between "digital" and "analog,"
    "material" and "immaterial." Even in the digital condition, the analog
    has not gone away. Rather, it has been re-evaluated and even partially
    upgraded. The immaterial, moreover, is never entirely without
    materiality. On the contrary, the fleeting impulses of digital
    communication depend on global and unmistakably material infrastructures
    that extend from mines beneath the surface of the earth, from which rare
    earth metals are extracted, all the way into outer space, where
    satellites are circling around above us. Such things may be ignored
    because they are outside the experience of everyday life, but that does
    not mean that they have disappeared or that they are of any less
    significance. "Digital" thus refers to historically new possibilities
    for constituting and connecting various human and non-human actors,
    which is not limited to digital media but rather appears everywhere as a
    relational paradigm that alters the realm of possibility for numerous
    materials and actors. My understanding of the digital thus approximates
    the concept of the "post-digital," which has been gaining currency over
    the past few years within critical media cultures. Here, too, the
    distinction between "new" and "old" media and all of the ideological
    baggage associated with it -- for instance, that the new represents the
    future while the old represents the past -- have been rejected. The
    aesthetic projects that continue to define the image of the "digital" --
    immateriality, perfection, and virtuality -- have likewise been
    discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
    "post-digital" is a critical response to this techno-utopian aesthetic
    and its attendant economic and political perspectives. According to the
    cultural theorist Florian Cramer, the concept accommodates the fact that
    "new ethical and cultural conventions which became mainstream with
    internet communities and open-source culture are being retroactively
    applied to the making of non-digital and post-digital media
    products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
    that process-based practices oriented toward open interaction, which
    first developed within digital media, have since begun to appear in more
    and more contexts and in an increasing number of
materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9
type="pagebreak" title="9"}

    For the historical, cultural-theoretical, and political perspectives
    developed in this book, however, the concept of the post-digital is
    somewhat problematic, for it requires the narrow context of media art
    and its fixation on technology in order to become a viable
    counter-position. Without this context, certain misunderstandings are
    impossible to avoid. The prefix "post-," for instance, is often
    interpreted in the sense that something is over or that we have at least
    grasped the matters at hand and can thus turn to something new. The
    opposite is true. The most enduringly relevant developments are only now
    beginning to adopt a specific form, long after digital infrastructures
    and the practices made popular by them have become part of our everyday
    lives. Or, as the communication theorist and consultant Clay Shirky puts
    it, "Communication tools don\'t get socially interesting until they get
    technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
    only today, now that our fascination for this technology has waned and
    its promises sound hollow, that culture and society are being defined by
    the digital condition in a comprehensive sense. Before, this was the
    case in just a few limited spheres. It is this hybridization and
    solidification of the digital -- the presence of the digital beyond
    digital media -- that lends the digital condition its dominance. The
    concrete realities in which these things will materialize are currently
    being decided in an open and ongoing process. The aim of this book is
    to contribute to our understanding of this process.[]{#Page_10
    type="pagebreak" title="10"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#f6-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#f6-note-0001a){#f6-note-0001}  Dan Biddle, "Five Million Tweets for
    \#Eurovision 2014," *Twitter UK* (May 11, 2014), online.

    [2](#f6-note-0002a){#f6-note-0002}  Ministerium für Kultus, Jugend und
    Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
    von Leitprinzipien," online \[--trans.\].

    [3](#f6-note-0003a){#f6-note-0003}  As early as 1995, Wolfgang Coy
    suggested that McLuhan\'s metaphor should be supplanted by the concept
    of the "Turing Galaxy," but this never caught on. See his introduction
    to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
    zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
    Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*,
    (Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
    type="pagebreak" title="176"}

    [4](#f6-note-0004a){#f6-note-0004}  According to the analysis of the
    Spanish sociologist Manuel Castells, this crisis began almost
    simultaneously in highly developed capitalist and socialist societies,
    and it did so for the same reason: the paradigm of "industrialism" had
    reached the limits of its productivity. Unlike the capitalist societies,
    which were flexible enough to tame the crisis and reorient their
    economies, the socialism of the 1970s and 1980s experienced stagnation
    until it ultimately, in a belated effort to reform, collapsed. See
    Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
    2010), pp. 5--68.

    [5](#f6-note-0005a){#f6-note-0005}  Felix Stalder, *Der Autor am Ende
    der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).

    [6](#f6-note-0006a){#f6-note-0006}  For my preliminary thoughts on this
    topic, see Felix Stalder, "Autonomy and Control in the Era of
    Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
    78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
    *Surveillance & Society* 1 (2002): 120--4. For a discussion of these
    approaches, see the working paper by Maja van der Velden, "Personal
    Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
    (2011), online.

    [7](#f6-note-0007a){#f6-note-0007}  Accordingly, the "new social" media
    are mass media in the sense that they influence broadly disseminated
    patterns of social relations and thus shape society as much as the
    traditional mass media had done before them.

    [8](#f6-note-0008a){#f6-note-0008}  Kim Cascone, "The Aesthetics of
    Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
    *Computer Music Journal* 24/2 (2000): 12--18.

    [9](#f6-note-0009a){#f6-note-0009}  Florian Cramer, "What Is
    'Post-Digital'?" *Post-Digital Research* 3 (2014), online.

    [10](#f6-note-0010a){#f6-note-0010}  In the field of visual arts,
    similar considerations have been made regarding "post-internet art." See
    Artie Vierkant, "The Image Object Post-Internet,"
    [jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
    Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
    Art Movement," *Artspace* (March 18, 2014), online.

    [11](#f6-note-0011a){#f6-note-0011}  Clay Shirky, *Here Comes Everybody:
    The Power of Organizing without Organizations* (New York: Penguin,
    2008), p. 105.
    :::
    :::

    [I]{.chapterNumber} [Evolution]{.chapterTitle} {#c1}
    ====================================================

    ::: {.section}
    Many authors have interpreted the new cultural realities that
    characterize our daily lives as a direct consequence of technological
    developments: the internet is to blame! This assumption is not only
    empirically untenable; it also leads to a problematic assessment of the
    current situation. Apparatuses are represented as "central actors," and
    this suggests that new technologies have suddenly revolutionized a
    situation that had previously been stable. Depending on one\'s point of
    view, this is then regarded as "a blessing or a
    curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
    however, reveals an entirely different picture. Established cultural
    practices and social institutions had already been witnessing the
    erosion of their self-evident justification and legitimacy, long before
    they were faced with new technologies and the corresponding demands
    these make on individuals. Moreover, the allegedly new types of
    coordination and cooperation are not so new after all. Many of them
    have existed for a long time. At first, most of them were entirely
    separate from the technologies for which they would later become
    relevant. It is only in retrospect that these developments can be
    identified as beginnings, and it can be seen that much of what we regard
    today as novel or revolutionary was in fact introduced at the margins of
    society, in cultural niches that were unnoticed by the dominant actors
    and institutions. The new technologies thus evolved against a
    []{#Page_11 type="pagebreak" title="11"}background of processes of
    societal transformation that were already under way. They could only
    have been developed once a vision of their potential had been
    formulated, and they could only have been disseminated where demand for
    them already existed. This demand was created by social, political, and
    economic crises, which were themselves initiated by changes that were
    already under way. The new technologies seemed to provide many differing
    and promising answers to the urgent questions that these crises had
    prompted. It was thus a combination of positive vision and pressure that
    motivated a great variety of actors to change, at times with
    considerable effort, the established processes, mature institutions, and
    their own behavior. They intended to appropriate, for their own
    projects, the various and partly contradictory possibilities that they
    saw in these new technologies. Only then did a new technological
    infrastructure arise.

    This, in turn, created the preconditions for previously independent
    developments to come together, strengthening one another and enabling
    them to spread beyond the contexts in which they had originated. Thus,
    they moved from the margins to the center of culture. And by
    intensifying the crisis of previously established cultural forms and
    institutions, they became dominant and established new forms and
    institutions of their own.
    :::

    ::: {.section}
    The Expansion of the Social Basis of Culture {#c1-sec-0002}
    --------------------------------------------

    Watching television discussions from the 1950s and 1960s today, one is
    struck not only by the billows of cigarette smoke in the studio but also
    by the homogeneous spectrum of participants. Usually, it was a group of
    white and heteronormatively behaving men speaking with one
    another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
    who held the important institutional positions in the centers of the
    West. As a rule, those involved were highly specialized representatives
    from the cultural, economic, scientific, and political spheres. Above
    all, they were legitimized to appear in public to articulate their
    opinions, which were to be regarded by others as relevant and worthy of
    discussion. They presided over the important debates of their time. With
    few exceptions, other actors and their deviant opinions -- there
    []{#Page_12 type="pagebreak" title="12"}has never been a time without
    them -- were either not taken seriously at all or were categorized as
    indecent, incompetent, perverse, irrelevant, backward, exotic, or
    idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
    the social basis of culture was beginning to expand, though the actors
    at the center of the discourse had failed to notice this. Communicative
    and cultural processes were gaining significance in more and more
    places, and excluded social groups were self-consciously developing
    their own language in order to intervene in the discourse. The rise of
    the knowledge economy, the increasingly loud critique of
    heteronormativity, and a fundamental cultural critique posed by
    post-colonialism enabled a greater number of people to participate in
    public discussions. In what follows, I will subject each of these three
    phenomena to closer examination. In order to do justice to their
    complexity, I will treat them on different levels: I will depict the
    rise of the knowledge economy as a structural change in labor; I will
    reconstruct the critique of heteronormativity by outlining the origins
    and transformations of the gay movement in West Germany; and I will
    discuss post-colonialism as a theory that introduced new concepts of
    cultural multiplicity and hybridization -- concepts that are now
    influencing the digital condition far beyond the limits of the
    post-colonial discourse, and often without any reference to this
    discourse at all.

    ::: {.section}
    ### The growth of the knowledge economy {#c1-sec-0003}

    At the beginning of the 1950s, the Austrian-American economist Fritz
    Machlup was immersed in his study of the political economy of
    monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
    concerned with patents and copyright law. In line with the neo-classical
    Austrian School, he considered both to be problematic (because
    state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
    longer he studied the monopoly of the patent system in particular, the
    more far-reaching its consequences seemed to him. He maintained that the
    patent system was intertwined with something that might be called the
    "economy of invention" -- ultimately, patentable insights had to be
    produced in the first place -- and that this was in turn part of a much
    larger economy of knowledge. The latter encompassed government agencies
    as well as institutions of education, research, and development
    []{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
    and certain corporate laboratories), which had been increasing steadily
    in number since Roosevelt\'s New Deal. Yet it also included the
    expanding media sector and those industries that were responsible for
    providing technical infrastructure. Machlup subsumed all of these
    institutions and sectors under the concept of the "knowledge economy," a
    term of his own invention. Their common feature was that essential
    aspects of their activities consisted in communicating things to other
    people ("telling anyone anything," as he put it). Thus, the employees
    were not only recipients of information or instructions; rather, in one
    way or another, they themselves communicated, be it merely as a
    secretary who typed up, edited, and forwarded a piece of shorthand
    dictation. In his book *The Production and Distribution of Knowledge in
    the United States*, published in 1962, Machlup gathered empirical
    material to demonstrate that the American economy had entered a new
    phase that was distinguished by the production, exchange, and
    application of abstract, codified
    knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
    longer entirely novel at the time, but it had never before been
    presented in such an empirically detailed and comprehensive
    manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
    economy surprised Machlup himself: in his book, he concluded that as
    much as 43 percent of all labor activity already took place in this
    sector. This high number came about because, until then, no one had put
    forward the idea of understanding such a variety of activities as a
    single unit.

    Machlup\'s categorization was indeed quite innovative, for the sectors
    that he associated with one another were not only propelled by very
    different dynamics; they had also originated as integral components of
    the development of the industrial production of goods. They were more of
    an extension of such production than a break with it. The production and
    circulation of goods had been expanding and accelerating as early as the
    nineteenth century, though at highly divergent rates from one region or
    sector to another. New markets were created in order to distribute goods
    that were being produced in greater numbers; new infrastructure for
    transportation and communication was established in order to serve these
    large markets, which were mostly in the form of national territories
    (including their colonies). This []{#Page_14 type="pagebreak"
    title="14"}enabled even larger factories to be built in order to
    exploit, to an even greater extent, the cost advantages of mass
    production. In order to control these complex processes, new professions
    arose with different types of competencies and working conditions. The
    office became a workplace for an increasing number of people -- men and
    women alike -- who, in one form or another, had something to do with
    information processing and communication. Yet all of this required not
    only new management techniques. Production and products also became more
    complex, so that entire corporate sectors had to be restructured.
    Whereas the first decisive inventions of the industrial era were still
    made by more or less educated tinkerers, during the last third of the
    nineteenth century, invention itself came to be institutionalized. In
    Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
    Siemens & Halske) exemplifies this transformation. Within 50 years, a
    company that began in a proverbial workshop in a Berlin backyard became
    a multinational high-tech corporation. It was in such corporate
    laboratories, which were established around the year 1900, that the
    "industrialization of invention" or the "scientification of industrial
    production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
    words, even the processes employed in factories and the goods that they
    produced became knowledge-intensive. Their invention, planning, and
    production required a steadily growing expansion of activities, which
    today we would refer to as research and development. The informatization
    of the economy -- the acceleration of mass production, the comprehensive
    application of scientific methods to the organization of labor, and the
    central role of research and development in industry -- was hastened
    enormously by a world war that was waged on an industrial scale to an
    extent that had never been seen before.

    Another important factor for the increasing significance of the
    knowledge economy was the development of the consumer society. Over the
    course of the last third of the nineteenth century, despite dramatic
    regional and social disparities, an increasing number of people profited
    from the economic growth that the Industrial Revolution had instigated.
    Wages increased and basic needs were largely met, so that a new social
    stratum arose, the middle class, which was able to spend part of its
    income on other things. But on what? First, []{#Page_15 type="pagebreak"
    title="15"}new needs had to be created. The more production capacities
    increased, the more they had to be rethought in terms of consumption.
    Thus, in yet another way, the economy became more knowledge-intensive.
    It was now necessary to become familiar with, understand, and stimulate
    the interests and preferences of consumers, in order to entice them to
    purchase products that they did not urgently need. This knowledge did
    little to enhance the material or logistical complexity of goods or
    their production; rather, it was reflected in the increasingly extensive
    communication about and through these goods. The beginnings of this
    development were captured by Émile Zola in his 1883 novel *The Ladies\'
    Paradise*, which was set in the new world of a semi-fictitious
    department store bearing that name. In its opening scene, the young
    protagonist Denise Baudu and her brother Jean, both of whom have just
    moved to Paris from a provincial town, encounter for the first time the
    artfully arranged women\'s clothing -- exhibited with all sorts of
    tricks involving lighting, mirrors, and mannequins -- in the window
    displays of the store. The sensuality of the staged goods is so
    overwhelming that the two of them are struck dumb, and Jean even
    "blushes."

    It was the economy of affects that brought blood to Jean\'s cheeks. At
    that time, strategies for attracting the attention of customers did not
    yet have a scientific and systematic basis. Just as the first inventions
    in the age of industrialization were made by amateurs, so too was the
    economy of affects developed intuitively and gradually rather than as a
    planned or conscious paradigm shift. That it was possible to induce and
    direct affects by means of targeted communication was the pioneering
    discovery of the Austrian-American Edward Bernays. During the 1920s, he
    combined the ideas of his uncle Sigmund Freud about unconscious
    motivations with the sociological research methods of opinion surveys to
    form a new discipline: market
    research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
    basis of a new field of activity, which he at first called "propaganda"
    but then later referred to as "public
    relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
    be it for economic or political ends, was now placed on a systematic
    foundation that came to distance itself more and more from the pure
    "conveyance of information." Communication became a strategic field for
    corporate and political disputes, and the mass media []{#Page_16
    type="pagebreak" title="16"}became their locus of negotiation. Between
    1880 and 1917, for instance, commercial advertising costs in the United
    States increased by more than 800 percent, and the leading advertising
    firms, using the same techniques with which they attracted consumers to
    products, were successful in selling to the American public the idea of
    their nation entering World War I. Thus, a media industry in the modern
    sense was born, and it expanded along with the rapidly growing market
    for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

    In his studies of labor markets conducted at the beginning of the 1960s,
    Machlup brought these previously separate developments together and
    thus explained the existence of an already advanced knowledge economy in
    the United States. His arguments fell on extremely fertile soil, for an
    intellectual transformation had taken place in other areas of science as
    well. A few years earlier, for instance, cybernetics had given the
    concepts "information" and "communication" their first scientifically
    precise (if somewhat idiosyncratic) definitions and had assigned to them
    a position of central importance in all scientific disciplines, not to
    mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
    investigation seemed to confirm this in the case of the economy, given
    that the knowledge economy was primarily concerned with information and
    communication. Since then, numerous analyses, formulas, and slogans have
    repeated, modified, refined, and criticized the idea that the
    knowledge-based activities of the economy have become increasingly
    important. In the 1970s this discussion was associated above all with
    the notion of the "post-industrial
    society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
    idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
    and in the 1990s the debate revolved around the "network
    society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
    popular concepts. What these approaches have in common is that they each
    diagnose a comprehensive societal transformation that, as regards the
    creation of economic value or jobs, has shifted the balance from
    productive to communicative activities. Accordingly, they presuppose
    that we know how to distinguish the former from the latter. This is not
    unproblematic, however, because in practice the two are usually tightly
    intertwined. Moreover, whoever maintains that communicative activities
    have taken the place of industrial production in our society has adopted
    a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
    Factory jobs have not simply disappeared; they have just been partially
    relocated outside of Western economies. The assertion that communicative
    activities are somehow of "greater value" hardly chimes with the reality
    of today\'s new "service jobs," many of which pay no more than the
    minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
    sort, however, have done little to reduce the effectiveness of this
    analysis -- especially its political effectiveness -- for it does more
    than simply describe a condition. It also contains a set of political
    instructions that imply or directly demand that precisely those sectors
    should be promoted that it considers economically promising, and that
    society should be reorganized accordingly. Since the 1970s, there has
    thus been a feedback loop between scientific analysis and political
    agendas. More often than not, it is hardly possible to distinguish
    between the two. Especially in Britain and the United States, the
    economic transformation of the 1980s was imposed insistently and with
    political calculation (namely, the weakening of labor unions).

    There are, however, important differences between the developments of
    the so-called "post-industrial society" of the 1970s and those of the
    so-called "network society" of the 1990s, even if both terms are
    supposed to stress the increased significance of information, knowledge,
    and communication. With regard to the digital condition, the most
    important of these differences are the greater flexibility of economic
    activity in general and employment relations in particular, as well as
    the dismantling of social security systems. Neither phenomenon played
    much of a role in analyses of the early 1970s. The development since
    then can be traced back to two currents that could not seem more
    different from one another. At first, flexibility was demanded in the
    name of a critique of the value system imposed by bureaucratic-bourgeois
    society (including the traditional organization of the workforce). It
    originated in the new social movements that had formed in the late
    1960s. Later on, toward the end of the 1970s, it then became one of the
    central points of the neoliberal critique of the welfare state. With
    completely different motives, both sides sang the praises of autonomy
    and spontaneity while rejecting the disciplinary nature of hierarchical
    organization. They demanded individuality and diversity rather than
    conformity to prescribed roles. Experimentation, openness to []{#Page_18
    type="pagebreak" title="18"}new ideas, flexibility, and change were now
    established as fundamental values with positive connotations. Both
    movements operated with the attractive idea of personal freedom. The new
    social movements understood this in a social sense as the freedom of
    personal development and coexistence, whereas neoliberals understood it
    in an economic sense as the freedom of the market. In the 1980s, the
    neoliberal ideas prevailed in large part because some of the values,
    strategies, and methods propagated by the new social movements were
    removed from their political context and appropriated in order to
    breathe new life -- a "new spirit" -- into capitalism and thus to rescue
    industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
    An army of management consultants, restructuring experts, and new
    companies began to promote flat hierarchies, self-responsibility, and
    innovation; with these aims in mind, they set about reorganizing large
    corporations into small and flexible units. Labor and leisure were no
    longer supposed to be separated, for all aspects of a given person could
    be integrated into his or her work. In order to achieve economic success
    in this new capitalism, it became necessary for every individual to
    identify himself or herself with his or her profession. Large
    corporations were restructured in such a way that entire departments
    found themselves transformed into independent "profit centers." This
    happened in the name of creating more leeway for decision-making and of
    optimizing the entrepreneurial spirit on all levels, the goals being to
    increase value creation and to provide management with more fine-grained
    powers of intervention. These measures, in turn, created the need for
    computers and the need for them to be networked. Large corporations
    reacted in this way to the emergence of highly specialized small
    companies which, by networking and cooperating with other firms,
    succeeded in quickly and flexibly exploiting niches in the expanding
    global markets. In the management literature of the 1980s, the
    catchphrases for this were "company networks" and "flexible
    specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
    the 1990s, the sociologist Manuel Castells was able to conclude that the
    actual productive entity was no longer the individual company but rather
    the network consisting of companies and corporate divisions of various
    sizes. In Castells\'s estimation, the decisive advantage of the network
    is its ability to customize its elements and their configuration
    []{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
    requirements of the "project" at
    hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
    companies in their traditional forms came to function above all as
    strategic control centers and as economic and legal units.

    This economic structural transformation was already well under way when
    the internet emerged as a mass medium around the turn of the millennium.
    As a consequence, change became more radical and penetrated into an
    increasing number of areas of value creation. The political agenda
    oriented itself toward the vision of "creative industries," a concept
    developed in 1997 by the newly elected British government under Tony
    Blair. A Creative Industries Task Force was established right away, and
    its first step was to identify "those activities which have their
    origins in individual creativity, skill and talent and which have the
    potential for wealth and job creation through the generation and
    exploitation of intellectual
    property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
    the beginning of the 1960s, the task force brought together existing
    areas of activity into a new category. Such activities included
    advertising, computer games, architecture, music, arts and antique
    markets, publishing, design, software and computer services, fashion,
    television and radio, and film and video. These activities were elevated
    to matters of political importance on account of their potential to
    create wealth and jobs. Not least because of this clever presentation of
    categories -- no distinction was made between the BBC, an almighty
    public-service provider, and fledgling companies in precarious
    circumstances -- it was possible to proclaim not only that the creative
    industries were contributing a relevant portion of the nation\'s
    economic output, but also that this sector was growing at an especially
    fast rate. It was reported that, in London, the creative industries were
    already responsible for one out of every five new jobs. When compared
    with traditional terms of employment as regards income, benefits, and
    prospects for advancement, however, many of these positions entailed a
    considerable downgrade for the employees in question (who were now
    treated as independent contractors). This fact was either ignored or
    explicitly interpreted as a sign of the sector\'s particular
    dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
    new millennium, the idea that individual creativity plays a central role
    in the economy was given further traction by []{#Page_20
    type="pagebreak" title="20"}the sociologist and consultant Richard
    Florida, who argued that creativity was essential to the future of
    cities and even announced the rise of the "creative class." As to the
    preconditions that have to be met in order to tap into this source of
    wealth, he devised a simple formula that would be easy for municipal
    bureaucrats to understand: "technology, tolerance and talent." Talent,
    as defined by Florida, is based on individual creativity and education
    and manifests itself in the ability to generate new jobs. He was thus
    able to declare talent a central element of economic
    growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
    resources, what we need in addition to technology is, above all,
    tolerance; that is, "an open culture -- one that does not discriminate,
    does not force people into boxes, allows us to be ourselves, and
    validates various forms of family and of human
    identity."[^23^](#c1-note-0023){#c1-note-0023a}

    The idea that a public welfare state should ensure the social security
    of individuals was considered obsolete. Collective institutions, which
    could have provided a degree of stability for people\'s lifestyles, were
    dismissed or regarded as bureaucratic obstacles. The more or less
    directly evoked role model for all of this was the individual artist,
    who was understood as an individual entrepreneur, a sort of genius
    suitable for the masses. For Florida, a central problem was that,
    according to his own calculations, only about a third of the people
    living in North American and European cities were working in the
    "creative sector," while the innate creativity of everyone else was
    going to waste. Even today, the term "creative industry," along with the
    assumption that the internet will provide increased opportunities,
    serves to legitimize the effort to restructure all areas of the economy
    according to the needs of the knowledge economy and to privilege the
    network over the institution. In times of social cutbacks and empty
    public purses, especially in municipalities, this message was warmly
    received. One mayor, who as the first openly gay top politician in
    Germany exemplified tolerance for diverse lifestyles, even adopted the
    slogan "poor but sexy" for his city. Everyone was supposed to exploit
    his or her own creativity to discover new niches and opportunities for
    monetization -- a magic formula that was supposed to bring about a new
    urban revival. Today there is hardly a city in Europe that does not
    issue a report about its creative economy, []{#Page_21 type="pagebreak"
    title="21"}and nearly all of these reports cite, directly or indirectly,
    Richard Florida.

    As already seen in the context of the knowledge economy, so too in the
    case of creative industries do measurable social change, wishful
    thinking, and political agendas blend together in such a way that it is
    impossible to identify a single cause for the developments taking place.
    The consequences, however, are significant. Over the last two
    generations, the demands of the labor market have fundamentally changed.
    Higher education and the ability to acquire new knowledge independently
    are now, to an increasing extent, required and expected as
    qualifications and personal attributes. The desired or enforced ability
    to be flexible at work, the widespread cooperation across institutions,
    the uprooted nature of labor, and the erosion of collective models for
    social security have displaced many activities, which once took place
    within clearly defined institutional or personal limits, into a new
    interstitial space that is neither private nor public in the classical
    sense. This is the space of networks, communities, and informal
    cooperation -- the space of sharing and exchange that has since been
    enabled by the emergence of ubiquitous digital communication. It allows
    an increasing number of people, whether willingly or otherwise, to
    envision themselves as active producers of information, knowledge,
    capability, and meaning. And because it is associated in various ways
    with the space of market-based exchange and with the bourgeois political
    sphere, it has lasting effects on both. This interstitial space becomes
    all the more important as fewer people are willing or able to rely on
    traditional institutions for their economic security. For, within it,
    personal and digital-based networks can and must be developed as
    alternatives, regardless of whether they prove sustainable for the long
    term. As a result, more and more actors, each with their own claims to
    meaning, have been rushing away from the private personal sphere into
    this new interstitial space. By now, this has become such a normal
    practice that whoever is *not* active in this ever-expanding
    interstitial space, which is rapidly becoming the main social sphere --
    whoever, that is, lacks a publicly visible profile on social mass media
    like Facebook, or does not number among those producing information and
    meaning and is thus so inconspicuous online as []{#Page_22
    type="pagebreak" title="22"}to yield no search results -- now stands out
    in a negative light (or, in far fewer cases, acquires a certain prestige
    on account of this very absence).
    :::

    ::: {.section}
    ### The erosion of heteronormativity {#c1-sec-0004}

    In this (sometimes more, sometimes less) public space for the continuous
    production of social meaning (and its exploitation), there is no
    question that the professional middle class is
    over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
    short-sighted, however, to reduce those seeking autonomy and the
    recognition of individuality and social diversity to the role of poster
    children for the new spirit of
    capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
    movements, for instance, initiated a social shift that has allowed an
    increasing number of people to demand, if nothing else, the right to
    participate in social life in a self-determined manner; that is,
    according to their own standards and values.

    Especially effective was the critique of patriarchal and heteronormative
    power relations, modes of conduct, and
    identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
    political upheavals at the end of the 1960s, the new women\'s and gay
    movements developed into influential actors. Their greatest achievement
    was to establish alternative cultural forms, lifestyles, and strategies
    of action in or around the mainstream of society. How this was done can
    be demonstrated by tracing, for example, the development of the gay
    movement in West Germany.

    In the fall of 1969, the liberalization of Paragraph 175 of the German
    Criminal Code came into effect. From then on, sexual activity between
    adult men was no longer punishable by law (women were not mentioned in
    this context). For the first time, a man could now express himself as a
    homosexual outside of semi-private space without immediately being
    exposed to the risk of criminal prosecution. This was a necessary
    precondition for the ability to defend one\'s own rights. As early as
    1971, the struggle for the recognition of gay life experiences reached
    the broader public when Rosa von Praunheim\'s film *It Is Not the
    Homosexual Who Is Perverse, but the Society in Which He Lives* was
    screened at the Berlin International Film Festival and then, shortly
    thereafter, broadcast on public television in North Rhine-Westphalia.
    The film, which is firmly situated in the agitprop tradition,
    []{#Page_23 type="pagebreak" title="23"}follows a young provincial man
    through the various milieus of Berlin\'s gay subcultures: from a
    monogamous relationship to nightclubs and public bathrooms until, at the
    end, he is enlightened by a political group of men who explain that it
    is not possible to lead a free life in a niche, as his own emancipation
    can only be achieved by a transformation of society as a whole. The film
    closes with a not-so-subtle call to action: "Out of the closets, into
    the streets!" Von Praunheim understood this emancipation to be a process
    that encompassed all areas of life and had to be carried out in public;
    it could only achieve success, moreover, in solidarity with other
    freedom movements such as the Black Panthers in the United States and
    the new women\'s movement. The goal, according to this film, is to
    articulate one\'s own identity as a specific and differentiated identity
    with its own experiences, values, and reference systems, and to anchor
    this identity within a society that not only tolerates it but also
    recognizes it as having equal validity.

    At first, however, the film triggered vehement controversies, even
    within the gay scene. The objection was that it attacked the gay
    subculture, which was not yet prepared to defend itself publicly against
    discrimination. Despite or (more likely) because of these controversies,
    more than 50 groups of gay activists soon formed in Germany. Such
    groups, largely composed of left-wing alternative students, included,
    for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
    Zelle Schwul (RotZSchwul) in Frankfurt am
    Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
    was to have Paragraph 175 struck entirely from the legal code (which was
    not achieved until 1994). This cause was framed within a general
    struggle to overcome patriarchy and capitalism. At the earliest gay
    demonstrations in Germany, which took place in Münster in April 1972,
    protesters rallied behind the following slogan: "Brothers and sisters,
    gay or not, it is our duty to fight capitalism." This was understood as
    a necessary subordination to the greater struggle against what was known
    in the terminology of left-wing radical groups as the "main
    contradiction" of capitalism (that between capital and labor), and it
    led to strident differences within the gay movement. The dispute
    escalated during the next year. After the so-called *Tuntenstreit*, or
    "Battle of the Queens," which was []{#Page_24 type="pagebreak"
    title="24"}initiated by activists from Italy and France who had appeared
    in drag at the closing ceremony of the HAW\'s Spring Meeting in West
    Berlin, the gay movement was divided, or at least moving in a new
    direction. At the heart of the matter were the following questions: "Is
    there an inherent (many speak of an autonomous) position that gays hold
    with respect to the issue of homosexuality? Or can a position on
    homosexuality only be derived in association with the traditional
    workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
    words, was discrimination against homosexuality part of the social
    divide caused by capitalism (that is, one of its "ancillary
    contradictions") and thus only to be overcome by overcoming capitalism
    itself, or was it something unrelated to the "essence" of capitalism, an
    independent conflict requiring different strategies and methods? This
    conflict could never be fully resolved, but the second position, which
    was more interested in overcoming legal, social, and cultural
    discrimination than in struggling against economic exploitation, and
    which focused specifically on the social liberation of gays, proved to
    be far more dynamic in the long term. This was not least because both
    the old and new left were themselves not free of homophobia and because
    the entire radical student movement of the 1970s fell into crisis.

    Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
    realized through the efforts of artistic and (increasingly) commercial
    producers of images, texts, and
    sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
    intellectuals developed a language with which they could speak
    assertively in public about topics that had previously been taboo.
    Inspired by the expression "gay pride," which originated in the United
    States, they began to use the term *schwul* ("gay"), which until then
    had possessed negative connotations, with growing confidence. They
    founded numerous gay and lesbian cultural initiatives, theaters,
    publishing houses, magazines, bookstores, meeting places, and other
    associations in order to counter the misleading or (in their eyes)
    outright false representations of the mass media with their own
    multifarious media productions. In doing so, they typically followed a
    dual strategy: on the one hand, they wanted to create a space for the
    members of the movement in which it would be possible to formulate and
    live different identities; on the other hand, they were fighting to be
    accepted by society at large. While []{#Page_25 type="pagebreak"
    title="25"}a broader and broader spectrum of gay positions, experiences,
    and aesthetics was becoming visible to the public, the connection to
    left-wing radical contexts became weaker. Founded as early as 1974, and
    likewise in West Berlin, the General Homosexual Working Group
    (Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
    politics into mainstream society by defining the latter -- on the basis
    of bourgeois, individual rights -- as a "politics of
    anti-discrimination." These efforts achieved a milestone in 1980 when,
    in the run-up to the parliamentary election, a podium discussion was
    held with representatives of all major political parties on the topic of
    the law governing sexual offences. The discussion took place in the
    Beethovenhalle in Bonn, which was the largest venue for political events
    in the former capital. Several participants considered the event to be a
    "disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
    of internal conflicts (not least that between revolutionary and
    integrative positions). Yet the fact remains that representatives were
    present from every political party, and this alone was indicative of an
    unprecedented amount of public awareness for those demanding equal
    rights.

    The struggle against discrimination and for social recognition reached
    an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
    the magazine *Der Spiegel* devoted its first cover story to the disease,
    thus bringing it to the awareness of the broader public. In the same
    year, the non-profit organization Deutsche Aids-Hilfe was founded to
    prevent further cases of discrimination, for *Der Spiegel* was not the
    only publication at the time to refer to AIDS as a "homosexual
    epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
    HIV/AIDS required a comprehensive mobilization. Funding had to be raised
    in order to deal with the social repercussions of the epidemic, to teach
    people about safe sexual practices for everyone and to direct research
    toward discovering causes and developing potential cures. The immediate
    threat that AIDS represented, especially while so little was known about
    the illness and its treatment remained a distant hope, created an
    impetus for mobilization that led to alliances between the gay movement,
    the healthcare system, and public authorities. Thus, the AIDS Inquiry
    Committee, sponsored by the conservative Christian Democratic Union,
    concluded in 1988 that, in the fight against the illness, "the
    homosexual subculture is []{#Page_26 type="pagebreak"
    title="26"}especially important. This informal structure should
    therefore neither be impeded nor repressed but rather, on the contrary,
    recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
    crisis proved to be a catalyst for advancing the integration of gays
    into society and for expanding what could be regarded as acceptable
    lifestyles, opinions, and cultural practices. As a consequence,
    homosexuals began to appear more frequently in the media, though their
    presence would never match that of heterosexuals. As of 1985, the
    television show *Lindenstraße* featured an openly gay protagonist, and
    the first kiss between men was aired in 1987. The episode still provoked
    a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
    second time -- but this was already a rearguard action and the
    integration of gays (and lesbians) into the social mainstream continued.
    In 1993, the first gay and lesbian city festival took place in Berlin,
    and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
    Cologne Pride Day involved 1.2 million participants and attendees, thus
    surpassing for the first time the attendance at the traditional Rose
    Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
    was already prepared to maintain: "To be homosexual has become
    increasingly normalized, even if homophobia lives on in the depths of
    the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
    normalization was also reflected in a study published by the Ministry of
    Justice in the year 2000, which stressed "the similarity between
    homosexual and heterosexual relationships" and, on this basis, made an
    argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
    Around the year 2000, however, the classical gay movement had already
    passed its peak. A profound transformation had begun to take place in
    the middle of the 1990s. It lost its character as a new social movement
    (in the style of the 1970s) and began to splinter inwardly and
    outwardly. One could say that it transformed from a mass movement into a
    multitude of variously networked communities. The clearest sign of this
    transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
    transgender), which, since the mid-1990s, has represented the internal
    heterogeneity of the movement as it has shifted toward becoming a
    network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
    radical actors were already speaking against the normalization of
    homosexuality. Queer theory, for example, was calling into question the
    "essentialist" definition of gender []{#Page_27 type="pagebreak"
    title="27"}-- that is, any definition reducing it to an immutable
    essence -- with respect to both its physical dimension (sex) and its
    social and cultural dimension (gender
    proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
    for the articulation of experiences, self-descriptions, and lifestyles
    that, on every level, are located beyond the classical attributions of
    men and women. A new generation of intellectuals, activists, and artists
    took the stage and developed -- yet again through acts of aesthetic
    self-empowerment -- a language that enabled them to import, with
    confidence, different self-definitions into the public sphere. An
    example of this is the adoption of inclusive plural forms in German
    (*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
    attention to the gaps and possibilities between male and female
    identities that are also expressed in the language itself. Just as with
    the terms "gay" or *schwul* some 30 years before, in this case, too, an
    important element was the confident and public adoption and semantic
    conversion of a formerly insulting word ("queer") by the very people and
    communities against whom it used to be
    directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
    these developments was the simultaneity of social (amateur) and
    artistic/scientific (professional) cultural production. The goal,
    however, was less to produce a clear antithesis than it was to oppose
    rigid attributions by underscoring mutability, hybridity, and
    uniqueness. Both the scope of what could be expressed in public and the
    circle of potential speakers expanded yet again. And, at least to some
    extent, the drag queen Conchita Wurst popularized complex gender
    constructions that went beyond the simple woman/man dualism. All of that
    said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
    lives on in the depths of the collective disposition" -- continued to
    hold true.

    If the gay movement is representative of the social liberation of the
    1970s and 1980s, then it is possible to regard its transformation into
    the LGBT movement during the 1990s -- with its multiplicity and fluidity
    of identity models and its stress on mutability and hybridity -- as a
    sign of the reinvention of this project within the context of an
    increasingly dominant digital condition. With this transformation,
    however, the diversification and fluidification of cultural practices
    and social roles have not yet come to an end. Ways of life that were
    initially subcultural and facing existential pressure []{#Page_28
    type="pagebreak" title="28"}are gradually entering the mainstream. They
    are expanding the range of readily available models of identity for
    anyone who might be interested, be it with respect to family forms
    (e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
    vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
    other principles of life and belief. All of them are seeking public
    recognition for a new frame of reference for social meaning that has
    originated from their own activity. This is necessarily a process
    characterized by conflicts and various degrees of resistance, including
    right-wing populism that seeks to defend "traditional values," but many
    of these movements will ultimately succeed in providing more people with
    the opportunity to speak in public, thus broadening the palette of
    themes that are considered to be important and legitimate.
    :::

    ::: {.section}
    ### Beyond center and periphery {#c1-sec-0005}

    In order to reach a better understanding of the complexity involved in
    the expanding social basis of cultural production, it is necessary to
    shift yet again to a different level. For, just as it would be myopic to
    examine the multiplication of cultural producers only in terms of
    professional knowledge workers from the middle class, it would likewise
    be insufficient to situate this multiplication exclusively in the
    centers of the West. The entire system of categories that justified the
    differentiation between the cultural "center" and the cultural
    "periphery" has begun to falter. This complex and multilayered process
    has been formulated and analyzed by the theory of "post-colonialism."
    Long before digital media made the challenge of cultural multiplicity a
    quotidian issue in the West, proponents of this theory had developed
    languages and terminologies for negotiating different positions without
    needing to impose a hierarchical order.

    Since the 1970s, the theoretical current of post-colonialism has been
    examining the cultural and epistemic dimensions of colonialism that,
    even after its end as a territorial system, have remained responsible
    for the continuation of dependent relations and power differentials. For
    my purposes -- which are to develop a European perspective on the
    factors ensuring that more and more people are able to participate in
    cultural []{#Page_29 type="pagebreak" title="29"}production -- two
    points are especially relevant because their effects reverberate in
    Europe itself. First is the deconstruction of the categories "West" (in
    the sense of the center) and "East" (in the sense of the periphery). And
    second is the focus on hybridity as a specific way for non-Western
    actors to deal with the dominant cultures of former colonial powers,
    which have continued to determine significant portions of globalized
    culture. The terms "West" and "East," "center" and "periphery," do not
    simply describe existing conditions; rather, they are categories that
    contribute, in an important way, to the creation of the very conditions
    that they presume to describe. This may sound somewhat circular, but it
    is precisely from this circularity that such cultural classifications
    derive their strength. The world that they illuminate is immersed in
    their own light. The category "East" -- or, to use the term of the
    literary theorist Edward Said,
    "orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
    representation that pervades Western thinking. Within this system,
    Europe or the West (as the center) and the East (as the periphery)
    represent asymmetrical and antithetical concepts. This construction
    achieves a dual effect. As a self-description, on the one hand, it
    contributes to the formation of our own identity, for Europeans
    attribute to themselves and to their continent such features as
    "rationality," "order," and "progress," while on the other hand
    identifying the alternative with "superstition," "chaos," or
    "stagnation." The East, moreover, is used as an exotic projection screen
    for our own suppressed desires. According to Said, a representational
    system of this sort can only take effect if it becomes "hegemonic"; that
    is, if it is perceived as self-evident and no longer as an act of
    attribution but rather as one of description, even and precisely by
    those against whom the system discriminates. Said\'s accomplishment is
    to have worked out how far-reaching this system was and, in many areas,
    it remains so today. It extended (and extends) from scientific
    disciplines, whose researchers discussed (until the 1980s) the theory of
    "oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
    and art -- the motif of the harem was especially popular, particularly
    in paintings of the late nineteenth
    century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
    culture, where, as of 1913 in the United States, the cigarette brand
    Camel (introduced to compete with the then-leading brand, Fatima) was
    meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
    sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
    system of representation, however, was more than a means of describing
    oneself and others; it also served to legitimize the allocation of all
    knowledge and agency to one side, that of the West. Such an order was
    not restricted to culture; it also created and legitimized a sense of
    domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
    This cultural legitimation, as Said points out, also persists after the
    end of formal colonial domination and continues to marginalize the
    postcolonial subjects. As before, they are unable to speak for
    themselves and therefore remain in the dependent periphery, which is
    defined by their subordinate position in relation to the center. Said
    directed the focus of critique to this arrangement of center and
    periphery, which he saw as being (re)produced and legitimized on the
    cultural level. From this arose the demand that everyone should have the
    right to speak, to place him- or herself in the center. To achieve this,
    it was necessary first of all to develop a language -- indeed, a
    cultural landscape -- that can manage without a hegemonic center and is
    thus oriented toward multiplicity instead of
    uniformity.[^43^](#c1-note-0043){#c1-note-0043a}

    A somewhat different approach has been taken by the literary theorist
    Homi K. Bhabha. He proceeds from the idea that the colonized never
    adopt the culture of the colonialists -- the "English book," as he
    calls it -- in a fully passive manner. Their previous culture is never
    simply wiped out and
    replaced by another. What always and necessarily occurs is rather a
    process of hybridization. This concept, according to Bhabha,

    ::: {.extract}
    suggests that all of culture is constructed around negotiations and
    conflicts. Every cultural practice involves an attempt -- sometimes
    good, sometimes bad -- to establish authority. Even classical works of
    art, such as a painting by Brueghel or a composition by Beethoven, are
    concerned with the establishment of cultural authority. Now, this poses
    the following question: How does one function as a negotiator when
    one\'s own sense of agency is limited, for instance, on account of being
    excluded or oppressed? I think that, even in the role of the underdog,
    there are opportunities to upend the imposed cultural authorities -- to
    accept some aspects while rejecting others. It is in this way that
    symbols of authority are hybridized and made into something of one\'s
    own. For me, hybridization is not simply a mixture but rather a
    []{#Page_31 type="pagebreak" title="31"}strategic and selective
    appropriation of meanings; it is a way to create space for negotiators
    whose freedom and equality are
    endangered.[^44^](#c1-note-0044){#c1-note-0044a}
    :::

    Hybridization is thus a cultural strategy for evading marginality that
    is imposed from the outside: subjects to whom the dominant perspective
    denies this very capacity appropriate certain aspects of culture for
    themselves and transform them into something else. What is decisive is
    that this hybrid, created by means of active and unauthorized
    appropriation, opposes the dominant version and the resulting speech is
    thus legitimized from another -- that is, from one\'s own -- position.
    In this way, a cultural engagement is set under way and the superiority
    of one meaning or another is called into question. Who has the right to
    determine how and why a relationship with others should be entered,
    which resources should be appropriated from them, and how these
    resources should be used? At the heart of the matter lie the abilities
    of speech and interpretation; these can be seized in order to create
    space for a "cultural hybridity that entertains difference without an
    assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}

    At issue is thus a strategy for breaking down hegemonic cultural
    conditions, which distribute agency in a highly uneven manner, and for
    turning one\'s own cultural production -- which has been dismissed by
    cultural authorities as flawed, misconceived, or outright ignorant --
    into something negotiable and independently valuable. Bhabha is thus
    interested in fissures, differences, diversity, multiplicity, and
    processes of negotiation that generate something like shared meaning --
    culture, as he defines it -- instead of conceiving of it as something
    that precedes these processes and is threatened by them. Accordingly, he
    proceeds not from the idea of a unity that needs to be preserved and is
    threatened whenever "others" are empowered to speak, but rather
    from the irreducible multiplicity that, through laborious processes, can
    be brought into temporary and limited consensus. Bhabha\'s vision of
    culture is one without immutable authorities, interpretations, and
    truths. In theory, everything can be brought to the table. This is not a
    situation in which anything goes, yet the central meaning of
    negotiation, the contextuality of consensus, and the mutability of every
    frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
    which can be shared equally by everyone -- are always potentially
    negotiable.

    Post-colonialism draws attention to the "disruptive power of the
    excluded-included third," which becomes especially virulent when it
    "emerges in the middle of semantic
    structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
    this power reveals the increasing cultural independence of those
    formerly colonized, and it also transforms the cultural self-perception
    of the West, for, even in Western nations that were not significant
    colonial powers, there are multifaceted tensions between dominant
    cultures and those who are on the defensive against discrimination and
    attributions by others. Instead of relying on the old recipe of
    integration through assimilation (that is, the dissolution of the
    "other"), the right to self-determined difference is being called for
    more emphatically. In such a manner, collective identities, such as
    national identities, are freed from their questionable appeals to
    cultural homogeneity and essentiality, and reconceived in terms of the
    experience of immanent difference. Instead of one binding and
    non-negotiable frame of reference for everyone, which hierarchizes
    individual positions and makes them appear unified, a new order without
    such limitations needs to be established. Ultimately, the aim is to
    provide nothing less than an "alternative reading of
    modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
    the construction of the past and the modalities of the future. For
    European culture in particular, such a project is an immense challenge.

    Of course, these demands do not derive their everyday relevance
    primarily from theory but rather from the experiences of
    (de)colonization, migration, and globalization. Multifaceted as it is,
    however, the theory does provide forms and languages for articulating
    these phenomena, legitimizing new positions in public debates, and
    attacking persistent mechanisms of cultural marginalization. It helps to
    empower broader societal groups to become actively involved in cultural
    processes, namely people, such as migrants and their children, whose
    identity and experience are essentially shaped by non-Western cultures.
    The latter have been giving voice to their experiences more frequently
    and with greater confidence in all areas of public life, be it in
    politics, literature, music, or
    art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
    films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
    to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
    experience of immigration is represented as part of the German
    experience, have reached a wide public audience. In 2002, the group
    Kanak Attak organized a series of conferences with the telling motto *no
    integración*, and these did much to introduce postcolonial positions to
    the debates taking place in German-speaking
    countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
    politicians with "migration backgrounds" were considered to be competent
    in only one area, namely integration policy. This has since changed,
    though not entirely. In 2008, for instance, Cem Özdemir was elected
    co-chair of the Green Party and thus shares responsibility for all of
    its political positions. Developments of this sort have been enabled
    (and strengthened) by a shift in society\'s self-perception. In 2014,
    Cemile Giousouf, the integration commissioner for the conservative
    CDU/CSU alliance in the German Parliament, was able to make the
    following statement without inciting any controversy: "Over the past few
    years, Germany has become a modern land of
    immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
    proclamation. Not ten years earlier, her party colleague Norbert Lammert
    had expressed, in his function as parliamentary president, interest in
    reviving the debate about the term "leading culture." The increasingly
    well-educated migrants of the first, second, or third generation no
    longer accept the choice of being either marginalized as an exotic
    representative of the "other" or entirely assimilated. Rather, they are
    insisting on being able to introduce their specific experience as a
    constitutive contribution to the formation of the present -- in
    association and in conflict with other contributions, but at the same
    level and with the same legitimacy. It is no surprise that various forms
    of discrimination and violence against "foreigners" not only continue
    in everyday life but have also been increasing in reaction to this new
    situation. Ultimately, established claims to power are being called into
    question.

    To summarize, at least three secular historical tendencies or movements,
    some of which can be traced back to the late nineteenth century but each
    of which gained considerable momentum during the last third of the
    twentieth (the spread of the knowledge economy, the erosion of
    heteronormativity, and the focus of post-colonialism on cultural
    hybridity), have greatly expanded the sphere of those who actively
    negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
    large part, the patterns and cultural foundations of these processes
    developed long before the internet. Through the use of the internet, and
    through the experiences of dealing with it, they have encroached upon
    far greater portions of all societies.
    :::
    :::

    ::: {.section}
    The Culturalization of the World {#c1-sec-0006}
    --------------------------------

    The number of participants in cultural processes, however, is not the
    only thing that has increased. Parallel to that development, the field
    of the cultural has expanded as well -- that is, those areas of life
    that are not simply characterized by unalterable necessities, but rather
    contain or generate competing options and thus require conscious
    decisions.

    The term "culturalization of the economy" refers to the central position
    of knowledge-based, meaning-based, and affect-oriented processes in the
    creation of value. With the emergence of consumption as the driving
    force behind the production of goods and the concomitant necessity of
    having not only to satisfy existing demands but also to create new ones,
    the cultural and affective dimensions of the economy began to gain
    significance. I have already discussed the beginnings of product
    staging, advertising, and public relations. In addition to all of the
    continuities that remain with us from that time, it is also possible to
    point out a number of major changes that consumer society has undergone
    since the late 1960s. These changes can be delineated by examining the
    greater role played by design, which has been called the "core
    discipline of the creative
    economy."[^51^](#c1-note-0051){#c1-note-0051a}

    As a field of its own, design originated alongside industrialization,
    when, in collaborative processes, the activities of planning and
    designing were separated from those of carrying out
    production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
    modern era that designers consciously endeavored to seek new forms for
    the logic inherent to mass production. With the aim of economic
    efficiency, they intended their designs to optimize the clearly defined
    functions of anonymous and endlessly reproducible objects. At the end of
    the nineteenth century, the architect Louis Sullivan, whose buildings
    still distinguish the skyline of Chicago, condensed this new attitude
    into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
    follows function." Mies van der Rohe, working as an architect in Chicago
    in the middle of the twentieth century, supplemented this with a pithy
    and famous formulation of his own: "less is more." The rationality of
    design, in the sense of isolating and improving specific functions, and
    the economical use of resources were of chief importance to modern
    (industrial) designers. Even the ten design principles of Dieter Rams,
    who led the design division of the consumer products company Braun from
    1965 to 1991 -- one of the main sources of inspiration for Jonathan Ive,
    Apple\'s chief design officer -- aimed to make products "usable,"
    "understandable," "honest," and "long-lasting." "Good design," according
    to his guiding principle, "is as little design as
    possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
    the technical and functional promised to solve problems for everyone in
    a long-term and binding manner, for the inherent material and design
    qualities of an object were supposed to make it independent from
    changing times and from the tastes of consumers.

    ::: {.section}
    ### Beyond the object {#c1-sec-0007}

    At the end of the 1960s, a new generation of designers rebelled against
    this industrial and instrumental rationality, which was now felt to be
    authoritarian, soulless, and reductionist. In the works associated with
    "anti-design" or "radical design," the objectives of the discipline were
    redefined and a new formal language was developed. In the place of
    technical and functional optimization, recombination -- ecological
    recycling or the postmodern interplay of forms -- emerged as a design
    method and aesthetic strategy. Moreover, the aspiration of design
    shifted from the individual object to its entire social and material
    environment. The processes of design and production, which had been
    closed off from one another and restricted to specialists, were opened
    up precisely to encourage the participation of non-designers, be it
    through interdisciplinary cooperation with other types of professions or
    through the empowerment of laymen. The objectives of design were
    radically expanded: rather than ending with the completion of an
    individual product, it was now supposed to engage with society. In the
    sense of cybernetics, this was regarded as a "system," controlled by
    feedback processes, []{#Page_36 type="pagebreak" title="36"}which
    connected social, technical, and biological dimensions to one
    another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
    new approach, was meant to be a "socially significant
    activity."[^55^](#c1-note-0055){#c1-note-0055a}

    Embedded in the social movements of the 1960s and 1970s, this new
    generation of designers was curious about the social and political
    potential of their discipline, and about possibilities for promoting
    flexibility and autonomy instead of rigid industrial efficiency. Design
    was no longer expected to solve problems once and for all, for such an
    idea did not correspond to the self-perception of an open and mutable
    society. Rather, it was expected to offer better opportun­ities for
    enabling people to react to continuously changing conditions. A radical
    proposal was developed by the Italian designer Enzo Mari, who in 1974
    published his handbook *Autoprogettazione* (Self-Design). It contained
    19 simple designs with which people could make, on their own,
    aesthetically and functionally sophisticated furniture out of pre-cut
    pieces of wood. In this case, the designs themselves were less important
    than the critique of conventional design as elitist and of consumer
    society as alienated and wasteful. Mari\'s aim was to reconceive the
    relations among designers, the manufacturing industry, and users.
    Increasingly, design came to be understood as a holistic and open
    process. Victor Papanek, the founder of ecological design, took things a
    step further. For him, design was "basic to all human activity. The
    planning and patterning of any act towards a desired, foreseeable end
    constitutes the design process. Any attempt to separate design, to make
    it a thing-by-itself, works counter to the inherent value of design as
    the primary underlying matrix of
    life."[^56^](#c1-note-0056){#c1-note-0056a}

    Potentially all aspects of life could therefore fall under the purview
    of design. This came about from the desire to oppose industrialism,
    which was blind to its catastrophic social and ecological consequences,
    with a new and comprehensive manner of seeing and acting that was
    unrestricted by economics.

    Toward the end of the 1970s, this expanded notion of design owed less
    and less to emancipatory social movements, and its socio-political goals
    began to fall by the wayside. Three fundamental patterns survived,
    however, which go beyond design and remain characteristic of the
    culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
    the discovery of the public as emancipated users and active
    participants; the use of appropriation, transformation, and
    recombination as methods for creating ever-new aesthetic
    differentiations; and, finally, the intention of shaping the lifeworld
    of the user.[^57^](#c1-note-0057){#c1-note-0057a}

    As these patterns became depoliticized and commercialized, the focus of
    designing the "lifeworld" shifted more and more toward designing the
    "experiential world." By the end of the 1990s, this had become so
    normalized that even management consultants could assert that
    "\[e\]xperiences represent an existing but previously unarticulated
    *genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
    possible to define the dimensions of the experiential world in various
    ways. For instance, it could be clearly delimited and product-oriented,
    like the flagship stores introduced by Nike in 1990, which, with their
    elaborate displays, were meant to turn shopping into an experience. This
    experience, as the company\'s executives hoped, radiated outward and
    influenced how the brand was perceived as a whole. The experiential
    world could also, however, be conceived in somewhat broader terms, for
    instance by designing entire institutions around the idea of creating a
    more attractive work environment and thereby increasing the commitment
    of employees. This approach is widespread today in creative industries
    and has become popularized through countless stories about ping-pong
    tables, gourmet cafeterias, and massage rooms in certain offices. In
    this case, the process of creativity is applied back to itself in order
    to systematize and optimize a given workplace\'s basis of operation. The
    development is comparable to the "invention of invention" that
    characterized industrial research around the end of the nineteenth
    century, though now the concept has been relocated to the field of
    knowledge production.

    Yet the "experiential world" can be expanded even further, for instance
    when entire cities attempt to make themselves attractive to
    international clientele and compete with others by building spectacular
    museums or sporting arenas. Displays in cities, as well as a few other
    central locations, are regularly constructed in order to produce a
    particular experience. This also entails, however, that certain forms of
    use that fail to fit the "urban
    script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
    or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
    is hardly a single area of life to []{#Page_38 type="pagebreak"
    title="38"}which the strategies and methods of design do not have
    access, and this access occurs at all levels. For some time, design has
    not been a purely visible matter, restricted to material objects; it
    rather forms and controls all of the senses. Cities, for example, have
    come to be understood increasingly as "sound spaces" and have
    accordingly been reconfigured with the goal of modulating their various
    noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
    just a matter of objects, processes, and experiences. By now, in the
    context of reproductive medicine, it has even been applied to the
    biological foundations of life ("designer babies"). I will revisit this
    topic below.
    :::

    ::: {.section}
    ### Culture everywhere {#c1-sec-0008}

    Of course, design is not the only field of culture that has imposed
    itself on society as a whole. A similar development has occurred in
    the field of advertising, which, since the 1970s, has been integrated
    into many more physical and social spaces and by now has a broad range
    of methods at its disposal. Advertising is no longer found simply on
    billboards or in display windows. In the form of "guerilla marketing" or
    "product placement," it has penetrated every space and occupied every
    discourse -- by blending with political messages, for instance -- and
    can now even be spread, as "viral marketing," by the addressees of the
    advertisements themselves. Similar processes can be observed in the
    fields of art, fashion, music, theater, and sports. This has taken place
    perhaps most radically in the field of "gaming," which has drawn upon
    technical progress in the most direct possible manner and, with the
    spread of powerful computers and mobile applications, has left behind
    the confines of the traditional playing field. In alternate reality
    games, the realm of the virtual and fictitious has also been
    transcended, as physical spaces have been overlaid with their various
    scripts.[^62^](#c1-note-0062){#c1-note-0062a}

    This list could be extended, but the basic trend is clear enough,
    especially as the individual fields overlap and mutually influence one
    another. They are blending into a single interdependent field for
    generating social meaning in the form of economic activity. Moreover,
    through digitalization and networking, many new opportunities have
    arisen for large-scale involvement by the public in design processes.
    Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
    technologies and flexible production processes, today\'s users can
    personalize and create products to suit their wishes. Here, the spectrum
    extends from tiny batches of creative-industrial products all the way to
    global processes of "mass customization," in which factory-based mass
    production is combined with personalization. One of the first
    applications of this was introduced in 1999 when, through its website, a
    sporting-goods company allowed customers to design certain elements of a
    shoe by altering it within a set of guidelines. This was taken a step
    further by the idea of "user-centered innovation," which relies on the
    specific knowledge of users to enhance a product, with the additional
    hope of discovering unintended applications and transforming these into
    new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
    become possible for end users to take over the design process from the
    beginning, which has become considerably easier with the advent of
    specialized platforms for exchanging knowledge, alongside semi-automated
    production tools such as mechanical mills and 3D printers.
    Digitalization, which has allowed all content to be processed, and
    networking, which has created an endless amount of content ("raw
    material"), have turned appropriation and recombination into general
    methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
    This phenomenon will be examined more closely in the next chapter.
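
    To make the logic of such "personalization within guidelines" concrete,
    the following minimal sketch shows how a configurator of the kind
    described above might validate a customer's design against a
    manufacturer-defined option space. The product, option names, and
    values are hypothetical illustrations, not taken from any actual
    configurator.

    ```python
    # Minimal sketch of mass customization: the customer may vary a product,
    # but only inside a fixed, manufacturer-defined option space.
    # All option names and values are hypothetical.

    ALLOWED_OPTIONS = {
        "base_color": {"white", "black", "red"},
        "lace_color": {"white", "black"},
    }
    MAX_MONOGRAM_LENGTH = 8  # free text, but still bounded by a guideline

    def configure_shoe(choices):
        """Accept a design only if every choice stays within the guidelines."""
        for field, value in choices.items():
            if field == "monogram":
                if len(value) > MAX_MONOGRAM_LENGTH:
                    raise ValueError("monogram exceeds the allowed length")
            elif field in ALLOWED_OPTIONS:
                if value not in ALLOWED_OPTIONS[field]:
                    raise ValueError(f"{field}={value!r} is outside the guidelines")
            else:
                raise ValueError(f"{field!r} is not customizable at all")
        return {"status": "accepted", **choices}

    print(configure_shoe({"base_color": "red", "lace_color": "white", "monogram": "FS"}))
    ```

    The design choice worth noting is that the user's agency is real but
    strictly bounded: the constraints of factory mass production survive as
    the fixed option space, while personalization happens only inside it.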

    Both the involvement of users in the production process and the methods
    of appropriation and recombination are extremely information-intensive
    and communication-intensive. Without the corresponding technological
    infrastructure, neither could be achieved efficiently or on a large
    scale. This was evident in the 1970s, when such approaches never made it
    beyond subcultures and conceptual studies. With today\'s search engines,
    every single user can trawl through an amount of information that, just
    a generation ago, would have been unmanageable even by professional
    archivists. A broad array of communication platforms (together with
    flexible production capacities and efficient logistics) not only weakens
    the contradiction between mass fabrication and personalization; it also
    allows users to network directly with one another in order to develop
    specialized knowledge together and thus to enable themselves to
    intervene directly in design processes, both as []{#Page_40
    type="pagebreak" title="40"}willing participants in and as critics of
    flexible global production processes.
    :::
    :::

    ::: {.section}
    The Technologization of Culture {#c1-sec-0009}
    -------------------------------

    That society is dependent on complex information technologies in order
    to organize its constitutive processes is, in itself, nothing new.
    Rather, this began as early as the late nineteenth century. It is
    directly correlated with the expansion and acceleration of the
    circulation of goods, which came about through industrialization. As the
    historian and sociologist James Beniger has noted, this led to a
    "control crisis," for administrative control centers were faced with the
    problem of losing sight of what was happening in their own factories,
    with their suppliers, and in the important markets of the time.
    Management was in a bind: decisions had to be made either on the basis
    of insufficient information or too late. The existing administrative and
    control mechanisms could no longer deal with the rapidly increasing
    complexity and time-sensitive nature of extensively organized production
    and distribution. The office became more important, and ever more people
    were needed there to fulfill a growing number of functions. Yet this was
    not enough for the crisis to subside. The old administrative methods,
    which involved manual information processing, simply could no longer
    keep up. The crisis reached its first dramatic peak in 1889 in the
    United States, with the realization that the census data from the year
    1880 had not yet been analyzed when the next census was already
    scheduled to take place during the subsequent year. In the same year,
    the Secretary of the Interior organized a conference to investigate
    faster methods of data processing. Two methods were tested: one made
    manual labor more efficient, while the other promised to achieve
    greater efficiency by means of novel data-processing machines. The
    latter system emerged as the clear victor; developed by an engineer
    named Herman Hollerith, it mechanically processed and stored data on
    punch cards. The idea was based on Hollerith\'s observations of the
    coupling and decoupling of railroad cars, which he interpreted as
    modular units that could be combined in any desired order. The punch
    card transferred this approach to information []{#Page_41
    type="pagebreak" title="41"}management. Data were no longer stored in
    fixed, linear arrangements (tables and lists) but rather in small units
    (the punch cards) that, like railroad cars, could be combined in any
    given way. The increase in efficiency -- with respect to speed *and*
    flexibility -- was enormous, and nearly a hundred of Hollerith\'s
    machines were used by the Census
    Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
    in the history of information processing, with technical means no longer
    being used exclusively to store data, but to process data as well. This
    was the only way to avoid the impending crisis, ensuring that
    bureaucratic management could maintain centralized control. Hollerith\'s
    machines proved to be a resounding success and were implemented in many
    more branches of government and corporate administration, where
    data-intensive processes had increased so rapidly they could not have
    been managed without such machines. This growth was accompanied by that
    of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
    which, after a number of mergers, was renamed in 1924 as the
    International Business Machines Corporation (IBM). Throughout the
    following decades, dependence on information-processing machines only
    deepened. The growing number of social, commercial, and military
    processes could only be managed by means of information technology. This
    largely took place, however, outside of public view, namely in the
    specialized divisions of large government and private organizations.
    These were the only institutions in command of the necessary resources
    for operating the complex technical infrastructure -- so-called
    mainframe computers -- that was essential to automatic information
    processing.
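
    The shift that Hollerith's punch cards introduced -- from fixed, linear
    arrangements to freely recombinable units -- can be illustrated with a
    minimal sketch. The census fields and values below are hypothetical; the
    point is only that the same self-contained records can be "run through
    the machine" grouped by any combination of attributes.

    ```python
    # Minimal sketch of record-oriented processing in the spirit of the punch
    # card: each record is a self-contained unit, so the same data can be
    # tabulated in any desired order, unlike a fixed table or list.
    # Field names and values are hypothetical.
    from collections import Counter

    records = [
        {"state": "NY", "occupation": "clerk",  "age_group": "20-29"},
        {"state": "NY", "occupation": "farmer", "age_group": "40-49"},
        {"state": "IL", "occupation": "clerk",  "age_group": "20-29"},
        {"state": "IL", "occupation": "farmer", "age_group": "20-29"},
    ]

    def tabulate(records, *fields):
        """Count records for every combination of the given fields -- the
        analogue of wiring the tabulating machine differently and feeding
        the same stack of cards through it again."""
        return Counter(tuple(record[f] for f in fields) for record in records)

    print(tabulate(records, "state"))                   # one grouping
    print(tabulate(records, "occupation", "age_group")) # another, same cards
    ```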

    ::: {.section}
    ### The independent media {#c1-sec-0010}

    As with so much else, this situation began to change in the 1960s. Mass
    media and information-processing technologies began to attract
    criticism, even though all of the involved subcultures, media activists,
    and hackers continued to act independently from one another until the
    1990s. The freedom-oriented social movements of the 1960s began to view
    the mass media as part of the political system against which they were
    struggling. The connections among the economy, politics, and the media
    were becoming more apparent, not []{#Page_42 type="pagebreak"
    title="42"}least because many mass media companies, especially those in
    Germany related to the Springer publishing house, were openly inimical
    to these social movements. Critical theories arose that, borrowing
    Louis Althusser\'s influential term, regarded the media as part of the
    "ideological state apparatus"; that is, as one of the authorities whose
    task is to influence people to accept social relations to such a degree
    that the "repressive state apparatuses" (the police, the military, etc.)
    form a constant background in everyday
    life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
    Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
    condition in which the governed are manipulated to form a cultural
    consensus with the ruling class; they accept the latter\'s
    presuppositions (and the politics thus justified) even though doing
    so forces them to suffer economic
    disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
    Situationists attributed to the media a central role in the new form of
    rule known as "the spectacle," the glittery surfaces and superficial
    manifestations of which served to conceal society\'s true
    relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
    aligned themselves with the critique of the "culture industry," which
    had been formulated by Max Horkheimer and Theodor W. Adorno at the
    beginning of the 1940s and had become a widely discussed key text by the
    1960s.

    Their differences aside, these perspectives were united in that they no
    longer understood the "public" as a neutral sphere, in which citizens
    could inform themselves freely and form their opinions, but rather as
    something that was created with specific intentions and consequences.
    From this grew an interest in "counter-publics"; that is, in forums
    where other actors could appear and negotiate theories of their own. The
    mass media thus became an important instrument for organizing the
    bourgeois--capitalist public, but they were also responsible for the
    development of alternatives. Media, according to one of the core ideas
    of these new approaches, are not so much a sphere in which an external
    reality is depicted; rather, they are themselves a constitutive element
    of reality.
    :::

    ::: {.section}
    ### Media as lifeworlds {#c1-sec-0011}

    Another branch of new media theories, that of Marshall McLuhan and the
    Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
    []{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
    different grounds. In 1964, McLuhan aroused a great deal of attention
    with his slogan "the medium is the message." He maintained that every
    medium of communication, by means of its media-specific characteristics,
    directly affected the consciousness, self-perception, and worldview of
    every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
    believed, happens independently of and in addition to whatever specific
    message a medium might be conveying. From this perspective, reality does
    not exist outside of media, given that media codetermine our personal
    relation to and behavior in the world. For McLuhan and the Toronto
    School, media were thus not channels for transporting content but rather
    the all-encompassing environments -- galaxies -- in which we live.

    Such ideas were circulating much earlier and were intensively developed
    by artists, many of whom were beginning to experiment with new
    electronic media. An important starting point in this regard was the
    1963 exhibit *Exposition of Music -- Electronic Television* by the
    Korean artist Nam June Paik, who was then collaborating with Karlheinz
    Stockhausen in Cologne. Among other things, Paik presented 12
    television sets, the screens of which were "distorted" by magnets. Here,
    however, "distorted" is a problematic term, for, as Paik explicitly
    noted, the electronic images were "a beautiful slap in the face of
    classic dualism in philosophy since the time of Plato. \[...\] Essence
    AND existence, essentia AND existentia. In the case of the electron,
    however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
    Paik no longer understood the electronic image on the television screen
    as a portrayal or representation of anything. Rather, it engendered in
    the moment of its appearance an autonomous reality beyond and
    independent of its representational function. A whole generation of
    artists began to explore forms of existence in electronic media, which
    they no longer understood as pure media of information. In his work
    *Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
    end of a corridor that was approximately 10 meters long but only 50
    centimeters wide. On the lower monitor ran a video showing the empty
    hallway. The upper monitor displayed an image captured by a camera
    installed at the entrance of the hall, about 3 meters high. If the
    viewer moved down the corridor toward the two []{#Page_44
    type="pagebreak" title="44"}monitors, he or she would thus be recorded
    by the latter camera. Yet the closer one came to the monitor, the
    farther one would be from the camera, so that one\'s image on the
    monitor would become smaller and smaller. Recorded from behind, viewers
    would thus watch themselves walking away from themselves. Surveillance
    by others, self-surveillance, recording, and disappearance were directly
    and intuitively connected with one another and thematized as fundamental
    issues of electronic media.

    Toward the end of the 1960s, the easier availability and mobility of
    analog electronic production technologies promoted the search for
    counter-publics and the exploration of media as comprehensive
    lifeworlds. In 1967, Sony introduced its first Portapak system: a
    battery-powered, self-contained recording system -- consisting of a
    camera, a cord, and a recorder -- with which it was possible to make
    (black-and-white) video recordings outside of a studio. Although the
    recording apparatus, which required additional devices for editing and
    projection, was offered at the relatively expensive price of \$1,500
    (which corresponds to about €8,000 today), it was still affordable for
    interested groups. Compared with the situation of traditional film
    cameras, these new cameras considerably lowered the initial hurdle for
    media production, for video tapes were not only much cheaper than film
    reels (and could be used for multiple recordings); they also made it
    possible to view recorded material immediately and on location. This
    enabled the production of works that were far more intuitive and
    spontaneous than earlier ones. The 1970s saw the formation of many video
    groups, media workshops, and other initiatives for the independent
    production of electronic media. Through their own distribution,
    festivals, and other channels, such groups created alternative public
    spheres. The latter became especially prominent in the United States
    where, at the end of the 1960s, the providers of cable networks were
    legally obligated to establish public-access channels, on which citizens
    were able to operate self-organized and non-commercial television
    programs. This gave rise to a considerable public-access movement there,
    which at one point extended across 4,000 cities and was responsible for
    producing programs from and for these different
    communities.[^72^](#c1-note-0072){#c1-note-0072a}[]{#Page_45 type="pagebreak" title="45"}

    What these initiatives had in common, in Western Europe and the
    United States, was their attempt to close the gap between the
    consumption and production of media, to activate the public, and at
    least in part to experiment with the media themselves. Non-professional
    producers were empowered with the ability to control who told their
    stories and how this happened. Groups that previously had no access to
    the mediated public sphere now had opportunities to represent themselves
    and their own interests. By working together on their own productions,
    such groups demystified the medium of television and simultaneously
    equipped it with a critical consciousness.

    Especially well received in Germany was the work of Hans Magnus
    Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
    radio theory) in favor of distinguishing between "repressive" and
    "emancipatory" uses of media. For him, the emancipatory potential of
    media lay in the fact that "every receiver is \[...\] a potential
    transmitter" that can participate "interactively" in "collective
    production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
    first German video group, Telewissen, debuted in public with a
    demonstration in downtown Darmstadt. In 1980, at the peak of the
    movement for independent video production, there were approximately a
    hundred such groups throughout (West) Germany. The lack of distribution
    channels, however, represented a nearly insuperable obstacle and ensured
    that many independent productions were seldom viewed outside of
    small-scale settings. Tapes had to be exchanged between groups through
    the mail, and they were mainly shown at gatherings and events, and in
    bars. The dynamic of alternative media shifted toward a small subculture
    (though one networked throughout all of Europe) of pirate radio and
    television broadcasters. At the beginning of the 1980s, Radio
    Dreyeckland in Freiburg, which had been founded in 1977 as Radio Verte
    Fessenheim, began operating as Germany\'s first pirate or citizens\'
    radio station, which regularly broadcast information about
    the political protest movements that had arisen against the use of
    nuclear power in Fessenheim (France), Wyhl (Germany), and Kaiseraugst
    (Switzerland). The epicenter of the scene, however, was located in
    Amsterdam, where the group known as Rabotnik TV, which was an offshoot
    []{#Page_46 type="pagebreak" title="46"}of the squatter scene there,
    would illegally feed its signal through official television stations
    after their programming had ended at night (many stations then stopped
    broadcasting at midnight). In 1988, the group acquired legal
    broadcasting slots on the cable network and reached up to 50,000 viewers
    with their weekly experimental shows, which largely consisted of footage
    appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
    Early in 1990, the pirate television station Kanal X was created in
    Leipzig; it produced its own citizens\' television programming in the
    quasi-lawless milieu of the GDR before
    reunification.[^75^](#c1-note-0075){#c1-note-0075a}

    These illegal, independent, or public-access stations only managed to
    establish themselves as real mass media to a very limited extent.
    Nevertheless, they played an important role in sensitizing an entire
    generation of media activists, whose opportunities expanded as the means
    of production became both better and cheaper. In the name of "tactical
    media," a new generation of artistic and political media activists came
    together in the middle of the
    1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
    revolution," which in the late 1980s had made video equipment available
    to broader swaths of society, stirring visions of democratic media
    production, with the newly arrived medium of the internet. Despite still
    struggling with numerous technical difficulties, they remained steadfast
    in their belief that the internet would solve the hitherto intractable
    problem of distributing content. The transition from analog to digital
    media lowered the production hurdle yet again, not least through the
    ongoing development of improved software. Now, many stages of production
    that had previously required professional or semi-professional expertise
    and equipment could also be carried out by engaged laymen. As a
    consequence, the focus of interest broadened to include not only the
    development of alternative production groups but also the possibility of
    a flexible means of rapid intervention in existing structures. Media --
    both television and the internet -- were understood as environments in
    which one could act without directly representing a reality outside of
    the media. Television was analyzed in terms of its own inherent rules,
    which could then be manipulated to produce effects beyond the media.
    Increasingly, culture jamming and the campaigns of so-called
    communication guerrillas were blurring the difference between media and
    political activity.[^77^](#c1-note-0077){#c1-note-0077a}[]{#Page_47 type="pagebreak" title="47"}

    This difference was dissolved entirely by a new generation of
    politically motivated artists, activists, and hackers, who transferred
    the tactics of civil disobedience -- blockading a building with a
    sit-in, for instance -- to the
    internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
    Zapatista Army of National Liberation rose up in the south of Mexico,
    several media projects were created to support its mostly peaceful
    opposition and to make the movement known in Europe and North America.
    As part of this loose network, in 1998 the American artist collective
    Electronic Disturbance Theater developed a relatively simple computer
    program called FloodNet that enabled networked sympathizers to shut down
    websites, such as those of the Mexican government, in a targeted and
    temporary manner. The principle was easy enough: the program would
    automatically reload a certain website over and over again in order to
    exhaust the capacities of its network
    servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
    destroy data but rather to disturb the normal functioning of an
    institution in order to draw attention to the activities and interests
    of the protesters.
    :::

    ::: {.section}
    ### Networks as places of action {#c1-sec-0012}

    What this new generation of media activists had in common with the
    hackers and pioneers of computer networks was the idea that
    communication media are spaces for agency. During the 1960s, these
    programmers were likewise in search of alternatives; the difference is
    that they pursued them not in counter-publics, but rather in
    alternative lifestyles and forms of communication.
    The rejection of bureaucracy as a form of social organization played a
    significant role in the critique of industrial society formulated by
    freedom-oriented social movements. At the beginning of the previous
    century, Max Weber had still regarded bureaucracy as a clear sign of
    progress toward a rational and methodical
    organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
    assessment on processes that were impersonal, rule-bound, and
    transparent (in the sense that they were documented with files). But
    now, in the 1960s, bureaucracy was being criticized as soulless,
    alienated, oppressive, non-transparent, and unfit for an increasingly
    complex society. Whereas the first four of these points are in basic
    agreement with Weber\'s thesis about "disenchanting" []{#Page_48
    type="pagebreak" title="48"}the world, the last point represents a
    radical departure from his analysis. Bureaucracies were no longer
    regarded as hyper-efficient but rather as inefficient, and their size
    and rule-bound nature were no longer seen as strengths but rather as
    decisive weaknesses. The social bargain of offering prosperity and
    security in exchange for subordination to hierarchical relations struck
    many as being anything but attractive, and what blossomed instead was a
    broad interest in alternative forms of coexistence. New institutions
    were expected to be more flexible and more open. The desire to step away
    from the system was widespread, and many (mostly young) people set about
    doing exactly that. Alternative ways of life -- communes, shared
    apartments, and cooperatives -- were explored in the country and in
    cities. They were meant to provide the individual with greater autonomy
    and the opportunity to develop his or her own unique potential. Despite
    all of the differences between these concepts of life, they nevertheless
    shared something of a common denominator: the promise of
    reconceptualizing social institutions and the fundamentals of
    coexistence, with the aim of reformulating them in such a way as to
    allow everyone\'s personal potential to develop fully in the here and
    now.

    According to critics of such alternatives, bureaucracy was necessary in
    order to organize social life as it radically reduced the world\'s
    complexity by forcing it through the bottleneck of official procedures.
    However, the price paid for such efficiency involved the atrophying of
    human relationships, which had to be subordinated to rigid processes
    that were incapable of registering unique characteristics and
    differences and were unable to react in a timely manner to changing
    circumstances.

    In the 1960s, many countercultural attempts to find new forms of
    organization placed personal and open communication at the center of
    their efforts. Each individual was understood as a singular person with
    untapped potential rather than as a carrier of abstract and clearly defined
    functions. It was soon realized, however, that every common activity and
    every common decision entailed processes that were time-intensive and
    communication-intensive. As soon as a group exceeded a certain size, it
    became practically impossible for it to reach any consensus. As a result
    of these experiences, an entire worldview emerged that propagated
    "smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
    ("small is beautiful"). It was thought that in this way society might
    escape from bureaucracy with its ostensibly disastrous consequences for
    humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
    this belief did not last for long. For, unlike the majority of European
    alternative movements, the counterculture in the United States was not
    overwhelmingly critical of technology. On the contrary, many actors
    there sought suitable technologies for solving the practical problems of
    social organization. At the end of the 1960s, a considerable amount of
    attention was devoted to the field of basic technological research. This
    field brought together the interests of the military, academics,
    businesses, and activists from the counterculture. The common ground for
    all of them was a cybernetic vision of institutions, or, in the words of
    the historian Fred Turner:

    ::: {.extract}
    a picture of humans and machines as dynamic, collaborating elements in a
    single, highly fluid, socio-technical system. Within that system,
    control emerged not from the mind of a commanding officer, but from the
    complex, probabilistic interactions of humans, machines and events
    around them. Moreover, the mechanical elements of the system in question
    -- in this case, the predictor -- enabled the human elements to achieve
    what all Americans would agree was a worthwhile goal. \[...\] Over the
    coming decades, this second vision of benevolent man-machine systems, of
    circular flows of information, would emerge as a driving force in the
    establishment of the military--industrial--academic complex and as a
    model of an alternative to that
    complex.[^82^](#c1-note-0082){#c1-note-0082a}
    :::

    This complex was possible because, as a theory, cybernetics was
    formulated in extraordinarily abstract terms, so much so that a whole
    variety of competing visions could be associated with
    it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
    meta-science, it was possible to investigate the common features of
    technical, social, and biological
    processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
    open, interactive, and information-processing systems. It was especially
    consequential that cybernetics defined control and communication as the
    same thing, namely as activities oriented toward informational
    feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
    of cybernetics and its synonymous treatment of the terms "communication"
    and "control" continue to influence information technology and the
    internet today.[]{#Page_50 type="pagebreak" title="50"}
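
    The claim that control and communication coincide can be made tangible
    with a minimal sketch of a feedback loop. The controller below
    "communicates" with its environment only through measurements and
    corrections; the target value, gain, and readings are illustrative
    assumptions, not drawn from the text.

    ```python
    # Minimal sketch of the cybernetic identification of control with
    # communication: the controller acts solely on informational feedback,
    # measuring a deviation and feeding a correction back into the system.
    # All numbers are illustrative.

    def feedback_loop(target, reading, gain=0.5, steps=8):
        """Proportional controller: each cycle reads the deviation from the
        target (information flowing in) and applies a partial correction
        (control flowing out)."""
        for step in range(steps):
            error = target - reading   # communication: measure the deviation
            reading += gain * error    # control: act on that information
            print(f"step {step}: reading={reading:.2f}, error={error:.2f}")
        return reading

    # Example: steering a value of 15.0 toward a target of 20.0.
    feedback_loop(target=20.0, reading=15.0)
    ```

    Nothing in the loop depends on whether the regulated system is
    technical, social, or biological, which is precisely what allowed
    cybernetics to present itself as a meta-science of such processes.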

    The various actors who contributed to the development of the internet
    shared a common interest in forms of organization based on the
    comprehensive, dynamic, and open exchange of information. Both on the
    micro and macro level (and this is decisive at this point),
    decentralized and flexible communication technologies were meant to
    become the foundation of new organizational models. Militaries feared
    attacks on their command and communication centers; academics wanted to
    broaden their culture of autonomy, collaboration among peers, and the
    free exchange of information; businesses were looking for new areas of
    activity; and countercultural activists were longing for new forms of
    peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
    rejected the bureaucratic model, and the counterculture provided them
    with the central catchword for their alternative vision: community.
    Though rather difficult to define, it was a powerful and positive term
    that somehow promised the opposite of bureaucracy: humanity,
    cooperation, horizontality, mutual trust, and consensus. Now, however,
    humanity was expected to be reconfigured as a community in cooperation
    with and inseparable from machines. And what was yearned for had become
    a liberating symbiosis of man and machine, an idea that the author
    Richard Brautigan was quick to mock in his poem "All Watched Over by
    Machines of Loving Grace" from 1967:

    ::: {.poem}
    ::: {.lineGroup}
    I like to think (and

    the sooner the better!)

    of a cybernetic meadow

    where mammals and computers

    live together in mutually

    programming harmony

    like pure water

    touching clear sky.[^87^](#c1-note-0087){#c1-note-0087a}
    :::
    :::

    Here, Brautigan is ridiculing both the impatience (*the sooner the
    better!*) and the naïve optimism (*harmony, clear sky*) of the
    countercultural activists. Primarily, he regarded the underlying vision
    as an innocent but amusing fantasy and not as a potential threat against
    which something had to be done. And there were also reasons to believe
    that, ultimately, the new communities would be free from the coercive
    nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
    characterized the downside of community experiences. It was thought that
    the autonomy and freedom of the individual could be regained in and by
    means of the community. The conditions for this were that participation
    in the community had to be voluntary and that the rules of participation
    had to be self-imposed. I will return to this topic in greater detail
    below.

    In line with their solution-oriented engineering culture and the
    results-focused military funders who by and large set the agenda, a
    relatively small group of computer scientists now took it upon
    themselves to establish the technological foundations for new
    institutions. This was not an abstract goal for the distant future;
    rather, they wanted to change everyday practices as soon as possible. It
    was around this time that advanced technology became the basis of social
    communication, which now adopted forms that would have been
    inconceivable (not to mention impracticable) without these
    preconditions. Of course, effective communication technologies already
    existed at the time. Large corporations had begun long before then to
    operate their own computing centers. In contrast to the latter, however,
    the new infrastructure could also be used by individuals outside of
    established institutions and could be implemented for all forms of
    communication and exchange. This idea gave rise to a pragmatic culture
    of horizontal, voluntary cooperation. The clearest summary of this early
    ethos -- which originated at the unusual intersection of military,
    academic, and countercultural interests -- was offered by David D.
    Clark, a computer scientist who for some time coordinated the
    development of technical standards for the internet: "We reject: kings,
    presidents and voting. We believe in: rough consensus and running
    code."[^88^](#c1-note-0088){#c1-note-0088a}

    All forms of classical, formal hierarchies and their methods for
    resolving conflicts -- commands (by kings and presidents) and votes --
    were dismissed. Implemented in their place was a pragmatics of open
    cooperation that was oriented around two guiding principles. The first
    was that different views should be discussed without a single individual
    being able to block any final decisions. Such was the meaning of the
    expression "rough consensus." The second was that, in accordance with
    the classical engineering tradition, the focus should remain on concrete
    solutions that had to be measured against one []{#Page_52
    type="pagebreak" title="52"}another on the basis of transparent
    criteria. Such was the meaning of the expression "running code." In
    large part, this method was possible because the group oriented around
    these principles was, internally, relatively homogeneous: it consisted
    of top-notch computer scientists -- all of them men -- at respected
    American universities and research centers. For this very reason, many
    potential and fundamental conflicts were avoided, at least at first.
    This internal homogeneity lends rather dark undertones to their sunny
    vision, but this was hardly recognized at the time. Today these
    undertones are far more apparent, and I will return to them below.

    Not only were technical protocols developed on the basis of these
    principles, but organizational forms as well. Within the Internet
    Engineering Task Force, which Clark helped to steer, and its
    predecessors, ideas were presented to interested members of the
    community in so-called Request-for-Comments documents, and feedback was
    collected in order to work through the ideas in question and thus reach
    a rough consensus. If such a consensus could not be reached -- if, for
    instance, an idea failed to resonate with anyone or was too
    controversial -- then the matter would be dropped. The feedback was
    organized as a form of many-to-many communication through email lists,
    newsgroups, and online chat systems. This proved to be so effective that
    horizontal communication within large groups or between multiple groups
    could take place without resulting in chaos. This broke with the
    long-standing assumption that social units, once they reach a certain
    size, must necessarily introduce hierarchical structures in order to
    reduce complexity and keep communication manageable. In other words, the foundations
    were laid for larger numbers of (changing) people to organize flexibly
    and with the aim of building an open consensus. For Manuel Castells,
    this combination of organizational flexibility and scalability in size
    is the decisive innovation that was enabled by the rise of the network
    society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
    this meant that forms of organization spread that were only possible
    on the basis of technologies that have formed (and continue to form)
    part of the infrastructure of the internet. Digital technology and the
    social activity of individual users were linked together to an
    unprecedented extent. Social and cultural agendas were now directly
    related []{#Page_53 type="pagebreak" title="53"}to and entangled with
    technical design. Each of the four original interest groups -- the
    military, scientists, businesses, and the counterculture -- implemented
    new technologies to pursue their own projects, which partly complemented
    and partly contradicted one another. As we know today, the first three
    groups still cooperate closely with each other. To a great extent, this
    has allowed the military and corporations, which are willingly supported
    by researchers in need of funding, to determine the technology and thus
    aspects of the social and cultural agendas that depend on it.

    The software developers\' immediate environment experienced its first
    major change in the late 1970s. Software, which for many had been a mere
    supplement to more expensive and highly specialized hardware, became a
    marketable good with stringent licensing restrictions. A new generation
    of businesses, led by Bill Gates, suddenly began to label cooperation
    among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
    Previously it had been par for the course, and above all necessary, for
    programmers to share software with one another. The former culture of
    horizontal cooperation between developers transformed into a
    hierarchical and commercially oriented relation between developers and
    users (many of whom, at least at the beginning, had developed programs
    of their own). For the first time, copyright came to play an important
    role in digital culture. In order to survive in this environment, the
    practice of open cooperation had to be placed on a new legal foundation.
    Copyright law, which served to separate programmers (producers) from
    users (consumers), had to be neutralized or circumvented. The first step
    in this direction was taken in 1984 by the activist and programmer
    Richard Stallman. Composed by Stallman, the GNU General Public License
    was and remains a brilliant hack that uses the letter of copyright law
    against its own spirit. This happens in the form of a license that
    defines "four freedoms":

    1. The freedom to run the program as you wish, for any purpose (freedom
    0).
    2. The freedom to study how the program works and change it so it does
    your computing as you wish (freedom 1).
    3. The freedom to redistribute copies so you can help your neighbor
    (freedom 2).[]{#Page_54 type="pagebreak" title="54"}
    4. The freedom to distribute copies of your modified versions to others
    (freedom 3). By doing this you can give the whole community a chance
    to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}

    Thanks to this license, people who were personally unacquainted and did
    not share a common social environment could now cooperate (freedoms 2
    and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
    and 1). For many, the tension between the need to develop complex
    software in large teams and the desire to maintain one\'s own autonomy
    represented an incentive to try out new forms of
    cooperation.[^92^](#c1-note-0092){#c1-note-0092a}
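
    In practice, the license travels with the source code itself. Below is a
    minimal sketch of how such a notice is typically attached to a file,
    loosely following the pattern recommended in the license\'s own
    appendix; the file name, author, and function are placeholders invented
    for illustration:

    ```python
    # frob.py -- hypothetical module, used only to illustrate the notice.
    # Copyright (C) 1991 J. Hacker (placeholder)
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.

    def frob(x: int) -> int:
        """Trivial function: anyone who receives this file may run, study,
        modify, and redistribute it under the terms stated above."""
        return x + 1
    ```

    Whoever modifies and redistributes such a file must pass the same four
    freedoms on to the next recipient; this is the "copyleft" mechanism by
    which the letter of copyright law is turned against its own spirit.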

    Stallman\'s influence was at first limited to a small circle of
    programmers. In the middle of the 1980s, the goal of developing a
    completely free operating system seemed a distant one. Communication
    between those interested in doing so was often slow and complicated. In
    part, program codes still had to be sent by mail. It was not until the
    beginning of the 1990s that students in technical departments at many
    universities could access the
    internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
    these new opportunities in an innovative way was a Finnish student named
    Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
    which, as the most important module of an operating system, governs the
    interaction between hardware and software. He published the first free
    version of this in 1991 and encouraged anyone interested to give him
    feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
    Torvalds reacted promptly and issued new versions of his software in
    quick succession. Instead of understanding his software as a finished
    product, he treated it like an open-ended process. This, in turn,
    motivated even more developers to participate, because they saw that
    their contributions were being adopted swiftly, which led to the
    formation of an open community of interested programmers who swapped
    ideas over the internet and continued writing software. In order to
    maintain an overview of the different versions of the program, which
    appeared in parallel with one another, it soon became necessary to
    employ specialized platforms. The fusion of social processes --
    horizontal and voluntary cooperation among developers -- and
    technological platforms, which enabled this form of cooperation
    []{#Page_55 type="pagebreak" title="55"}by providing archives, filter
    functions, and search capabilities that made it possible to organize
    large amounts of data, was thus advanced even further. The programmers
    were no longer primarily working on the development of the internet
    itself, which by then was functioning quite reliably, but were rather
    using the internet to apply their cooperative principles to other
    arenas. By the end of the 1990s, the free-software movement had
    established a new, internet-based form of organization and had
    demonstrated its efficiency in practice: horizontal, informal
    communities of actors -- voluntary, autonomous, and focused on a common
    interest -- that, on the basis of high-tech infrastructure, could
    include thousands of people without having to create formal hierarchies.
    :::
    :::

    ::: {.section}
    From the Margins to the Center of Society {#c1-sec-0013}
    -----------------------------------------

    It was around this same time that the technologies in question, which
    were already no longer very new, entered mainstream society. Within a
    few years, the internet became part of everyday life. Three years before
    the turn of the millennium, only about 6 percent of the entire German
    population used the internet, often only occasionally. Three years after
    the millennium, the share of users already exceeded 53 percent. Since
    then, this share has increased even further. In 2014, it was more than
    97 percent for people under the age of
    40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
    data transfer rates increased considerably, broadband connections
    replaced dial-up modems, and the internet was suddenly "here" and no
    longer "there." With the spread of mobile devices, especially since the
    year 2007 when the first iPhone was introduced, digital communication
    became available both extensively and continuously. Since then, the
    internet has been ubiquitous. The amount of time that users spend online
    has increased and, with the rapid ascent of social mass media such as
    Facebook, people have been online in almost every situation and
    circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
    like water or electricity, has become for many people a utility that is
    simply taken for granted.

    In a BBC survey from 2010, 80 percent of those polled believed that
    internet access -- a precondition for participating []{#Page_56
    type="pagebreak" title="56"}in the now dominant digital condition --
    should be regarded as a fundamental human right. This idea was most
    popular in South Korea (96 percent) and Mexico (94 percent), while in
    Germany at least 72 percent were of the same
    opinion.[^97^](#c1-note-0097){#c1-note-0097a}

    On the basis of this new infrastructure, which is now relevant in all
    areas of life, the cultural developments described above have been
    severed from the specific historical conditions from which they emerged
    and have permeated society as a whole. Expressivity -- the ability to
    communicate something "unique" -- is no longer a trait of artists and
    knowledge workers alone, but rather something that is required of an
    increasingly broad stratum of society and is already being taught in
    schools. Users of social mass media must produce (themselves). The
    development of specific, differentiated identities and the demand that
    each be treated equally are no longer promoted exclusively by groups who
    have to struggle against repression, existential threats, and
    marginalization, but have penetrated deeply into the former mainstream,
    not least because the present forms of capitalism have learned to profit
    from the spread of niches and segmentation. When even conservative
    parties have abandoned the idea of a "leading culture," then cultural
    differences can no longer be classified by enforcing an absolute and
    indisputable hierarchy, the top of which is occupied by specific
    (geographical and cultural) centers. Rather, a space has been opened up
    for endless negotiations, a space in which -- at least in principle --
    everything can be called into question. This is not, of course, a
    peaceful and egalitarian process. In addition to the practical hurdles
    that exist in polarizing societies, there are also violent backlashes
    and new forms of fundamentalism that are attempting once again to remove
    certain religious, social, cultural, or political dimensions of
    existence from the discussion. Yet these can only be understood in light
    of a sweeping cultural transformation that has already reached
    mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
    the digital condition has become quotidian and dominant. It forms a
    cultural constellation that determines all areas of life, and its
    characteristic features are clearly recognizable. These will be the
    focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c1-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
    *Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

    [2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
    behaving" is used here to mean that, while in the public eye, the
    behavior of the people []{#Page_177 type="pagebreak" title="177"}in
    question conformed to heterosexual norms regardless of their personal
    sexual orientations.

    [3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
    off. In this case, too, there was also room for exceptions and for
    collective moments of greater cultural multiplicity. That said, the
    social openness of the end of the 1920s, for instance, was restricted to
    particular milieus within large cities and was accordingly short-lived.

    [4](#c1-note-0004a){#c1-note-0004}  Fritz Machlup, *The Political
    Economy of Monopoly: Business, Labor and Government Policies*
    (Baltimore, MD: The Johns Hopkins University Press, 1952).

    [5](#c1-note-0005a){#c1-note-0005}  Machlup was a student of Ludwig von
    Mises, the most influential representative of this radically
    individualist school. See Hans-Hermann Hoppe, "Die Österreichische
    Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
    Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
    Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
    Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.

    [6](#c1-note-0006a){#c1-note-0006}  Fritz Machlup, *The Production and
    Distribution of Knowledge in the United States* (New York: John Wiley &
    Sons, 1962).

    [7](#c1-note-0007a){#c1-note-0007}  The term "knowledge worker" had
    already been introduced to the discussion a few years before; see Peter
    Drucker, *Landmarks of Tomorrow: A Report on the New "Post-Modern"
    World* (New York: Harper, 1959).

    [8](#c1-note-0008a){#c1-note-0008}  Peter Ecker, "Die
    Verwissenschaftlichung der Industrie: Zur Geschichte der
    Industrieforschung in den europäischen und amerikanischen
    Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
    35 (1990): 73--94.

    [9](#c1-note-0009a){#c1-note-0009}  Edward Bernays was the son of
    Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
    wife, Martha Bernays.

    [10](#c1-note-0010a){#c1-note-0010}  Edward L. Bernays, *Propaganda*
    (New York: Horace Liveright, 1928).

    [11](#c1-note-0011a){#c1-note-0011}  James Beniger, *The Control
    Revolution: Technological and Economic Origins of the Information
    Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.

    [12](#c1-note-0012a){#c1-note-0012}  Norbert Wiener, *Cybernetics: Or
    Control and Communication in the Animal and the Machine* (New York: J.
    Wiley, 1948).

    [13](#c1-note-0013a){#c1-note-0013}  Daniel Bell, *The Coming of
    Post-Industrial Society: A Venture in Social Forecasting* (New York:
    Basic Books, 1973).

    [14](#c1-note-0014a){#c1-note-0014}  Simon Nora and Alain Minc, *The
    Computerization of Society: A Report to the President of France*
    (Cambridge, MA: MIT Press, 1980).

    [15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
    Network Society* (Oxford: Blackwell, 1996).

    [16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
    Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
    Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
    Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

    [17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
    *The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
    2005).

    [18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
    *The Second Industrial Divide: Possibilities of Prosperity* (New York:
    Basic Books, 1984).

    [19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
    Society*. For a critical evaluation of Castells\'s work, see Felix
    Stalder, *Manuel Castells and the Theory of the Network Society*
    (Cambridge: Polity, 2006).

    [20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
    Documents" (1998); quoted from Terry Flew, *The Creative Industries:
    Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.

    [21](#c1-note-0021a){#c1-note-0021}  The rise of the creative
    industries, and the hope that they inspired among politicians, did not
    escape criticism. Among the first works to draw attention to the
    precarious nature of working in such industries was Angela McRobbie\'s
    *British Fashion Design: Rag Trade or Image Industry?* (New York:
    Routledge, 1998).

    [22](#c1-note-0022a){#c1-note-0022}  This definition is not without a
    degree of tautology, given that economic growth is based on talent,
    which itself is defined by its ability to create new jobs; that is,
    economic growth. At the same time, he employs the term "talent" in an
    extremely narrow sense. Apparently, if something has nothing to do with
    job creation, it also has nothing to do with talent or creativity. All
    forms of creativity are thus measured and compared according to a common
    criterion.

    [23](#c1-note-0023a){#c1-note-0023}  Richard Florida, *Cities and the
    Creative Class* (New York: Routledge, 2005), p. 5.

    [24](#c1-note-0024a){#c1-note-0024}  One study has reached the
    conclusion that, despite mass participation, "a new form of
    communicative elite has developed, namely digitally and technically
    versed actors who inform themselves in this way, exchange ideas and thus
    gain influence. For them, the possibilities of platforms mainly
    represent an expansion of useful tools. Above all, the dissemination of
    digital technology makes it easier for versed and highly networked
    individuals to convey their news more simply -- and, for these groups of
    people, it lowers the threshold for active participation." Michael
    Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
    al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
    (Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
    \[--trans.\].

    [25](#c1-note-0025a){#c1-note-0025}  Boltanski and Chiapello, *The New
    Spirit of Capitalism*.

    [26](#c1-note-0026a){#c1-note-0026}  According to Wikipedia,
    "Heteronormativity is the belief that people fall into distinct and
    complementary genders (man and woman) with natural roles in life. It
    assumes that heterosexuality is the only sexual orientation or only
    norm, and states that sexual and marital relations are most (or only)
    fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
    title="179"}

    [27](#c1-note-0027a){#c1-note-0027}  Jannis Plastargias, *RotZSchwul:
    Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).

    [28](#c1-note-0028a){#c1-note-0028}  Helmut Ahrens et al. (eds),
    *Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
    (Berlin: Rosa Winkel, 1975), p. 4.

    [29](#c1-note-0029a){#c1-note-0029}  Susanne Regener and Katrin Köppert
    (eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
    (Vienna: Turia + Kant, 2013).

    [30](#c1-note-0030a){#c1-note-0030}  Such, for instance, was the
    assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
    Association in Germany, in his text "Schwulenpolitik früher" (link no
    longer active). From today\'s perspective, however, the main problem
    with this event was the unclear position of the Green Party with respect
    to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
    Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
    & Ruprecht, 2014).

    [31](#c1-note-0031a){#c1-note-0031}  "AIDS: Tödliche Seuche," *Der
    Spiegel* 23 (1983) \[--trans.\].

    [32](#c1-note-0032a){#c1-note-0032}  Quoted from Frank Niggemeier, "Gay
    Pride: Schwules Selbstbewußtsein aus dem Village," in Bernd Polster
    (ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
    1995), pp. 179--87, at 184 \[--trans.\].

    [33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
    *Privat/öffentlich*, p. 7 \[--trans.\].

    [34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
    Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
    und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
    Bundesanzeiger, 2001).

    [35](#c1-note-0035a){#c1-note-0035}  This process of internal
    differentiation has not yet reached its conclusion, and thus the
    acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
    "lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
    questioning, intersex, intergender, asexual, ally."

    [36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
    Feminism and the Subversion of Identity* (New York: Routledge, 1989).

    [37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
    Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
    Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

    [38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
    York: Vintage Books, 1978).

    [39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
    Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
    University Press, 1957).

    [40](#c1-note-0040a){#c1-note-0040}  Silke Förschler, *Bilder des Harem:
    Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).

    [41](#c1-note-0041a){#c1-note-0041}  The selection and effectiveness of
    these images is not a coincidence. Camel was one of the first brands of
    cigarettes for []{#Page_180 type="pagebreak" title="180"}which
    advertising, in the sense described above, was used in a systematic
    manner.

    [42](#c1-note-0042a){#c1-note-0042}  This would not exclude feelings of
    regret about the loss of an exotic and romantic way of life, such as
    those of T. E. Lawrence, whose activities in the Near East during the
    First World War were memorialized in the film *Lawrence of Arabia*
    (1962).

    [43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
    however, for portraying orientalism so dominantly that there seems to be
    no way out of the existing dependent relations. For an overview of the
    debates that Said has instigated, see María do Mar Castro Varela and
    Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Ein­führung*
    (Bielefeld: Transcript, 2005), pp. 37--46.

    [44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
    Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
    (November 9, 2007), online \[--trans.\].

    [45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
    Culture* (New York: Routledge, 1994), p. 4.

    [46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
    Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
    Multikulturismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
    (Tübingen: Stauffenburg), pp. 1--30, at 8 \[--trans.\].

    [47](#c1-note-0047a){#c1-note-0047}  "What Is Postcolonial Thinking? An
    Interview with Achille Mbembe," *Eurozine* (December 2006), online.

    [48](#c1-note-0048a){#c1-note-0048}  Migrants have always created their
    own culture, which deals in various ways with the experience of
    migration itself, but non-migrant populations have long tended to ignore
    this. Things have now begun to change in this regard, for instance
    through Imran Ayata and Bülent Kullukcu\'s compilation of songs by the
    Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
    (Munich: Trikont, 2013).

    [49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
    found online.

    [50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
    einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
    press release by the CDU/CSU Alliance in the German Parliament (June 4,
    2014), online \[--trans.\].

    [51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
    der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
    Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
    book is forthcoming: *The Invention of Creativity: Modern Society and
    the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

    [52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
    in Deutschland* (Frankfurt am Main: Campus, 2007).

    [53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
    Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
    type="pagebreak" title="181"}

    [54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
    introduced to the field of design primarily by Buckminster Fuller. See
    Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
    and the Disappearance of the Outside* (Berlin: Sternberg, 2013).

    [55](#c1-note-0055a){#c1-note-0055}  Clive Dilnot, "Design as a Socially
    Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
    139--46.

    [56](#c1-note-0056a){#c1-note-0056}  Victor J. Papanek, *Design for the
    Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
    p. 2.

    [57](#c1-note-0057a){#c1-note-0057}  Reckwitz, *Die Erfindung der
    Kreativität*.

    [58](#c1-note-0058a){#c1-note-0058}  B. Joseph Pine and James H.
    Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
    a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
    emphasis is original).

    [59](#c1-note-0059a){#c1-note-0059}  Mona El Khafif, *Inszenierter
    Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
    Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

    [60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
    (eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

    [61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
    *Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
    2009).

    [62](#c1-note-0062a){#c1-note-0062}  "An alternate reality game (ARG),"
    according to Wikipedia, "is an interactive networked narrative that uses
    the real world as a platform and employs transmedia storytelling to
    deliver a story that may be altered by players\' ideas or actions."

    [63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
    Innovation* (Cambridge, MA: MIT Press, 2005).

    [64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
    involvement of users simply serves to increase the efficiency of
    production processes and customer service. Many activities that were
    once undertaken at the expense of businesses now have to be carried out
    by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
    Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
    Campus, 2005).

    [65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
    pp. 411--16.

    [66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
    Ideological State Apparatuses (Notes towards an Investigation)," in
    Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
    (New York: Monthly Review Press, 1971), pp. 127--86.

    [67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
    *Gramsci lesen! Einstiege in die Gefängnishefte* (Hamburg: Argument,
    2013), pp. 20--35.

    [68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
    Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
    1977).

    [69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
    the Toronto School of Communication," *Canadian Journal of
    Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
    title="182"}

    [70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
    Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

    [71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
    -- Electronic Television" (leaflet accompanying the exhibition). Quoted
    from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
    (March 3, 2016), online.

    [72](#c1-note-0072a){#c1-note-0072}  Laura R. Linder, *Public Access
    Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
    1999).

    [73](#c1-note-0073a){#c1-note-0073}  Hans Magnus Enzensberger,
    "Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
    Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
    pp. 259--75.

    [74](#c1-note-0074a){#c1-note-0074}  Paul Groot, "Rabotnik TV,"
    *Mediamatic* 2/3 (1988), online.

    [75](#c1-note-0075a){#c1-note-0075}  Inke Arns, "Social Technologies:
    Deconstruction, Subversion and the Utopia of Democratic Communication,"
    *Medien Kunst Netz* (2004), online.

    [76](#c1-note-0076a){#c1-note-0076}  The term was coined at a series of
    conferences titled The Next Five Minutes (N5M), which were held in
    Amsterdam from 1993 to 2003.

    [77](#c1-note-0077a){#c1-note-0077}  Mark Dery, *Culture Jamming:
    Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
    Media, 1993); Luther Blissett et al., *Handbuch der
    Kommunikationsguerilla*, 5th edn (Berlin: Assoziation A, 2012).

    [78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
    Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
    1996).

    [79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
    "distributed denial of service attack" (DDOS).

    [80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
    Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
    Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

    [81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
    Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
    Harper Perennial, 2014).

    [82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
    to Cyberculture: Stewart Brand, the Whole Earth Movement and the Rise of
    Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
    21. In this regard, see also the documentary films *Das Netz* by Lutz
    Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
    Adam Curtis (2011).

    [83](#c1-note-0083a){#c1-note-0083}  It was possible to understand
    cybernetics as a language of free markets or also as one of centralized
    planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
    History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
    great interest of Soviet scientists in cybernetics rendered the term
    rather suspicious in the West, where it was disassociated from
    artificial intelligence.[]{#Page_183 type="pagebreak" title="183"}

    [84](#c1-note-0084a){#c1-note-0084}  Claus Pias, "The Age of
    Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
    1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.

    [85](#c1-note-0085a){#c1-note-0085}  Norbert Wiener, one of the
    cofounders of cybernetics, explained this as follows in 1950: "In giving
    the definition of Cybernetics in the original book, I classed
    communication and control together. Why did I do this? When I
    communicate with another person, I impart a message to him, and when he
    communicates back with me he returns a related message which contains
    information primarily accessible to him and not to me. When I control
    the actions of another person, I communicate a message to him, and
    although this message is in the imperative mood, the technique of
    communication does not differ from that of a message of fact.
    Furthermore, if my control is to be effective I must take cognizance of
    any messages from him which may indicate that the order is understood
    and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
    Cybernetics and Society*, 2nd edn (London: Free Association Books,
    1989), p. 16.

    [86](#c1-note-0086a){#c1-note-0086}  Though presented here as distinct,
    these interests could in fact be held by one and the same person. In
    *From Counterculture to Cyberculture*, for instance, Turner discusses
    "countercultural entrepreneurs."

    [87](#c1-note-0087a){#c1-note-0087}  Richard Brautigan, "All Watched
    Over by Machines of Loving Grace," in *All Watched Over by Machines of
    Loving Grace*, by Brautigan (San Francisco: The Communication Company,
    1967).

    [88](#c1-note-0088a){#c1-note-0088}  David D. Clark, "A Cloudy Crystal
    Ball: Visions of the Future," *Internet Engineering Taskforce* (July
    1992), online.

    [89](#c1-note-0089a){#c1-note-0089}  Castells, *The Rise of the Network
    Society*.

    [90](#c1-note-0090a){#c1-note-0090}  Bill Gates, "An Open Letter to
    Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.

    [91](#c1-note-0091a){#c1-note-0091}  Richard Stallman, "What Is Free
    Software?", *GNU Operating System*, online.

    [92](#c1-note-0092a){#c1-note-0092}  The fundamentally cooperative
    nature of programming was recognized early on. See Gerald M. Weinberg,
    *The Psychology of Computer Programming*, rev. edn (New York: Dorset
    House, 1998 \[originally published in 1971\]).

    [93](#c1-note-0093a){#c1-note-0093}  On the history of free software,
    see Volker Grassmuck, *Freie Software: Zwischen Privat- und
    Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).

    [94](#c1-note-0094a){#c1-note-0094}  In his first email on the topic, he
    wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
    system (just a hobby, won\'t be big and professional like gnu) \[...\].
    This has been brewing since April, and is starting to get ready. I\'d
    like any feedback on things people like/dislike." Linus Torvalds, "What
    []{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
    Minix," *Usenet Group* (August 1991), online.

    [95](#c1-note-0095a){#c1-note-0095}  ARD/ZDF, "Onlinestudie" (2015),
    online.

    [96](#c1-note-0096a){#c1-note-0096}  From 1997 to 2003, the average use
    of online media in Germany climbed from 76 to 138 minutes per day, and
    by 2013 it reached 169 minutes. Over the same span of time, the average
    frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
    was 5.8. From 2007 to 2013, the percentage of people who were members of
    private social networks like Facebook grew from 15 percent to 46
    percent. Of these, nearly 60 percent -- around 19 million people -- used
    such services on a daily basis. The source of this information is the
    article cited in the previous note.

    [97](#c1-note-0097a){#c1-note-0097}  "Internet Access Is 'a Fundamental
    Right'," *BBC News* (8 March 2010), online.

    [98](#c1-note-0098a){#c1-note-0098}  Manuel Castells, *The Power of
    Identity* (Oxford: Blackwell, 1997), pp. 7--22.
    :::
    :::

    [II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}
    ::: {.section}
    With the emergence of the internet around the turn of the millennium as
    an omnipresent infrastructure for communication and coordination,
    previously independent cultural developments began to spread beyond
    their specific original contexts, mutually influencing and enhancing one
    another, and becoming increasingly intertwined. Out of a disconnected
    conglomeration of more or less marginalized practices, a new and
    specific cultural environment thus took shape, usurping or marginalizing
    an ever greater variety of cultural constellations. The following
    discussion will focus on three *forms* of the digital condition; that
    is, on those formal qualities that (notwithstanding all of its internal
    conflicts and contradictions) lend a particular shape to this cultural
    environment as a whole: *referentiality*, *communality*, and
    *algorithmicity*. It is only because most of the cultural processes
    operating under the digital condition are characterized by common formal
    features such as these that it is reasonable to speak of the digital
    condition in the singular.

    "Referentiality" is a method with which individuals can inscribe
    themselves into cultural processes and constitute themselves as
    producers. Because culture is understood here as shared social meaning,
    such an undertaking cannot be limited to the individual.
    Rather, it takes place within a larger framework whose existence and
    development depend on []{#Page_58 type="pagebreak" title="58"}communal
    formations. "Algorithmicity" denotes those aspects of cultural processes
    that are (pre-)arranged by the activities of machines. Algorithms
    transform the vast quantities of data and information that characterize
    so many facets of present-day life into dimensions and formats that can
    be registered by human perception. It is impossible to read the content
    of billions of websites. Therefore we turn to services such as Google\'s
    search algorithm, which reduces the data flood ("big data") to a
    manageable amount and translates it into a format that humans can
    understand ("small data"). Without them, human beings could not
    comprehend or do anything within a culture built around digital
    technologies, but they influence our understanding and activity in an
    ambivalent way. They create new dependencies by pre-sorting and making
    the (informational) world available to us, yet simultaneously ensure our
    autonomy by providing the preconditions that enable us to act.
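
    As a toy illustration of this reduction (a minimal sketch of the general
    pattern of scoring, sorting, and truncating -- not a description of
    Google\'s actual algorithm), consider how a large collection of
    documents can be filtered so that only a short, ranked list reaches the
    user; all documents and queries below are invented:

    ```python
    # "Big data" to "small data": score a collection of documents against a
    # query, sort by relevance, and keep only the top results. This is the
    # general pattern of algorithmic pre-sorting, not any real search engine.
    documents = {
        "page_a": "cybernetic meadow where mammals and computers live together",
        "page_b": "we believe in rough consensus and running code",
        "page_c": "free software and the four freedoms of the gnu license",
    }

    def score(text: str, query: str) -> int:
        # Trivial relevance measure: count occurrences of each query term.
        return sum(text.split().count(term) for term in query.lower().split())

    def top_k(docs: dict, query: str, k: int = 2) -> list:
        ranked = sorted(docs, key=lambda name: score(docs[name], query),
                        reverse=True)
        return ranked[:k]  # only the top results reach human perception

    print(top_k(documents, "running code"))  # ['page_b', ...]
    ```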
    :::

    ::: {.section}
    Referentiality {#c2-sec-0002}
    --------------

    In the digital condition, one of the methods (if not *the* most
    fundamental method) enabling humans to participate -- alone or in groups
    -- in the collective negotiation of meaning is the system of creating
    references. In a number of arenas, referential processes play an
    important role in the assignment of both meaning and form. According to
    the art historian André Rottmann, for instance, "one might claim that
    working with references has in recent years become the dominant
    production-aesthetic model in contemporary
    art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
    with references, however, is hardly restricted to the world of
    contemporary art. Referentiality is a feature of many processes that
    encompass the operations of various genres of professional and everyday
    culture. In its essence, it is the use of materials that are already
    equipped with meaning -- as opposed to so-called raw material -- to
    create new meanings. The referential techniques used to achieve this are
    extremely diverse, a fact reflected in the numerous terms that exist to
    describe them: re-mix, re-make, re-enactment, appropriation, sampling,
    meme, imitation, homage, tropicália, parody, quotation, post-production,
    re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
    (non-academic) research, re-creativity, mashup, transformative use, and
    so on.

    These processes have two important aspects in common: the
    recognizability of the sources and the freedom to deal with them however
    one likes. The first creates an internal system of references from which
    meaning and aesthetics are derived in an essential
    manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
    precondition enabling the creation of something that is both new and on
    the same level as the re-used material. This represents a clear
    departure from the historical--critical method, which endeavors to embed
    a source in its original context in order to re-determine its meaning,
    but also a departure from classical forms of rendition such as
    translations, adaptations (for instance, adapting a book for a film), or
    cover versions, which, though they translate a work into another
    language or medium, still attempt to preserve its original meaning.
    Re-mixes produced by DJs are one example of the referential treatment of
    source material. In his book on the history of DJ culture, the
    journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
    salvaging authenticity, but with creating a new
    authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
    distancing themselves from the past, which would follow the (Western)
    logic of progress or the spirit of the avant-garde, these processes
    refer explicitly to precursors and to existing material. In one and the
    same gesture, both one\'s own new position and the context and cultural
    tradition that is being carried on in one\'s own work are constituted
    performatively; that is, through one\'s own activity in the moment. I
    will discuss this phenomenon in greater depth below.

    To work with existing cultural material is, in itself, nothing new. In
    modern montages, artists likewise drew upon available texts, images, and
    treated materials. Yet there is an important difference: montages were
    concerned with bringing together seemingly incongruous but stable
    "finished pieces" in a more or less unmediated and fragmentary manner.
    This is especially clear in the collages by the Dadaists or in
    Expressionist literature such as Alfred Döblin\'s *Berlin
    Alexanderplatz*. In these works, the experience of Modernity\'s many
    fractures -- its fragmentation and turmoil -- was given a new aesthetic
    form. In his reference to montages, Adorno thus observed that the
    "negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
    title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
    brief moment, he considered them an adequate expression for the
    impossibility of reconciling the contradictions of capitalist culture.
    Influenced by Adorno, the literary theorist Peter Bürger went so far as
    to call the montage the true "paradigm of
    modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
    processes, on the contrary, pieces are not brought together as much as
    they are integrated into one another by being altered, adapted, and
    transformed. Unlike the older arrangement, it is not the fissures
    between elements that are foregrounded but rather their synthesis in the
    present. Conchita Wurst, the bearded diva, is not torn between two
    conflicting poles. Rather, she represents a successful synthesis --
    something new and harmonious that distinguishes itself by showcasing
    elements of the old order (man/woman) and simultaneously transcending
    them.

    This synthesis, however, is usually just temporary, for at any time it
    can itself serve as material for yet another rendering. Of course, this
    is far easier to pull off with digital objects than with analog objects,
    though these categories have become increasingly porous and thus
    increasingly problematic as opposites. More and more objects exist both
    in an analog and in a digital form. Think of photographs and slides,
    which have become so easy to digitalize. Even three-dimensional objects
    can now be scanned and printed. In the future, programmable materials
    with controllable and reversible features will cause the difference
    between the two domains to vanish: analog is becoming more and more
    digital.

    Montages and referential processes can only become widespread methods
    if, in a given society, cultural objects are available in three
    different respects. The first is economic and organizational: they must
    be affordable and easily accessible. Whoever is unable to afford books
    or get hold of them by some other means will not be able to reconfigure
    any texts. The second is cultural: working with cultural objects --
    which can always create deviations from the source in unpredictable ways
    -- must not be treated as taboo or illegal, but rather as an everyday
    activity without any special preconditions. It is much easier to
    manipulate a text from a secular newspaper than one from a religious
    canon. The third is material: it must be possible to use the material
    and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61
    type="pagebreak" title="61"}

    In terms of this third form of availability, montages differ from
    referential processes, for cultural objects can be integrated into one
    another -- instead of simply being placed side by side -- far more
    readily when they are digitally coded. Information is digitally coded
    when it is stored by means of a limited system of discrete (that is,
    separated by finite intervals or distances) signs that are meaningless
    in themselves. This allows information to be copied from one carrier to
    another without any loss and it allows the respective signs, whether
    individually or in groups, to be arranged freely. Seen in this way,
    digital coding is not necessarily bound to computers but can rather be
    realized with all materials: a mosaic is a digital process in which
    information is coded by means of variously colored tiles, just as a
    digital image consists of pixels. In the case of the mosaic, of course,
    the resolution is far lower. Alphabetic writing is a form of coding
    linguistic information by means of discrete signs that are, in
    themselves, meaningless. Consequently, Florian Cramer has argued that
    "every form of literature that is recorded alphabetically and not based
    on analog parameters such as ideograms or orality is already digital in
    that it is stored in discrete
    signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
    features of the alphabet, as Marshall McLuhan repeatedly underscored,
    did not fully develop until the advent of the printing
    press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
    other words, that first abstracted written signs from analog handwriting
    and transformed them into standardized symbols that could be repeated
    without any loss of information. In this practical sense, the printing
    press made writing digital, with the result that dealing with texts soon
    became radically different.
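
    The practical force of discrete coding -- lossless reproduction -- can
    be shown in a few lines. The sketch below (purely illustrative) copies a
    text across a hundred generations of carriers without degradation,
    while a simulated analog signal drifts a little further from the
    original with every copy:

    ```python
    import random

    # Discrete signs can be moved from carrier to carrier without loss:
    # the hundredth copy is identical to the original.
    message = "like pure water touching clear sky"
    copy = message
    for _ in range(100):
        copy = copy.encode("utf-8").decode("utf-8")  # text -> bytes -> text
    assert copy == message  # no generational loss

    # An analog copy, by contrast, picks up a little noise at every
    # generation, so each reproduction drifts further from the original.
    signal = [1.0] * 8
    for _ in range(100):
        signal = [s + random.gauss(0.0, 0.01) for s in signal]
    print("discrete copy identical:", copy == message)
    print("analog drift per sample:", [round(abs(s - 1.0), 3) for s in signal])
    ```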

    ::: {.section}
    ### Information overload 1.0 {#c2-sec-0003}

    The printing press made texts available in the three respects mentioned
    above. For one thing, their number increased rapidly, while their price
    significantly sank. During the first two generations after Gutenberg\'s
    invention -- that is, between 1450 and 1500 -- more books were produced
    than during the thousand years
    before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
    beginning. Dealing with books and their content changed from the ground
    up. In manuscript culture, every new copy represented a potential
    degradation of the original, and therefore []{#Page_62 type="pagebreak"
    title="62"}the oldest sources (those that had undergone as little
    corruption as possible) were valued above all. With the advent of print
    culture, the idea took hold that texts could be improved by the process
    of editing, not least because the availability of old sources, through
    reprints and facsimiles, had also improved dramatically. Pure
    reproduction was mechanized and overcome as a cultural challenge.

    According to the historian Elizabeth Eisenstein, one of the first
    consequences of the greatly increased availability of the printed book
    was that it overcame the "tyranny of major authorities, which was common
    in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
    were now able to compare texts with one another and critique them to an
    unprecedented extent. Their general orientation turned around: instead
    of looking back in order to preserve what they knew, they were now
    looking ahead toward what they might not (yet) know.

    In order to organize this flood of rapidly accumulating texts,
    it was necessary to create new conventions: books were now specified by
    their author, publisher, and date of publication, not to mention
    furnished with page numbers. This enabled large numbers of texts to be
    catalogued and every individual text -- indeed, every single passage --
    to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
    legitimize the pursuit of new knowledge by drawing attention to specific
    mistakes or gaps in existing texts. In the scientific culture that was
    developing at the time, the close connection between old and new
    material was not simply regarded as something positive; it was also
    urgently prescribed as a method of argumentation. Every text had to
    contain an internal system of references, and this was the basis for the
    development of schools, disciplines, and specific discourses.

    The digital character of printed writing also made texts available in
    the third respect mentioned above. Because discrete signs could be
    reproduced without any loss of information, it was possible not only to
    make perfect copies but also to remove content from one carrier and
    transfer it to another. Materials were no longer simply arranged
    sequentially, as in medieval compilations and almanacs, but manipulated
    to give rise to a new and independent fluid text. A set of conventions
    was developed -- one that remains in use today -- for modifying embedded
    or quoted material in order for it []{#Page_63 type="pagebreak"
    title="63"}to fit into its new environment. In this manner, quotations
    could be altered in such a way that they could be integrated seamlessly
    into a new text while remaining recognizable as direct citations.
    Several of these conventions, for instance the use of square brackets to
    indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
    are also used in this very book. At the same time, the conventions for
    making explicit references led to the creation of an internal reference
    system that made the singular position of the new text legible within a
    collective field of work. "Printing," to quote Elizabeth Eisenstein once
    again, "encouraged forms of combinatory activity which were social as
    well as intellectual. It changed relationships between men of learning
    as well as between systems of
    ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
    in the form of letters and visits, intensified. The seventeenth century
    saw the formation of the *respublica literaria* or the "Republic of
    Letters," a loose network of scholars devoted to promoting the ideas of
    the Enlightenment. Beginning in the eighteenth century, the rapidly
    growing number of scientific fields was arranged and institutionalized
    into clearly distinct disciplines. In the nineteenth and twentieth
    centuries, diverse media-technical innovations made images, sounds, and
    moving images available, though at first only in analog formats. These
    created the preconditions that enabled the montage in all of its forms
    -- film cuts, collages, readymades, *musique concrète*, found-footage
    films, literary cut-ups, and artistic assemblages (to name only the
    best-known genres) -- to become the paradigm of Modernity.
    :::

    ::: {.section}
    ### Information overload 2.0 {#c2-sec-0004}

    It was not until new technical possibilities for recording, storing,
    processing, and reproduction appeared over the course of the 1990s that
    it also became increasingly possible to code and edit images, audio, and
    video digitally. Through the networking that followed soon thereafter,
    society was flooded with an unprecedented amount of digitally
    coded information *of every sort*, and the circulation of this
    information accelerated. This was not, however, simply a quantitative
    change but also and above all a qualitative one. Cultural materials
    became available in a comprehensive []{#Page_64 type="pagebreak"
    title="64"}sense -- economically and organizationally, culturally
    (despite legal problems), and materially (because digitalized). Today it
    would not be bold to predict that nearly every text, image, or sound
    will soon exist in a digital form. Most of the new reproducible works
    are already "born digital" and digit­ally distributed, or they are
    physically produced according to digital instructions. Many initiatives
    are working to digitalize older, analog works. We are now anchored in
    the digital.

    Among the numerous digitalization projects currently under way, the most
    ambitious is that of Google Books, which, since its launch in 2004, has
    digitalized around 20 million books from the collections of large
    libraries and prepared them for full-text searches. Right from the
    start, a fierce debate arose about the legal and cultural acceptability
    of this project. One concern was whether Google\'s process infringed
    upon the rights of the authors and publishers of the scanned books or
    whether, according to American law, it qualified as "fair use," in which
    case there would be no obligation for the company to seek authorization
    or offer compensation. The second main concern was whether it would be
    culturally or politically appropriate for a private corporation to hold
    a de facto monopoly over the digital heritage of book culture. The first
    issue incited a complex legal battle that, in 2013, was decided in
    Google\'s favor by a judge on the United States District Court in New
    York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
    issue was the question of how a public library should look in the
    twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
    of 2008, the European Commission and the culture ministers of the
    European Union launched the virtual Europeana library, after a number
    of European countries had already invested hundreds of
    millions of euros in various digitalization
    initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
    serves as a common access point to the online archives of around 2,500
    European cultural institutions. By the end of 2015, its digital holdings
    had grown to include more than 40 million objects. This is still,
    however, a relatively small number, for it has been estimated that
    European archives and museums contain more than 220 million
    natural-historical and more than 260 million cultural-historical
    objects. In the United States, discussions about the future of libraries
    []{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
    Digital Public Library of America (DPLA), which, like Europeana,
    provides common access to the digitalized holdings of archives, museums,
    and libraries. By now, more than 14 million items can be viewed there.

    In one way or another, however, both the private and the public projects
    of this sort have been limited by binding copyright laws. The librarian
    and book historian Robert Darnton, one of the most prominent advocates
    of the Digital Public Library of America, has accordingly stated: "The
    main impediment to the DPLA\'s growth is legal, not financial. Copyright
    laws could exclude everything published after 1964, most works published
    after 1923, and some that go back as far as
    1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
    Europe is similar to that in the United States. It, too, massively
    obstructs the work of public
    institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
    has had the absurd consequence that certain materials, though they have
    been fully digitalized, may only be accessed in part or exclusively
    inside the facilities of a particular institution. Whereas companies
    such as Google can afford to wage long legal battles, and in the
    meantime create precedents, public institutions must proceed with great
    caution, not least to avoid the accusation of using public funds to
    violate copyright laws. Thus, they tend to fade into the background and
    leave users, who are unfamiliar with the complex legal situation, with
    the impression that they are even more out-of-date than they often are.

    Informal actors, who explicitly operate beyond the realm of copyright
    law, are not faced with such restrictions. UbuWeb, for instance, which
    is the largest online archive devoted to the history of
    twentieth-century avant-garde art, was not created by an art museum but
    rather by the initiative of an individual artist, Kenneth Goldsmith.
    Since 1996, he has been collecting historically relevant materials that
    were no longer in distribution and placing them online for free and
    without any stipulations. He forgoes the process of obtaining the rights
    to certain works of art because, as he remarks on the website, "Let\'s
    face it, if we had to get permission from everyone on UbuWeb, there
    would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
    simply be too demanding to do so. Because he pursues the project without
    any financial interest and has saved so much []{#Page_66
    type="pagebreak" title="66"}from oblivion, his efforts have provoked
    hardly any legal difficulties. On the contrary, UbuWeb has become so
    important that Goldsmith has begun to receive more and more material
    directly from artists and their heirs, who would like certain works not
    to be forgotten. Nevertheless, or perhaps for this very reason,
    Goldsmith repeatedly stresses the instability of his archive, which
    could disappear at any moment if he loses interest in maintaining it or
    if something else happens. Users are therefore able to download works
    from UbuWeb and archive, on their own, whatever items they find most
    important. Of course, this fragility contradicts the idea of an archive
    as a place for long-term preservation. Yet such a task could only be
    undertaken by an institution that is oriented toward the long term.
    Because of the existing legal conditions, however, it is hardly likely
    that such an institution will come about.

    Whereas Goldsmith is highly adept at operating within a niche that not
    only tolerates but also accepts the violation of formal copyright
    claims, large websites responsible for the uncontrolled dissemination of
    digital content do not bother with such niceties. Their purpose is
    rather to ensure that all popular content is made available digitally
    and for free, whether legally or not. These sites, too, have experienced
    uninterrupted growth. By the end of 2015, tens of millions of people
    were simultaneously using the BitTorrent tracker The Pirate Bay -- the
    largest nodal point for file-sharing networks during the last decade --
    to exchange several million digital files with one
    another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
    despite protracted attempts to block or close down the file-sharing site
    by legal means and despite a variety of competing services. Even when
    the founders of the website were sentenced in Sweden to pay large fines
    (around €3 million) and to serve time in prison, the site still did not
    disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
    same time, new providers have entered the market of free access; their
    method is not to facilitate distributed downloads but rather to offer,
    on account of the drastically reduced cost of data transfers, direct
    streaming. Although some of these services are relatively easy to locate
    and some have been legally banned -- the best-known case in Germany
    being that of the popular site kino.to -- more of them continue to
    appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
    []{#Page_67 type="pagebreak" title="67"}is not limited to music and
    films, but encompasses all media formats. For instance, it is
    foreseeable that the number of freely available plans for 3D objects
    will increase along with the popularity of 3D printing. It has almost
    escaped notice, however, that so-called "shadow libraries" have been
    popping up everywhere; these are accessible not to the public but
    rather to members, for instance, of closed exchange platforms or of
    university intranets. Few seminars take place any more without a corpus
    of scanned texts, regardless of whether this practice is legal or
    not.[^22^](#c2-note-0022){#c2-note-0022a}

    The lines between these different mechanisms of access are highly
    permeable. Content acquired legally can make its way to file-sharing
    networks as an illegal copy; content available for free can be sold in
    special editions; content from shadow libraries can make its way to
    publicly accessible sites; and, conversely, content that was once freely
    available can disappear into shadow libraries. As regards free access,
    the details of this rapidly changing landscape are almost
    inconsequential, for the general trend that has emerged from these
    various dynamics -- legal and illegal, public and private -- is
    unambiguous: in a comprehensive and practical sense, cultural works of
    all sorts will become freely available despite whatever legal and
    technical restrictions might be in place. Whether absolutely all
    material will be made available in this way is not the decisive factor,
    at least not for the individual, for, as the German Library Association
    has stated, "it is foreseeable that non-digitalized material will
    increasingly escape the awareness of users, who have understandably come
    to appreciate the ubiquitous availability and more convenient
    processability of the digital versions of analog
    objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
    information, it is difficult to determine whether a particular work or a
    crucial reference is missing, given that a multitude of other works and
    references can be found in their place.

    At the same time, prodigious amounts of new material are being produced
    that, before the era of digitalization and networks, never could have
    existed at all or never would have left the private sphere. An example
    of this is amateur photography. This is nothing new in itself; as early
    as 1899, Kodak was marketing its films and apparatus with the slogan
    "You press the button, we do the rest," and ever since, []{#Page_68
    type="pagebreak" title="68"}drawers and albums have been overflowing
    with photographs. With the advent of digitalization, however, certain
    economic and material limitations ceased to exist that, until then, had
    caused most private photographers to think twice about how many shots
    they wanted to take. After all, they had to pay for the film to be
    developed and then store the pictures somewhere. Cameras also became
    increasingly "intelligent," which improved the technical quality of
    photographs. Even complex procedures such as increasing the level of
    detail or the contrast ratio -- the difference between an image\'s
    brightest and darkest points -- no longer require any specialized
    knowledge of photochemical processes in the darkroom. Today, such
    features are often pre-installed in many cameras as an option (high
    dynamic range). Ever since the introduction of built-in digital cameras
    for smartphones, anyone with such a device can take pictures everywhere
    and at any time and then store them digitally. Images can then be posted
    on online platforms and shared with others. By the middle of 2015,
    Flickr -- the largest but certainly not the only specialized platform of
    this sort -- had more than 112 million registered users participating in
    more than 2 million groups. Every user has access to free storage space
    for about half a million of his or her own pictures. At that point, in
    other words, the platform was equipped to manage more than 55 trillion
    photographs (112 million users with space for roughly 500,000 pictures
    each). Around 3.5 million images were being uploaded every day,
    many of which could be accessed by anyone. This may seem like a lot, but
    in reality it is just a small portion of the pictures that are posted
    online on a daily basis. Around that same time -- again, the middle of
    2015 -- approximately 350 million pictures were being posted on Facebook
    *every day*. The total number of photographs saved there has been
    estimated to be 250 billion. In addition, there are also large platforms
    for professional "stock photos" (supplies of pre-produced images that
    are supposed to depict generic situations) and the databanks of
    professional agencies such as Getty Images or Corbis. All of these images
    can be found easily and acquired quickly (though not always for free).
    Yet photography is not unique in this regard. In all fields, the number
    of cultural artifacts available to the public on specialized platforms
    has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
    title="69"}
    :::

    ::: {.section}
    ### The great disorder {#c2-sec-0005}

    The old orders that had been responsible for filtering, organizing, and
    publishing cultural material -- culture industries, mass media,
    libraries, museums, archives, etc. -- are incapable of managing almost
    any aspect of this deluge. They can barely function as gatekeepers any
    more between those realms that, with their help, were once defined as
    "private" and "public." Their decisions about what is or is not
    important matter less and less. Moreover, having already been subjected
    to a decades-long critique, their rules, which had been relatively
    binding and formative over long periods of time, are rapidly losing
    practical significance.

    Even Europeana, a relatively small project based on traditional museums
    and archives and with a mandate to make the European cultural heritage
    available online, has contributed to the disintegration of established
    orders: it indiscriminately brings together 2,500 previously separated
    institutions. The specific semantic contexts that formerly shaped the
    history and orientation of institutions have been dissolved or reduced
    to dry meta-data, and millions upon millions of cultural artifacts are
    now equidistant from one another. Instead of certain artifacts being
    firmly anchored in a location, for instance in an ethnographic
    collection devoted to the colonial history of France, it is now possible
    for everything to exist side by side. Europeana is not an archive in the
    traditional sense, or even a museum with a fixed and meaningful order;
    rather, it is just a standard database. Everything in it is just one
    search request away, and every search generates a unique order in the
    form of a sequence of visible artifacts. As a result, individual objects
    are freed from those meta-narratives, created by the museums and
    archives that preserve them, which situate them within broader contexts
    and assign more or less clear meanings to them. They consequently become
    more open to interpretation. A search result does not articulate an
    interpretive field of reference but merely a connection, created by
    constantly changing search algorithms, between a request and the corpus
    of material, which is likewise constantly changing.
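
    The mechanics behind this can be illustrated with a minimal sketch.
    The following example (in Python, with a deliberately simplified,
    hypothetical metadata table -- Europeana\'s actual data model and API
    are far richer) shows how a flat database knows no fixed arrangement:
    the same records, queried twice, yield two different sequences of
    artifacts.

    ```python
    import sqlite3

    # A deliberately simplified stand-in for a cultural-heritage database:
    # every artifact is reduced to a few fields of dry meta-data.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE artifacts (title TEXT, institution TEXT, year INTEGER)")
    db.executemany(
        "INSERT INTO artifacts VALUES (?, ?, ?)",
        [
            ("Mask", "Ethnographic Collection", 1890),
            ("Letter", "National Archive", 1923),
            ("Etching", "Print Cabinet", 1890),
            ("Poster", "Design Museum", 1968),
        ],
    )

    # The table itself has no meaningful order; each search request
    # generates its own unique sequence of visible artifacts.
    by_year = db.execute(
        "SELECT title FROM artifacts ORDER BY year").fetchall()
    only_1890 = db.execute(
        "SELECT title FROM artifacts WHERE year = 1890 ORDER BY title").fetchall()

    print(by_year)    # one order
    print(only_1890)  # another order, from the same equidistant records
    ```

    Nothing in either result carries the institutional context that once
    framed these objects; the order exists only for the duration of the
    query.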

    Precisely because it offers so many different approaches to more or less
    freely combinable elements of information, []{#Page_70 type="pagebreak"
    title="70"}the order of the database no longer really provides a
    framework for interpreting search results in a meaningful way.
    Altogether, the meaning of many objects and signs is becoming even more
    uncertain. On the one hand, this is because the connection to their
    original context is becoming fragile; on the other hand, it is because
    they can appear in every possible combination and in the greatest
    variety of reception contexts. In less official archives and in less
    specialized search engines, the dissolution of context is far more
    pronounced than it is in the case of the Europeana project. For the sake
    of orienting its users, for instance, YouTube provides the date when a
    video has been posted, but there is no indication of when a video was
    actually produced. Further information provided about a video, for
    example in the comments section, is essentially unreliable. It might be
    true -- or it might not. The internet researcher David Weinberger has
    called this the "new digital disorder," which, at least for many users,
    is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
    individuals, this disorder has created both the freedom to establish
    their own orders and the obligation of doing so, regardless of whether
    or not they are ready for the task.

    This tension between freedom and obligation is at its strongest online,
    where the excess of culture and its more or less free availability are
    immediate and omnipresent. In fact, everything that can be retrieved
    online is culture in the sense that everything -- from the deepest layer
    of hardware to the most superficial tweet -- has been made by someone
    with a particular intention, and everything has been made to fit a
    particular order. And it is precisely this excess of often contradictory
    meanings and limited, regional, and incompatible orders that leads to
    disorder and meaninglessness. This is not limited to the online world,
    however, because the latter is not self-contained. In an essential way,
    digital media also serve to organize the material world. On the basis of
    extremely complex and opaque yet highly efficient logistical and
    production processes, people are also confronted with constantly
    changing material things about whose origins and meanings they have
    little idea. Even something as simple to produce as yoghurt has usually
    traveled a thousand kilometers before it ends up on a shelf in the
    supermarket. The logistics that enable this are oriented toward
    flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
    together as efficiently as possible. It is nearly impossible for final
    customers to find out anything about the ingredients. Customers are
    merely supposed to be oriented by signs and notices such as "new" or "as
    before," "natural," and "healthy," which are written by specialists and
    meant to manipulate shoppers as much as the law allows. Even here, in
    corporeal everyday life, every individual has to deal with a surge of
    excess and disorder that threatens to erode the original meaning
    conferred on every object -- even where such meaning was once entirely
    unproblematic, as in the case of
    yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
    :::

    ::: {.section}
    ### Selecting and organizing {#c2-sec-0006}

    In this situation, the creation of one\'s own system of references has
    become a ubiquitous and generally accessible method for organizing all
    of the ambivalent things that one encounters on a given day. Such things
    are thus arranged within a specific context of meaning that also
    (co)determines one\'s own relation to the world and subjective position
    in it. Referentiality takes place through three types of activity, the
    first being simply to attract attention to certain things, which affirms
    (at least implicitly) that they are important. With every single picture
    posted on Flickr, every tweet, every blog post, every forum post, and
    every status update, the user is doing exactly that; he or she is
    communicating to others: "Look over here! I think this is important!" Of
    course, there is nothing new to filtering and allocating meaning. What
    is new, however, is that these processes are no longer being carried out
    primarily by specialists at editorial offices, museums, or archives, but
    have become daily requirements for a large portion of the population,
    regardless of whether they possess the material and cultural resources
    that are necessary for the task.
    :::

    ::: {.section}
    ### The loop through the body {#c2-sec-0007}

    Given the flood of information that perpetually surrounds everyone, the
    act of focusing attention and reducing vast numbers of possibilities
    into something concrete has become a productive achievement, however
    banal each of these micro-activities might seem on its own, and even if,
    at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
    be to focus the attention of the person doing it. The value of this
    (often very brief) activity is that it singles out elements from the
    uniform sludge of unmanageable complexity. Something plucked out in this
    way gains value because it has required the use of a resource that
    cannot be reproduced, that exists outside of the world of information
    and that is invariably limited for every individual: our own lifetime.
    Every status update that is not machine-generated means that someone has
    invested time, be it only a second, in order to point to this and not to
    something else. Thus, a process of validating what exists in the excess
    takes place in connection with the ultimate scarcity -- our own
    lifetimes, our own bodies. Even if the value generated by this act is
    minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
    famous definition of information -- a difference that makes a difference
    in this stream of equivalencies and
    meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
    -- this use of one\'s own body to generate meaning -- does not, however,
    take place by means of mere micro-activities throughout the day; it is
    also a defining aspect of complex cultural strategies. In recent years,
    re-enactment (that is, the re-staging of historical situations and
    events) has established itself as a common practice in contemporary art.
    Unlike traditional re-enactments, such as those of historically
    significant battles, which attempt to represent the past as faithfully
    as possible, "artistic re-enactments," according to the curator Inke
    Arns, "are not an affirmative confirmation of the past; rather, they are
    *questionings* of the present through reaching back to historical
    events," especially as they are represented in images and other forms of
    documentation. Thanks to search engines and databases, such
    representations are more or less always present, though in the form of
    indeterminate images, ambivalent documents, and contentious
    interpretations. Artists in this situation, as Arns explains,

    ::: {.extract}
    do not ask the naïve question about what really happened outside of the
    history represented in the media -- the "authenticity" beyond the images
    -- instead, they ask what the images we see might mean concretely to us,
    if we were to experience these situations personally. In this way the
    artistic reenactment confronts the general feeling of insecurity about
    the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
    paradoxical approach: through erasing distance to the images and at the
    same time distancing itself from the
    images.[^27^](#c2-note-0027){#c2-note-0027a}
    :::

    This paradox manifests itself in that the images are appropriated and
    sublated through the use of one\'s own body in the re-enactments. They
    simultaneously refer to the past and create a new reality in the
    present. In perhaps the best-known re-enactment of this type, the artist
    Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
    central episodes of the British miners\' strike of 1984 and 1985. This
    historical event is regarded as a turning point in the protracted
    conflict between Margaret Thatcher\'s government and the labor unions --
    a key moment in the implementation of Great Britain\'s neoliberal
    regime, which is still in effect today. In Deller\'s re-enactment, the
    heart of the matter is not historical accuracy, which is always
    controversial in such epoch-changing events. Rather, he focuses on the
    former participants -- the miners and police officers alike, who, along
    with non-professional actors, lived through the situation again -- in
    order to explore both the distance from the events and their
    representation in the media, as well as their ongoing biographical and
    societal presence.[^28^](#c2-note-0028){#c2-note-0028a}

    Elaborate practices of embodying medial images through processes of
    appropriation and distancing have also found their way into popular
    culture, for instance in so-called "cosplay." The term, which is a
    contraction of the words "costume" and "play," was coined by a Japanese
    man named Nobuyuki Takahashi. In 1984, while attending the World Science
    Fiction Convention in Los Angeles, he used the word to describe the
    practice of certain attendees of dressing up as their favorite characters.
    Participants in cosplay embody fictitious figures -- mostly from the
    worlds of science fiction, comics/manga, or computer games -- by donning
    home-made costumes and striking characteristic
    poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
    effort that goes into this is mostly reflected in the costumes, not in
    the choreography or dramaturgy of the performance. What is significant
    is that these costumes are usually not exact replicas but are rather
    freely adapted by each player to represent the character as he or she
    interprets it to be. Accordingly, "Cosplay is a form of appropriation
    []{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
    performs an existing story in close connection to the fan\'s own
    identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
    admittedly, goes back quite far in the history of fan culture, but it
    has experienced a striking surge through the opportunity for fans to
    network with one another around the world, to produce costumes and
    images of professional quality, and to place themselves on the same
    level as their (fictitious) idols. By now it has become a global
    subculture whose members are active not only online but also at hundreds
    of conventions throughout the world. In Germany, an annual cosplay
    competition has been held since 2007 (it is organized by the Frankfurt
    Book Fair and Animexx, the country\'s largest manga and anime
    community). The scene, which has grown and branched out considerably
    over the past few years, has slowly begun to professionalize, with
    shops, books, and players who make paid appearances. Even in fan
    culture, stars are born. As soon as the subculture has exceeded a
    certain size, this gradual onset of commercialization will undoubtedly
    lead to tensions within the community. For now, however, two of its
    noteworthy features remain: the power of the desire to appropriate, in a
    bodily manner, characters from vast cultural universes, and the
    widespread combination of free interpretation and meticulous attention
    to detail.
    :::

    ::: {.section}
    ### Lineages and transformations {#c2-sec-0008}

    Because of the great effort that they require, re-enactment and cosplay
    are somewhat extreme examples of singling out, appropriating, and
    referencing. As everyday activities that take place almost incidentally,
    however, these three practices usually do not make any significant or
    lasting differences. Yet they do not happen just once, but over and over
    again. They accumulate and thus constitute referentiality\'s second type
    of activity: the creation of connections between the many things that
    have attracted attention. In such a way, paths are forged through the
    vast complexity. These paths, which can be formed, for instance, by
    referring to different things one after another, likewise serve to
    produce and filter meaning. Things that can potentially belong in
    multiple contexts are brought into a single, specific context. For the
    individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
    fields of attention, reference systems, and contexts of meaning are
    first established. In the third step, the things that have been selected
    and brought together are changed. Perhaps something is removed to modify
    the meaning, or perhaps something is added that was previously absent or
    unavailable. Either way, referential culture is always producing
    something new.

    These processes are applied both within individual works (referentiality
    in a strict sense) and within currents of communication that consist of
    numerous molecular acts (referentiality in a broader sense). This latter
    sort of compilation is far more widespread than the creation of new
    re-mix works. Consider, for example, the billionfold sequences of status
    updates, which sometimes involve a link to an interesting video,
    sometimes a post of a photograph, then a short list of favorite songs, a
    top 10 chart from one\'s own feed, or anything else. Such methods of
    inscribing oneself into the world by means of references, combinations,
    or alterations are used to create meaning through one\'s own activity in
    the world and to constitute oneself in it, both for one\'s self and for
    others. In a culture that manifests itself to a great extent through
    mediatized communication, people have to constitute themselves through
    such acts, if only by posting
    "selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
    risk invisibility and being forgotten.

    On this basis, a genuine digital folk culture of re-mixing and mashups
    has formed in recent years on online platforms, in game worlds, but also
    through cultural-economic productions of individual pieces or short
    series. It is generated and maintained by innumerable people with
    varying degrees of intensity and ambition. Its common feature with
    traditional folk culture, in choirs or elsewhere, is that production
    and reception (but also reproduction and creation) largely coincide.
    Active participation admittedly requires a certain degree of
    proficiency, interest, and engagement, but usually not any extraordinary
    talent. Many classical institutions such as museums and archives have
    been attempting to take part in this folk culture by setting up their
    own re-mix services. They know that the "public" is no longer able or
    willing to limit its engagement with works of art and cultural history
    to one of quiet contemplation. At the end of 2013, even []{#Page_76
    type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
    initiated a re-mix competition. A year earlier, the Rijksmuseum in
    Amsterdam launched so-called "Rijksstudios." Since then, the museum has
    made available on its website more than 200,000 high-resolution images
    from its collection. Users are free to use these to create their own
    re-mixes online and share them with others. Interestingly, the
    Rijksmuseum does not distinguish between the work involved in
    transforming existing pieces and that involved in curating its own
    online gallery.

    Referential processes have no beginning and no end. Any material that is
    used to make something new has a pre-history of its own, even if its
    traces are lost in clouds of uncertainty. Upon closer inspection, this
    cloud might clear a little bit, but it is extremely uncommon for a
    genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
    raises the question of whether there can really be something like
    originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
    Regardless of the answer to this question, the fact that by now many
    people select, combine, and alter objects on a daily basis has led to a
    slow shift in our perception and sensibilities. In light of the
    experiences that so many people are creating, the formerly exotic
    theories of deconstruction suddenly seem anything but outlandish. Nearly
    half a century ago, Roland Barthes defined the text as a fabric of
    quotations, and this incited vehement
    opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
    would be inclined to say today, "that can be statistically proven
    through software analysis!" Amazon identifies books by means of their
    "statistically improbable phrases"; that is, by means of textual
    elements that are highly unlikely to occur elsewhere. This implies, of
    course, that books contain many textual elements that are highly likely
    to be found in other texts, without suggesting that such elements would
    have to be regarded as plagiarism.
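
    Amazon has never published how these phrases are identified, but the
    underlying idea is easy to approximate. The following sketch uses a
    hypothetical scoring of my own devising (not Amazon\'s actual
    algorithm): a phrase counts as improbable when it occurs in a given
    book far more often than its frequency in a reference corpus would
    predict.

    ```python
    from collections import Counter

    def ngrams(words, n):
        """All contiguous n-word sequences in a list of words."""
        return zip(*(words[i:] for i in range(n)))

    def improbable_phrases(book_text, corpus_text, n=3, top=5):
        """Rank the phrases of a book by how over-represented they are
        relative to a reference corpus, with add-one smoothing for
        phrases the corpus has never seen."""
        book = book_text.lower().split()
        corpus = corpus_text.lower().split()
        book_counts = Counter(ngrams(book, n))
        corpus_counts = Counter(ngrams(corpus, n))
        book_total = max(1, len(book) - n + 1)
        corpus_total = max(1, len(corpus) - n + 1)
        score = {
            phrase: (count / book_total)
            / ((corpus_counts[phrase] + 1) / corpus_total)
            for phrase, count in book_counts.items()
        }
        return sorted(score, key=score.get, reverse=True)[:top]
    ```

    Most of a book\'s word sequences score low because they are just as
    common everywhere else -- the statistical shadow, as it were, of
    Barthes\'s fabric of quotations; only a handful stand out as the
    book\'s own.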

    In the Gutenberg Galaxy, with its fixation on writing, the earliest
    textual document is usually understood to represent a beginning. If no
    references to anything before can be identified, the text is then
    interpreted as a closed entity, as a new text. Thus, fairy tales and
    sagas, which are typical elements of oral culture, are still more
    strongly associated with the names of those who recorded them than with
    the names of those who narrated them. This does not seem very convincing
    today. In recent years, literary historians have made strong []{#Page_77
    type="pagebreak" title="77"}efforts to shift the focus of attention to
    the people (mostly women) who actually told certain fairy tales. In
    doing so, they have been able to work out to what extent the respective
    narrators gave shape to specific stories, which were written down as
    common versions, and to what extent these stories reflect their
    narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}

    Today, after more than 40 years of deconstructionist theory and a change
    in our everyday practices, it is no longer controversial to read works
    -- even by canonical figures like Wagner or Mozart -- in such a way as
    to highlight the other works, either by the artists in question or by
    other artists, that are contained within
    them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
    decreased appreciation but rather an indication that, as Zygmunt Bauman
    has stressed, "The way human beings understand the world tends to be at
    all times *praxeomorphic*: it is always shaped by the know-how of the
    day, by what people can do and how they usually go about doing
    it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
    today is one of singling out, bringing together, altering, and adding.
    Accordingly, not only has our view of current cultural production
    shifted; our view of cultural history has shifted as well. As always,
    the past is made to suit the sensibilities of the present.

    As a rule, however, things that have no beginning also have no end. This
    is not only because they can in turn serve as elements for other new
    contexts of meaning, but also because the attention paid to the context
    in which they take on specific meaning is sensitive to the work that has
    to be done to maintain the context itself. Even timelessness is an
    elaborate everyday business. The attempt to rescue works of art from the
    ravages of time -- to preserve them forever -- means that they regularly
    need to be restored. Every restoration inevitably stirs a debate about
    whether the planned interventions are appropriate and about how to deal
    with the traces of previous interventions, which, from the current
    perspective, often seem to be highly problematic. Whereas, just a
    generation ago, preservationists ensured that such interventions
    remained visible (as articulations of the historical fissures that are
    typical of Modernity), today greater emphasis is placed on reducing
    their visibility and re-creating the illusion of an "original condition"
    (without, however, impeding any new functionality that a piece might
    have in the present). []{#Page_78 type="pagebreak" title="78"}The
    historically faithful restoration of the Berlin City Palace, combined
    with its repurposed function as a museum and meeting place, is typical
    of this new attitude in dealing with our historical heritage.

    In everyday activity, too, the never-ending necessity of this work can
    be felt at all times. Here the issue is not timelessness, but rather
    that the established contexts of meaning quickly become obsolete and
    therefore have to be continuously affirmed, expanded, and changed in
    order to maintain the relevance of the field that they define. This
    lends referentiality a performative character that combines productive
    and reproductive dimensions. That which is not constantly used and
    renewed simply disappears. Often, however, this only means that it will
    sink into an endless archive and become unrealized potential until
    someone reactivates it, breathes new life into it, rouses it from its
    slumber, and incorporates it into a newly relevant context of meaning.
    "To be relevant," according to the artist Eran Schaerf, "things must be
    recyclable."[^37^](#c2-note-0037){#c2-note-0037a}

    Alone, everyone is overwhelmed by the task of having to generate meaning
    against this backdrop of all-encompassing meaninglessness. First, the
    challenge is too great for any individual to overcome; second, meaning
    itself is only created intersubjectively. While it can admittedly be
    asserted by a single person, others have to confirm it before it can
    become a part of culture. For this reason, the actual subject of
    cultural production under the digital condition is not the individual
    but rather the next-largest unit.
    :::
    :::

    ::: {.section}
    Communality {#c2-sec-0009}
    -----------

    As an individual, it is impossible to orient oneself within a complex
    environment. Meaning -- as well as the ability to act -- can only be
    created, reinforced, and altered in exchange with others. This is
    nothing noteworthy; biologically and culturally, people are social
    beings. What has changed historically is how people are integrated into
    larger contexts, how processes of exchange are organized, and what every
    individual is expected to do in order to become a fully fledged
    participant in these processes. For nearly 50 years, traditional
    []{#Page_79 type="pagebreak" title="79"}institutions -- that is,
    hierarchically and bureaucratically organized civic institutions such
    as established churches, labor unions, and political parties -- have
    continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
    In tandem with this, the overall commitment to the identities, family
    values, and lifestyles promoted by these institutions has likewise been
    in decline. The great mechanisms of socialization from the late stages
    of the Gutenberg Galaxy have been losing more and more of their
    influence, though at different speeds and to different extents. All
    told, however, explicitly and collectively normative impulses are
    decreasing, while others (implicitly economic, above all) are on the
    rise. According to mainstream sociology, a cause or consequence of this
    is the individualization and atomization of society. As early as the
    middle of the 1980s, Ulrich Beck claimed: "In the individualized society
    the individual must therefore learn, on pain of permanent disadvantage,
    to conceive of himself or herself as the center of action, as the
    planning office with respect to his/her own biography, abilities,
    orientations, relationships and so
    on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
    the dominant neoliberal political orientation, with its strong stress on
    the freedom of the individual -- to realize oneself as an individual
    actor in the allegedly open market and in opposition to allegedly
    domineering collective mechanisms -- has radicalized these tendencies
    even further. The ability to act, however, is not only a question of
    one\'s personal attitude but also of material resources. And it is this
    same neoliberal politics that deprives so many people of the resources
    needed to take advantage of these new freedoms in their own lives. As a
    result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

    Under the digital condition, this process has permeated the finest
    structures of social life. Individualization, commercialization, and the
    production of differences (through design, for instance) are ubiquitous.
    Established civic institutions are not alone in being hollowed out;
    relatively new collectives are also becoming more differentiated, a
    development that I outlined above with reference to the transformation
    of the gay movement into the LGBT community. Yet nevertheless, or
    perhaps for this very reason, new forms of communality are being formed
    in these offshoots -- in the small activities of everyday life. And
    these new communal formations -- rather []{#Page_80 type="pagebreak"
    title="80"}than individual people -- are the actual subjects who create
    the shared meaning that we call culture.

    ::: {.section}
    ### The problem of the "community" {#c2-sec-0010}

    I have chosen the rather cumbersome expression "communal formation" in
    order to avoid the term "community" (*Gemeinschaft*), although the
    latter is used increasingly often in discussions of digital cultures and
    has played an important role, from the beginning, in conceptions of
    networking. Viewed analytically, however, "community" is a problematic
    term because it is almost hopelessly overloaded. Particularly in the
    German-speaking tradition, Ferdinand Tönnies\'s polar distinction
    between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
    which he introduced in 1887, remains
    influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted two
    fundamentally different and exclusive types of social relations. Whereas
    community is characterized by the overlapping multidimensional nature of
    social relationships, society is defined by the functional separation of
    its sectors and spheres. Community embeds every individual into complex
    social relationships, all of which tend to be simultaneously present. In
    the traditional village community ("communities of place," in Tönnies\'s
    terms), neighbors are involved with one another, for better or for
    worse, both on a familiar basis and economically or religiously. Every
    activity takes place on several different levels at the same time.
    Communities are comprehensive social institutions that penetrate all
    areas of life, endowing them with meaning. Through mutual dependency,
    they create stability and security, but they also obstruct change and
    hinder social mobility. Because everyone is connected with each other,
    no one can leave his or her place without calling into question the
    arrangement as a whole. Communities are thus structurally conservative.
    Because every human activity is embedded in multifaceted social
    relationships, every change requires adjustments across the entire
    interrelational web -- a task that is not easy to accomplish.
    Accordingly, the traditional communities of the eighteenth and
    nineteenth centuries fiercely opposed the establishment of capitalist
    society. In order to impose the latter, the old community structures
    were broken apart with considerable violence. This is what Marx
    []{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
    that famous passage from *The Communist Manifesto*: "All the settled,
    age-old relations with their train of time-honoured preconceptions and
    viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
    smoke, everything sacred is
    profaned."[^41^](#c2-note-0041){#c2-note-0041a}

    The defining feature of society, on the contrary, is that it frees the
    individual from such multifarious relationships. Society, according to
    Tönnies, separates its members from one another. Although they
    coordinate their activity with others, they do so in order to pursue
    partial, short-term, and personal goals. Not only are people separated,
    but so too are different areas of life. In a market-oriented society,
    for instance, the economy is conceptualized as an independent sphere. It
    can therefore break away from social connections to be organized simply
    by limited formal or legal obligations between actors who, beyond these
    obligations, have nothing else to do with one another. Costs or benefits
    that inadvertently affect people who are uninvolved in a given market
    transaction are referred to by economists as "externalities," and market
    participants do not need to care about these because they are strictly
    pursuing their own private interests. One of the consequences of this
    form of social relationship is a heightened social dynamic, for now it
    is possible to introduce changes into one area of life without
    considering its effects on other areas. In the end, the dissolution of
    mutual obligations, increased uncertainty, and the reduction of many
    social connections go hand in hand with what Marx and Engels referred to
    in *The Communist Manifesto* as "unfeeling hard cash."

    From this perspective, the historical development looks like an
    ambivalent process of modernization in which society (dynamic, but cold)
    is erected over the ruins of community (static, but warm). This is an
    unusual combination of romanticism and progress-oriented thinking, and
    the problems with this influential perspective are numerous. There is,
    first, the matter of its dichotomy; that is, its assumption that there
    can only be these two types of arrangement, community and society. Or
    there is the notion that the one form can be completely ousted by the
    other, even though aspects of community and aspects of society exist at
    the same time in specific historical situations, be it in harmony or in
    conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
    type="pagebreak" title="82"}These impressions, however, which are so
    firmly associated with the German concept of *Gemeinschaft*, make it
    rather difficult to comprehend the new forms of communality that have
    developed in the offshoots of networked life. This is because, at least
    for now, these latter forms do not represent a genuine alternative to
    societal types of social
    connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
    "community" is somewhat more open. The opposition between community and
    society resonates with it as well, although the dichotomy is not as
    clear-cut. American communitarianism, for instance, considers the
    difference between community and society to be gradual and not
    categorical. Its primary aim is to strengthen civic institutions and
    mechanisms, and it regards community as an intermediary level between
    the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
    there is a related English term, which seems even more productive for my
    purposes, namely "community of practice," a concept that is more firmly
    grounded in the empirical observation of concrete social relationships.
    The term was introduced at the beginning of the 1990s by the social
    researchers Jean Lave and Étienne Wenger. They observed that, in most
    cases, professional learning (for instance, in their case study of
    midwives) does not take place as a one-sided transfer of knowledge or
    proficiency, but rather as an open exchange, often outside of the formal
    learning environment, between people with different levels of knowledge
    and experience. In this sense, learning is an activity that, though
    distinguishable, cannot easily be separated from other "normal"
    activities of everyday life. As Lave and Wenger stress, however, the
    community of practice is not only a social space of exchange; it is
    rather, and much more fundamentally, "an intrinsic condition for the
    existence of knowledge, not least because it provides the interpretive
    support necessary for making sense of its
    heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
    are thus always epistemic communities that form around certain ways of
    looking at the world and one\'s own activity in it. What constitutes a
    community of practice is thus the joint acquisition, development, and
    preservation of a specific field of practice that contains abstract
    knowledge, concrete proficiencies, the necessary material and social
    resources, guidelines, expectations, and room to interpret one\'s own
    activity. All members are active participants in the constitution of
    this field, and this reinforces the stress on []{#Page_83
    type="pagebreak" title="83"}practice. Each of them, however, brings
    along different presuppositions and experiences, for they are embedded
    within numerous and specific situations of life or work.
    The processes within the community are mostly informal, and yet they are
    thoroughly structured, for authority is distributed unequally and is
    based on the extent to which the members value each other\'s (and their
    own) levels of knowledge and experience. At first glance, then, the term
    "community of practice" seems apt to describe the meaning-generating
    communal formations that are at issue here. It is also somewhat
    problematic, however, because, having since been subordinated to
    management strategies, its use is now narrowly applied to professional
    learning and managing knowledge.[^46^](#c2-note-0046){#c2-note-0046a}

    From these various notions of community, it is possible to develop the
    following way of looking at new types of communality: they are formed in
    a field of practice, characterized by informal yet structured exchange,
    focused on the generation of new ways of knowing and acting, and
    maintained through the reflexive interpretation of their own activity.
    This last point in particular -- the communal creation, preservation,
    and alteration of the interpretive framework in which actions,
    processes, and objects acquire a firm meaning and connection -- can be
    seen as the central role of communal formations.

    Communication is especially significant to them. Individuals must
    continuously communicate in order to constitute themselves within the
    fields and practices, or else they will remain invisible. The mass of
    tweets, updates, emails, blogs, shared pictures, texts, posts on
    collaborative platforms, and databases (etc.) that are necessary for
    this can only be produced and processed by means of digital
    technologies. In this act of incessant communication, which is a
    constitutive element of social existence, the personal desire for
    self-constitution and orientation becomes enmeshed with the outward
    pressure of having to be present and available to form a new and binding
    set of requirements. This relation between inward motivation and outward
    pressure can vary highly, depending on the character of the communal
    formation and the position of the individual within it (although it is
    not the individual who determines what successful communication is, what
    represents a contribution to the communal formation, or in which form
    one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
    decisions are made by other members of the formation in the form of
    positive or negative feedback (or none at all), and they are made with
    recourse to the interpretive framework that has been developed in
    common. These communal and continuous acts of learning, practicing, and
    orientation -- the exchange, that is, between "novices" and "experts" in
    the same field, be it concerned with internet politics, illegal street
    racing, extreme right-wing music, body modification, or a free
    encyclopedia -- serve to maintain the framework of shared meaning,
    expand the constituted field, recruit new members, and adapt the
    framework of interpretation and activity to changing conditions. Such
    communal formations constitute themselves; they preserve and modify
    themselves by constantly working out the foundations of their
    constitution. This may sound circular, for the process of reflexive
    self-constitution -- "autopoiesis" in the language of systems theory --
    is circular in the sense that control is maintained through continuous,
    self-generating feedback. Self-referentiality is a structural feature of
    these formations.
    :::

    ::: {.section}
    ### Singularity and communality {#c2-sec-0011}

    The new communal formations are informal forms of organization that are
    based on voluntary action. No one is born into them, and no one
    possesses the authority to force anyone else to join or remain against
    his or her will, or to assign anyone with tasks that he or she might be
    unwilling to do. Such a formation is not an enclosed disciplinary
    institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
    and, within it, power is not exercised through commands, as in the
    classical sense formulated by Max
    Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
    locked up and not being subordinated can, at least at first, represent
    for the individual a gain in freedom. Under a given set of conditions,
    everyone can (and must) choose which formations to participate in, and
    he or she, in doing so, will have a better or worse chance to influence
    the communal field of reference.

    On the everyday level of communicative self-constitution and creating a
    personal cognitive horizon -- in innumerable streams, updates, and
    timelines on social mass media -- the most important resource is the
    attention of others; that is, their feedback and the mutual recognition
    that results from it. []{#Page_85 type="pagebreak" title="85"}And this
    recognition may simply be in the form of a quickly clicked "like," which
    is the smallest unit that can assure the sender that, somewhere out
    there, there is a receiver. Without the latter, communication has no
    meaning. The situation is somewhat menacing if no one clicks the "like"
    button beneath a post or a photo. It is a sign that communication has
    broken down, and the result is the dissolution of one\'s own communicatively
    constituted social existence. In this context, the boundaries are
    blurred between the categories of information, communication, and
    activity. Making information available always involves the active --
    that is, communicating -- person, and not only in the case of ubiquitous
    selfies, for in an overwhelming and chaotic environment, as discussed
    above, selection itself is of such central importance that the
    differences between the selected and the selecting become fluid,
    particularly when the goal of the latter is to experience confirmation
    from others. In this back-and-forth between one\'s own presence and the
    validation of others, one\'s own motives and those of the community are
    not in opposition but rather mutually depend on one another. Condensed
    to simple norms and to a basic set of guidelines within the context of
    an image-oriented social mass media service, the rule (or better:
    friendly tip) that one need not but probably ought to follow is this:

    ::: {.extract}
    Be an active member of the Instagram community to receive likes and
    comments. Take time to comment on a friend\'s photo, or to like photos.
    If you do this, others will reciprocate. If you never acknowledge your
    followers\' photos, then they won\'t acknowledge
    you.[^49^](#c2-note-0049){#c2-note-0049a}
    :::

    The context of this widespread and highly conventional piece of advice
    is not, for instance, a professional marketing campaign; it is simply
    about personally positioning oneself within a social network. The goal
    is to establish one\'s own, singular, identity. The process required to
    do so is not primarily inward-oriented; it is not based on questions
    such as: "Who am I really, apart from external influences?" It is rather
    outward-oriented. It takes place through making connections with others
    and is concerned with questions such as: "Who is in my network, and what
    is my position within it?" It is []{#Page_86 type="pagebreak"
    title="86"}revealing that none of the tips in the collection cited above
    offers advice about achieving success within a community of
    photographers; there are no suggestions, for instance, about how to
    take high-quality photographs. With smart cameras and built-in filters
    for post-production, this is not especially challenging any more,
    especially because individual pictures, to be examined closely and on
    their own terms, have become less important gauges of value than streams
    of images that are meant to be quickly scrolled through. Moreover, the
    function of the critic, who once monopolized the right to interpret and
    evaluate an image for everyone, is no longer of much significance.
    Instead, the quality of a picture is primarily judged according to
    whether "others like it"; that is, according to its performance in the
    ongoing popularity contest within a specific niche. But users do not
    rely on communal formations and the feedback they provide just for the
    sharing and evaluation of pictures. Rather, this dynamic has come to
    determine more and more facets of life. Users experience the
    constitution of singularity and communality, in which a person can be
    perceived as such, as simultaneous and reciprocal processes. A million
    times over and nearly subconsciously (because it is so commonplace),
    they engage in a relationship between the individual and others that no
    longer really corresponds to the liberal opposition between
    individuality and society, between personal and group identity. Instead
    of viewing themselves as exclusive entities (either in terms of the
    emphatic affirmation of individuality or its dissolution within a
    homogeneous group), the new formations require that the production of
    difference and commonality takes place
    simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
    :::

    ::: {.section}
    ### Authenticity and subjectivity {#c2-sec-0012}

    Because members have decided to participate voluntarily in the
    community, their expressions and actions are regarded as authentic, for
    it is implicitly assumed that, in making these gestures, they are not
    following anyone else\'s instructions but rather their own motivations.
    The individual does not act as a representative or functionary of an
    organization but rather as a private and singular (that is, unique)
    person. At a gathering of the Occupy movement, a sure way to be
    kicked out is to stick stubbornly to a party line, even if this way
    []{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
    with that of the movement. Not only at Occupy gatherings, however, but
    in all new communal formations it is expected that everyone there is
    representing his or her own interests. As most people are aware, this
    assumption is theoretically naïve and often proves to be false in
    practice. Even spontaneity can be calculated, and in many cases it is.
    Nevertheless, the expectation of authenticity is relevant because it
    creates a minimum of trust. As the basis of social trust, such
    contra-factual expectations exist elsewhere as well. Critical readers of
    newspapers, for instance, must assume that what they are reading has
    been well researched and is presented as objectively as possible, even
    though they know that objectivity is theoretically a highly problematic
    concept -- to this extent, postmodern theory has become common knowledge
    -- and that newspapers often pursue (hidden) interests or lead
    campaigns. Yet without such contra-factual assumptions, the respective
    orders of knowledge and communication would not function, for they
    provide the normative framework within which deviations can be
    perceived, criticized, and sanctioned.

    In a seemingly traditional manner, the "authentic self" is formulated
    with reference to one\'s inner world, for instance to personal
    knowledge, interests, or desires. As the core of personality, however,
    this inner world no longer represents an immutable and essential
    characteristic but rather a temporary position. Today, even someone\'s
    radical reinvention can be regarded as authentic. This is the central
    difference from the classical, bourgeois conception of the subject. The
    self is no longer understood in essentialist terms but rather
    performatively. Accordingly, the main demand on the individual who
    voluntarily opts to participate in a communal formation is no longer to
    be self-aware but rather to be
    self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
    any more for one\'s core self to be coherent. It is not a contradiction
    to appear in various communal formations, each different from the next,
    as a different "I myself," for every formation is comprehensive, in that
    it appeals to the whole person, and simultaneously partial, in that it
    is oriented toward a particular goal and not toward all areas of life.
    As in the case of re-mixes and other referential processes, the concern
    here is not to preserve authenticity but rather to create it in the
    moment. The success or failure []{#Page_88 type="pagebreak"
    title="88"}of these efforts is determined by the continuous feedback of
    others -- one like after another.

    These practices have led to a modified form of subject constitution for
    which some sociologists, engaged in empirical research, have introduced
    the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
    The idea is based on the observation that people in Western societies
    (the case studies were mostly in North America) are defining their
    identity less and less by their family, profession, or other stable
    collective, but rather increasingly in terms of their personal social
    networks; that is, according to the communal formations in which they
    are active as individuals and in which they are perceived as singular
    people. In this regard, individualization and atomization no longer
    necessarily go hand in hand. On the contrary, the intertwined nature of
    personal identity and communality can be experienced on an everyday
    level, given that both are continuously created, adapted, and affirmed
    by means of personal communication. This makes the networks in question
    simultaneously fragile and stable. Fragile because they require the
    ongoing presence of every individual and because communication can break
    down quickly. Stable because the networks of relationships that can
    support a single person -- as regards the number of those included,
    their geographical distribution, and the duration of their cohesion --
    have expanded enormously by means of digital communication technologies.

    Here the issue is not that of close friendships, whose number remains
    relatively constant for most people and over long periods of
    time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
    ties"; that is, more or less loose acquaintances that can be tapped for
    new information and resources that do not exist within one\'s close
    circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
    are expanded, the more sustainable and valuable these networks become,
    for they bring together a large number of people and thus multiply the
    material and organizational resources that are (potentially) accessible
    to the individual. It is impossible to make a sweeping statement as to
    whether these formations actually represent communities in a
    comprehensive sense and how stable they really are, especially in times
    of crisis, for this is something that can only be found out on a
    case-by-case basis. It is relevant that the development of personal
    networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
    a vacuum. The disintegration of institutions that were formerly
    influential in the formation of identity and meaning began long before
    the large-scale spread of networks. For most people, there is no other
    choice but to attempt to orient and organize oneself, regardless of how
    provisional or uncertain this may be. Or, as Manuel Castells somewhat
    melodramatically put it, "At the turn of the millennium, the king and
    the queen, the state and civil society, are both naked, and their
    children-citizens are wandering around a variety of foster
    homes."[^55^](#c2-note-0055){#c2-note-0055a}
    :::

    ::: {.section}
    ### Space and time as a communal practice {#c2-sec-0013}

    Although participation in a communal formation is voluntary, it is not
    unselfish. Quite the contrary: an important motivation is to gain access
    to a formation\'s constitutive field of practice and to the resources
    associated with it. A communal formation ultimately does more than
    simply steer the attention of its members toward one another. Through
    the common production of culture, it also structures how the members
    perceive the world and how they are able to design themselves and their
    potential actions in it. It is thus a cooperative mechanism of
    filtering, interpretation, and constitution. Through the everyday
    referential work of its members, the community selects a manageable
    amount of information from the excess of potentially available
    information and brings it into a meaningful context, whereby it
    validates the selection itself and orients the activity of each of its
    members.

    The new communal formations consist of self-referential worlds whose
    constructive common practice affects the foundations of social activity
    itself -- the constitution of space and time. How? The spatio-temporal
    horizon of digital communication is a global (that is, placeless) and
    ongoing present. The technical vision of digital communication is always
    the here and now. With the instant transmission of information,
    everything that is not "here" is inaccessible and everything that is not
    "now" has disappeared. Powerful infrastructure has been built to achieve
    these effects: data centers, intercontinental networks of cables,
    satellites, high-performance nodes, and much more. Through globalized
    high-frequency trading, actors in the financial markets have realized
    this []{#Page_90 type="pagebreak" title="90"}technical vision to its
    broadest extent by creating a never-ending global present whose expanse
    is confined to milliseconds. This process is far from coming to an end,
    for massive amounts of investment are allocated to accomplish even the
    smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
    300-million-dollar transatlantic telecommunications cable (Hibernia
    Express) was put into operation between London and New York -- the first
    in more than 10 years -- with the single goal of accelerating automated
    trading between the two places by 5.2 milliseconds.
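
    A back-of-the-envelope sketch can make the order of magnitude of these
    5.2 milliseconds tangible. Assuming that a signal in optical fiber
    travels at roughly two-thirds of the vacuum speed of light, every
    kilometer of route length costs about 10 microseconds per round trip;
    the 520 km detour of the older route below is a hypothetical figure,
    chosen only to reproduce the quoted saving:

    ```python
    # Back-of-the-envelope latency for a transatlantic fiber route.
    # Assumptions: signal speed in fiber is about 2/3 of the vacuum speed
    # of light; the 520 km detour of the older route is hypothetical.

    C_VACUUM = 299_792          # speed of light in vacuum, km/s
    V_FIBER = C_VACUUM * 2 / 3  # roughly 200,000 km/s in fiber

    def round_trip_ms(route_km):
        """Round-trip signal time over a fiber route, in milliseconds."""
        return 2 * route_km / V_FIBER * 1000

    new = round_trip_ms(4_600)        # Hibernia Express route length
    old = round_trip_ms(4_600 + 520)  # hypothetical longer legacy route

    print(f"new route: {new:.1f} ms round trip")   # ~46.0 ms
    print(f"old route: {old:.1f} ms round trip")   # ~51.2 ms
    print(f"saving:    {old - new:.1f} ms")        # ~5.2 ms
    ```

    On this scale, saving milliseconds means physically shortening the
    path by hundreds of kilometers -- hence the enormous cost.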

    For social and biological processes, this technical horizon of space and
    time is neither achievable nor desirable. Such processes, on the
    contrary, are existentially dependent on other spatial and temporal
    orders. Yet because of the existence of this non-geographical and
    atemporal horizon, the need -- as well as the possibility -- has arisen
    to redefine the parameters of space and time themselves in order to
    counteract the mire of technically defined spacelessness and
    timelessness. If space and time are not simply to vanish in this
    spaceless, ongoing present, how then should they be defined? Communal
    formations create spaces for action not least by determining their own
    geographies and temporal rhythms. They negotiate what is near and far
    and also which places are disregarded (that is, not even perceived). If
    every place is communicatively (and physically) reachable, every person
    must decide which place he or she would like to reach in practice. This,
    however, is not an individual decision but rather a task that can only
    be approached collectively. Those places which are important and thus
    near are determined by communal formations. This takes place in the form
    of a rough consensus through the blogs that "one" has to read, the
    exhibits that "one" has to see, the events and conferences that "one"
    has to attend, the places that "one" has to visit before they are
    overrun by tourists, the crises in which "the West" has to intervene,
    the targets that "lend themselves" to a terrorist attack, and so on. On
    its own, however, selection is not enough. Communal formations are
    especially powerful when they generate the material and organizational
    resources that are necessary for their members to implement their shared
    worldview through actions -- to visit, for instance, the places that
    have been chosen as important. This can happen if they enable access
    []{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
    reductions, ride shares, places to stay, tips, links, insider knowledge,
    public funds, airlifts, explosives, and so on. It is in this way that
    each formation creates its respective spatial constructs, which define
    distances in a great variety of ways. At the same time that war-torn
    Syria is unreachably distant even for seasoned reporters and their
    staff, veritable travel agencies are being set up in order to bring
    Western jihadists there in large numbers.

    Things are similar for the temporal dimensions of social and biological
    processes. Permanent presence is a temporality that is inimical to life
    but, under its influence, temporal rhythms have to be redefined as well.
    What counts as fast? What counts as slow? In what order should things
    proceed? On the everyday level, for instance, the matter can be as
    simple as how quickly to respond to an email. Because the transmission
    of information hardly takes any time, every delay is a purely social
    creation. But how much is acceptable? There can be no uniform answer to
    this. The members of each communal formation have to negotiate their own
    rules with one another, even in areas of life that are otherwise highly
    formalized. In an interview with the magazine *Zeit*, for instance, a
    lawyer with expertise in labor law was asked whether a boss may require
    employees to be reachable at all times. Instead of answering by
    referring to any binding legal standards, the lawyer casually advised
    that this was a matter of flexible negotiation: "Express your misgivings
    openly and honestly about having to be reachable after hours and,
    together with your boss, come up with an agreeable rule to
    follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.

    Temporalities that, in many areas, were once simply taken for granted by
    everyone on account of the factuality of things now have to be
    culturally determined -- that is, explicitly negotiated -- in a greater
    number of contexts. Under the conditions of capitalism, which is always
    creating new competitions and incentives, one consequence is the
    often-lamented "acceleration of time." We are asked to produce, consume,
    or accomplish more and more in less and less
    time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
    structuring of time is not limited to linear acceleration. It reaches
    deep into the foundations of life and has even reconfigured biological
    processes themselves. Today there is an entire industry that specializes
    in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
    newborns in liquid nitrogen -- that is, in suspending cellular
    biological time -- in case they might be needed later on in life for a
    transplant or for the creation of artificial organs. Children can be
    born even if their physical mothers are already dead. Or they can be
    "produced" from ova that have been stored for many years at minus 196
    degrees.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
    questions now have to be addressed every day whose grand temporal
    dimensions were once the matter of myth. In the case of atomic energy,
    for instance, there is the issue of permanent disposal. Where can we
    deposit nuclear waste for the next hundred thousand years without it
    causing catastrophic damage? How can the radioactive material even be
    transported there, wherever that is, within the framework of everyday
    traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}

    The construction of temporal dimensions and sequences has thus become an
    everyday cultural question. Whereas throughout Europe, for example,
    committees of experts and ethicists still meet to discuss reproductive
    medicine and offer their various recommendations, many couples are
    concerned with the specific question of whether or how they can fulfill
    their wish to have children. Without a coherent set of rules, questions
    such as these have to be answered by each individual with recourse to
    his or her personally relevant communal formation. If there is no
    cultural framework that at least claims to be binding for everyone, then
    the individual must negotiate independently within each communal
    formation with the goal of acquiring the resources necessary to act
    according to communal values and objectives.
    :::

    ::: {.section}
    ### Self-generating orders {#c2-sec-0014}

    These three functions -- selection, interpretation, and the constitutive
    ability to act -- make communal formations the true subject of the
    digital condition. In principle, these functions are nothing new;
    rather, they are typical of fields that are organized without reference
    to external or irrefutable authorities. The state of scholarship, for
    instance, is determined by what is circulated in refereed publications.
    In this case, "refereed" means that scientists at the same professional
    rank mutually evaluate each other\'s work. The scientific community (or
    better: the sub-community of a specialized discourse) []{#Page_93
    type="pagebreak" title="93"}evaluates the contributions of individual
    scholars. They decide what should be considered valuable, and this
    consensus can theoretically be revised at any time. It is based on a
    particular catalog of criteria, on an interpretive framework that
    provides lines of inquiry, methods, appraisals, and conventions of
    presentation. With every article, this framework is confirmed and
    reconstituted. If the framework changes, this can lead in the most
    extreme case to a paradigm shift, which overturns fundamental
    orientations, assumptions, and
    certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
    not only a change in how scientific contributions are evaluated but also
    a change in how the external world is perceived and what activities are
    possible in it. Precisely because the sciences claim to define
    themselves, they have the ability to revise their own foundations.

    The sciences were the first large sphere of society to achieve
    comprehensive cultural autonomy; that is, the ability to determine its
    own binding meaning. Art was the second that began to organize itself on
    the basis of internal feedback. It was during the era of Romanticism
    that artists first laid claim to autonomy. They demanded "to absolve art
    from all conditions, to represent it as a realm -- indeed as the only
    realm -- in which truth and beauty are expressed in their pure form, a
    realm in which everything truly human is
    transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
    photography in the second half of the nineteenth century, art also
    liberated itself from its final task, which was foisted upon it from the
    outside, namely the need to represent external reality. Instead of
    having to represent the external world, artists could now focus on their
    own subjectivity. This gave rise to a radical individualism, which found
    its clearest summation in Marcel Duchamp\'s assertion that only the
    artist could determine what is art. This he claimed in 1917 by way of
    explaining how an industrially produced urinal, exhibited as a signed
    piece with the title "Fountain," could be considered a work of art.

    With the rise of the knowledge economy and the expansion of cultural
    fields, including the field of art and the artists active within it,
    this individualism quickly swelled to unmanageable levels. As a
    consequence, the task of defining what should be regarded as art shifted
    from the individual artist to the curator. It now fell upon the latter
    to select a few works from the surplus of competing scenes and thus
    bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
    constantly diversifying and changing world of contemporary art. This
    order was then given expression in the form of exhibits, which were
    intended to be more than the sum of their parts. The beginning of this
    practice can be traced to the 1969 exhibition *When Attitudes Become
    Form*, which was curated by Harald Szeemann for the Kunsthalle Bern (it
    was also sponsored by Philip Morris). The works were not neatly
    separated from one another and presented without reference to their
    environment, but were connected with each other both spatially and in
    terms of their content. The effect of the exhibition could be felt at
    least as much through the collection of works as a whole as it could
    through the individual pieces, many of which had been specially
    commissioned for the exhibition itself. It not only cemented Szeemann\'s
    reputation as one of the most significant curators of the twentieth
    century; it also completely redefined the function of the curator as a
    central figure within the art system.

    This was more than 40 years ago and in a system that functioned
    differently from that of today. The distance from this exhibition, but
    also its ongoing relevance, was negotiated, significantly, in a
    re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
    the Kunsthalle Bern were reconstructed in the space of the Fondazione
    Prada in such a way that both could be seen simultaneously. As is
    typical with such re-enactments, the curators of the project described
    its goals in terms of appropriation and distancing: "This was the
    challenge: how could we find and communicate a limit to a non-limit,
    creating a place that would reflect exactly the architectural structures
    of the Kunsthalle, but also an asymmetrical space with respect to our
    time and imbued with an energy and tension equivalent to that felt at
    Bern?"[^62^](#c2-note-0062){#c2-note-0062a}

    Curation -- that is, selecting works and associating them with one
    another -- has become an omnipresent practice in the art system. No
    exhibition takes place any more without a curator. Nevertheless,
    curators have lost their extraordinary
    position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
    more of this work themselves, not only because the boundaries between
    artistic and curatorial activities have become fluid but also because
    many artists explicitly co-produce the context of their work by
    incorporating a multitude of references into their pieces. It is with
    precisely this in mind that André Rottmann, in the []{#Page_95
    type="pagebreak" title="95"}quotation cited at the beginning of this
    chapter, can assert that referentiality has become the dominant
    production-aesthetic model in contemporary art. This practice enables
    artists to objectify themselves by explicitly placing themselves into a
    historical and social context. At the same time, it also enables them to
    subjectify the historical and social context by taking the liberty to
    select and arrange the references
    themselves.[^64^](#c2-note-0064){#c2-note-0064a}

    Such strategies are no longer specific to art. Self-generated spaces of
    reference and agency are now deeply embedded in everyday life. The
    reason for this is that a growing number of questions can no longer be
    answered in a generally binding way (such as those about what
    constitutes fine art), while the enormous expansion of the cultural
    requires explicit decisions to be made in more aspects of life. The
    reaction to this dilemma has been radical subjectivation. This has not,
    however, been taking place at the level of the individual but rather at
    that of communal formations. There is now a patchwork of answers to
    large questions and a multitude of reactions to large challenges, all of
    which are limited in terms of their reliability and scope.
    :::

    ::: {.section}
    ### Ambivalent voluntariness {#c2-sec-0015}

    Even though participation in new formations is voluntary and serves the
    interests of their members, it is not without preconditions. The most
    important of these is acceptance, the willing adoption of the
    interpretive framework that is generated by the communal formation. The
    latter is formed from the social, cultural, legal, and technical
    protocols that lend to each of these formations its concrete
    constitution and specific character. Protocols are common sets of rules;
    they establish, according to the network theorist Alexander Galloway,
    "the essential points necessary to enact an agreed-upon standard of
    action." They provide, he goes on, "etiquette for autonomous
    agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
    simultaneously voluntary and binding; they allow actors to meet
    eye-to-eye instead of entering into hierarchical relations with one
    another. If everyone voluntarily complies with the protocols, then it is
    not necessary for one actor to give instructions to another. Whoever
    accepts the relevant protocols can interact with others who do the same;
    whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
    will remain on the outside. Protocols establish, for example, common
    languages, technical standards, or social conventions. The fundamental
    protocol for the internet is the Transmission Control Protocol/Internet
    Protocol (TCP/IP). This suite of protocols defines the common language
    for exchanging data. Every device that exchanges information over the
    internet -- be it a smartphone, a supercomputer in a data center, or a
    networked thermostat -- has to use these protocols. In a growing number
    of social contexts, the common language is English. Whoever wishes to
    belong has to speak it increasingly often. In the natural sciences,
    communication now takes place almost exclusively in English. Non-native
    speakers who accept this norm may pay a high price: they have to learn a
    new language and continually improve their command of it or else resign
    themselves to being unable to articulate things as they would like --
    not to mention losing the possibility of expressing something for which
    another language would perhaps be more suitable, or forfeiting
    traditions that cannot be expressed in English. But those who refuse to
    go along with these norms pay an even higher price, risking
    self-marginalization. Those who "voluntarily" accept conventions gain
    access to a field of practice, even though within this field they may be
    structurally disadvantaged. But unwillingness to accept such
    conventions, with subsequent denial of access to this field, might have
    even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}
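
    At the technical level, this logic can be illustrated in a few lines.
    In the following minimal sketch (using Python\'s standard socket
    module; address, port, and message are arbitrary choices), two
    endpoints can exchange data only because both follow the same protocol
    suite -- an endpoint that does not speak TCP/IP simply remains outside
    the exchange:

    ```python
    # Minimal sketch: two endpoints exchange bytes because both follow
    # the same protocol (TCP over IP). Address, port, and message are
    # arbitrary choices for the example.

    import socket
    import threading

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 50007))
    srv.listen(1)

    def echo_once():
        conn, _ = srv.accept()             # wait for a peer speaking TCP
        with conn:
            conn.sendall(conn.recv(1024))  # echo the received bytes back

    threading.Thread(target=echo_once, daemon=True).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect(("127.0.0.1", 50007))  # the shared rules in action
        cli.sendall(b"hello")
        print(cli.recv(1024))              # b'hello'
    srv.close()
    ```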

    In everyday life, the factors involved with this trade-off are often
    presented in the form of subtle cultural codes. For instance, in order
    to participate in a project devoted to the development of free software,
    it is not enough for someone to possess the necessary technical
    knowledge; he or she must also be able to fit into a wide-ranging
    informal culture with a characteristic style of expression, humor, and
    preferences. Ultimately, software developers do not form a professional
    corps in the traditional sense -- in which functionaries meet one
    another in the narrow and regulated domain of their profession -- but
    rather a communal formation in which the engagement of the whole person,
    both one\'s professional and social self, is scrutinized. The
    abolishment of the separation between different spheres of life,
    requiring interaction of a more holistic nature, is in fact a key
    attraction of []{#Page_97 type="pagebreak" title="97"}these communal
    formations and is experienced by some as a genuine gain in freedom. In
    this situation, one is no longer subjected to rules imposed from above
    but rather is allowed to -- and indeed ought to -- pursue his or her
    own interests authentically.

    But for others the experience can be quite the opposite because the
    informality of the communal formation also allows forms of exclusion and
    discrimination that are no longer acceptable in formally organized
    realms of society. Discrimination is more difficult to identify when it
    takes place within the framework of voluntary togetherness, for no one
    is forced to participate. If you feel uncomfortable or unwelcome, you
    are free to leave at any time. But this is a specious argument. The
    areas of free software or Wikipedia are difficult places for women. In
    these clubby atmospheres of informality, they are often faced with
    blatant sexism, and this is one of the reasons why many women choose to
    stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
    2007, according to estimates by the American National Center for Women &
    Information Technology, whereas approximately 27 percent of all jobs
    related to computer science were held by women, their representation at
    the same time was far lower in the field of free software -- on average
    less than 2 percent. And for years, the proportion of women who edit
    texts on Wikipedia has hovered at around 10
    percent.[^68^](#c2-note-0068){#c2-note-0068a}

    The consequences of such widespread, informal, and elusive
    discrimination are not limited to the fact that certain values and
    prejudices of the shared culture are included in these products, while
    different viewpoints and areas of knowledge are
    excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
    are excluded or do not wish to expose themselves to discrimination (and
    thus do not even bother to participate in any communal formations) do
    not receive access to the resources that circulate there (attention and
    support, valuable and timely knowledge, or job offers). Many people are
    thus faced with the choice of either enduring the discrimination within
    a community or remaining on the outside and thus invisible. That this
    decision is made on a voluntary basis and on one\'s own responsibility
    hardly mitigates the coercive nature of the situation. There may be a
    choice, but it would be misleading to call it a free one.[]{#Page_98
    type="pagebreak" title="98"}
    :::

    ::: {.section}
    ### The power of sociability {#c2-sec-0016}

    In order to explain the peculiar coercive nature of the (nominally)
    voluntary acceptance of protocols, rules, and norms, the political
    scientist David Singh Grewal, drawing on the work of Max Weber and
    Michel Foucault, has distinguished between the "power of sovereignty"
    and the "power of sociabil­ity."[^70^](#c2-note-0070){#c2-note-0070a}
    The former develops on the basis of dominance and subordination, as
    imposed by authorities, police officers, judges, or other figures within
    formal hierarchies. Their power is anchored in disciplinary
    institutions, and the dictum of this sort of power is: "You must!" The
    power of sociability, on the contrary, functions by prescribing the
    conditions or protocols under which people are able to enter into an
    exchange with one another. The dictum of this sort of power is: "You
    can!" The more people accept certain protocols and standards, the more
    powerful these become. Accordingly, the sociability that they structure
    also becomes more comprehensive, and those not yet involved have to ask
    themselves all the more urgently whether they can afford not to accept
    these protocols and standards. Whereas the first type of power is
    ultimately based on the monopoly of violence and on repression, the
    second is founded on voluntary submission. When the entire internet
    speaks TCP/IP, then an individual\'s decision to use it may be voluntary
    in nominal terms, but at the same time it is an indispensable
    precondition for existing within the network at all. Protocols exert
    power without there having to be anyone present to possess the power in
    question. Whereas the sovereign can be located, the effects of
    sociability\'s power are diffuse and omnipresent. They are not
    repressive but rather constitutive. No one forces a scientist to publish
    in English or a woman editor to tolerate disparaging remarks on
    Wikipedia. People accept these often implicit behavioral norms (sexist
    comments are permitted, for instance) out of their own interests in
    order to acquire access to the resources circulating within the networks
    and to constitute themselves within them. In this regard, Grewal
    distinguishes between the "intrinsic" and "extrinsic" reasons for
    abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
    the first case, the motivation is based on a new protocol being better
    suited than existing protocols for carrying out []{#Page_99
    type="pagebreak" title="99"}a specific objective. People thus submit
    themselves to certain rules because they are especially efficient,
    transparent, or easy to use. In the second case, a protocol is accepted
    not because but in spite of its features. It is simply a precondition
    for gaining access to a space of agency in which resources and
    opportunities are available that cannot be found anywhere else. In the
    first case, it is possible to speak subjectively of voluntariness,
    whereas the second involves some experience of impersonal compulsion.
    One is forced to do something that might potentially entail grave
    disadvantages in order to have access, at least, to another level of
    opportunities or to create other advantages for oneself.
    :::

    ::: {.section}
    ### Homogeneity, difference and authority {#c2-sec-0017}

    Protocols are present on more than a technical level; as interpretive
    frameworks, they structure viewpoints, rules, and patterns of behavior
    on all levels. Thus, they provide a degree of cultural homogeneity, a
    set of commonalities that lend these new formations their communal
    nature. Viewed from the outside, these formations therefore seem
    inclined toward consensus and uniformity, for their members have already
    accepted and internalized certain aspects in common -- the protocols
    that enable exchange itself -- whereas everyone on the outside has not
    done so. When everyone is speaking in English, the conversation sounds
    quite monotonous to someone who does not speak the language.

    Viewed from the inside, the experience is something different: in order
    to constitute oneself within a communal formation, not only does one
    have to accept its rules voluntarily and in a self-motivated manner; one
    also has to make contributions to the reproduction and development of
    the field. Everyone is urged to contribute something; that is, to
    produce, on the basis of commonalities, differences that simultaneously
    affirm, modify, and enhance these commonalities. This leads to a
    pronounced and occasionally highly competitive internal differentiation
    that can only be understood, however, by someone who has accepted the
    commonalities. To an outsider, this differentiation will seem
    irrelevant. Whoever is not well versed in the universe of *Star Wars*
    will not understand why the various character interpretations at
    []{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
    discussed above, might be brilliant or even controversial. To such a
    person, they will all seem equally boring and superficial.

    These formations structure themselves internally through the production
    of differences; that is, by constantly changing their common ground.
    Those who are able to add many novel aspects to the common resources
    gain a degree of authority. They assume central positions and they
    influence, through their behavior, the development of the field more
    than others do. However, their authority, influence, and de facto power
    are not based on any means of coercion. As Niklas Luhmann noted, "In the
    end, one participant\'s achievements in making selections \[...\] are
    accepted by another participant \[...\] as a limitation of the latter\'s
    potential experiences and activities without him having to make the
    selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
    a voluntary and self-interested act: the members of the formation
    recognize that this person has contributed more to the common field and
    to the resources within it. This, in turn, is to everyone\'s advantage,
    for each member would ultimately like to make use of the field\'s
    resources to achieve his or her own goals. This arrangement, which can
    certainly take on hierarchical qualities, is experienced as something
    meritocratically legitimized and voluntarily
    accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
    software, there has therefore been some discussion of "benevolent
    dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
    "dictators" is raised because projects are often led by charismatic
    figures without a formal mandate. They are "benevolent" because their
    position of authority is based on the fact that a critical mass of
    participating producers has voluntarily subordinated itself for its own
    self-interest. If the consensus breaks down over whose contributions
    carry the most weight, then the formation will be at risk of
    losing its internal structure and splitting apart ("forking," in the
    jargon of free software).
    :::
    :::

    ::: {.section}
    Algorithmicity {#c2-sec-0018}
    --------------

    Through personal communication, referential processes in communal
    formations create cultural zones of various sizes and scopes. They
    expand into the empty spaces that have been created by the erosion of
    established institutions and []{#Page_101 type="pagebreak"
    title="101"}processes, and once these new processes have been
    established the process of erosion intensifies. Multiple processes of
    exchange take place alongside one another, creating a patchwork of
    interconnected, competing, or entirely unrelated spheres of meaning,
    each with specific goals and resources and its own preconditions and
    potentials. The structures of knowledge, order, and activity that are
    generated by this are holistic as well as partial and limited. The
    participants in such structures are simultaneously addressed on many
    levels that were once functionally separated; previously independent
    spheres, such as work and leisure, are now mixed together, but usually
    only with respect to the subdivisions of one\'s own life. And, at first,
    the structures established in this way are binding only for active
    participants.

    ::: {.section}
    ### Exiting the "Library of Babel" {#c2-sec-0019}

    For one person alone, however, these new processes would not be able to
    generate more than a local island of meaning from the enormous clamor of
    chaotic spheres of information. In his 1941 story "The Library of
    Babel," Jorge Luis Borges fashioned a fitting image for such a
    situation. He depicts the world as a library of unfathomable and
    possibly infinite magnitude. The characters in the story do not know
    whether there is a world outside of the library. There are reasons to
    believe that there is, and reasons that suggest otherwise. The library
    houses the complete collection of all possible books that can be written
    on exactly 410 pages. Contained in these volumes is the promise that
    there is "no personal or universal problem whose eloquent solution
    \[does\] not exist," for every possible combination of letters, and thus
    also every possible pronouncement, is recorded in one book or another.
    No catalog has yet been found for the library (though it must exist
    somewhere), and it is impossible to identify any order in its
    arrangement of books. The "men of the library," according to Borges,
    wander round in search of the one book that explains everything, but
    their actual discoveries are far more modest. Only once in a while are
    books found that contain more than haphazard combinations of signs. Even
    small regularities within excerpts of texts are heralded as sensational
    discoveries, and it is around these discoveries that competing
    []{#Page_102 type="pagebreak" title="102"}schools of interpretation
    develop. Despite much labor and effort, however, the knowledge gained is
    minimal and fragmentary, so the prevailing attitude in the library is
    bleak. By the time of the narrator\'s generation, "nobody expects to
    discover anything."[^75^](#c2-note-0075){#c2-note-0075a}

    Although this vision has now been achieved from a quantitative
    perspective -- no one can survey the "library" of digital information,
    which in practical terms is infinitely large, and all of the growth
    curves continue to climb steeply -- today\'s cultural reality is
    nevertheless entirely different from that described by Borges. Our
    ability to deal with massive amounts of data has radically improved, and
    thus our faith in the utility of information is not only unbroken but
    rather gaining strength. What is new is precisely such large quantities
    of data ("big data"), which, as we are promised or forewarned, will lead
    to new knowledge, to a comprehensive understanding of the world, indeed
    even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
    in data is based above all on the fact that the two processes described
    above -- referentiality and communality -- are not the only new
    mechanisms for filtering, sorting, aggregating, and evaluating things.
    Beneath or ahead of the social mechanisms of decentralized and networked
    cultural production, there are algorithmic processes that pre-sort the
    immeasurably large volumes of data and convert them into a format that
    can be apprehended by individuals, evaluated by communities, and
    invested with meaning.

    Strictly speaking, it is impossible to maintain a categorical
    distinction between social processes that take place in and by means of
    technological infrastructures and technical processes that are socially
    constructed. In both cases, social actors attempt to realize their own
    interests with the resources at their disposal. The methods of
    (attempted) realization, the available resources, and the formulation of
    interests mutually influence one another. The technological resources
    are inscribed in the formulation of goals. These open up fields of
    imagination and desire, which in turn inspire technical
    development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
    impossible to draw clear theoretical lines, the attempt to make such a
    distinction can nevertheless be productive in practice, for in this way
    it is possible to gain different perspectives about the same object of
    investigation.[]{#Page_103 type="pagebreak" title="103"}
    :::

    ::: {.section}
    ### The rise of algorithms {#c2-sec-0020}

    An algorithm is a set of instructions for converting a given input into
    a desired output by means of a finite number of steps: algorithms are
    used to solve predefined problems. For a set of instructions to become
    an algorithm, it has to be specified in three different respects.
    First, the necessary steps -- individually and as a whole -- have to be
    described unambiguously and completely. To do this, it is usually
    necessary to use a formal language, such as mathematics, or a
    programming language, in order to avoid the characteristic imprecision
    and ambiguity of natural language and to ensure instructions can be
    followed without interpretation. Second, it must be possible in practice
    to execute the individual steps together. For this reason, every
    algorithm is tied to the context of its realization. If the context
    changes, so do the operating processes that can be formalized as
    algorithms and thus also the ways in which algorithms can partake in the
    constitution of the world. Third, it must be possible to execute an
    operating instruction mechanically so that, under fixed conditions, it
    always produces the same result.
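
    A classic example (not drawn from the text) makes these three
    requirements concrete: Euclid\'s algorithm for computing the greatest
    common divisor of two integers, here written in Python:

    ```python
    # Euclid's algorithm as an illustration of the three requirements.

    def gcd(a: int, b: int) -> int:
        # 1. Unambiguous and complete: every step is fully specified in
        #    a formal language (here, Python).
        # 2. Executable in practice: only elementary integer operations
        #    are required.
        # 3. Mechanical: under fixed conditions, the same input always
        #    produces the same result in a finite number of steps.
        while b != 0:
            a, b = b, a % b
        return a

    assert gcd(48, 36) == 12  # same input, same output, every time
    ```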

    Defined in such general terms, it would also be possible to understand
    the instruction manual for a typical piece of Ikea furniture as an
    algorithm. It is a set of instructions for creating, with a finite
    number of steps, a specific and predefined piece of furniture (output)
    from a box full of individual components (input). The instructions are
    composed in a formal language, pictograms, which define each step as
    unambiguously as possible, and they can be executed by a single person
    with simple tools. The process can be repeated, for the final result is
    always the same: a Billy box will always yield a Billy shelf. In this
    case, a person takes over the role of a machine, which (unambiguous
    pictograms aside) can lead to problems, be it that scratches and other
    traces on the finished piece of furniture testify to the unique nature
    of the (unsuccessful) execution, or that, inspired by the micro-trend of
    "Ikea hacking," the official instructions are intentionally ignored.

    Because such imprecision is supposed to be avoided, the most important
    domain of algorithms in practice is mathematics and its implementation
    on the computer. The term []{#Page_104 type="pagebreak"
    title="104"}"algorithm" derives from the Persian mathematician,
    astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
    the Calculation with Hindu Numerals*, which was written in Baghdad in
    825, was known widely in the Western Middle Ages through a Latin
    translation and made the essential contribution of introducing
    Indo-Arabic numerals and the number zero to Europe. The work begins
    with the formula *dixit algorizmi* ... ("Algorismi said ..."). During
    the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
    for advanced methods of
    calculation.[^78^](#c2-note-0078){#c2-note-0078a}

    The modern effort to build machines that could mechanically carry out
    instructions achieved its first breakthrough with Gottfried Wilhelm
    Leibniz. He has often been credited with making the following remark:
    "It is unworthy of excellent men to lose hours like slaves in the labour
    of calculation which could be done by any peasant with the aid of a
    machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
    contains a distinction between higher cognitive and interpretive
    activities, which are regarded as being truly human, and lower processes
    that involve pure execution and can therefore be mechanized. To this
    end, Leibniz himself developed the first calculating machine, which
    could carry out all four of the basic types of arithmetic. He was not
    motivated to do this by the practical necessities of production and
    business (although conceptually groundbreaking, Leibniz\'s calculating
    machine remained, on account of its mechanical complexity, a unique item
    and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
    estimation of the philosopher Sybille Krämer, calculating machines "were
    rather speculative masterpieces of a century that, like none before it,
    was infatuated by the idea of mechanizing 'intellectual'
    processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
    were implemented on a large scale to increase the efficiency of material
    production, Leibniz had already speculated about using them to enhance
    intellectual labor. And this vision has never since disappeared. Around
    a century and a half later, the English polymath Charles Babbage
    formulated it anew, now in direct connection with industrial
    mechanization and its imperative of time-saving
    efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
    overcome the problem of practically realizing such a machine.

    The decisive step that turned the vision of calculating machines into
    reality was made by Alan Turing in 1937. With []{#Page_105
    type="pagebreak" title="105"}a theoretical model, he demonstrated that
    every algorithm could be executed by a machine as long as it could read
    an incremental set of signs, manipulate them according to established
    rules, and then write them out again. The validity of his model did not
    depend on whether the machine would be analog or digital, mechanical or
    electronic, for the rules of manipulation were not at first conceived as
    being a fixed component of the machine itself (that is, as being
    implemented in its hardware). The electronic and digital approach came
    to be preferred because it was hoped that even the instructions could be
    read by the machine itself, so that the machine would be able to execute
    not only one but (theoretically) every written algorithm. The
    Hungarian-born mathematician John von Neumann made it his goal to
    implement this idea. In 1945, he published a model in which the program
    (the algorithm) and the data (the input and output) were housed in a
    common storage device. Thus, both could be manipulated simultaneously
    without having to change the hardware. In this way, he converted the
    "Turing machine" into the "universal Turing machine"; that is, the
    modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
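
    What such a machine does can itself be sketched in a few lines of
    code. The interpreter below is a simplified illustration, not
    Turing\'s own formalism: it reads one sign at a time, rewrites it
    according to a fixed rule table, moves the head, and changes state.
    Note that the rule table is passed in as ordinary data -- the germ of
    the stored-program idea that von Neumann generalized:

    ```python
    # A minimal Turing-machine interpreter (simplified sketch). The rule
    # table maps (state, symbol) to (symbol to write, head move, next state).

    def run(tape, rules, state="start", pos=0, blank=" "):
        tape = list(tape)
        while state != "halt":
            if pos == len(tape):           # extend the tape with blanks
                tape.append(blank)
            write, move, state = rules[(state, tape[pos])]
            tape[pos] = write
            pos += 1 if move == "R" else -1
        return "".join(tape).rstrip(blank)

    # Example rules: invert a string of bits, halt at the first blank.
    rules = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", " "): (" ", "R", "halt"),
    }

    print(run("10110", rules))  # -> 01001
    ```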

    Gordon Moore, the co-founder of the chip manufacturer Intel,
    prognosticated 20 years later that the complexity of integrated circuits
    and thus the processing power of computer chips would double every 18 to
    24 months. Since the 1970s, his prediction has been known as Moore\'s
    Law and has essentially been correct. This technical development has
    indeed taken place exponentially, not least because the semi-conductor
    industry has been oriented around
    it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
    computer, which was one of the first of its kind to be produced on a
    large scale, could make approximately 40,000 calculations per second and
    its cost, when it was introduced to the market in 1965, was \$1.5
    million per unit. Just 40 years later, a standard server (with a
    quad-core Intel processor) could make more than 40 billion calculations
    per second, and this at a price of little more than \$1,500. This
    amounts to an increase in performance by a factor of a million and a
    corresponding price reduction by a factor of a thousand; that is, an
    improvement in the price-to-performance ratio by a factor of a billion.
    With inflation taken into consideration, this factor would be even
    higher. No less dramatic were the increases in performance -- or rather
    []{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
    area of data storage. In 1980, it cost more than \$400,000 to store a
    gigabyte of data, whereas 30 years later it would cost just 10 cents to
    do the same -- a price reduction by a factor of 4 million. And in both
    areas, this development has continued without pause.
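
    The factors cited here follow directly from these figures and can be
    recomputed in a few lines:

    ```python
    # Recomputing the performance and price factors quoted in the text.

    ibm_ops, ibm_price = 40_000, 1_500_000       # IBM 360/40, 1965
    srv_ops, srv_price = 40_000_000_000, 1_500   # standard server, ~2005

    performance_factor = srv_ops / ibm_ops       # 1,000,000
    price_factor = ibm_price / srv_price         # 1,000
    price_performance = performance_factor * price_factor  # 1,000,000,000

    gb_1980, gb_2010 = 400_000.0, 0.10           # dollars per gigabyte
    storage_factor = gb_1980 / gb_2010           # 4,000,000

    print(performance_factor, price_factor, price_performance, storage_factor)
    ```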

    These increases in performance have formed the material basis for the
    rapidly growing number of activities carried out by means of algorithms.
    We have now reached a point where Leibniz\'s distinction between
    creative mental functions and "simple calculations" is becoming
    increasingly fuzzy. Recent discussions about the allegedly threatening
    "domination of the computer" have been kindled less by the increased use
    of algorithms as such than by the gradual blurring of this distinction
    with new possibilities to formalize and mechanize increasing areas of
    creative thinking.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
    not long ago were reserved for human intelligence, such as composing
    texts or analyzing the content of images, are now frequently done by
    machines. As early as 2010, a program called Stats Monkey was introduced
    to produce short reports about baseball games. All that the program
    needs for this is comprehensive data about the games, which can be
    accumulated mechanically and which have since become more detailed due
    to improved image recognition and sensors. From these data, the program
    extracts the decisive moments and players of a game, recognizes
    characteristic patterns throughout the course of play (such as
    "extending an early lead," "a dramatic comeback," etc.), and on this
    basis generates its own report. Regarding the reports themselves, a
    number of variables can be determined in advance, for instance whether
    the story should be written from the perspective of a neutral observer
    or from the standpoint of one of the two teams. If writing about little
    league games, the program can be instructed to ignore the errors made by
    children -- because no parent wants to read about those -- and simply
    focus on their heroics. The algorithm was soon patented, and a start-up
    business was created from the original interdisciplinary research
    project: Narrative Science. In addition to sport reports it now offers
    texts of all sorts, but above all financial reports -- another field for
    which there is a great deal of available data. These texts have been
    published by reputable media outlets such as the business magazine
    *Forbes*, in which their authorship []{#Page_107 type="pagebreak"
    title="107"}is credited to "Narrative Science." Although these
    contributions are still limited to relatively simple topics, this will
    not remain the case for long. When asked about the percentage of news
    that would be written by computers 15 years from now, Narrative
    Science\'s chief technology officer and co-founder Kristian Hammond
    confidently predicted "\[m\]ore than 90 percent." He added that, within
    the next five years, an algorithm could even win a Pulitzer
    Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
    self-promotion but, as a general estimation, Hammond\'s assertion is not
    entirely beyond belief. It remains to be seen whether algorithms will
    replace or simply supplement traditional journalism. Yet because media
    companies are now under strong financial pressure, it is certainly
    reasonable to predict that many journalistic texts will be automated in
    the future. Entirely different applications, however, have also been
    conceived. Alexander Pschera, for instance, foresees a new age in the
    relationship between humans and nature, for, as soon as animals are
    equipped with transmitters and sensors and are thus able to tell their
    own stories through the appropriate software, they will be regarded as
    individuals and not merely as generic members of a
    species.[^87^](#c2-note-0087){#c2-note-0087a}
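
    The underlying technique -- often called data-to-text generation --
    can be sketched in miniature. Narrative Science\'s actual system is
    proprietary and far more sophisticated; the game data, the pattern
    rules, and the phrasing below are invented for illustration:

    ```python
    # A minimal data-to-text sketch: extract totals from structured game
    # data, classify the course of play, and fill a report template.
    # Teams, scores, and phrasing are invented for illustration.

    game = {
        "home": "Bears", "away": "Hawks",
        "innings": [(0, 1), (2, 0), (1, 0), (0, 0), (3, 1)],  # runs per inning
    }

    def pattern(innings):
        """Classify the course of play from the run sequence."""
        home = sum(h for h, _ in innings)
        away = sum(a for _, a in innings)
        trailed_early = innings[0][0] < innings[0][1]
        if home > away and trailed_early:
            return "a dramatic comeback"
        if home > away:
            return "extending an early lead"
        return "a defensive struggle"

    home_runs = sum(h for h, _ in game["innings"])
    away_runs = sum(a for _, a in game["innings"])
    print(f"The {game['home']} beat the {game['away']} "
          f"{home_runs}-{away_runs} after {pattern(game['innings'])}.")
    ```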

    We have not yet reached this point. However, given that the CIA has also
    expressed interest in Narrative Science and has invested in it through
    its venture-capital firm In-Q-Tel, there are indications that
    applications are being developed beyond the field of journalism. For the
    purpose of spreading propaganda, for instance, algorithms can easily be
    used to create a flood of entries on online forums and social mass
    media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
    one of many companies offering automated text analysis and production.
    As implemented by IBM and other firms, so-called E-discovery software
    promises to reduce dramatically the amount of time and effort required
    to analyze the constantly growing numbers of files that are relevant to
    complex legal cases. Without such software, it would be impossible in
    practice for lawyers to deal with so many documents. Numerous bots
    (automated editing programs) are active in the production of Wikipedia
    as well. Whereas, in the German edition, bots are forbidden from writing
    their own articles, this is not the case in the Swedish version.
    Measured by the number of entries, the latter is now the second-largest
    edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
    title="108"}world, for, in the summer of 2013, a single bot contributed
    more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
    Since 2013, moreover, the company Epagogix has offered software that
    uses historical data to evaluate the market potential of film scripts.
    At least one major Hollywood studio uses this software behind the backs
    of scriptwriters and directors, for, according to the company\'s CEO,
    the latter would be "nervous" to learn that their creative work was
    being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
    Think, too, of the typical statement that is made at the beginning of a
    call to a telephone hotline -- "This call may be recorded for training
    purposes." Increasingly, this training is not intended for the employees
    of the call center but rather for algorithms. The latter are expected to
    learn how to recognize the personality type of the caller and, on that
    basis, to produce an appropriate script to be read by their poorly
    educated and part-time human
    co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
    use of algorithms to grade student
    essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
    to expand this list any further. Even without additional references to
    comparable developments in the fields of image, sound, language, and
    film analysis, it is clear by now that, on many fronts, the borders
    between the creative and the mechanical have
    shifted.[^93^](#c2-note-0093){#c2-note-0093a}
    :::

    ::: {.section}
    ### Dynamic algorithms {#c2-sec-0021}

    The algorithms used for such tasks, however, are no longer simple
    sequences of static instructions. They are no longer repeated unchanged,
    over and over again, but are dynamic and adaptive to a high degree. The
    computing power available today is used to write programs that modify
    and improve themselves semi-automatically and in response to feedback.

    What this means can be illustrated by the example of evolutionary and
    self-learning algorithms. An evolutionary algorithm is developed in an
    iterative process that continues to run until the desired result has
    been achieved. In most cases, the values of the variables of the first
    generation of algorithms are chosen at random in order to diminish the
    influence of the programmer\'s presuppositions on the results. These
    cannot be avoided entirely, however, because the type of variables
    (independent of their value) has to be determined in the first place. I
    will return to this problem later on. This is []{#Page_109
    type="pagebreak" title="109"}followed by a phase of evaluation: the
    output of every tested algorithm is evaluated according to how close it
    is to the desired solution. The best are then chosen and combined with
    one another. In addition, mutations (that is, random changes) are
    introduced. These steps are then repeated as often as necessary until,
    according to the specifications in question, the algorithm is
    "sufficient" or cannot be improved any further. By means of intensive
    computational processes, algorithms are thus "cultivated"; that is,
    large numbers of these are tested instead of a single one being designed
    analytically and then implemented. At the heart of this pursuit is a
    functional solution that proves itself experimentally and in practice,
    but about which it might no longer be possible to know why it functions
    or whether it actually is the best possible solution. The fundamental
    methods behind this process largely derive from the 1970s (the first
    stage of artificial intelligence), the difference being that today they
    can be carried out far more effectively. One of the best-known examples
    of an evolutionary algorithm is that of Google Flu Trends. In order to
    predict which regions will be especially struck by the flu in a given
    year, it evaluates the geographic distribution of internet searches for
    particular terms ("cold remedies," for instance). To develop the
    program, Google tested 450 million different models until one emerged
    that could reliably identify local flu epidemics one to two weeks ahead
    of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
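    To make this iterative scheme concrete, consider a minimal sketch in
    Python. It is an illustration of the principle only, not the code of
    any system discussed here: the "fitness" function, the population
    size, and the mutation rate are invented stand-ins, whereas in an
    application like flu prediction, fitness would measure how well a
    model reproduces historical data.

    ```python
    import random

    # Toy fitness: how close a candidate vector comes to a hidden target.
    # A real system would instead compare a model's output against
    # historical data (e.g. past flu epidemics).
    TARGET = [0.2, 0.8, 0.5, 0.1]

    def fitness(candidate):
        # Higher is better: negative squared distance to the target.
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def evolve(pop_size=100, generations=200, mutation_rate=0.1):
        # 1. First generation: values chosen at random to limit the
        #    programmer's influence on the result.
        population = [[random.random() for _ in TARGET]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # 2. Evaluation: rank candidates by how close their output
            #    is to the desired solution.
            population.sort(key=fitness, reverse=True)
            best = population[: pop_size // 5]
            # 3. Selection and recombination: combine the best candidates.
            children = []
            while len(children) < pop_size:
                a, b = random.sample(best, 2)
                child = [random.choice(pair) for pair in zip(a, b)]
                # 4. Mutation: introduce random changes.
                if random.random() < mutation_rate:
                    child[random.randrange(len(child))] = random.random()
                children.append(child)
            population = children
        # The survivor "works," but the process itself offers no
        # explanation of why it works.
        return max(population, key=fitness)

    print(evolve())
    ```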

    In pursuits of this magnitude, the necessary processes can only be
    administered by computer programs. The series of tests are no longer
    conducted by programmers but rather by algorithms. In short, algorithms
    are implemented in order to write new algorithms or determine their
    variables. If this reflexive process, in turn, is built into an
    algorithm, then the latter becomes "self-learning": the programmers do
    not set the rules for its execution but rather the rules according to
    which the algorithm is supposed to know how to accomplish a particular
    goal. In many cases, the solution strategies are so complex that they
    are incomprehensible in retrospect. They can no longer be tested
    logically, only experimentally. Such algorithms are essentially black
    boxes -- objects that can only be understood by their outer behavior but
    whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
    title="110"}

    Automatic facial recognition, as used in surveillance technologies and
    for authorizing access to certain things, is based on the fact that
    computers can evaluate large numbers of facial images, first to produce
    a general model for a face, then to identify the variables that make a
    face unique and therefore recognizable. With so-called "unsupervised" or
    "deep-learning" algorithms, some developers and companies have even
    taken this a step further: computers are expected to extract faces from
    unstructured images -- that is, from volumes of images that contain
    images both with faces and without them -- and to do so without
    possessing in advance any model of the face in question. So far, the
    extraction and evaluation of unknown patterns from unstructured material
    has only been achieved in the case of very simple patterns -- with edges
    or surfaces in images, for instance -- for it is extremely complex and
    computationally intensive to program such learning processes. In recent
    years, however, there have been enormous leaps in available computing
    power, and both the data inputs and the complexity of the learning
    models have increased exponentially. Today, on the basis of simple
    patterns, algorithms are developing improved recognition of the complex
    content of images. They are refining themselves on their own. The term
    "deep learning" is meant to denote this very complexity. In 2012, Google
    was able to demonstrate the performance capacity of its new programs in
    an impressive manner: from a collection of randomly chosen YouTube
    videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
    it was possible to create a model in just three days that increased
    facial recognition in unstructured images by 70
    percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
    does not "know" what a face is, but it reliably recognizes a class of
    forms that humans refer to as a face. One advantage of a model that is
    not created on the basis of prescribed parameters is that it can also
    identify faces in non-standard situations (for instance if a person is
    in the background, if a face is half-concealed, or if it has been
    recorded at a sharp angle). Thanks to this technique, it is possible to
    search the content of images directly and not, as before, primarily by
    searching their descriptions. Such algorithms are also being used to
    identify people in images and to connect them in social networks with
    the profiles of the people in question, and this []{#Page_111
    type="pagebreak" title="111"}without any cooperation from the users
    themselves. Such algorithms are also expected to assist in directly
    controlling activity in "unstructured" reality, for instance in
    self-driving cars or other autonomous mobile applications that are of
    great interest to the military in particular.
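    Far below the complexity of deep learning, the underlying principle --
    extracting recurring patterns from unstructured material without a
    prior model of its content -- can be suggested by a small sketch. All
    of it is invented for illustration: the "patches" are random numbers,
    and a simple clustering routine stands in for the far richer learning
    models described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-in for unstructured material: 1,000 random 8x8 grayscale
    # "patches" flattened to vectors. The systems described above used
    # millions of video frames.
    patches = rng.random((1000, 64))

    def kmeans(data, k=10, steps=20):
        # Start from randomly chosen patches as the initial "patterns."
        centers = data[rng.choice(len(data), size=k, replace=False)]
        for _ in range(steps):
            # Assign every patch to the nearest current pattern ...
            dists = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            labels = dists.argmin(axis=1)
            # ... then move each pattern to the mean of its patches.
            for i in range(k):
                members = data[labels == i]
                if len(members):
                    centers[i] = members.mean(axis=0)
        return centers

    # The result: recurring regularities extracted without any prior
    # model of what the material contains.
    patterns = kmeans(patches)
    print(patterns.shape)  # (10, 64)
    ```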

    Algorithms of this sort can react and adjust themselves directly to
    changes in the environment. This feedback, however, also shortens the
    timeframe within which they are able to generate repetitive and
    therefore predictable results. Thus, algorithms and their predictive
    powers can themselves become unpredictable. Stock markets have
    frequently experienced so-called "sub-second extreme events"; that is,
    price fluctuations that happen in less than a
    second.[^96^](#c2-note-0096){#c2-note-0096a} Dramatic "flash crashes,"
    however, such as that which occurred on May 6, 2010, when the Dow Jones
    Index dropped almost a thousand points in a few minutes (and was thus
    perceptible to humans), have not been terribly
    uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
    voice commands on mobile phones (Apple\'s Siri, for example, which came
    out in 2011), programs based on self-learning algorithms have now
    reached the public at large and have infiltrated ever more areas of
    everyday life.
    :::

    ::: {.section}
    ### Sorting, ordering, extracting {#c2-sec-0022}

    Orders generated by algorithms are a constitutive element of the digital
    condition. On the one hand, the mechanical pre-sorting of the
    (informational) world is a precondition for managing immense and
    unstructured amounts of data. On the other hand, these large amounts of
    data and the computing centers in which they are stored and processed
    provide the material precondition for developing increasingly complex
    algorithms. Necessities and possibilities are mutually motivating one
    another.[^98^](#c2-note-0098){#c2-note-0098a}

    Perhaps the best-known algorithms that sort the digital infosphere and
    make it usable in its present form are those of search engines, above
    all Google\'s PageRank. Thanks to these, we can find our way around in a
    world of unstructured information and transfer ever larger parts
    of the (informational) world into the order of unstructuredness without
    giving rise to the "Library of Babel." Here, "unstructured" means that
    there is no prescribed order such as (to stick []{#Page_112
    type="pagebreak" title="112"}with the image of the library) a cataloging
    system that assigns to each book a specific place on a shelf. Rather,
    the books are spread all over the place and are dynamically arranged,
    each according to a search, so that the appropriate books for each
    visitor are always standing ready at the entrance. Yet the metaphor of
    books being strewn all about is problematic, for "unstructuredness" does
    not simply mean the absence of any structure but rather the presence of
    another type of order -- a meta-structure, a potential for order -- out
    of which innumerable specific arrangements can be generated on an ad hoc
    basis. This meta-structure is created by algorithms. They subsequently
    derive from it an actual order, which the user encounters, for instance,
    when he or she scrolls through a list of hits produced by a search
    engine. What the user does not see are the complex preconditions for
    assembling the search results. By the middle of 2014, according to the
    company\'s own information, the Google index alone included more than a
    hundred million gigabytes of data.

    Originally (that is, in the second half of the 1990s), PageRank
    functioned in such a way that the algorithm analyzed the structure of
    links on the World Wide Web, first by noting the number of links that
    referred to a given document, and second by evaluating the "relevance"
    of the site that linked to the document in question. The relevance of a
    site, in turn, was determined by the number of links that led to it.
    From these two variables, every document registered by the search engine
    was assigned a value, the PageRank. The latter served to present the
    documents found with a given search term as a hierarchical list (search
    results), whereby the document with the highest value was listed
    first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
    successful because it reduced the unfathomable chaos of the World Wide
    Web to a task that could be managed without difficulty by an individual
    user: inputting a search term and selecting from one of the presented
    "hits." The simplicity of the user\'s final choice, together with the
    quality of the algorithmic pre-selection, quickly pushed Google past its
    competition.
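    The core of this early logic fits into a few lines of code. The
    following is a schematic reconstruction based on the published
    description of PageRank, not Google's actual implementation; the
    four-page link graph is invented, and the damping factor of 0.85 is
    the value commonly cited from the original paper.

    ```python
    # Each page "votes" for the pages it links to, and votes from
    # highly ranked pages weigh more.
    links = {
        "a": ["b", "c"],
        "b": ["c"],
        "c": ["a"],
        "d": ["c"],
    }

    def pagerank(links, damping=0.85, iterations=50):
        pages = list(links)
        rank = {p: 1 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {p: (1 - damping) / len(pages) for p in pages}
            for page, targets in links.items():
                for target in targets:
                    # A page passes its rank, in equal shares,
                    # to the pages it links to.
                    new_rank[target] += damping * rank[page] / len(targets)
            rank = new_rank
        return rank

    # Documents are then presented as a hierarchical list (search
    # results), the highest value first.
    for page, value in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
        print(page, round(value, 3))
    ```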

    Underlying this process is the assumption that every link is an
    indication of relevance, and that links from frequently linked (that is,
    popular) sources are more important than those from less frequently
    linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
    title="113"}The advantage of this assumption is that it can be
    understood in terms of purely quantitative variables and it is not
    necessary to have any direct understanding of a document\'s content or
    of the context in which it exists.

    In the middle of the 1990s, when the first version of the PageRank
    algorithm was developed, the problem of judging the relevance of
    documents whose content could only partially be evaluated was not a new
    one. Science administrators at universities and funding agencies had
    been facing this difficulty since the 1950s. During the rise of the
    knowledge economy, the number of scientific publications increased
    rapidly. Scientific fields, perspectives, and methods also multiplied
    and diversified during this time, so that even experts could not survey
    all of the work being done in their own areas of
    research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
    and evaluating the content of countless new publications, they shifted
    their analysis to a higher level of abstraction. They began to count how
    often an article or book was cited and applied this information to
    assess the value of a given author or
    publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
    assumption was (and remains) that only important things are referenced,
    and therefore every citation and every reference can be regarded as an
    indirect vote for something\'s relevance.

    In both cases -- classifying a chaotic sphere of information and
    administering an expanding industry of knowledge -- the challenge is to
    develop dynamic orders for rapidly changing fields, enabling the
    evaluation of the importance of individual documents without knowledge
    of their content. Because the analysis of citations or links operates on
    a purely quantitative basis, large amounts of data can be quickly
    structured with them, and especially relevant positions can be
    determined. The second advantage of this approach is that it does not
    require any assumptions about the contours of different fields or their
    relationships to one another. This enables the organization of
    disordered or dynamic content. In both cases, references made by the
    actors themselves are used: citations in a scientific text, links on
    websites. Their value for establishing the order of a field as a whole,
    however, is only visible in the aggregate, for instance in the frequency
    with which a given article is
    cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
    from analyzing "data" (the content of documents in the traditional
    sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
    "meta-data" (describing documents in light of their relationships to one
    another) is a precondition for being able to make any use at all of
    growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
    This shift introduced a new level of abstraction. Information is no
    longer understood as a representation of external reality; its
    significance is not evaluated with regard to the relation between
    "information" and "the world," for instance with a qualitative criterion
    such as "true"/"false." Rather, the sphere of information is treated as
    a self-referential, closed world, and documents are accordingly only
    evaluated in terms of their position within this world, though with
    quantitative criteria such as "central"/"peripheral."

    Even though the PageRank algorithm was highly effective and assisted
    Google\'s rapid ascent to a market-leading position, at the beginning it
    was still relatively simple and its mode of operation was at least
    partially transparent. It followed the classical statistical model of an
    algorithm. A document or site referred to by many links was considered
    more important than one to which fewer links
    referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
    the given structural order of information and determined the position of
    every document therein, and this was largely done independently of the
    context of the search and without making any assumptions about it. This
    approach functioned relatively well as long as the volume of information
    did not exceed a certain size, and as long as the users and their
    searches were somewhat similar to one another. In both respects, this is
    no longer the case. The amount of information to be pre-sorted is
    increasing, and users are searching in all possible situations and
    places for everything under the sun. At the time Google was founded, no
    one would have thought to check the internet, quickly and while on
    one\'s way, for today\'s menu at the restaurant round the corner. Now,
    thanks to smartphones, this is an obvious thing to do.
    :::

    ::: {.section}
    ### Algorithm clouds {#c2-sec-0023}

    In order to react to such changes in user behavior -- and simultaneously
    to advance it further -- Google\'s search algorithm is constantly being
    modified. It has become increasingly complex and has assimilated a
    greater amount of contextual []{#Page_115 type="pagebreak"
    title="115"}information, which influences the value of a site within
    PageRank and thus the order of search results. The algorithm is no
    longer a fixed object or unchanging recipe but is transforming into a
    dynamic process, an opaque cloud composed of multiple interacting
    algorithms that are continuously refined (between 500 and 600 times a
    year, according to some estimates). These ongoing developments are so
    extensive that, since 2003, several new versions of the algorithm cloud
    have appeared each year with their own names. In 2014 alone, Google
    carried out 13 large updates, more than ever
    before.[^105^](#c2-note-0105){#c2-note-0105a}

    These changes continue to bring about new levels of abstraction, so that
    the algorithm takes into account additional variables such as the time
    and place of a search, alongside a person\'s previously recorded
    behavior -- but also his or her involvement in social environments, and
    much more. Personalization and contextualization were made part of
    Google\'s search algorithm in 2005. At first it was possible to choose
    whether or not to use these. Since 2009, however, they have been a fixed
    and binding component for everyone who conducts a search through
    Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
    search algorithm had grown to include at least 200
    variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
    that the algorithm no longer determines the position of a document
    within a dynamic informational world that exists for everyone
    externally. Instead, it now assigns each document a rank within a
    dynamic and singular universe of information that is tailored to the
    individual user. For every person, an entirely different order is
    created instead of just an excerpt from a previously existing order. The
    world is no longer being represented; it is generated uniquely for every
    user and then presented. Google is not the only company that has gone
    down this path. Orders produced by algorithms have become increasingly
    oriented toward creating, for each user, his or her own singular world.
    Facebook, dating services, and other social mass media have been
    pursuing this approach even more radically than Google.
    :::

    ::: {.section}
    ### From the data shadow to the synthetic profile {#c2-sec-0024}

    This form of generating the world requires not only detailed information
    about the external world (that is, the reality []{#Page_116
    type="pagebreak" title="116"}shared by everyone) but also information
    about every individual\'s own relation to the
    latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
    established for every user, and the more extensive they are, the better
    they are for the algorithms. A profile created by Google, for instance,
    identifies the user on three levels: as a "knowledgeable person" who is
    informed about the world (this is established, for example, by recording
    a person\'s searches, browsing behavior, etc.), as a "physical person"
    who is located and mobile in the world (a component established, for
    example, by tracking someone\'s location through a smartphone, sensors
    in a smart home, or body signals), and as a "social person" who
    interacts with other people (a facet that can be determined, for
    instance, by following someone\'s activity on social mass
    media).[^109^](#c2-note-0109){#c2-note-0109a}

    Unlike the situation in the 1990s, however, these profiles are no longer
    simply representations of singular people -- they are not "digital
    personas" or "data shadows." They no longer represent what is
    conventionally referred to as "individuality," in the sense of a
    spatially and temporally uniform identity. On the one hand, profiles
    rather consist of sub-individual elements -- of fragments of recorded
    behavior that can be evaluated on the basis of a particular search
    without promising to represent a person as a whole -- and they consist,
    on the other hand, of clusters of multiple people, so that the person
    being modeled can simultaneously occupy different positions in time.
    This temporal differentiation enables predictions of the following sort
    to be made: a person who has already done *x* will, with a probability
    of *y*, go on to engage in activity *z*. It is in this way that Amazon
    assembles its book recommendations, for the company knows that, within
    the cluster of people that constitutes part of every person\'s profile,
    a certain percentage of them have already gone through this sequence of
    activity. Or, as the data-mining company Science Rockstars (!) once
    pointedly expressed on its website, "Your next activity is a function of
    the behavior of others and your own past."
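    Reduced to its simplest form, the prediction expressed in that slogan
    is a conditional frequency computed over the recorded behavior of a
    cluster of people. The sketch below uses invented activity names and
    data; real profiles would draw on vastly larger and more fragmented
    records.

    ```python
    from collections import Counter

    # Invented behavioral records: each list is one person's recorded
    # sequence of activities.
    histories = [
        ["views_book", "reads_reviews", "buys_book"],
        ["views_book", "reads_reviews", "leaves"],
        ["views_book", "buys_book"],
        ["views_book", "reads_reviews", "buys_book"],
    ]

    def next_step_probabilities(histories, done):
        """P(next activity | the person has just done `done`),
        estimated from the behavior of the whole cluster."""
        counts = Counter()
        for history in histories:
            for current, following in zip(history, history[1:]):
                if current == done:
                    counts[following] += 1
        total = sum(counts.values())
        return {activity: n / total for activity, n in counts.items()}

    # "A person who has already done x will, with probability y, do z":
    print(next_step_probabilities(histories, "reads_reviews"))
    # {'buys_book': 0.666..., 'leaves': 0.333...}
    ```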

    Google and other providers of algorithmically generated orders have been
    devoting increased resources to the prognostic capabilities of their
    programs in order to make the confusing and potentially time-consuming
    step of the search obsolete. The goal is to minimize a rift that comes
    to light []{#Page_117 type="pagebreak" title="117"}in the act of
    searching, namely that between the world as everyone experiences it --
    plagued by uncertainty, for searching implies "not knowing something" --
    and the world of algorithmically generated order, in which certainty
    prevails, for everything has been well arranged in advance. Ideally,
    questions should be answered before they are asked. The first attempt by
    Google to eliminate this rift is called Google Now, and its slogan is
    "The right information at just the right time." The program, which was
    originally developed as an app but has since been made available on
    Chrome, Google\'s own web browser, attempts to anticipate, on the basis
    of existing data, a user\'s next step, and to provide the necessary
    information before it is searched for in order that such steps take
    place efficiently. Thus, for instance, it draws upon information from a
    user\'s calendar in order to figure out where he or she will have to go
    next. On the basis of real-time traffic data, it will then suggest the
    optimal way to get there. For those driving cars, the amount of traffic
    on the road will be part of the equation. This is ascertained by
    analyzing the motion profiles of other drivers, which will allow the
    program to determine whether the traffic is flowing or stuck in a jam.
    If enough historical data is taken into account, the hope is that it
    will be possible to redirect cars in such a way that traffic jams should
    no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
    public transport, Google Now evaluates real-time data about the
    locations of various transport services. With this information, it will
    suggest the optimal route and, depending on the calculated travel time,
    it will send a reminder (sometimes earlier, sometimes later) when it is
    time to go. That which Google is just experimenting with and testing in
    a limited and unambiguous context is already part of Facebook\'s
    everyday operations. With its EdgeRank algorithm, Facebook already
    organizes everyone\'s newsfeed, entirely in the background and without
    any explicit user interaction. On the basis of three variables -- user
    affinity (previous interactions between two users), content weight (the
    rate of interaction between all users and a specific piece of content),
    and currency (the age of a post) -- the algorithm selects content from
    the status updates made by one\'s friends to be displayed on one\'s own
    page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
    ensures that the stream of updates remains easy to scroll through, while
    also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
    -- leaving enough room for advertising. This potential for manipulation,
    which algorithms possess as they work away in the background, will be
    the topic of my next section.
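    Before moving on, the three-variable logic attributed to EdgeRank can
    be made concrete in a toy scoring function. The multiplicative form,
    the numeric weights, and the decay constant below are assumptions for
    the sake of illustration; Facebook's actual implementation is not
    public.

    ```python
    import math

    def edge_score(affinity, weight, age_hours, decay=0.05):
        # affinity: prior interaction between viewer and author
        # weight:   how much interaction this type of content attracts
        # exp(-decay * age): older posts count for less ("currency")
        return affinity * weight * math.exp(-decay * age_hours)

    posts = [
        {"label": "close friend, photo, 2h", "affinity": 0.9, "weight": 0.8, "age": 2},
        {"label": "acquaintance, link, 1h", "affinity": 0.2, "weight": 0.5, "age": 1},
        {"label": "close friend, status, 30h", "affinity": 0.9, "weight": 0.4, "age": 30},
    ]

    # The newsfeed is the list of candidate posts sorted by this score,
    # assembled entirely in the background.
    for post in sorted(posts, key=lambda p: -edge_score(p["affinity"],
                                                        p["weight"],
                                                        p["age"])):
        print(post["label"])
    ```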
    :::

    ::: {.section}
    ### Variables and correlations {#c2-sec-0025}

    Every complex algorithm contains a multitude of variables and usually an
    even greater number of ways to make connections between them. Every
    variable and every relation, even if they are expressed in technical or
    mathematical terms, codifies assumptions that express a specific
    position in the world. There can be no purely descriptive variables,
    just as there can be no such thing as "raw
    data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
    -- are always already "cooked"; that is, they are engendered through
    cultural operations and formed within cultural
    categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
    produced data and with every execution of an algorithm, the assumptions
    embedded in them are activated, and the positions contained within them
    have effects on the world that the algorithm generates and presents.

    As already mentioned, the early version of the PageRank algorithm was
    essentially based on the rather simple assumption that frequently linked
    content is more relevant than content that is only seldom linked to, and
    that links to sites that are themselves frequently linked to should be
    given more weight than those found on sites with fewer links to them.
    Replacing the qualitative criterion of "relevance" with the quantitative
    criterion of "popularity" not only proved to be tremendously practical
    but also extremely consequential, for search engines not only describe
    the world; they create it as well. That which search engines put at the
    top of this list is not just already popular but will remain so. A third
    of all users click on the first search result, and around 95 percent do
    not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
    the earliest version of the PageRank algorithm did not represent
    existing reality but rather (co-)constituted it.

    Popularity, however, is not the only element with which algorithms
    actively give shape to the user\'s world. A search engine can only sort,
    weigh, and make available that portion of information which has already
    been incorporated into its index. Everything else remains invisible. The
    relation between []{#Page_119 type="pagebreak" title="119"}the recorded
    part of the internet (the "surface web") and the unrecorded part (the
    "deep web") is difficult to determine. Estimates have varied between
    ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
    many reasons why content might be inaccessible to search engines.
    Perhaps the information has been saved in formats that search engines
    cannot read or can only poorly read, or perhaps it has been hidden
    behind proprietary barriers such as paywalls. In order to expand the
    realm of things that can be exploited by their algorithms, the operators
    of search engines offer extensive guidance about how providers should
    design their sites so that search tools can find them in an optimal
    manner. It is not necessary to follow this guidance, but given the
    central role of search engines in sorting and filtering information, it
    is clear that they exercise a great deal of power by setting the
    standards.[^116^](#c2-note-0116){#c2-note-0116a}

    That the individual must "voluntarily" submit to this authority is
    typical of the power of networks, which do not give instructions but
    rather constitute preconditions. Yet it is in the interest of (almost)
    every producer of information to optimize its position in a search
    engine\'s index, and thus there is a strong incentive to accept the
    preconditions in question. Considering, moreover, the nearly
    monopolistic character of many providers of algorithmically generated
    orders and the high price that one would have to pay if one\'s own site
    were barely (or not at all) visible to others, the term "voluntary"
    begins to take on a rather foul taste. This is a more or less subtle way
    of pre-formatting the world so that it can be optimally recorded by
    algorithms.[^117^](#c2-note-0117){#c2-note-0117a}

    The providers of search engines usually justify such methods in the name
    of offering "more efficient" services and "more relevant" results.
    Ostensibly technical and neutral terms such as "efficiency" and
    "relevance" do little, however, to conceal the political nature of
    defining variables. Efficient with respect to what? Relevant for whom?
    These are issues that are decided without much discussion by the
    developers and institutions that regard the algorithms as their own
    property. Every now and again such questions incite public debates,
    mostly when the interests of one provider happen to collide with those
    of its competition. Thus, for instance, the initiative known as
    FairSearch has argued that Google abuses its market power as a search
    engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
    content and thus to showcase it prominently in search
    results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
    representatives alleged, for example, that Google favors its own map
    service in the case of address searches and its own price comparison
    service in the case of product searches. The argument had an effect. In
    November of 2010, the European Commission initiated an antitrust
    investigation against Google. In 2014, a settlement was proposed that
    would have required the American internet giant to pay certain
    concessions, but the members of the Commission, the EU Parliament, and
    consumer protection agencies were not satisfied with the agreement. In
    April 2015, the antitrust proceedings were recommenced by a newly
    appointed Commission, its reasoning being that "Google does not apply to
    its own comparison shopping service the system of penalties which it
    applies to other comparison shopping services on the basis of defined
    parameters, and which can lead to the lowering of the rank in which they
    appear in Google\'s general search results
    pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
    Commission accused the company of manipulating search results to its own
    advantage and the disadvantage of users.

    This is not the only instance in which the political side of search
    algorithms has come under public scrutiny. In the summer of 2012, Google
    announced that sites with higher numbers of copyright removal notices
    would henceforth appear lower in its
    rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
    introduced explicitly political and economic criteria in order to
    influence what, according to the standards of certain powerful players
    (such as film studios), users were able to
    view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
    be possible to speak of the personalization of searching, except that
    the heart of the situation was not the natural person of the user but
    rather the juridical person of the copyright holder. It was according to
    the latter\'s interests and preferences that searching was being
    reoriented. Amazon has employed similar tactics. In 2014, the online
    merchant changed its celebrated recommendation algorithm with the goal
    of reducing the presence of books released by irritating publishers that
    dared to enter into price negotiations with the
    company.[^122^](#c2-note-0122){#c2-note-0122a}

    Controversies over the methods of Amazon or Google, however, are the
    exception rather than the rule. Necessary (but never neutral) decisions
    about recording and evaluating data []{#Page_121 type="pagebreak"
    title="121"}with algorithms are being made almost all the time without
    any discussion whatsoever. The logic of the original PageRank algorithm
    was criticized as early as the year 2000 for essentially representing
    the commercial logic of mass media, systematically disadvantaging
    less-popular though perhaps otherwise relevant information, and thus
    undermining the "substantive vision of the web as an inclusive
    democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
    the search algorithm that have been adopted since then may have modified
    this tendency, but they have certainly not weakened it. In addition to
    concentrating on what is popular, the new variables privilege recently
    uploaded and constantly updated content. The selection of search results
    is now contingent upon the location of the user, and it takes into
    account his or her social networking. It is oriented toward the average
    of a dynamically modeled group. In other words, Google\'s new algorithm
    favors that which is gaining popularity within a user\'s social network.
    The global village is thus becoming more and more
    provincial.[^124^](#c2-note-0124){#c2-note-0124a}
    :::

    ::: {.section}
    ### Data behaviorism {#c2-sec-0026}

    Algorithms such as Google\'s thus reiterate and reinforce a tendency
    that has already been apparent on both the level of individual users and
    that of communal formations: in order to deal with the vast amounts and
    complexity of information, they direct their gaze inward, which is not
    to say toward the inner being of individual people. As a level of
    reference, the individual person -- with an interior world and with
    ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
    black boxes that can only be understood in terms of their reactions to
    stimuli. Consciousness, perception, and intention do not play any role
    for them. In this regard, the legal philosopher Antoinette Rouvroy has
    written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
    With this, she is referring to the gradual return of a long-discredited
    approach in behavioral psychology which postulated that human behavior
    could be explained, predicted, and controlled purely on the basis of
    outwardly observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
    Psychological dimensions were ignored (and are ignored in this new
    version of behaviorism) because it is difficult to observe them
    empirically. Accordingly, this approach also did away with the need
    []{#Page_122 type="pagebreak" title="122"}to question people directly or
    take into account their subjective experiences, thoughts, and feelings.
    People were regarded (and are so again today) as unreliable, as poor
    judges of themselves, and as only partly honest when disclosing
    information. Any strictly empirical science, or so the thinking went,
    required its practitioners to disregard everything that did not result
    in physical and observable action. From this perspective, it was
    possible to break down even complex behavior into units of stimulus and
    reaction. This led to the conviction that someone observing another\'s
    activity always knows more than the latter does about himself or herself
    for, unlike the person being observed, whose impressions can be
    inaccurate, the observer is in command of objective and complete
    information. Even early on, this approach faced a wave of critique. It
    was held to be mechanistic, reductionist, and authoritarian because it
    privileged the observing scientist over the subject. In practice, it
    quickly ran into its own limitations: it was simply too expensive and
    complicated to gather data about human behavior.

    Yet that has changed radically in recent years. It is now possible to
    measure ever more activities, conditions, and contexts empirically.
    Algorithms like Google\'s or Amazon\'s form the technical backdrop for
    the revival of a mechanistic, reductionist, and authoritarian approach
    that has resurrected the long-lost dream of an objective view -- the
    view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
    of this positivistic perspective -- that every measurement result, for
    instance, reflects not only the measured but also the measurer -- is
    brushed aside with reference to the sheer amounts of data that are now
    at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
    substantiates the claim of those in possession of these new and
    comprehensive powers of observation (which, in addition to Google and
    Facebook, also includes the intelligence services of Western nations),
    namely that they know more about individuals than individuals know about
    themselves, and are thus able to answer our questions before we ask
    them. As mentioned above, this is a goal that Google expressly hopes to
    achieve.

    At issue with this "inward turn" is thus the space of communal
    formations, which is constituted by the sum of all of the activities of
    their interacting participants. In this case, however, a communal
    formation is not consciously created []{#Page_123 type="pagebreak"
    title="123"}and maintained in a horizontal process, but rather
    synthetically constructed as a computational function. Depending on the
    context and the need, individuals can either be assigned to this
    function or removed from it. All of this happens behind the user\'s back
    and in accordance with the goals and positions that are relevant to the
    developers of a given algorithm, be it to optimize profit or
    surveillance, create social norms, improve services, or whatever else.
    The results generated in this way are sold to users as a personalized
    and efficient service that provides a quasi-magical product. Out of the
    enormous haystack of searchable information, results are generated that
    are made to seem like the very needle that we have been looking for. At
    best, it is only partially transparent how these results came about and
    which positions in the world are strengthened or weakened by them. Yet,
    as long as the needle is somewhat functional, most users are content,
    and the algorithm registers this contentedness to validate itself. In
    this dynamic world of unmanageable complexity, users are guided by a
    sort of radical, short-term pragmatism. They are happy to have the world
    pre-sorted for them in order to improve their activity in it. Regarding
    the matter of whether the information being provided represents the
    world accurately or not, they are unable to formulate an adequate
    assessment for themselves, for it is ultimately impossible to answer
    this question without certain resources. Outside of rapidly shrinking
    domains of specialized or everyday knowledge, it is becoming
    increasingly difficult to gain an overview of the world without
    mechanisms that pre-sort it. Users are only able to evaluate search
    results pragmatically; that is, in light of whether or not they are
    helpful in solving a concrete problem. In this regard, it is not
    paramount that they find the best solution or the correct answer but
    rather one that is available and sufficient. This reality lends an
    enormous amount of influence to the institutions and processes that
    provide the solutions and answers.[]{#Page_124 type="pagebreak"
    title="124"}
    :::
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c2-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c2-note-0001a){#c2-note-0001}  André Rottmann, "Reflexive Systems
    of Reference: Approximations to 'Referentialism' in Contemporary Art,"
    trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
    The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
    99.

    [2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
    distinguishes these processes from plagiarism. The latter operates with
    the complete opposite aim, namely that of borrowing sources without
    acknowledging them.

    [3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
    Quartet Books, 1998), p. 34.

    [4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
    Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
    Minnesota Press, 1997), p. 151.

    [5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
    Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
    Minnesota Press, 1984).

    [6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
    Remix-Kultur," *i-rights.info* (May 25, 2009), online.

    [7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
    Statements: Poetische Kalküle und Phantasmen des selbstausführenden
    Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\]

    [8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
    the alphabet, every manuscript is unique because it depends not only on
    the sequence of letters but also on the individual ability of a given
    scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
    particular shape. With the rise of the printing press, the alphabet shed
    these last elements of calligraphy and became typography.

    [9](#c2-note-0009a){#c2-note-0009}  Elisabeth L. Eisenstein, *The
    Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
    University Press, 1983), p. 15.

    [10](#c2-note-0010a){#c2-note-0010}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 204.

    [11](#c2-note-0011a){#c2-note-0011}  The fundamental aspects of these
    conventions were formulated as early as the beginning of the sixteenth
    century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
    Eine historische Fallstudie über die Durchsetzung neuer Informations-
    und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
    420--40.

    [12](#c2-note-0012a){#c2-note-0012}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 49.

    [13](#c2-note-0013a){#c2-note-0013}  In April 2014, the Authors Guild --
    the association of American writers that had sued Google -- filed an
    appeal to overturn the decision and made a public statement demanding
    that a new organization be established to license the digital rights of
    out-of-print books. See "Authors Guild: Amazon was Google's Target,"
    *The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
    In October 2015, however, the next-highest authority -- the United
    States Court of Appeals for the Second Circuit -- likewise decided in
    Google\'s favor. The Authors Guild promptly announced its intention to
    take the case to the Supreme Court.

    [14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
    the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
    Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

    [15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
    for the Future project (2007--14), the Netherlands alone invested more
    than €170 million to digitize the collections of the most important
    audiovisual archives. Over 10 years, the cost of digitizing the entire
    cultural heritage of Europe has been estimated to be around €100
    billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
    Heritage: A Report for the Comité des Sages of the European Commission*
    (November 2010), online.

    [16](#c2-note-0016a){#c2-note-0016}  Robert Darnton, "The National
    Digital Public Library Is Launched!", *New York Review of Books* (April
    25, 2013), online.

    [17](#c2-note-0017a){#c2-note-0017}  According to estimates by the
    British Library, so-called "orphan works" alone -- that is, works still
    legally protected but whose right holders are unknown -- make up around
    40 percent of the books in its collection that still fall under
    copyright law. In an effort to alleviate this problem, the European
    Parliament and the European Commission issued a directive []{#Page_186
    type="pagebreak" title="186"}in 2012 concerned with "certain permitted
    uses of orphan works." This has allowed libraries and archives to make
    works available online without permission if, "after carrying out
    diligent searches," the copyright holders cannot be found. What
    qualifies as a "diligent search," however, is so strictly formulated
    that the German Library Association has called the directive
    "impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
    bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
    2012), online.

    [18](#c2-note-0018a){#c2-note-0018}  UbuWeb, "Frequently Asked
    Questions," online.

    [19](#c2-note-0019a){#c2-note-0019}  The numbers in this area of
    activity are notoriously unreliable, and therefore only rough estimates
    are possible. It seems credible, however, that the Pirate Bay was
    attracting around a billion page views per month by the end of 2013.
    That would make it the seventy-fourth most popular internet destination.
    See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
    2014), online.

    [20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
    The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

    [21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
    any difference between a "stream" and a "download." In both cases, a
    complete file is transferred to the user\'s computer and played.

    [22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
    but illegal in Austria, though digitized texts are routinely made
    available there in seminars. See Seyavash Amini Khanimani and Nikolaus
    Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
    Werknutzung im österreichischen Urheberrecht zur Privilegierung
    elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
    2011), online.

    [23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
    "Digitalisierung" (2015), online \[--trans\].

    [24](#c2-note-0024a){#c2-note-0024}  David Weinberger, *Everything Is
    Miscellaneous: The Power of the New Digital Disorder* (New York: Times
    Books, 2007).

    [25](#c2-note-0025a){#c2-note-0025}  This is not a question of material
    wealth. Those who are economically or socially marginalized are
    confronted with the same phenomenon. Their primary experience of this
    excess is with cheap goods and junk.

    [26](#c2-note-0026a){#c2-note-0026}  See Gregory Bateson, "Form,
    Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
    Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
    "\[I\]n fact, what we mean by information -- the elementary unit of
    information -- is *a difference which makes a difference*" (the emphasis
    is original).

    [27](#c2-note-0027a){#c2-note-0027}  Inke Arns and Gabriele Horn,
    *History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
    42.[]{#Page_187 type="pagebreak" title="187"}

    [28](#c2-note-0028a){#c2-note-0028}  See the film *The Battle of
    Orgreave* (2001), directed by Mike Figgis.

    [29](#c2-note-0029a){#c2-note-0029}  Theresa Winge, "Costuming the
    Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
    pp. 65--76.

    [30](#c2-note-0030a){#c2-note-0030}  Nicolle Lamerichs, "Stranger than
    Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
    (2011), online.

    [31](#c2-note-0
  • Stalder
    The Digital Condition
    2018


    ---
    lang: en
    title: The Digital Condition
    ---

    ::: {.figure}
    []{#coverstart}

    ![Cover page](images/cover.jpg)
    :::

    Table of Contents

    1. [Preface to the English Edition](#fpref)
    2. [Acknowledgments](#ack)
    3. [Introduction: After the End of the Gutenberg Galaxy](#cintro)
        1. [Notes](#f6-ntgp-9999)
    4. [I: Evolution](#c1)
        1. [The Expansion of the Social Basis of Culture](#c1-sec-0002)
        2. [The Culturalization of the World](#c1-sec-0006)
        3. [The Technologization of Culture](#c1-sec-0009)
        4. [From the Margins to the Center of Society](#c1-sec-0013)
        5. [Notes](#c1-ntgp-9999)
    5. [II: Forms](#c2)
        1. [Referentiality](#c2-sec-0002)
        2. [Communality](#c2-sec-0009)
        3. [Algorithmicity](#c2-sec-0018)
        4. [Notes](#c2-ntgp-9999)
    6. [III: Politics](#c3)
        1. [Post-democracy](#c3-sec-0002)
        2. [Commons](#c3-sec-0011)
        3. [Against a Lack of Alternatives](#c3-sec-0017)
        4. [Notes](#c3-ntgp-9999)

    [Preface to the English Edition]{.chapterTitle} {#fpref}

    ::: {.section}
    This book posits that we in the societies of the (transatlantic) West
    find ourselves in a new condition. I call it "the digital condition"
    because it gained its dominance as computer networks became established
    as the key infrastructure for virtually all aspects of life. However,
    the emergence of this condition pre-dates computer networks. In fact, it
    has deep historical roots, some of which go back to the late nineteenth
    century, but it really came into being after the late 1960s. As many of
    the cultural and political institutions shaped by the previous condition
    -- which McLuhan called the Gutenberg Galaxy -- fell into crisis, new
    forms of personal and collective orientation and organization emerged
    which have been shaped by the affordances of this new condition. Both
    the historical processes which unfolded over a very long time and the
    structural transformation which took place in a myriad of contexts have
    been beyond any deliberate influence. Although obviously caused by
    social actors, the magnitude of such changes was simply too great, too
    distributed, and too complex to be attributed to, or molded by, any
    particular (set of) actor(s).

    Yet -- and this is the core of what motivated me to write this book --
    this does not mean that we have somehow moved beyond the political,
    beyond the realm in which identifiable actors and their projects do
    indeed shape our collective []{#Page_vii type="pagebreak"
    title="vii"}existence, or that there are no alternatives to future
    development already expressed within contemporary dynamics. On the
    contrary, we can see very clearly that as the center -- the established
    institutions shaped by the affordances of the previous condition -- is
    crumbling, more economic and political projects are rushing in to fill
    that void with new institutions that advance their competing agendas.
    These new institutions are well adapted to the digital condition, with
    its chaotic production of vast amounts of information and innovative
    ways of dealing with that.

    From this, two competing trajectories have emerged which are
    simultaneously transforming the space of the political. First, I used
    the term "post-democracy" because it expands possibilities, and even
    requirements, of (personal) participation, while ever larger aspects of
    (collective) decision-making are moved to arenas that are structurally
    disconnected from those of participation. In effect, these arenas are
    forming an authoritarian reality in which a small elite is vastly
    empowered at the expense of everyone else. The purest incarnation of
    this tendency can be seen in the commercial social mass media, such as
    Facebook, Google, and the others, as they were newly formed in this
    condition and have not (yet) had to deal with the complications of
    transforming their own legacy.

    For the other trajectory, I applied the term "commons" because it
    expands both the possibilities of personal participation and agency, and
    those of collective decision-making. This tendency points to a
    redefinition of democracy beyond the hollowed-out forms of political
    representation characterizing the legacy institutions of liberal
    democracy. The purest incarnation of this tendency can be found in the
    institutions that produce the digital commons, such as Wikipedia and the
    various Free Software communities whose work has been and still is
    absolutely crucial for the infrastructural dimensions of the digital
    networks. They are the most advanced because, again, they have not had
    to deal with institutional legacies. But both tendencies are no longer
    confined to digital networks and are spreading across all aspects of
    social life, creating a reality that is, on the structural level,
    surprisingly coherent and, on the social and political level, full of
    contradictions and thus opportunities.[]{#Page_viii type="pagebreak"
    title="viii"}

    I traced some aspects of these developments right up to early 2016, when
    the German version of this book went into production. Since then a lot
    has happened, but I resisted the temptation to update the book for the
    English translation because ideas are always an expression of their
    historical moment and, as such, updating either turns into a completely
    new version or a retrospective adjustment of the historical record.

    What has become increasingly obvious during 2016 and into 2017 is that
    central institutions of liberal democracy are crumbling more quickly and
    dramatically than was expected. The race to replace them has kicked into
    high gear. The main events driving forward an authoritarian renewal of
    politics took place on a national level, in particular the vote by the
    UK to leave the EU (Brexit) and the election of Donald Trump to the
    office of president of the United States of America. The main events
    driving the renewal of democracy took place on a metropolitan level,
    namely the emergence of a network of "rebel cities," led by Barcelona
    and Madrid. There, community-based social movements established their
    candidates in the highest offices. These cities are now putting in place
    practical examples that other cities could emulate and adapt. For the
    concerns of this book, the most important concept put forward is that of
    "technological sovereignty": to bring the technological infrastructure,
    and its developmental potential, back under the control of those who are
    using it and are affected by it; that is, the citizens of the
    metropolis.

    Over the last 18 months, the imbalances between the two trajectories
    have become even more extreme because authoritarian tendencies and
    surveillance capitalism have been strengthened more quickly than the
    commons-oriented practices could establish themselves. But it does not
    change the fact that there are fundamental alternatives embedded in the
    digital condition. Despite structural transformations that affect how we
    do things, there is no inevitability about what we want to do
    individually and, even more importantly, collectively.

    ::: {.poem}
    ::: {.lineGroup}
    Zurich/Vienna, July 2017[]{#Page_ix type="pagebreak" title="ix"}
    :::
    :::
    :::

    [Acknowledgments]{.chapterTitle} {#ack}

    ::: {.section}
    While it may be conventional to cite one person as the author of a book,
    writing is a process with many collective elements. This book in
    particular draws upon many sources, most of which I am no longer able to
    acknowledge with any certainty. Far too often, important references came
    to me in parenthetical remarks, in fleeting encounters, during trips, at
    the fringes of conferences, or through discussions of things that,
    though entirely new to me, were so obvious to others as not to warrant
    any explication. Often, too, my thinking was influenced by long
    conversations, and it is impossible for me now to identify the precise
    moments of inspiration. As far as the themes of this book are concerned,
    four settings were especially important. The international discourse
    network "nettime," which has a mailing list of 4,500 members and which I
    have been moderating since the late 1990s, represents an inexhaustible
    source of internet criticism and, as a collaborative filter, has enabled
    me to follow a wide range of developments from a particular point of
    view. I am also indebted to the Zurich University of the Arts, where I
    have taught for more than 10 years and where the students have been
    willing to explain to me, again and again, what is already self-evident
    to them. Throughout my time there, I have been able to observe a
    dramatic shift. For today\'s students, the "new" is no longer new but
    simply obvious, whereas they []{#Page_x type="pagebreak" title="x"}have
    experienced many things previously regarded as normal -- such as
    checking out a book from a library (instead of downloading it) -- as
    needlessly complicated. In Vienna, the hub of my life, the World
    Information Institute has for many years provided a platform for
    conferences, publications, and interventions that have repeatedly raised
    the stakes of the discussion and have brought together the most
    interesting range of positions without regard to any disciplinary
    boundaries. Housed in Vienna, too, is the Technopolitics Project, a
    non-institutionalized circle of researchers and artists whose
    discussions of techno-economic paradigms have informed this book in
    fundamental ways and which has offered multiple opportunities for me to
    workshop inchoate ideas.

    Not everything, however, takes place in diffuse conversations and
    networks. I was also able to rely on the generous support of several
    individuals who, at one stage or another, read through, commented upon,
    and made crucial improvements to the manuscript: Leonhard Dobusch,
    Günther Hack, Katja Meier, Florian Cramer, Cornelia Sollfrank, Beat
    Brogle, Volker Grassmuck, Ursula Stalder, Klaus Schönberger, Konrad
    Becker, Armin Medosch, Axel Stockburger, and Gerald Nestler. Special
    thanks are owed to Rebina Erben-Hartig, who edited the original German
    manuscript and greatly improved its readability. I am likewise grateful
    to Heinrich Greiselberger and Christian Heilbronn of the Suhrkamp
    Verlag, whose faith in the book never wavered despite several delays.
    Regarding the English version at hand, it has been a privilege to work
    with a translator as skillful as Valentine Pakis. Over the past few
years, writing this book might have been the most important project in
    my life had it not been for Andrea Mayr. In this regard, I have been
    especially fortunate.[]{#Page_xi type="pagebreak"
    title="xi"}[]{#Page_xii type="pagebreak" title="xii"}
    :::

    Introduction [After the End of the Gutenberg Galaxy]{.chapterTitle} []{.chapterSubTitle} {#cintro}

    ::: {.section}
    The show had already been going on for more than three hours, but nobody
    was bothered by this. Quite the contrary. The tension in the venue was
    approaching its peak, and the ratings were through the roof. Throughout
    all of Europe, 195 million people were watching the spectacle on
    television, and the social mass media were gaining steam. On Twitter,
    more than 47,000 messages were being sent every minute with the hashtag
    \#Eurovision.[^1^](#f6-note-0001){#f6-note-0001a} The outcome was
    decided shortly after midnight: Conchita Wurst, the bearded diva, was
    announced the winner of the 2014 Eurovision Song Contest. Cheers erupted
    as the public celebrated the victor -- but also itself. At long last,
    there was more to the event than just another round of tacky television
    programming ("This is Ljubljana calling!"). Rather, a statement was made
    -- a statement in favor of tolerance and against homophobia, for
    diversity and for the right to define oneself however one pleases. And
    Europe sent this message in the midst of a crisis and despite ongoing
    hostilities, not to mention all of the toxic rumblings that could be
    heard about decadence, cultural decay, and Gayropa. Visibly moved, the
    Austrian singer let out an exclamation -- "We are unity, and we are
    unstoppable!" -- as she returned to the stage with wobbly knees to
    accept the trophy.

    With her aesthetically convincing performance, Conchita succeeded in
    unleashing a strong desire for personal []{#Page_1 type="pagebreak"
    title="1"}self-discovery, for community, and for overcoming stale
    conventions. And she did this through a character that mainstream
    society would have considered paradoxical and deviant not long ago but
    has since come to understand: attractive beyond the dichotomy of man and
    woman, explicitly artificial and yet entirely authentic. This peculiar
    conflation of artificiality and naturalness is equally present in
    Berndnaut Smilde\'s photographic work of a real indoor cloud (*Nimbus*,
2010) on the cover of this book. Conchita\'s performance was also
seemingly paradoxical on a formal level: extremely focused and completely
    open. Unlike most of the other acts, she took the stage alone, and
    though she hardly moved at all, she nevertheless incited the audience to
    participate in numerous ways and genuinely to act out the motto of the
    contest ("Join us!"). Throughout the early rounds of the competition,
    the beard, which was at first so provocative, transformed into a
    free-floating symbol that the public began to appropriate in various
    ways. Men and women painted Conchita-like beards on their faces,
    newspapers printed beards to be cut out, and fans crocheted beards. Not
    only did someone Photoshop a beard on to a painting of Empress Sissi of
    Austria, but King Willem-Alexander of the Netherlands even tweeted a
    deceptively realistic portrait of his wife, Queen Máxima, wearing a
    beard. From one of the biggest stages of all, the evening of Wurst\'s
    victory conveyed an impression of how much the culture of Europe had
    changed in recent years, both in terms of its content and its forms.
    That which had long been restricted to subcultural niches -- the
fluidity of gender identities, appropriation as a cultural technique,
    or the conflation of reception and production, for instance -- was now
    part of the mainstream. Even while sitting in front of the television,
    this mainstream was no longer just a private audience but rather a
    multitude of singular producers whose networked activity -- on location
    or on social mass media -- lent particular significance to the occasion
    as a moment of collective self-perception.

    It is more than half a century since Marshall McLuhan announced the end
    of the Modern era, a cultural epoch that he called the Gutenberg Galaxy
    in honor of the print medium by which it was so influenced. What was
    once just an abstract speculation of media theory, however, now
    describes []{#Page_2 type="pagebreak" title="2"}the concrete reality of
    our everyday life. What\'s more, we have moved well past McLuhan\'s
diagnosis: we are no longer merely observing the erosion of old cultural
forms, institutions, and certainties; new ones have already formed, and
their contours are easy to identify not only in niche sectors but
    in the mainstream. Shortly before Conchita\'s triumph, Facebook thus
    expanded the gender-identity options for its billion-plus users from 2
    to 60. In addition to "male" and "female," users of the English version
    of the site can now choose from among the following categories:

    ::: {.extract}
    Agender, Androgyne, Androgynes, Androgynous, Asexual, Bigender, Cis, Cis
    Female, Cis Male, Cis Man, Cis Woman, Cisgender, Cisgender Female,
    Cisgender Male, Cisgender Man, Cisgender Woman, Female to Male (FTM),
    Female to Male Trans Man, Female to Male Transgender Man, Female to Male
    Transsexual Man, Gender Fluid, Gender Neutral, Gender Nonconforming,
    Gender Questioning, Gender Variant, Genderqueer, Hermaphrodite,
    Intersex, Intersex Man, Intersex Person, Intersex Woman, Male to Female
    (MTF), Male to Female Trans Woman, Male to Female Transgender Woman,
    Male to Female Transsexual Woman, Neither, Neutrois, Non-Binary, Other,
    Pangender, Polygender, T\*Man, Trans, Trans Female, Trans Male, Trans
    Man, Trans Person, Trans\*Female, Trans\*Male, Trans\*Man,
    Trans\*Person, Trans\*Woman, Transexual, Transexual Female, Transexual
    Male, Transexual Man, Transexual Person, Transexual Woman, Transgender
    Female, Transgender Person, Transmasculine, T\*Woman, Two\*Person,
    Two-Spirit, Two-Spirit Person.
    :::

    This enormous proliferation of cultural possibilities is an expression
    of what I will refer to below as the digital condition. Far from being
    universally welcomed, its growing presence has also instigated waves of
    nostalgia, diffuse resentments, and intellectual panic. Conservative and
    reactionary movements, which oppose such developments and desire to
    preserve or even re-create previous conditions, have been on the rise.
    Likewise in 2014, for instance, a cultural dispute broke out in normally
subdued Baden-Württemberg over which forms of sexual partnership should
    be mentioned positively in the sexual education curriculum. Its impetus
    was a working paper released at the end of 2013 by the state\'s
    []{#Page_3 type="pagebreak" title="3"}Ministry of Culture. Among other
    things, it proposed that adolescents "should confront their own sexual
    identity and orientation \[...\] from a position of acceptance with
    respect to sexual diversity."[^2^](#f6-note-0002){#f6-note-0002a} In a
    short period of time, a campaign organized mainly through social mass
    media collected more than 200,000 signatures in opposition to the
    proposal and submitted them to the petitions committee at the state
    parliament. At that point, the government responded by putting the
    initiative on ice. However, according to the analysis presented in this
    book, leaving it on ice creates a precarious situation.

    The rise and spread of the digital condition is the result of a
    wide-ranging and irreversible cultural transformation, the beginnings of
    which can in part be traced back to the nineteenth century. Since the
    1960s, however, this shift has accelerated enormously and has
    encompassed increasingly broader spheres of social life. More and more
    people have been participating in cultural processes; larger and larger
    dimensions of existence have become battlegrounds for cultural disputes;
    and social activity has been intertwined with increasingly complex
    technologies, without which it would hardly be possible to conceive of
    these processes, let alone achieve them. The number of competing
    cultural projects, works, reference points, and reference systems has
    been growing rapidly. This, in turn, has caused an escalating crisis for
    the established forms and institutions of culture, which are poorly
    equipped to deal with such an inundation of new claims to meaning. Since
    roughly the year 2000, many previously independent developments have
    been consolidating, gaining strength and modifying themselves to form a
    new cultural constellation that encompasses broad segments of society --
    a new galaxy, as McLuhan might have
    said.[^3^](#f6-note-0003){#f6-note-0003a} These days it is relatively
    easy to recognize the specific forms that characterize it as a whole and
    how these forms have contributed to new, contradictory and
    conflict-laden political dynamics.

    My argument, which is restricted to cultural developments in the
    (transatlantic) West, is divided into three chapters. In the first, I
    will outline the *historical* developments that have given rise to this
    quantitative and qualitative change and have led to the crisis faced by
    the institutions of the late phase of the Gutenberg Galaxy, which
    defined the last third []{#Page_4 type="pagebreak" title="4"}of the
    twentieth century.[^4^](#f6-note-0004){#f6-note-0004a} The expansion of
    the social basis of cultural processes will be traced back to changes in
    the labor market, to the self-empowerment of marginalized groups, and to
    the dissolution of centralized cultural geography. The broadening of
    cultural fields will be discussed in terms of the rise of design as a
    general creative discipline, and the growing significance of complex
    technologies -- as fundamental components of everyday life -- will be
    tracked from the beginnings of independent media up to the development
    of the internet as a mass medium. These processes, which at first
    unfolded on their own and may have been reversible on an individual
basis, are integrated today and represent a socially dominant component
    of the coherent digital condition. From the perspective of cultural
    studies and media theory, the second chapter will delineate the already
    recognizable features of this new culture. Concerned above all with the
    analysis of forms, its focus is thus on the question of "how" cultural
    practices operate. It is only because specific forms of culture,
exchange, and expression are prevalent across diverse varieties of
    content, social spheres, and locations that it is even possible to speak
    of the digital condition in the singular. Three examples of such forms
    stand out in particular. *Referentiality* -- that is, the use of
    existing cultural materials for one\'s own production -- is an essential
    feature of many methods for inscribing oneself into cultural processes.
    In the context of unmanageable masses of shifting and semantically open
    reference points, the act of selecting things and combining them has
    become fundamental to the production of meaning and the constitution of
    the self. The second feature that characterizes these processes is
    *communality*. It is only through a collectively shared frame of
    reference that meanings can be stabilized, possible courses of action
    can be determined, and resources can be made available. This has given
    rise to communal formations that generate self-referential worlds, which
    in turn modulate various dimensions of existence -- from aesthetic
    preferences to the methods of biological reproduction and the rhythms of
    space and time. In these worlds, the dynamics of network power have
    reconfigured notions of voluntary and involuntary behavior, autonomy,
    and coercion. The third feature of the new cultural landscape is its
    *algorithmicity*. It is characterized, in other []{#Page_5
    type="pagebreak" title="5"}words, by automated decision-making processes
    that reduce and give shape to the glut of information, by extracting
    information from the volume of data produced by machines. This extracted
    information is then accessible to human perception and can serve as the
    basis of singular and communal activity. Faced with the enormous amount
    of data generated by people and machines, we would be blind were it not
    for algorithms.

    The third chapter will focus on *political dimensions*. These are the
    factors that enable the formal dimensions described in the preceding
    chapter to manifest themselves in the form of social, political, and
    economic projects. Whereas the first chapter is concerned with long-term
and irreversible historical processes, and the second outlines the
    general cultural forms that emerged from these changes with a certain
    degree of inevitability, my concentration here will be on open-ended
    dynamics that can still be influenced. A contrast will be made between
    two political tendencies of the digital condition that are already quite
    advanced: *post-democracy* and *commons*. Both take full advantage of
    the possibilities that have arisen on account of structural changes and
    have advanced them even further, though in entirely different
    directions. "Post-democracy" refers to strategies that counteract the
    enormously expanded capacity for social communication by disconnecting
    the possibility to participate in things from the ability to make
    decisions about them. Everyone is allowed to voice his or her opinion,
    but decisions are ultimately made by a select few. Even though growing
    numbers of people can and must take responsibility for their own
    activity, they are unable to influence the social conditions -- the
    social texture -- under which this activity has to take place. Social
    mass media such as Facebook and Google will receive particular attention
    as the most conspicuous manifestations of this tendency. Here, under new
    structural provisions, a new combination of behavior and thought has
    been implemented that promotes the normalization of post-democracy and
    contributes to its otherwise inexplicable acceptance in many areas of
    society. "Commons," on the contrary, denotes approaches for developing
    new and comprehensive institutions that not only directly combine
    participation and decision-making but also integrate economic, social,
    and ethical spheres -- spheres that Modernity has tended to keep
    apart.[]{#Page_6 type="pagebreak" title="6"}

    Post-democracy and commons can be understood as two lines of development
    that point beyond the current crisis of liberal democracy and represent
    new political projects. One can be characterized as an essentially
    authoritarian system, the other as a radical expansion and renewal of
    democracy, from the notion of representation to that of participation.

    Even though I have brought together a number of broad perspectives, I
    have refrained from discussing certain topics that a book entitled *The
    Digital Condition* might be expected to address, notably the matter of
    copyright, for one example. This is easy to explain. As regards the new
    forms at the heart of this book, none of these developments requires or
    justifies copyright law in its present form. In any case, my thoughts on
    the matter were published not long ago in another book, so there is no
    need to repeat them here.[^5^](#f6-note-0005){#f6-note-0005a} The theme
    of privacy will also receive little attention. This is not because I
    share the view, held by proponents of "post-privacy," that it would be
    better for all personal information to be made available to everyone. On
    the contrary, this position strikes me as superficial and naïve. That
    said, the political function of privacy -- to safeguard a degree of
    personal autonomy from powerful institutions -- is based on fundamental
    concepts that, in light of the developments to be described below,
    urgently need to be updated. This is a task, however, that would take me
    far beyond the scope of the present
    book.[^6^](#f6-note-0006){#f6-note-0006a}

Before moving on to the first chapter, I should briefly explain my
    somewhat unorthodox understanding of the central concepts in the title
    of the book -- "condition" and "digital." In what follows, the term
    "condition" will be used to designate a cultural condition whereby the
    processes of social meaning -- that is, the normative dimension of
    existence -- are explicitly or implicitly negotiated and realized by
    means of singular and collective activity. Meaning, however, does not
    manifest itself in signs and symbols alone; rather, the practices that
    engender it and are inspired by it are consolidated into artifacts,
    institutions, and lifeworlds. In other words, far from being a symbolic
    accessory or mere overlay, culture in fact directs our actions and gives
    shape to society. By means of materialization and repetition, meaning --
    both as claim and as reality -- is made visible, productive, and
    negotiable. People are free to accept it, reject it, or ignore
    []{#Page_7 type="pagebreak" title="7"}it altogether. Social meaning --
    that is, meaning shared by multiple people -- can only come about
    through processes of exchange within larger or smaller formations.
    Production and reception (to the extent that it makes any sense to
    distinguish between the two) do not proceed linearly here, but rather
    loop back and reciprocally influence one another. In such processes, the
    participants themselves determine, in a more or less binding manner, how
    they stand in relation to themselves, to each other, and to the world,
    and they determine the frame of reference in which their activity is
    oriented. Accordingly, culture is not something static or something that
    is possessed by a person or a group, but rather a field of dispute that
is subject to multiple ongoing changes, each happening
    at its own pace. It is characterized by processes of dissolution and
    constitution that may be collaborative, oppositional, or simply
    operating side by side. The field of culture is pervaded by competing
    claims to power and mechanisms for exerting it. This leads to conflicts
    about which frames of reference should be adopted for different fields
    and within different social groups. In such conflicts,
    self-determination and external determination interact until a point is
    reached at which both sides are mutually constituted. This, in turn,
    changes the conditions that give rise to shared meaning and personal
    identity.

    In what follows, this broadly post-structuralist perspective will inform
    my discussion of the causes and formational conditions of cultural
    orders and their practices. Culture will be conceived throughout as
    something heterogeneous and hybrid. It draws from many sources; it is
    motivated by the widest possible variety of desires, intentions, and
    compulsions; and it mobilizes whatever resources might be necessary for
    the constitution of meaning. This emphasis on the materiality of culture
    is also reflected in the concept of the digital. Media are relational
    technologies, which means that they facilitate certain types of
    connection between humans and
    objects.[^7^](#f6-note-0007){#f6-note-0007a} "Digital" thus denotes the
    set of relations that, on the infrastructural basis of digital networks,
is realized today in the production, use, and transformation of
    material and immaterial goods, and in the constitution and coordination
    of personal and collective activity. In this regard, the focus is less
    on the dominance of a certain class []{#Page_8 type="pagebreak"
    title="8"}of technological artifacts -- the computer, for instance --
    and even less on distinguishing between "digital" and "analog,"
    "material" and "immaterial." Even in the digital condition, the analog
    has not gone away. Rather, it has been re-evaluated and even partially
    upgraded. The immaterial, moreover, is never entirely without
    materiality. On the contrary, the fleeting impulses of digital
    communication depend on global and unmistakably material infrastructures
    that extend from mines beneath the surface of the earth, from which rare
    earth metals are extracted, all the way into outer space, where
    satellites are circling around above us. Such things may be ignored
    because they are outside the experience of everyday life, but that does
    not mean that they have disappeared or that they are of any less
    significance. "Digital" thus refers to historically new possibilities
    for constituting and connecting various human and non-human actors,
    which is not limited to digital media but rather appears everywhere as a
    relational paradigm that alters the realm of possibility for numerous
    materials and actors. My understanding of the digital thus approximates
    the concept of the "post-digital," which has been gaining currency over
    the past few years within critical media cultures. Here, too, the
    distinction between "new" and "old" media and all of the ideological
    baggage associated with it -- for instance, that the new represents the
    future while the old represents the past -- have been rejected. The
    aesthetic projects that continue to define the image of the "digital" --
    immateriality, perfection, and virtuality -- have likewise been
    discarded.[^8^](#f6-note-0008){#f6-note-0008a} Above all, the
    "post-digital" is a critical response to this techno-utopian aesthetic
    and its attendant economic and political perspectives. According to the
    cultural theorist Florian Cramer, the concept accommodates the fact that
    "new ethical and cultural conventions which became mainstream with
    internet communities and open-source culture are being retroactively
    applied to the making of non-digital and post-digital media
    products."[^9^](#f6-note-0009){#f6-note-0009a} He thus cites the trend
    that process-based practices oriented toward open interaction, which
    first developed within digital media, have since begun to appear in more
    and more contexts and in an increasing number of
materials.[^10^](#f6-note-0010){#f6-note-0010a}[]{#Page_9 type="pagebreak" title="9"}

    For the historical, cultural-theoretical, and political perspectives
    developed in this book, however, the concept of the post-digital is
    somewhat problematic, for it requires the narrow context of media art
    and its fixation on technology in order to become a viable
    counter-position. Without this context, certain misunderstandings are
    impossible to avoid. The prefix "post-," for instance, is often
    interpreted in the sense that something is over or that we have at least
    grasped the matters at hand and can thus turn to something new. The
    opposite is true. The most enduringly relevant developments are only now
    beginning to adopt a specific form, long after digital infrastructures
    and the practices made popular by them have become part of our everyday
    lives. Or, as the communication theorist and consultant Clay Shirky puts
    it, "Communication tools don\'t get socially interesting until they get
    technologically boring."[^11^](#f6-note-0011){#f6-note-0011a} For it is
    only today, now that our fascination for this technology has waned and
    its promises sound hollow, that culture and society are being defined by
    the digital condition in a comprehensive sense. Before, this was the
    case in just a few limited spheres. It is this hybridization and
    solidification of the digital -- the presence of the digital beyond
    digital media -- that lends the digital condition its dominance. As to
    the concrete realities in which these things will materialize, this is
    currently being decided in an open and ongoing process. The aim of this
    book is to contribute to our understanding of this process.[]{#Page_10
    type="pagebreak" title="10"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#f6-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#f6-note-0001a){#f6-note-0001}  Dan Biddle, "Five Million Tweets for
    \#Eurovision 2014," *Twitter UK* (May 11, 2014), online.

    [2](#f6-note-0002a){#f6-note-0002}  Ministerium für Kultus, Jugend und
    Sport -- Baden-Württemberg, "Bildungsplanreform 2015/2016 -- Verankerung
    von Leitprinzipien," online \[--trans.\].

    [3](#f6-note-0003a){#f6-note-0003}  As early as 1995, Wolfgang Coy
    suggested that McLuhan\'s metaphor should be supplanted by the concept
    of the "Turing Galaxy," but this never caught on. See his introduction
    to the German edition of *The Gutenberg Galaxy*: "Von der Gutenbergschen
    zur Turingschen Galaxis: Jenseits von Buchdruck und Fernsehen," in
    Marshall McLuhan, *Die Gutenberg Galaxis: Das Ende des Buchzeitalters*,
    (Cologne: Addison-Wesley, 1995), pp. vii--xviii.[]{#Page_176
    type="pagebreak" title="176"}

    [4](#f6-note-0004a){#f6-note-0004}  According to the analysis of the
    Spanish sociologist Manuel Castells, this crisis began almost
    simultaneously in highly developed capitalist and socialist societies,
    and it did so for the same reason: the paradigm of "industrialism" had
    reached the limits of its productivity. Unlike the capitalist societies,
    which were flexible enough to tame the crisis and reorient their
    economies, the socialism of the 1970s and 1980s experienced stagnation
    until it ultimately, in a belated effort to reform, collapsed. See
    Manuel Castells, *End of Millennium*, 2nd edn (Oxford: Wiley-Blackwell,
    2010), pp. 5--68.

    [5](#f6-note-0005a){#f6-note-0005}  Felix Stalder, *Der Autor am Ende
    der Gutenberg Galaxis* (Zurich: Buch & Netz, 2014).

    [6](#f6-note-0006a){#f6-note-0006}  For my preliminary thoughts on this
    topic, see Felix Stalder, "Autonomy and Control in the Era of
    Post-Privacy," *Open: Cahier on Art and the Public Domain* 19 (2010):
    78--86; and idem, "Privacy Is Not the Antidote to Surveillance,"
    *Surveillance & Society* 1 (2002): 120--4. For a discussion of these
    approaches, see the working paper by Maja van der Velden, "Personal
    Autonomy in a Post-Privacy World: A Feminist Technoscience Perspective"
    (2011), online.

    [7](#f6-note-0007a){#f6-note-0007}  Accordingly, the "new social" media
    are mass media in the sense that they influence broadly disseminated
    patterns of social relations and thus shape society as much as the
    traditional mass media had done before them.

    [8](#f6-note-0008a){#f6-note-0008}  Kim Cascone, "The Aesthetics of
    Failure: 'Post-Digital' Tendencies in Contemporary Computer Music,"
    *Computer Music Journal* 24/2 (2000): 12--18.

    [9](#f6-note-0009a){#f6-note-0009}  Florian Cramer, "What Is
    'Post-Digital'?" *Post-Digital Research* 3 (2014), online.

    [10](#f6-note-0010a){#f6-note-0010}  In the field of visual arts,
    similar considerations have been made regarding "post-internet art." See
    Artie Vierkant, "The Image Object Post-Internet,"
    [jstchillin.org](http://jstchillin.org) (December 2010), online; and Ian
    Wallace, "What Is Post-Internet Art? Understanding the Revolutionary New
    Art Movement," *Artspace* (March 18, 2014), online.

    [11](#f6-note-0011a){#f6-note-0011}  Clay Shirky, *Here Comes Everybody:
    The Power of Organizing without Organizations* (New York: Penguin,
    2008), p. 105.
    :::
    :::

[I]{.chapterNumber} [Evolution]{.chapterTitle} {#c1}
    ::: {.section}
    Many authors have interpreted the new cultural realities that
    characterize our daily lives as a direct consequence of technological
    developments: the internet is to blame! This assumption is not only
    empirically untenable; it also leads to a problematic assessment of the
    current situation. Apparatuses are represented as "central actors," and
    this suggests that new technologies have suddenly revolutionized a
    situation that had previously been stable. Depending on one\'s point of
    view, this is then regarded as "a blessing or a
    curse."[^1^](#c1-note-0001){#c1-note-0001a} A closer examination,
    however, reveals an entirely different picture. Established cultural
    practices and social institutions had already been witnessing the
    erosion of their self-evident justification and legitimacy, long before
    they were faced with new technologies and the corresponding demands
    these make on individuals. Moreover, the allegedly new types of
coordination and cooperation are not so new after all. Many of them
    have existed for a long time. At first most of them were totally
    separate from the technologies for which, later on, they would become
    relevant. It is only in retrospect that these developments can be
    identified as beginnings, and it can be seen that much of what we regard
    today as novel or revolutionary was in fact introduced at the margins of
    society, in cultural niches that were unnoticed by the dominant actors
    and institutions. The new technologies thus evolved against a
    []{#Page_11 type="pagebreak" title="11"}background of processes of
    societal transformation that were already under way. They could only
    have been developed once a vision of their potential had been
    formulated, and they could only have been disseminated where demand for
    them already existed. This demand was created by social, political, and
    economic crises, which were themselves initiated by changes that were
    already under way. The new technologies seemed to provide many differing
    and promising answers to the urgent questions that these crises had
    prompted. It was thus a combination of positive vision and pressure that
    motivated a great variety of actors to change, at times with
    considerable effort, the established processes, mature institutions, and
    their own behavior. They intended to appropriate, for their own
    projects, the various and partly contradictory possibilities that they
    saw in these new technologies. Only then did a new technological
    infrastructure arise.

    This, in turn, created the preconditions for previously independent
    developments to come together, strengthening one another and enabling
    them to spread beyond the contexts in which they had originated. Thus,
    they moved from the margins to the center of culture. And by
    intensifying the crisis of previously established cultural forms and
    institutions, they became dominant and established new forms and
    institutions of their own.
    :::

    ::: {.section}
    The Expansion of the Social Basis of Culture {#c1-sec-0002}
    --------------------------------------------

    Watching television discussions from the 1950s and 1960s today, one is
    struck not only by the billows of cigarette smoke in the studio but also
    by the homogeneous spectrum of participants. Usually, it was a group of
    white and heteronormatively behaving men speaking with one
    another,[^2^](#c1-note-0002){#c1-note-0002a} as these were the people
    who held the important institutional positions in the centers of the
    West. As a rule, those involved were highly specialized representatives
    from the cultural, economic, scientific, and political spheres. Above
    all, they were legitimized to appear in public to articulate their
    opinions, which were to be regarded by others as relevant and worthy of
    discussion. They presided over the important debates of their time. With
    few exceptions, other actors and their deviant opinions -- there
    []{#Page_12 type="pagebreak" title="12"}has never been a time without
    them -- were either not taken seriously at all or were categorized as
    indecent, incompetent, perverse, irrelevant, backward, exotic, or
    idiosyncratic.[^3^](#c1-note-0003){#c1-note-0003a} Even at that time,
    the social basis of culture was beginning to expand, though the actors
    at the center of the discourse had failed to notice this. Communicative
and cultural processes were gaining significance in more and more
    places, and excluded social groups were self-consciously developing
    their own language in order to intervene in the discourse. The rise of
    the knowledge economy, the increasingly loud critique of
    heteronormativity, and a fundamental cultural critique posed by
    post-colonialism enabled a greater number of people to participate in
    public discussions. In what follows, I will subject each of these three
phenomena to closer examination. In order to do justice to their
    complexity, I will treat them on different levels: I will depict the
    rise of the knowledge economy as a structural change in labor; I will
    reconstruct the critique of heteronormativity by outlining the origins
    and transformations of the gay movement in West Germany; and I will
    discuss post-colonialism as a theory that introduced new concepts of
    cultural multiplicity and hybridization -- concepts that are now
    influencing the digital condition far beyond the limits of the
    post-colonial discourse, and often without any reference to this
    discourse at all.

    ::: {.section}
    ### The growth of the knowledge economy {#c1-sec-0003}

    At the beginning of the 1950s, the Austrian-American economist Fritz
Machlup was immersed in his study of the political economy of
    monopoly.[^4^](#c1-note-0004){#c1-note-0004a} Among other things, he was
    concerned with patents and copyright law. In line with the neo-classical
    Austrian School, he considered both to be problematic (because
    state-created) monopolies.[^5^](#c1-note-0005){#c1-note-0005a} The
    longer he studied the monopoly of the patent system in particular, the
    more far-reaching its consequences seemed to him. He maintained that the
    patent system was intertwined with something that might be called the
    "economy of invention" -- ultimately, patentable insights had to be
    produced in the first place -- and that this was in turn part of a much
    larger economy of knowledge. The latter encompassed government agencies
    as well as institutions of education, research, and development
    []{#Page_13 type="pagebreak" title="13"}(that is, schools, universities,
    and certain corporate laboratories), which had been increasing steadily
    in number since Roosevelt\'s New Deal. Yet it also included the
    expanding media sector and those industries that were responsible for
    providing technical infrastructure. Machlup subsumed all of these
    institutions and sectors under the concept of the "knowledge economy," a
    term of his own invention. Their common feature was that essential
    aspects of their activities consisted in communicating things to other
    people ("telling anyone anything," as he put it). Thus, the employees
    were not only recipients of information or instructions; rather, in one
    way or another, they themselves communicated, be it merely as a
    secretary who typed up, edited, and forwarded a piece of shorthand
    dictation. In his book *The Production and Distribution of Knowledge in
    the United States*, published in 1962, Machlup gathered empirical
    material to demonstrate that the American economy had entered a new
    phase that was distinguished by the production, exchange, and
    application of abstract, codified
    knowledge.[^6^](#c1-note-0006){#c1-note-0006a} This opinion was no
    longer entirely novel at the time, but it had never before been
    presented in such an empirically detailed and comprehensive
    manner.[^7^](#c1-note-0007){#c1-note-0007a} The extent of the knowledge
    economy surprised Machlup himself: in his book, he concluded that as
    much as 43 percent of all labor activity was already engaged in this
    sector. This high number came about because, until then, no one had put
    forward the idea of understanding such a variety of activities as a
    single unit.

    Machlup\'s categorization was indeed quite innovative, for the dynamics
    that propelled the sectors that he associated with one another not only
    were very different but also had originated as an integral component in
    the development of the industrial production of goods. They were more of
    an extension of such production than a break with it. The production and
    circulation of goods had been expanding and accelerating as early as the
    nineteenth century, though at highly divergent rates from one region or
    sector to another. New markets were created in order to distribute goods
    that were being produced in greater numbers; new infrastructure for
    transportation and communication was established in order to serve these
    large markets, which were mostly in the form of national territories
    (including their colonies). This []{#Page_14 type="pagebreak"
    title="14"}enabled even larger factories to be built in order to
    exploit, to an even greater extent, the cost advantages of mass
    production. In order to control these complex processes, new professions
    arose with different types of competencies and working conditions. The
    office became a workplace for an increasing number of people -- men and
    women alike -- who, in one form or another, had something to do with
    information processing and communication. Yet all of this required not
    only new management techniques. Production and products also became more
    complex, so that entire corporate sectors had to be restructured.
    Whereas the first decisive inventions of the industrial era were still
    made by more or less educated tinkerers, during the last third of the
    nineteenth century, invention itself came to be institutionalized. In
    Germany, Siemens (founded in 1847 as the Telegraphen-Bauanstalt von
    Siemens & Halske) exemplifies this transformation. Within 50 years, a
    company that began in a proverbial workshop in a Berlin backyard became
    a multinational high-tech corporation. It was in such corporate
    laboratories, which were established around the year 1900, that the
    "industrialization of invention" or the "scientification of industrial
    production" took place.[^8^](#c1-note-0008){#c1-note-0008a} In other
    words, even the processes employed in factories and the goods that they
    produced became knowledge-intensive. Their invention, planning, and
    production required a steadily growing expansion of activities, which
    today we would refer to as research and development. The informatization
    of the economy -- the acceleration of mass production, the comprehensive
    application of scientific methods to the organization of labor, and the
    central role of research and development in industry -- was hastened
    enormously by a world war that was waged on an industrial scale to an
    extent that had never been seen before.

    Another important factor for the increasing significance of the
    knowledge economy was the development of the consumer society. Over the
    course of the last third of the nineteenth century, despite dramatic
    regional and social disparities, an increasing number of people profited
    from the economic growth that the Industrial Revolution had instigated.
    Wages increased and basic needs were largely met, so that a new social
    stratum arose, the middle class, which was able to spend part of its
    income on other things. But on what? First, []{#Page_15 type="pagebreak"
    title="15"}new needs had to be created. The more production capacities
    increased, the more they had to be rethought in terms of consumption.
    Thus, in yet another way, the economy became more knowledge-intensive.
    It was now necessary to become familiar with, understand, and stimulate
    the interests and preferences of consumers, in order to entice them to
    purchase products that they did not urgently need. This knowledge did
    little to enhance the material or logistical complexity of goods or
    their production; rather, it was reflected in the increasingly extensive
    communication about and through these goods. The beginnings of this
    development were captured by Émile Zola in his 1883 novel *The Ladies\'
    Paradise*, which was set in the new world of a semi-fictitious
    department store bearing that name. In its opening scene, the young
    protagonist Denise Baudu and her brother Jean, both of whom have just
    moved to Paris from a provincial town, encounter for the first time the
    artfully arranged women\'s clothing -- exhibited with all sorts of
    tricks involving lighting, mirrors, and mannequins -- in the window
    displays of the store. The sensuality of the staged goods is so
overwhelming that both of them are struck dumb; Jean even
blushes.

It was the economy of affects that brought blood to Jean\'s cheeks. At
    that time, strategies for attracting the attention of customers did not
    yet have a scientific and systematic basis. Just as the first inventions
    in the age of industrialization were made by amateurs, so too was the
    economy of affects developed intuitively and gradually rather than as a
    planned or conscious paradigm shift. That it was possible to induce and
    direct affects by means of targeted communication was the pioneering
    discovery of the Austrian-American Edward Bernays. During the 1920s, he
    combined the ideas of his uncle Sigmund Freud about unconscious
    motivations with the sociological research methods of opinion surveys to
    form a new discipline: market
    research.[^9^](#c1-note-0009){#c1-note-0009a} It became the scientific
basis of a new field of activity, which he at first called "propaganda"
    but then later referred to as "public
    relations."[^10^](#c1-note-0010){#c1-note-0010a} Public communication,
    be it for economic or political ends, was now placed on a systematic
    foundation that came to distance itself more and more from the pure
    "conveyance of information." Communication became a strategic field for
    corporate and political disputes, and the mass media []{#Page_16
    type="pagebreak" title="16"}became their locus of negotiation. Between
    1880 and 1917, for instance, commercial advertising costs in the United
    States increased by more than 800 percent, and the leading advertising
    firms, using the same techniques with which they attracted consumers to
    products, were successful in selling to the American public the idea of
    their nation entering World War I. Thus, a media industry in the modern
    sense was born, and it expanded along with the rapidly growing market
    for advertising.[^11^](#c1-note-0011){#c1-note-0011a}

    In his studies of labor markets conducted at the beginning of the 1960s,
Machlup brought these previously separate developments together and
    thus explained the existence of an already advanced knowledge economy in
    the United States. His arguments fell on extremely fertile soil, for an
    intellectual transformation had taken place in other areas of science as
    well. A few years earlier, for instance, cybernetics had given the
    concepts "information" and "communication" their first scientifically
    precise (if somewhat idiosyncratic) definitions and had assigned to them
    a position of central importance in all scientific disciplines, not to
    mention life in general.[^12^](#c1-note-0012){#c1-note-0012a} Machlup\'s
    investigation seemed to confirm this in the case of the economy, given
    that the knowledge economy was primarily concerned with information and
    communication. Since then, numerous analyses, formulas, and slogans have
    repeated, modified, refined, and criticized the idea that the
    knowledge-based activities of the economy have become increasingly
    important. In the 1970s this discussion was associated above all with
    the notion of the "post-industrial
    society,"[^13^](#c1-note-0013){#c1-note-0013a} in the 1980s the guiding
    idea was the "information society,"[^14^](#c1-note-0014){#c1-note-0014a}
    and in the 1990s the debate revolved around the "network
    society"[^15^](#c1-note-0015){#c1-note-0015a} -- to name just the most
    popular concepts. What these approaches have in common is that they each
    diagnose a comprehensive societal transformation that, as regards the
    creation of economic value or jobs, has shifted the balance from
productive to communicative activities. Accordingly, they presuppose
    that we know how to distinguish the former from the latter. This is not
    unproblematic, however, because in practice the two are usually tightly
    intertwined. Moreover, whoever maintains that communicative activities
    have taken the place of industrial production in our society has adopted
    a very narrow point of []{#Page_17 type="pagebreak" title="17"}view.
    Factory jobs have not simply disappeared; they have just been partially
    relocated outside of Western economies. The assertion that communicative
    activities are somehow of "greater value" hardly chimes with the reality
    of today\'s new "service jobs," many of which pay no more than the
    minimum wage.[^16^](#c1-note-0016){#c1-note-0016a} Critiques of this
    sort, however, have done little to reduce the effectiveness of this
    analysis -- especially its political effectiveness -- for it does more
    than simply describe a condition. It also contains a set of political
instructions that imply or directly demand that precisely those sectors
it considers economically promising should be promoted, and that
    society should be reorganized accordingly. Since the 1970s, there has
    thus been a feedback loop between scientific analysis and political
    agendas. More often than not, it is hardly possible to distinguish
    between the two. Especially in Britain and the United States, the
    economic transformation of the 1980s was imposed insistently and with
    political calculation (the weakening of labor unions).

    There are, however, important differences between the developments of
    the so-called "post-industrial society" of the 1970s and those of the
    so-called "network society" of the 1990s, even if both terms are
    supposed to stress the increased significance of information, knowledge,
    and communication. With regard to the digital condition, the most
    important of these differences are the greater flexibility of economic
    activity in general and employment relations in particular, as well as
    the dismantling of social security systems. Neither phenomenon played
    much of a role in analyses of the early 1970s. The development since
    then can be traced back to two currents that could not seem more
    different from one another. At first, flexibility was demanded in the
    name of a critique of the value system imposed by bureaucratic-bourgeois
    society (including the traditional organization of the workforce). It
    originated in the new social movements that had formed in the late
    1960s. Later on, toward the end of the 1970s, it then became one of the
    central points of the neoliberal critique of the welfare state. With
    completely different motives, both sides sang the praises of autonomy
    and spontaneity while rejecting the disciplinary nature of hierarchical
    organization. They demanded individuality and diversity rather than
    conformity to prescribed roles. Experimentation, openness to []{#Page_18
    type="pagebreak" title="18"}new ideas, flexibility, and change were now
    established as fundamental values with positive connotations. Both
    movements operated with the attractive idea of personal freedom. The new
    social movements understood this in a social sense as the freedom of
    personal development and coexistence, whereas neoliberals understood it
    in an economic sense as the freedom of the market. In the 1980s, the
    neoliberal ideas prevailed in large part because some of the values,
    strategies, and methods propagated by the new social movements were
    removed from their political context and appropriated in order to
    breathe new life -- a "new spirit" -- into capitalism and thus to rescue
    industrial society from its crisis.[^17^](#c1-note-0017){#c1-note-0017a}
    An army of management consultants, restructuring experts, and new
    companies began to promote flat hierarchies, self-responsibility, and
    innovation; with these aims in mind, they set about reorganizing large
    corporations into small and flexible units. Labor and leisure were no
    longer supposed to be separated, for all aspects of a given person could
    be integrated into his or her work. In order to achieve economic success
    in this new capitalism, it became necessary for every individual to
    identify himself or herself with his or her profession. Large
    corporations were restructured in such a way that entire departments
    found themselves transformed into independent "profit centers." This
    happened in the name of creating more leeway for decision-making and of
    optimizing the entrepreneurial spirit on all levels, the goals being to
    increase value creation and to provide management with more fine-grained
    powers of intervention. These measures, in turn, created the need for
    computers and the need for them to be networked. Large corporations
    reacted in this way to the emergence of highly specialized small
    companies which, by networking and cooperating with other firms,
    succeeded in quickly and flexibly exploiting niches in the expanding
    global markets. In the management literature of the 1980s, the
    catchphrases for this were "company networks" and "flexible
    specialization."[^18^](#c1-note-0018){#c1-note-0018a} By the middle of
    the 1990s, the sociologist Manuel Castells was able to conclude that the
    actual productive entity was no longer the individual company but rather
    the network consisting of companies and corporate divisions of various
    sizes. In Castells\'s estimation, the decisive advantage of the network
    is its ability to customize its elements and their configuration
    []{#Page_19 type="pagebreak" title="19"}to suit the rapidly changing
    requirements of the "project" at
    hand.[^19^](#c1-note-0019){#c1-note-0019a} Aside from a few exceptions,
companies in their traditional forms came to function above all as
    strategic control centers and as economic and legal units.

    This economic structural transformation was already well under way when
    the internet emerged as a mass medium around the turn of the millennium.
    As a consequence, change became more radical and penetrated into an
    increasing number of areas of value creation. The political agenda
    oriented itself toward the vision of "creative industries," a concept
    developed in 1997 by the newly elected British government under Tony
    Blair. A Creative Industries Task Force was established right away, and
    its first step was to identify "those activities which have their
    origins in individual creativity, skill and talent and which have the
    potential for wealth and job creation through the generation and
exploitation of intellectual
    property."[^20^](#c1-note-0020){#c1-note-0020a} Like Fritz Machlup at
    the beginning of the 1960s, the task force brought together existing
    areas of activity into a new category. Such activities included
    advertising, computer games, architecture, music, arts and antique
    markets, publishing, design, software and computer services, fashion,
    television and radio, and film and video. These activities were
    elevated to matters of political importance on account of their
    potential to create
    wealth and jobs. Not least because of this clever presentation of
    categories -- no distinction was made between the BBC, an almighty
    public-service provider, and fledgling companies in precarious
    circumstances -- it was possible to proclaim not only that the creative
    industries were contributing a relevant portion of the nation\'s
    economic output, but also that this sector was growing at an especially
    fast rate. It was reported that, in London, the creative industries were
    already responsible for one out of every five new jobs. When compared
    with traditional terms of employment as regards income, benefits, and
    prospects for advancement, however, many of these positions entailed a
    considerable downgrade for the employees in question (who were now
    treated as independent contractors). This fact was either ignored or
    explicitly interpreted as a sign of the sector\'s particular
    dynamism.[^21^](#c1-note-0021){#c1-note-0021a} Around the turn of the
    new millennium, the idea that individual creativity plays a central role
    in the economy was given further traction by []{#Page_20
    type="pagebreak" title="20"}the sociologist and consultant Richard
    Florida, who argued that creativity was essential to the future of
    cities and even announced the rise of the "creative class." As to the
    preconditions that have to be met in order to tap into this source of
    wealth, he devised a simple formula that would be easy for municipal
    bureaucrats to understand: "technology, tolerance and talent." Talent,
    as defined by Florida, is based on individual creativity and education
    and manifests itself in the ability to generate new jobs. He was thus
    able to declare talent a central element of economic
    growth.[^22^](#c1-note-0022){#c1-note-0022a} In order to "unleash" these
    resources, what we need in addition to technology is, above all,
    tolerance; that is, "an open culture -- one that does not discriminate,
    does not force people into boxes, allows us to be ourselves, and
    validates various forms of family and of human
    identity."[^23^](#c1-note-0023){#c1-note-0023a}

    The idea that a public welfare state should ensure the social security
    of individuals was considered obsolete. Collective institutions, which
    could have provided a degree of stability for people\'s lifestyles, were
    dismissed or regarded as bureaucratic obstacles. The more or less
    directly evoked role model for all of this was the individual artist,
    who was understood as an individual entrepreneur, a sort of genius
    suitable for the masses. For Florida, a central problem was that,
    according to his own calculations, only about a third of the people
    living in North American and European cities were working in the
    "creative sector," while the innate creativity of everyone else was
    going to waste. Even today, the term "creative industry," along with the
    assumption that the internet will provide increased opportunities,
    serves to legitimize the effort to restructure all areas of the economy
    according to the needs of the knowledge economy and to privilege the
    network over the institution. In times of social cutbacks and empty
    public purses, especially in municipalities, this message was warmly
    received. One mayor, who as the first openly gay top politician in
    Germany exemplified tolerance for diverse lifestyles, even adopted the
    slogan "poor but sexy" for his city. Everyone was supposed to exploit
    his or her own creativity to discover new niches and opportunities for
    monetization -- a magic formula that was supposed to bring about a new
    urban revival. Today there is hardly a city in Europe that does not
    issue a report about its creative economy, []{#Page_21 type="pagebreak"
    title="21"}and nearly all of these reports cite, directly or indirectly,
    Richard Florida.

    As already seen in the context of the knowledge economy, so too in the
    case of creative industries do measurable social change, wishful
    thinking, and political agendas blend together in such a way that it is
    impossible to identify a single cause for the developments taking place.
    The consequences, however, are significant. Over the last two
    generations, the demands of the labor market have fundamentally changed.
    Higher education and the ability to acquire new knowledge independently
    are now, to an increasing extent, required and expected as
    qualifications and personal attributes. The desired or enforced ability
    to be flexible at work, the widespread cooperation across institutions,
    the uprooted nature of labor, and the erosion of collective models for
    social security have displaced many activities, which once took place
    within clearly defined institutional or personal limits, into a new
    interstitial space that is neither private nor public in the classical
    sense. This is the space of networks, communities, and informal
    cooperation -- the space of sharing and exchange that has since been
    enabled by the emergence of ubiquitous digital communication. It allows
    an increasing number of people, whether willingly or otherwise, to
    envision themselves as active producers of information, knowledge,
    capability, and meaning. And because it is associated in various ways
    with the space of market-based exchange and with the bourgeois political
    sphere, it has lasting effects on both. This interstitial space becomes
    all the more important as fewer people are willing or able to rely on
    traditional institutions for their economic security. For, within it,
    personal and digital-based networks can and must be developed as
    alternatives, regardless of whether they prove sustainable for the long
    term. As a result, more and more actors, each with their own claims to
    meaning, have been rushing away from the private personal sphere into
    this new interstitial space. By now, this has become such a normal
    practice that whoever is *not* active in this ever-expanding
    interstitial space, which is rapidly becoming the main social sphere --
    whoever, that is, lacks a publicly visible profile on social mass media
    like Facebook, or does not number among those producing information and
    meaning and is thus so inconspicuous online as []{#Page_22
    type="pagebreak" title="22"}to yield no search results -- now stands out
    in a negative light (or, in far fewer cases, acquires a certain prestige
    on account of this very absence).
    :::

    ::: {.section}
    ### The erosion of heteronormativity {#c1-sec-0004}

    In this (sometimes more, sometimes less) public space for the continuous
    production of social meaning (and its exploitation), there is no
    question that the professional middle class is
    over-represented.[^24^](#c1-note-0024){#c1-note-0024a} It would be
    short-sighted, however, to reduce those seeking autonomy and the
    recognition of individuality and social diversity to the role of poster
    children for the new spirit of
    capitalism.[^25^](#c1-note-0025){#c1-note-0025a} The new social
    movements, for instance, initiated a social shift that has allowed an
    increasing number of people to demand, if nothing else, the right to
    participate in social life in a self-determined manner; that is,
    according to their own standards and values.

    Especially effective was the critique of patriarchal and heteronormative
    power relations, modes of conduct, and
    identities.[^26^](#c1-note-0026){#c1-note-0026a} In the context of the
    political upheavals at the end of the 1960s, the new women\'s and gay
    movements developed into influential actors. Their greatest achievement
    was to establish alternative cultural forms, lifestyles, and strategies
    of action in or around the mainstream of society. How this was done can
    be demonstrated by tracing, for example, the development of the gay
    movement in West Germany.

    In the fall of 1969, the liberalization of Paragraph 175 of the German
    Criminal Code came into effect. From then on, sexual activity between
    adult men was no longer punishable by law (women were not mentioned in
    this context). For the first time, a man could now express himself as a
    homosexual outside of semi-private space without immediately being
    exposed to the risk of criminal prosecution. This was a necessary
    precondition for the ability to defend one\'s own rights. As early as
    1971, the struggle for the recognition of gay life experiences reached
    the broader public when Rosa von Praunheim\'s film *It Is Not the
    Homosexual Who Is Perverse, but the Society in Which He Lives* was
    screened at the Berlin International Film Festival and then, shortly
    thereafter, broadcast on public television in North Rhine-Westphalia.
    The film, which is firmly situated in the agitprop tradition,
    []{#Page_23 type="pagebreak" title="23"}follows a young provincial man
    through the various milieus of Berlin\'s gay subcultures: from a
    monogamous relationship to nightclubs and public bathrooms until, at the
    end, he is enlightened by a political group of men who explain that it
    is not possible to lead a free life in a niche, as his own emancipation
    can only be achieved by a transformation of society as a whole. The film
    closes with a not-so-subtle call to action: "Out of the closets, into
    the streets!" Von Praunheim understood this emancipation to be a process
    that encompassed all areas of life and had to be carried out in public;
    it could only achieve success, moreover, in solidarity with other
    freedom movements such as the Black Panthers in the United States and
    the new women\'s movement. The goal, according to this film, is to
    articulate one\'s own identity as a specific and differentiated identity
    with its own experiences, values, and reference systems, and to anchor
    this identity within a society that not only tolerates it but also
    recognizes it as having equal validity.

    At first, however, the film triggered vehement controversies, even
    within the gay scene. The objection was that it attacked the gay
    subculture, which was not yet prepared to defend itself publicly against
    discrimination. Despite or (more likely) because of these controversies,
    more than 50 groups of gay activists soon formed in Germany. Such
    groups, largely composed of left-wing alternative students, included,
    for instance, the Homosexuelle Aktion Westberlin (HAW) and the Rote
    Zelle Schwul (RotZSchwul) in Frankfurt am
    Main.[^27^](#c1-note-0027){#c1-note-0027a} One focus of their activities
    was to have Paragraph 175 struck entirely from the legal code (which was
    not achieved until 1994). This cause was framed within a general
    struggle to overcome patriarchy and capitalism. At the earliest gay
    demonstration in Germany, which took place in Münster in April 1972,
    protesters rallied behind the following slogan: "Brothers and sisters,
    gay or not, it is our duty to fight capitalism." This was understood as
    a necessary subordination to the greater struggle against what was known
    in the terminology of left-wing radical groups as the "main
    contradiction" of capitalism (that between capital and labor), and it
    led to strident differences within the gay movement. The dispute
    escalated during the next year. After the so-called *Tuntenstreit*, or
    "Battle of the Queens," which was []{#Page_24 type="pagebreak"
    title="24"}initiated by activists from Italy and France who had appeared
    in drag at the closing ceremony of the HAW\'s Spring Meeting in West
    Berlin, the gay movement was divided, or at least moving in a new
    direction. At the heart of the matter were the following questions: "Is
    there an inherent (many speak of an autonomous) position that gays hold
    with respect to the issue of homosexuality? Or can a position on
    homosexuality only be derived in association with the traditional
    workers\' movement?"[^28^](#c1-note-0028){#c1-note-0028a} In other
    words, was discrimination against homosexuality part of the social
    divide caused by capitalism (that is, one of its "ancillary
    contradictions") and thus only to be overcome by overcoming capitalism
    itself, or was it something unrelated to the "essence" of capitalism, an
    independent conflict requiring different strategies and methods? This
    conflict could never be fully resolved, but the second position, which
    was more interested in overcoming legal, social, and cultural
    discrimination than in struggling against economic exploitation, and
    which focused specifically on the social liberation of gays, proved to
    be far more dynamic in the long term. This was not least because both
    the old and new left were themselves not free of homophobia and because
    the entire radical student movement of the 1970s fell into crisis.

    Over the course of the 1970s and 1980s, "aesthetic self-empowerment" was
    realized through the efforts of artistic and (increasingly) commercial
    producers of images, texts, and
    sounds.[^29^](#c1-note-0029){#c1-note-0029a} Activists, artists, and
    intellectuals developed a language with which they could speak
    assertively in public about topics that had previously been taboo.
    Inspired by the expression "gay pride," which originated in the United
    States, they began to use the term *schwul* ("gay"), which until then
    had possessed negative connotations, with growing confidence. They
    founded numerous gay and lesbian cultural initiatives, theaters,
    publishing houses, magazines, bookstores, meeting places, and other
    associations in order to counter the misleading or (in their eyes)
    outright false representations of the mass media with their own
    multifarious media productions. In doing so, they typically followed a
    dual strategy: on the one hand, they wanted to create a space for the
    members of the movement in which it would be possible to formulate and
    live different identities; on the other hand, they were fighting to be
    accepted by society at large. While []{#Page_25 type="pagebreak"
    title="25"}a broader and broader spectrum of gay positions, experiences,
    and aesthetics was becoming visible to the public, the connection to
    left-wing radical contexts became weaker. Founded as early as 1974, and
    likewise in West Berlin, the General Homosexual Working Group
    (Allgemeine Homosexuelle Arbeitsgemeinschaft) sought to integrate gay
    politics into mainstream society by defining such politics -- on the
    basis of bourgeois individual rights -- as a "politics of
    anti-discrimination." These efforts achieved a milestone in 1980 when,
    in the run-up to the parliamentary election, a podium discussion was
    held with representatives of all major political parties on the topic of
    the law governing sexual offences. The discussion took place in the
    Beethovenhalle in Bonn, which was the largest venue for political events
    in the former capital. Several participants considered the event to be a
    "disaster,"[^30^](#c1-note-0030){#c1-note-0030a} for it revived a number
    of internal conflicts (not least that between revolutionary and
    integrative positions). Yet the fact remains that representatives were
    present from every political party, and this alone was indicative of an
    unprecedented amount of public awareness for those demanding equal
    rights.

    The struggle against discrimination and for social recognition reached
    an entirely new level of urgency with the outbreak of HIV/AIDS. In 1983,
    the magazine *Der Spiegel* devoted its first cover story to the disease,
    thus bringing it to the awareness of the broader public. In the same
    year, the non-profit organization Deutsche Aids-Hilfe was founded to
    prevent further cases of discrimination, for *Der Spiegel* was not the
    only publication at the time to refer to AIDS as a "homosexual
    epidemic."[^31^](#c1-note-0031){#c1-note-0031a} The struggle against
    HIV/AIDS required a comprehensive mobilization. Funding had to be raised
    in order to deal with the social repercussions of the epidemic, to
    educate everyone about safe sexual practices, and to direct research
    toward discovering causes and developing potential cures. The immediate
    threat that AIDS represented, especially while so little was known about
    the illness and its treatment remained a distant hope, created an
    impetus for mobilization that led to alliances between the gay movement,
    the healthcare system, and public authorities. Thus, the AIDS Inquiry
    Committee, sponsored by the conservative Christian Democratic Union,
    concluded in 1988 that, in the fight against the illness, "the
    homosexual subculture is []{#Page_26 type="pagebreak"
    title="26"}especially important. This informal structure should
    therefore neither be impeded nor repressed but rather, on the contrary,
    recognized and supported."[^32^](#c1-note-0032){#c1-note-0032a} The AIDS
    crisis proved to be a catalyst for advancing the integration of gays
    into society and for expanding what could be regarded as acceptable
    lifestyles, opinions, and cultural practices. As a consequence,
    homosexuals began to appear more frequently in the media, though their
    presence would never match that of heterosexuals. As of 1985, the
    television show *Lindenstraße* featured an openly gay protagonist, and
    the first kiss between men was aired in 1987. The episode still provoked
    a storm of protest -- Bayerischer Rundfunk refused to broadcast it a
    second time -- but this was already a rearguard action and the
    integration of gays (and lesbians) into the social mainstream continued.
    In 1993, the first gay and lesbian city festival took place in Berlin,
    and the first Rainbow Parade was held in Vienna in 1996. In 2002, the
    Cologne Pride Day involved 1.2 million participants and attendees, thus
    surpassing for the first time the attendance at the traditional Rose
    Monday parade. By the end of the 1990s, the sociologist Rüdiger Lautmann
    was already prepared to maintain: "To be homosexual has become
    increasingly normalized, even if homophobia lives on in the depths of
    the collective disposition."[^33^](#c1-note-0033){#c1-note-0033a} This
    normalization was also reflected in a study published by the Ministry of
    Justice in the year 2000, which stressed "the similarity between
    homosexual and heterosexual relationships" and, on this basis, made an
    argument against discrimination.[^34^](#c1-note-0034){#c1-note-0034a}
    Around the year 2000, however, the classical gay movement had already
    passed its peak. A profound transformation had begun to take place in
    the middle of the 1990s. It lost its character as a new social movement
    (in the style of the 1970s) and began to splinter inwardly and
    outwardly. One could say that it transformed from a mass movement into a
    multitude of variously networked communities. The clearest sign of this
    transformation is the abbreviation "LGBT" (lesbian, gay, bisexual, and
    transgender), which, since the mid-1990s, has represented the internal
    heterogeneity of the movement as it has shifted toward becoming a
    network.[^35^](#c1-note-0035){#c1-note-0035a} At this point, the more
    radical actors were already speaking out against the normalization of
    homosexuality. Queer theory, for example, was calling into question the
    "essentialist" definition of gender []{#Page_27 type="pagebreak"
    title="27"}-- that is, any definition reducing it to an immutable
    essence -- with respect to both its physical dimension (sex) and its
    social and cultural dimension (gender
    proper).[^36^](#c1-note-0036){#c1-note-0036a} It thus opened up a space
    for the articulation of experiences, self-descriptions, and lifestyles
    that, on every level, are located beyond the classical attributions of
    men and women. A new generation of intellectuals, activists, and artists
    took the stage and developed -- yet again through acts of aesthetic
    self-empowerment -- a language that enabled them to import, with
    confidence, different self-definitions into the public sphere. An
    example of this is the adoption of inclusive plural forms in German
    (*Aktivist\_innen* "activists," *Künstler\_innen* "artists"), which draw
    attention to the gaps and possibilities between male and female
    identities that are also expressed in the language itself. Just as with
    the terms "gay" or *schwul* some 30 years before, in this case, too, an
    important element was the confident and public adoption and semantic
    conversion of a formerly insulting word ("queer") by the very people and
    communities against whom it used to be
    directed.[^37^](#c1-note-0037){#c1-note-0037a} Likewise observable in
    these developments was the simultaneity of social (amateur) and
    artistic/scientific (professional) cultural production. The goal,
    however, was less to produce a clear antithesis than it was to oppose
    rigid attributions by underscoring mutability, hybridity, and
    uniqueness. Both the scope of what could be expressed in public and the
    circle of potential speakers expanded yet again. And, at least to some
    extent, the drag queen Conchita Wurst popularized complex gender
    constructions that went beyond the simple woman/man dualism. All of that
    said, the assertion by Rüdiger Lautmann quoted above -- "homophobia
    lives on in the depths of the collective disposition" -- continued to
    hold true.

    If the gay movement is representative of the social liberation of the
    1970s and 1980s, then it is possible to regard its transformation into
    the LGBT movement during the 1990s -- with its multiplicity and fluidity
    of identity models and its stress on mutability and hybridity -- as a
    sign of the reinvention of this project within the context of an
    increasingly dominant digital condition. With this transformation,
    however, the diversification and fluidification of cultural practices
    and social roles have not yet come to an end. Ways of life that were
    initially subcultural and facing existential pressure []{#Page_28
    type="pagebreak" title="28"}are gradually entering the mainstream. They
    are expanding the range of readily available models of identity for
    anyone who might be interested, be it with respect to family forms
    (e.g., patchwork families, adoption by same-sex couples), diets (e.g.,
    vegetarianism and veganism), healthcare (e.g., anti-vaccination), or
    other principles of life and belief. All of them are seeking public
    recognition for a new frame of reference for social meaning that has
    originated from their own activity. This is necessarily a process
    characterized by conflicts and various degrees of resistance, including
    right-wing populism that seeks to defend "traditional values," but many
    of these movements will ultimately succeed in providing more people with
    the opportunity to speak in public, thus broadening the palette of
    themes that are considered to be important and legitimate.
    :::

    ::: {.section}
    ### Beyond center and periphery {#c1-sec-0005}

    In order to reach a better understanding of the complexity involved in
    the expanding social basis of cultural production, it is necessary to
    shift yet again to a different level. For, just as it would be myopic to
    examine the multiplication of cultural producers only in terms of
    professional knowledge workers from the middle class, it would likewise
    be insufficient to situate this multiplication exclusively in the
    centers of the West. The entire system of categories that justified the
    differentiation between the cultural "center" and the cultural
    "periphery" has begun to falter. This complex and multilayered process
    has been formulated and analyzed by the theory of "post-colonialism."
    Long before digital media made the challenge of cultural multiplicity a
    quotidian issue in the West, proponents of this theory had developed
    languages and terminologies for negotiating different positions without
    needing to impose a hierarchical order.

    Since the 1970s, the theoretical current of post-colonialism has been
    examining the cultural and epistemic dimensions of colonialism that,
    even after its end as a territorial system, have remained responsible
    for the continuation of dependent relations and power differentials. For
    my purposes -- which are to develop a European perspective on the
    factors ensuring that more and more people are able to participate in
    cultural []{#Page_29 type="pagebreak" title="29"}production -- two
    points are especially relevant because their effects reverberate in
    Europe itself. First is the deconstruction of the categories "West" (in
    the sense of the center) and "East" (in the sense of the periphery). And
    second is the focus on hybridity as a specific way for non-Western
    actors to deal with the dominant cultures of former colonial powers,
    which have continued to determine significant portions of globalized
    culture. The terms "West" and "East," "center" and "periphery," do not
    simply describe existing conditions; rather, they are categories that
    contribute, in an important way, to the creation of the very conditions
    that they presume to describe. This may sound somewhat circular, but it
    is precisely from this circularity that such cultural classifications
    derive their strength. The world that they illuminate is immersed in
    their own light. The category "East" -- or, to use the term of the
    literary theorist Edward Said,
    "orientalism"[^38^](#c1-note-0038){#c1-note-0038a} -- is a system of
    representation that pervades Western thinking. Within this system,
    Europe or the West (as the center) and the East (as the periphery)
    represent asymmetrical and antithetical concepts. This construction
    achieves a dual effect. As a self-description, on the one hand, it
    contributes to the formation of our own identity, for Europeans
    attribute to themselves and to their continent such features as
    "rationality," "order," and "progress," while on the other hand
    identifying the alternative with "superstition," "chaos," or
    "stagnation." The East, moreover, is used as an exotic projection screen
    for our own suppressed desires. According to Said, a representational
    system of this sort can only take effect if it becomes "hegemonic"; that
    is, if it is perceived as self-evident and no longer as an act of
    attribution but rather as one of description, even and precisely by
    those against whom the system discriminates. Said\'s accomplishment is
    to have worked out how far-reaching this system was and, in many
    areas, remains today. It extended (and extends) from scientific
    disciplines, whose researchers discussed (until the 1980s) the theory of
    "oriental despotism,"[^39^](#c1-note-0039){#c1-note-0039a} to literature
    and art -- the motif of the harem was especially popular, particularly
    in paintings of the late nineteenth
    century[^40^](#c1-note-0040){#c1-note-0040a} -- all the way to everyday
    culture, where, as of 1913 in the United States, the cigarette brand
    Camel (introduced to compete with the then-leading brand, Fatima) was
    meant to evoke the []{#Page_30 type="pagebreak" title="30"}mystique and
    sensuality of the Orient.[^41^](#c1-note-0041){#c1-note-0041a} This
    system of representation, however, was more than a means of describing
    oneself and others; it also served to legitimize the allocation of all
    knowledge and agency to one side, that of the West. Such an order was
    not restricted to culture; it also created and legitimized a sense of
    domination for colonial projects.[^42^](#c1-note-0042){#c1-note-0042a}
    This cultural legitimation, as Said points out, also persists after the
    end of formal colonial domination and continues to marginalize the
    postcolonial subjects. As before, they are unable to speak for
    themselves and therefore remain in the dependent periphery, which is
    defined by their subordinate position in relation to the center. Said
    directed the focus of critique to this arrangement of center and
    periphery, which he saw as being (re)produced and legitimized on the
    cultural level. From this arose the demand that everyone should have the
    right to speak, to place him- or herself in the center. To achieve this,
    it was necessary first of all to develop a language -- indeed, a
    cultural landscape -- that can manage without a hegemonic center and is
    thus oriented toward multiplicity instead of
    uniformity.[^43^](#c1-note-0043){#c1-note-0043a}

    A somewhat different approach has been taken by the literary theorist
    Homi K. Bhabha. He proceeds from the idea that the colonized never fully
    passively adopt the culture of the colonialists -- the "English book,"
    as he calls it. Their previous culture is never simply wiped out and
    replaced by another. What always and necessarily occurs is rather a
    process of hybridization. This concept, according to Bhabha,

    ::: {.extract}
    suggests that all of culture is constructed around negotiations and
    conflicts. Every cultural practice involves an attempt -- sometimes
    good, sometimes bad -- to establish authority. Even classical works of
    art, such as a painting by Brueghel or a composition by Beethoven, are
    concerned with the establishment of cultural authority. Now, this poses
    the following question: How does one function as a negotiator when
    one\'s own sense of agency is limited, for instance, on account of being
    excluded or oppressed? I think that, even in the role of the underdog,
    there are opportunities to upend the imposed cultural authorities -- to
    accept some aspects while rejecting others. It is in this way that
    symbols of authority are hybridized and made into something of one\'s
    own. For me, hybridization is not simply a mixture but rather a
    []{#Page_31 type="pagebreak" title="31"}strategic and selective
    appropriation of meanings; it is a way to create space for negotiators
    whose freedom and equality are
    endangered.[^44^](#c1-note-0044){#c1-note-0044a}
    :::

    Hybridization is thus a cultural strategy for evading marginality that
    is imposed from the outside: subjects, who from the dominant perspective
    are incapable of doing so, appropriate certain aspects of culture for
    themselves and transform them into something else. What is decisive is
    that this hybrid, created by means of active and unauthorized
    appropriation, opposes the dominant version and the resulting speech is
    thus legitimized from another -- that is, from one\'s own -- position.
    In this way, a cultural engagement is set under way and the superiority
    of one meaning or another is called into question. Who has the right to
    determine how and why a relationship with others should be entered,
    which resources should be appropriated from them, and how these
    resources should be used? At the heart of the matter lie the abilities
    of speech and interpretation; these can be seized in order to create
    space for a "cultural hybridity that entertains difference without an
    assumed or imposed hierarchy."[^45^](#c1-note-0045){#c1-note-0045a}

    At issue is thus a strategy for breaking down hegemonic cultural
    conditions, which distribute agency in a highly uneven manner, and for
    turning one\'s own cultural production -- which has been dismissed by
    cultural authorities as flawed, misconceived, or outright ignorant --
    into something negotiable and independently valuable. Bhabha is thus
    interested in fissures, differences, diversity, multiplicity, and
    processes of negotiation that generate something like shared meaning --
    culture, as he defines it -- instead of conceiving of it as something
    that precedes these processes and is threatened by them. Accordingly, he
    proceeds not from the idea of unity, which is threatened whenever
    "others" are empowered to speak and needs to be preserved, but rather
    from the irreducible multiplicity that, through laborious processes, can
    be brought into temporary and limited consensus. Bhabha\'s vision of
    culture is one without immutable authorities, interpretations, and
    truths. In theory, everything can be brought to the table. This is not a
    situation in which anything goes, yet the central meaning of
    negotiation, the contextuality of consensus, and the mutability of every
    frame of reference []{#Page_32 type="pagebreak" title="32"}-- none of
    which can be shared equally by everyone -- are always potentially
    negotiable.

    Post-colonialism draws attention to the "disruptive power of the
    excluded-included third," which becomes especially virulent when it
    "emerges in the middle of semantic
    structures."[^46^](#c1-note-0046){#c1-note-0046a} The recognition of
    this power reveals the increasing cultural independence of those
    formerly colonized, and it also transforms the cultural self-perception
    of the West, for, even in Western nations that were not significant
    colonial powers, there are multifaceted tensions between dominant
    cultures and those who are on the defensive against discrimination and
    attributions by others. Instead of relying on the old recipe of
    integration through assimilation (that is, the dissolution of the
    "other"), the right to self-determined difference is being called for
    more emphatically. In such a manner, collective identities, such as
    national identities, are freed from their questionable appeals to
    cultural homogeneity and essentiality, and reconceived in terms of the
    experience of immanent difference. Instead of one binding and
    non-negotiable frame of reference for everyone, which hierarchizes
    individual positions and makes them appear unified, a new order without
    such limitations needs to be established. Ultimately, the aim is to
    provide nothing less than an "alternative reading of
    modernity,"[^47^](#c1-note-0047){#c1-note-0047a} which influences both
    the construction of the past and the modalities of the future. For
    European culture in particular, such a project is an immense challenge.

    Of course, these demands do not derive their everyday relevance
    primarily from theory but rather from the experiences of
    (de)colonization, migration, and globalization. Multifaceted as it is,
    however, the theory does provide forms and languages for articulating
    these phenomena, legitimizing new positions in public debates, and
    attacking persistent mechanisms of cultural marginalization. It helps to
    empower broader societal groups to become actively involved in cultural
    processes, namely people, such as migrants and their children, whose
    identity and experience are essentially shaped by non-Western cultures.
    The latter have been giving voice to their experiences more frequently
    and with greater confidence in all areas of public life, be it in
    politics, literature, music, or
    art.[^48^](#c1-note-0048){#c1-note-0048a} In Germany, for instance, the
    films by Fatih Akin (*Head-On* from 2004 and *Soul Kitchen* from 2009,
    to []{#Page_33 type="pagebreak" title="33"}name just two), in which the
    experience of immigration is represented as part of the German
    experience, have reached a wide public audience. In 2002, the group
    Kanak Attak organized a series of conferences with the telling motto *no
    integración*, and these did much to introduce postcolonial positions to
    the debates taking place in German-speaking
    countries.[^49^](#c1-note-0049){#c1-note-0049a} For a long time,
    politicians with "migration backgrounds" were considered to be competent
    in only one area, namely integration policy. This has since changed,
    though not entirely. In 2008, for instance, Cem Özdemir was elected
    co-chair of the Green Party and thus shares responsibility for all of
    its political positions. Developments of this sort have been enabled
    (and strengthened) by a shift in society\'s self-perception. In 2014,
    Cemile Giousouf, the integration commissioner for the conservative
    CDU/CSU alliance in the German Parliament, was able to make the
    following statement without inciting any controversy: "Over the past few
    years, Germany has become a modern land of
    immigration."[^50^](#c1-note-0050){#c1-note-0050a} A remarkable
    proclamation. Not ten years earlier, her party colleague Norbert Lammert
    had expressed, in his function as parliamentary president, interest in
    reviving the debate about the term "leading culture." The increasingly
    well-educated migrants of the first, second, or third generation no
    longer accept the choice of being either marginalized as an exotic
    representative of the "other" or entirely assimilated. Rather, they are
    insisting on being able to introduce their specific experience as a
    constitutive contribution to the formation of the present -- in
    association and in conflict with other contributions, but at the same
    level and with the same legitimacy. It is no surprise that various forms
    of discrimination and violence against "foreigners" not only continue
    in everyday life but have also been increasing in reaction to this new
    situation. Ultimately, established claims to power are being called into
    question.

    To summarize, at least three secular historical tendencies or movements,
    some of which can be traced back to the late nineteenth century but each
    of which gained considerable momentum during the last third of the
    twentieth (the spread of the knowledge economy, the erosion of
    heteronormativity, and the focus of post-colonialism on cultural
    hybridity), have greatly expanded the sphere of those who actively
    negotiate []{#Page_34 type="pagebreak" title="34"}social meaning. In
    large part, the patterns and cultural foundations of these processes
    developed long before the internet. Through the use of the internet, and
    through the experiences of dealing with it, they have encroached upon
    far greater portions of all societies.
    :::
    :::

    ::: {.section}
    The Culturalization of the World {#c1-sec-0006}
    --------------------------------

    The number of participants in cultural processes, however, is not the
    only thing that has increased. Parallel to that development, the field
    of the cultural has expanded as well -- that is, those areas of life
    that are not simply characterized by unalterable necessities, but rather
    contain or generate competing options and thus require conscious
    decisions.

    The term "culturalization of the economy" refers to the central position
    of knowledge-based, meaning-based, and affect-oriented processes in the
    creation of value. With the emergence of consumption as the driving
    force behind the production of goods and the concomitant necessity of
    having not only to satisfy existing demands but also to create new ones,
    the cultural and affective dimensions of the economy began to gain
    significance. I have already discussed the beginnings of product
    staging, advertising, and public relations. In addition to all of the
    continuities that remain with us from that time, it is also possible to
    point out a number of major changes that consumer society has undergone
    since the late 1960s. These changes can be delineated by examining the
    greater role played by design, which has been called the "core
    discipline of the creative
    economy."[^51^](#c1-note-0051){#c1-note-0051a}

    As a field of its own, design originated alongside industrialization,
    when, in collaborative processes, the activities of planning and
    designing were separated from those of carrying out
    production.[^52^](#c1-note-0052){#c1-note-0052a} It was not until the
    modern era that designers consciously endeavored to seek new forms for
    the logic inherent to mass production. With the aim of economic
    efficiency, they intended their designs to optimize the clearly defined
    functions of anonymous and endlessly reproducible objects. At the end of
    the nineteenth century, the architect Louis Sullivan, whose buildings
    still distinguish the skyline of Chicago, condensed this new attitude
    into the famous axiom []{#Page_35 type="pagebreak" title="35"}"form
    follows function." Mies van der Rohe, working as an architect in Chicago
    in the middle of the twentieth century, supplemented this with a pithy
    and famous formulation of his own: "less is more." The rationality of
    design, in the sense of isolating and improving specific functions, and
    the economical use of resources were of chief importance to modern
    (industrial) designers. Even the ten design principles of Dieter Rams,
    who led the design division of the consumer products company Braun from
    1961 to 1995 -- one of the main sources of inspiration for Jonathan Ive,
    Apple\'s chief design officer -- aimed to make products "usable,"
    "understandable," "honest," and "long-lasting." "Good design," according
    to his guiding principle, "is as little design as
    possible."[^53^](#c1-note-0053){#c1-note-0053a} This orientation toward
    the technical and functional promised to solve problems for everyone in
    a long-term and binding manner, for the inherent material and design
    qualities of an object were supposed to make it independent from
    changing times and from the tastes of consumers.

    ::: {.section}
    ### Beyond the object {#c1-sec-0007}

    At the end of the 1960s, a new generation of designers rebelled against
    this industrial and instrumental rationality, which was now felt to be
    authoritarian, soulless, and reductionist. In the works associated with
    "anti-design" or "radical design," the objectives of the discipline were
    redefined and a new formal language was developed. In the place of
    technical and functional optimization, recombination -- ecological
    recycling or the postmodern interplay of forms -- emerged as a design
    method and aesthetic strategy. Moreover, the aspiration of design
    shifted from the individual object to its entire social and material
    environment. The processes of design and production, which had been
    closed off from one another and restricted to specialists, were opened
    up precisely to encourage the participation of non-designers, be it
    through interdisciplinary cooperation with other types of professions or
    through the empowerment of laymen. The objectives of design were
    radically expanded: rather than ending with the completion of an
    individual product, it was now supposed to engage with society. In the
    sense of cybernetics, this was regarded as a "system," controlled by
    feedback processes, []{#Page_36 type="pagebreak" title="36"}which
    connected social, technical, and biological dimensions to one
    another.[^54^](#c1-note-0054){#c1-note-0054a} Design, according to this
    new approach, was meant to be a "socially significant
    activity."[^55^](#c1-note-0055){#c1-note-0055a}

    Embedded in the social movements of the 1960s and 1970s, this new
    generation of designers was curious about the social and political
    potential of their discipline, and about possibilities for promoting
    flexibility and autonomy instead of rigid industrial efficiency. Design
    was no longer expected to solve problems once and for all, for such an
    idea did not correspond to the self-perception of an open and mutable
    society. Rather, it was expected to offer better opportunities for
    enabling people to react to continuously changing conditions. A radical
    proposal was developed by the Italian designer Enzo Mari, who in 1974
    published his handbook *Autoprogettazione* (Self-Design). It contained
    19 simple designs with which people could make, on their own,
    aesthetically and functionally sophisticated furniture out of pre-cut
    pieces of wood. In this case, the designs themselves were less important
    than the critique of conventional design as elitist and of consumer
    society as alienated and wasteful. Mari\'s aim was to reconceive the
    relations among designers, the manufacturing industry, and users.
    Increasingly, design came to be understood as a holistic and open
    process. Victor Papanek, the founder of ecological design, took things a
    step further. For him, design was "basic to all human activity. The
    planning and patterning of any act towards a desired, foreseeable end
    constitutes the design process. Any attempt to separate design, to make
    it a thing-by-itself, works counter to the inherent value of design as
    the primary underlying matrix of
    life."[^56^](#c1-note-0056){#c1-note-0056a}

    Potentially all aspects of life could therefore fall under the purview
    of design. This came about from the desire to oppose industrialism,
    which was blind to its catastrophic social and ecological consequences,
    with a new and comprehensive manner of seeing and acting that was
    unrestricted by economics.

    Toward the end of the 1970s, this expanded notion of design owed less
    and less to emancipatory social movements, and its socio-political goals
    began to fall by the wayside. Three fundamental patterns survived,
    however, which go beyond design and remain characteristic of the
    culturalization []{#Page_37 type="pagebreak" title="37"}of the economy:
    the discovery of the public as emancipated users and active
    participants; the use of appropriation, transformation, and
    recombination as methods for creating ever-new aesthetic
    differentiations; and, finally, the intention of shaping the lifeworld
    of the user.[^57^](#c1-note-0057){#c1-note-0057a}

    As these patterns became depoliticized and commercialized, the focus of
    designing the "lifeworld" shifted more and more toward designing the
    "experiential world." By the end of the 1990s, this had become so
    normalized that even management consultants could assert that
    "\[e\]xperiences represent an existing but previously unarticulated
    *genre of economic output*."[^58^](#c1-note-0058){#c1-note-0058a} It was
    possible to define the dimensions of the experiential world in various
    ways. For instance, it could be clearly delimited and product-oriented,
    like the flagship stores introduced by Nike in 1990, which, with their
    elaborate displays, were meant to turn shopping into an experience. This
    experience, as the company\'s executives hoped, radiated outward and
    influenced how the brand was perceived as a whole. The experiential
    world could also, however, be conceived in somewhat broader terms, for
    instance by designing entire institutions around the idea of creating a
    more attractive work environment and thereby increasing the commitment
    of employees. This approach is widespread today in creative industries
    and has become popularized through countless stories about ping-pong
    tables, gourmet cafeterias, and massage rooms in certain offices. In
    this case, the process of creativity is applied back to itself in order
    to systematize and optimize a given workplace\'s basis of operation. The
    development is comparable to the "invention of invention" that
    characterized industrial research around the end of the nineteenth
    century, though now the concept has been relocated to the field of
    knowledge production.

    Yet the "experiential world" can be expanded even further, for instance
    when entire cities attempt to make themselves attractive to
    international clientele and compete with others by building spectacular
    museums or sporting arenas. Displays in cities, as well as a few other
    central locations, are regularly constructed in order to produce a
    particular experience. This also entails, however, that certain forms of
    use that fail to fit the "urban
    script"[^59^](#c1-note-0059){#c1-note-0059a} are pushed to the margins
    or driven away.[^60^](#c1-note-0060){#c1-note-0060a} Thus, today, there
    is hardly a single area of life to []{#Page_38 type="pagebreak"
    title="38"}which the strategies and methods of design do not have
    access, and this access occurs at all levels. For some time, design has
    not been a purely visible matter, restricted to material objects; it
    rather forms and controls all of the senses. Cities, for example, have
    come to be understood increasingly as "sound spaces" and have
    accordingly been reconfigured with the goal of modulating their various
    noises.[^61^](#c1-note-0061){#c1-note-0061a} Yet design is no longer
    just a matter of objects, processes, and experiences. By now, in the
    context of reproductive medicine, it has even been applied to the
    biological foundations of life ("designer babies"). I will revisit this
    topic below.
    :::

    ::: {.section}
    ### Culture everywhere {#c1-sec-0008}

    Of course, design is not the only field of culture that has imposed
    itself over society as a whole. A similar development has occurred in
    the field of advertising, which, since the 1970s, has been integrated
    into many more physical and social spaces and by now has a broad range
    of methods at its disposal. Advertising is no longer found simply on
    billboards or in display windows. In the form of "guerilla marketing" or
    "product placement," it has penetrated every space and occupied every
    discourse -- by blending with political messages, for instance -- and
    can now even be spread, as "viral marketing," by the addressees of the
    advertisements themselves. Similar processes can be observed in the
    fields of art, fashion, music, theater, and sports. This has taken place
    perhaps most radically in the field of "gaming," which has drawn upon
    technical progress in the most direct possible manner and, with the
    spread of powerful computers and mobile applications, has left behind
    the confines of the traditional playing field. In alternate reality
    games, the realm of the virtual and fictitious has also been
    transcended, as physical spaces have been overlaid with their various
    scripts.[^62^](#c1-note-0062){#c1-note-0062a}

    This list could be extended, but the basic trend is clear enough,
    especially as the individual fields overlap and mutually influence one
    another. They are blending into a single interdependent field for
    generating social meaning in the form of economic activity. Moreover,
    through digitalization and networking, many new opportunities have
    arisen for large-scale involvement by the public in design processes.
    Thanks []{#Page_39 type="pagebreak" title="39"}to new communication
    technologies and flexible production processes, today\'s users can
    personalize and create products to suit their wishes. Here, the spectrum
    extends from tiny batches of creative-industrial products all the way to
    global processes of "mass customization," in which factory-based mass
    production is combined with personalization. One of the first
    applications of this was introduced in 1999 when, through its website, a
    sporting-goods company allowed customers to design certain elements of a
    shoe by altering it within a set of guidelines. This was taken a step
    further by the idea of "user-centered innovation," which relies on the
    specific knowledge of users to enhance a product, with the additional
    hope of discovering unintended applications and transforming these into
    new areas of business.[^63^](#c1-note-0063){#c1-note-0063a} It has also
    become possible for end users to take over the design process from the
    beginning, which has become considerably easier with the advent of
    specialized platforms for exchanging knowledge, alongside semi-automated
    production tools such as computer-controlled mills and 3D printers.
    Digitalization, which has allowed all content to be processed, and
    networking, which has created an endless amount of content ("raw
    material"), have turned appropriation and recombination into general
    methods of cultural production.[^64^](#c1-note-0064){#c1-note-0064a}
    This phenomenon will be examined more closely in the next chapter.

    Both the involvement of users in the production process and the methods
    of appropriation and recombination are extremely information-intensive
    and communication-intensive. Without the corresponding technological
    infrastructure, neither could be achieved efficiently or on a large
    scale. This was evident in the 1970s, when such approaches never made it
    beyond subcultures and conceptual studies. With today\'s search engines,
    every single user can trawl through an amount of information that, just
    a generation ago, would have been unmanageable even by professional
    archivists. A broad array of communication platforms (together with
    flexible production capacities and efficient logistics) not only weakens
    the contradiction between mass fabrication and personalization; it also
    allows users to network directly with one another in order to develop
    specialized knowledge together and thus to enable themselves to
    intervene directly in design processes, both as []{#Page_40
    type="pagebreak" title="40"}willing participants in and as critics of
    flexible global production processes.
    :::
    :::

    ::: {.section}
    The Technologization of Culture {#c1-sec-0009}
    -------------------------------

    That society is dependent on complex information technologies in order
    to organize its constitutive processes is, in itself, nothing new.
    Rather, this began as early as the late nineteenth century. It is
    directly correlated with the expansion and acceleration of the
    circulation of goods, which came about through industrialization. As the
    historian and sociologist James Beniger has noted, this led to a
    "control crisis," for administrative control centers were faced with the
    problem of losing sight of what was happening in their own factories,
    with their suppliers, and in the important markets of the time.
    Management was in a bind: decisions had to be made either on the basis
    of insufficient information or too late. The existing administrative and
    control mechanisms could no longer deal with the rapidly increasing
    complexity and time-sensitive nature of extensively organized production
    and distribution. The office became more important, and ever more people
    were needed there to fulfill a growing number of functions. Yet this was
    not enough for the crisis to subside. The old administrative methods,
    which involved manual information processing, simply could no longer
    keep up. The crisis reached its first dramatic peak in 1889 in the
    United States, with the realization that the census data from the year
    1880 had not yet been analyzed when the next census was already
    scheduled to take place during the subsequent year. In the same year,
    the Secretary of the Interior organized a conference to investigate
    faster methods of data processing. Two methods for making manual
    labor more efficient were tested against a third, which promised to
    achieve greater efficiency by means of novel data-processing machines.
    The latter system emerged as the clear victor; developed by an engineer
    named Herman Hollerith, it mechanically processed and stored data on
    punch cards. The idea was based on Hollerith\'s observations of the
    coup­ling and decoupling of railroad cars, which he interpreted as
    modular units that could be combined in any desired order. The punch
    card transferred this approach to information []{#Page_41
    type="pagebreak" title="41"}management. Data were no longer stored in
    fixed, linear arrangements (tables and lists) but rather in small units
    (the punch cards) that, like railroad cars, could be combined in any
    given way. The increase in efficiency -- with respect to speed *and*
    flexibility -- was enormous, and nearly a hundred of Hollerith\'s
    machines were used by the Census
    Bureau.[^65^](#c1-note-0065){#c1-note-0065a} This marked a turning point
    in the history of information processing, with technical means no longer
    being used exclusively to store data, but to process data as well. This
    was the only way to avoid the impending crisis, ensuring that
    bureaucratic management could maintain centralized control. Hollerith\'s
    machines proved to be a resounding success and were implemented in many
    more branches of government and corporate administration, where
    data-intensive processes had increased so rapidly that they could not have
    been managed without such machines. This growth was accompanied by that
    of Hollerith\'s Tabulating Machine Company, which he founded in 1896 and
    which, after a number of mergers, was renamed in 1924 as the
    International Business Machines Corporation (IBM). Throughout the
    following decades, dependence on information-processing machines only
    deepened. The growing number of social, commercial, and military
    processes could only be managed by means of information technology. This
    largely took place, however, outside of public view, namely in the
    specialized divisions of large government and private organizations.
    These were the only institutions in command of the necessary resources
    for operating the complex technical infrastructure -- so-called
    mainframe computers -- that was essential to automatic information
    processing.
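
    The principle embodied in Hollerith\'s design -- data held as small,
    self-contained units that can be recombined at will, rather than in
    fixed, linear tables -- can be illustrated with a short sketch. The
    following Python fragment is purely illustrative; the records and
    field names are invented, not historical:

    ```python
    # A sketch (not historical code) of the punch-card principle: each
    # record is a self-contained unit, like a railroad car, and any
    # selection of them can be coupled together and tabulated in any order.
    from collections import Counter

    # Invented census records; the field names are illustrative only.
    cards = [
        {"state": "NY", "occupation": "machinist", "age": 34},
        {"state": "NY", "occupation": "clerk", "age": 28},
        {"state": "OH", "occupation": "machinist", "age": 41},
    ]

    def tabulate(records, field, keep=lambda r: True):
        """Count the values of one field over any selection of records."""
        return Counter(r[field] for r in records if keep(r))

    # Unlike a fixed table, the same units can be recombined at will:
    print(tabulate(cards, "state"))                                # by state
    print(tabulate(cards, "occupation", lambda r: r["age"] > 30))  # filtered
    ```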

    ::: {.section}
    ### The independent media {#c1-sec-0010}

    As with so much else, this situation began to change in the 1960s. Mass
    media and information-processing technologies began to attract
    criticism, even though all of the involved subcultures, media activists,
    and hackers continued to act independently from one another until the
    1990s. The freedom-oriented social movements of the 1960s began to view
    the mass media as part of the political system against which they were
    struggling. The connections among the economy, politics, and the media
    were becoming more apparent, not []{#Page_42 type="pagebreak"
    title="42"}least because many mass media companies, especially those in
    Germany related to the Springer publishing house, were openly inimical
    to these social movements. Critical theories arose that, borrowing
    Louis Althusser\'s influential term, regarded the media as part of the
    "ideological state apparatus"; that is, as one of the authorities whose
    task is to influence people to accept social relations to such a degree
    that the "repressive state apparatuses" (the police, the military, etc.)
    form a constant background in everyday
    life.[^66^](#c1-note-0066){#c1-note-0066a} Similarly influential,
    Antonio Gramsci\'s theory of "cultural hegemony" emphasized the
    condition in which the governed are manipulated to form a cultural
    consensus with the ruling class; they accept the latter\'s
    presuppositions (and the politics which are thus justified) even though,
    by doing so, they are forced to suffer economic
    disadvantages.[^67^](#c1-note-0067){#c1-note-0067a} Guy Debord and the
    Situationists attributed to the media a central role in the new form of
    rule known as "the spectacle," the glittery surfaces and superficial
    manifestations of which served to conceal society\'s true
    relations.[^68^](#c1-note-0068){#c1-note-0068a} In doing so, they
    aligned themselves with the critique of the "culture industry," which
    had been formulated by Max Horkheimer and Theodor W. Adorno at the
    beginning of the 1940s and had become a widely discussed key text by the
    1960s.

    Their differences aside, these perspectives were united in that they no
    longer understood the "public" as a neutral sphere, in which citizens
    could inform themselves freely and form their opinions, but rather as
    something that was created with specific intentions and consequences.
    From this grew an interest in "counter-publics"; that is, in forums
    where other actors could appear and negotiate theories of their own. The
    mass media thus became an important instrument for organizing the
    bourgeois--capitalist public, but they were also responsible for the
    development of alternatives. Media, according to one of the core ideas
    of these new approaches, are not so much a sphere in which an external
    reality is depicted as a constitutive element of reality itself.
    :::

    ::: {.section}
    ### Media as lifeworlds {#c1-sec-0011}

    Another branch of new media theories, that of Marshall McLuhan and the
    Toronto School of Communication,[^69^](#c1-note-0069){#c1-note-0069a}
    []{#Page_43 type="pagebreak" title="43"}reached a similar conclusion on
    different grounds. In 1964, McLuhan aroused a great deal of attention
    with his slogan "the medium is the message." He maintained that every
    medium of communication, by means of its media-specific characteristics,
    directly affected the consciousness, self-perception, and worldview of
    every individual.[^70^](#c1-note-0070){#c1-note-0070a} This, he
    believed, happens independently of and in addition to whatever specific
    message a medium might be conveying. From this perspective, reality does
    not exist outside of media, given that media codetermine our personal
    relation to and behavior in the world. For McLuhan and the Toronto
    School, media were thus not channels for transporting content but rather
    the all-encompassing environments -- galaxies -- in which we live.

    Such ideas were circulating much earlier and were intensively developed
    by artists, many of whom were beginning to experiment with new
    electronic media. An important starting point in this regard was the
    1963 exhibit *Exposition of Music -- Electronic Television* by the
    Korean artist Nam June Paik, who was then collaborating with Karlheinz
    Stockhausen in Cologne. Among other things, Paik presented 12
    television sets, the screens of which were "distorted" by magnets. Here,
    however, "distorted" is a problematic term, for, as Paik explicitly
    noted, the electronic images were "a beautiful slap in the face of
    classic dualism in philosophy since the time of Plato. \[...\] Essence
    AND existence, essentia AND existentia. In the case of the electron,
    however, EXISTENTIA IS ESSENTIA."[^71^](#c1-note-0071){#c1-note-0071a}
    Paik no longer understood the electronic image on the television screen
    as a portrayal or representation of anything. Rather, it engendered in
    the moment of its appearance an autonomous reality beyond and
    independent of its representational function. A whole generation of
    artists began to explore forms of existence in electronic media, which
    they no longer understood as pure media of information. In his work
    *Video Corridor* (1969--70), Bruce Nauman stacked two monitors at the
    end of a corridor that was approximately 10 meters long but only 50
    centimeters wide. On the lower monitor ran a video showing the empty
    hallway. The upper monitor displayed an image captured by a camera
    installed at the entrance of the hall, about 3 meters high. If the
    viewer moved down the corridor toward the two []{#Page_44
    type="pagebreak" title="44"}monitors, he or she would thus be recorded
    by the latter camera. Yet the closer one came to the monitor, the
    farther one would be from the camera, so that one\'s image on the
    monitor would become smaller and smaller. Recorded from behind, viewers
    would thus watch themselves walking away from themselves. Surveillance
    by others, self-surveillance, recording, and disappearance were directly
    and intuitively connected with one another and thematized as fundamental
    issues of electronic media.

    Toward the end of the 1960s, the easier availability and mobility of
    analog electronic production technologies promoted the search for
    counter-publics and the exploration of media as comprehensive
    lifeworlds. In 1967, Sony introduced its first Portapak system: a
    battery-powered, self-contained recording system -- consisting of a
    camera, a cord, and a recorder -- with which it was possible to make
    (black-and-white) video recordings outside of a studio. Although the
    recording apparatus, which required additional devices for editing and
    projection, was offered at the relatively expensive price of \$1,500
    (which corresponds to about €8,000 today), it was still affordable for
    interested groups. Compared with traditional film cameras, these new
    cameras considerably lowered the initial hurdle for
    media production, for video tapes were not only much cheaper than film
    reels (and could be used for multiple recordings); they also made it
    possible to view recorded material immediately and on location. This
    enabled the production of works that were far more intuitive and
    spontaneous than earlier ones. The 1970s saw the formation of many video
    groups, media workshops, and other initiatives for the independent
    production of electronic media. Through their own distribution,
    festivals, and other channels, such groups created alternative public
    spheres. The latter became especially prominent in the United States
    where, at the end of the 1960s, the providers of cable networks were
    legally obligated to establish public-access channels, on which citizens
    were able to operate self-organized and non-commercial television
    programs. This gave rise to a considerable public-access movement there,
    which at one point extended across 4,000 cities and was responsible for
    producing programs from and for these different
    communities.[^72[]{#Page_45 type="pagebreak"
    title="45"}^](#c1-note-0072){#c1-note-0072a}

    What these initiatives, in Western Europe and the United States, had in
    common was their attempt to close the gap between the consumption and
    production of media, to activate the public, and at least in part to
    experiment with the media themselves. Non-professional producers gained
    control over who told their stories and how. Groups that previously had
    no access to the media public sphere now had opportunities to represent
    themselves and their own interests. By working together on their own
    productions, such groups demystified the medium of television and
    simultaneously equipped it with a critical consciousness.

    Especially well received in Germany was the work of Hans Magnus
    Enzensberger, who in 1970 argued (on the basis of Bertolt Brecht\'s
    radio theory) in favor of distinguishing between "repressive" and
    "emancipatory" uses of media. For him, the emancipatory potential of
    media lay in the fact that "every receiver is \[...\] a potential
    transmitter" that can participate "interactively" in "collective
    production."[^73^](#c1-note-0073){#c1-note-0073a} In the same year, the
    first German video group, Telewissen, debuted in public with a
    demonstration in downtown Darmstadt. In 1980, at the peak of the
    movement for independent video production, there were approximately a
    hundred such groups throughout (West) Germany. The lack of distribution
    channels, however, represented a nearly insuperable obstacle and ensured
    that many independent productions were seldom viewed outside of
    small-scale settings. Tapes had to be exchanged between groups through
    the mail, and they were mainly shown at gatherings and events, and in
    bars. The dynamic of alternative media shifted toward a small subculture
    (though one networked throughout all of Europe) of pirate radio and
    television broadcasters. At the beginning of the 1980s and in the space
    of Radio Dreyeckland in Freiburg, which had been founded in 1977 as
    Radio Verte Fessenheim, operations began at Germany\'s first pirate or
    citizens\' radio station, which regularly broadcast information about
    the political protest movements that had arisen against the use of
    nuclear power in Fessenheim (France), Wyhl (Germany), and Kaiseraugst
    (Switzerland). The epicenter of the scene, however, was located in
    Amsterdam, where the group known as Rabotnik TV, which was an offshoot
    []{#Page_46 type="pagebreak" title="46"}of the squatter scene there,
    would illegally feed its signal through official television stations
    after their programming had ended at night (many stations then stopped
    broadcasting at midnight). In 1988, the group acquired legal
    broadcasting slots on the cable network and reached up to 50,000 viewers
    with its weekly experimental shows, which largely consisted of footage
    appropriated freely from elsewhere.[^74^](#c1-note-0074){#c1-note-0074a}
    Early in 1990, the pirate television station Kanal X was created in
    Leipzig; it produced its own citizens\' television programming in the
    quasi-lawless milieu of the GDR before
    reunification.[^75^](#c1-note-0075){#c1-note-0075a}

    These illegal, independent, or public-access stations only managed to
    establish themselves as real mass media to a very limited extent.
    Nevertheless, they played an important role in sensitizing an entire
    generation of media activists, whose opportunities expanded as the means
    of production became both better and cheaper. In the name of "tactical
    media," a new generation of artistic and political media activists came
    together in the middle of the
    1990s.[^76^](#c1-note-0076){#c1-note-0076a} They combined the "camcorder
    revolution," which in the late 1980s had made video equipment available
    to broader swaths of society, stirring visions of democratic media
    production, with the newly arrived medium of the internet. Despite still
    struggling with numerous technical difficulties, they remained steadfast
    in their belief that the internet would solve the hitherto intractable
    problem of distributing content. The transition from analog to digital
    media lowered the production hurdle yet again, not least through the
    ongoing development of improved software. Now, many stages of production
    that had previously required professional or semi-professional expertise
    and equipment could also be carried out by engaged laymen. As a
    consequence, the focus of interest broadened to include not only the
    development of alternative production groups but also the possibility of
    a flexible means of rapid intervention in existing structures. Media --
    both television and the internet -- were understood as environments in
    which one could act without directly representing a reality outside of
    the media. Television was analyzed down to its own internal laws, which
    could then be manipulated to produce effects beyond the medium.
    Increasingly, culture jamming and the campaigns of so-called
    communication guerrillas were blurring the difference between media and
    political activity.[^77[]{#Page_47 type="pagebreak"
    title="47"}^](#c1-note-0077){#c1-note-0077a}

    This difference was dissolved entirely by a new generation of
    politically motivated artists, activists, and hackers, who transferred
    the tactics of civil disobedience -- blockading a building with a
    sit-in, for instance -- to the
    internet.[^78^](#c1-note-0078){#c1-note-0078a} When, in 1994, the
    Zapatista Army of National Liberation rose up in the south of Mexico,
    several media projects were created to support its mostly peaceful
    opposition and to make the movement known in Europe and North America.
    As part of this loose network, in 1998 the American artist collective
    Electronic Disturbance Theater developed a relatively simple computer
    program called FloodNet that enabled networked sympathizers to shut down
    websites, such as those of the Mexican government, in a targeted and
    temporary manner. The principle was easy enough: the program would
    automatically reload a certain website over and over again in order to
    exhaust the capacities of its network
    servers.[^79^](#c1-note-0079){#c1-note-0079a} The goal was not to
    destroy data but rather to disturb the normal functioning of an
    institution in order to draw attention to the activities and interests
    of the protesters.
    :::

    ::: {.section}
    ### Networks as places of action {#c1-sec-0012}

    What this new generation of media activists shared with the hackers
    and pioneers of computer networks was the idea that communication
    media are spaces for agency. During the 1960s, these programmers had
    likewise been searching for alternatives; the difference is that they
    pursued them not in counter-publics but rather in alternative
    lifestyles and forms of communication.
    The rejection of bureaucracy as a form of social organization played a
    significant role in the critique of industrial society formulated by
    freedom-oriented social movements. At the beginning of the previous
    century, Max Weber had still regarded bureaucracy as a clear sign of
    progress toward a rational and methodical
    organization.[^80^](#c1-note-0080){#c1-note-0080a} He based this
    assessment on processes that were impersonal, rule-bound, and
    transparent (in the sense that they were documented with files). But
    now, in the 1960s, bureaucracy was being criticized as soulless,
    alienated, oppressive, non-transparent, and unfit for an increasingly
    complex society. Whereas the first four of these points are in basic
    agreement with Weber\'s thesis about "disenchanting" []{#Page_48
    type="pagebreak" title="48"}the world, the last point represents a
    radical departure from his analysis. Bureaucracies were no longer
    regarded as hyper-efficient but rather as inefficient, and their size
    and rule-bound nature were no longer seen as strengths but rather as
    decisive weaknesses. The social bargain of offering prosperity and
    security in exchange for subordination to hierarchical relations struck
    many as being anything but attractive, and what blossomed instead was a
    broad interest in alternative forms of coexistence. New institutions
    were expected to be more flexible and more open. The desire to step away
    from the system was widespread, and many (mostly young) people set about
    doing exactly that. Alternative ways of life -- communes, shared
    apartments, and cooperatives -- were explored in the country and in
    cities. They were meant to provide the individual with greater autonomy
    and the opportunity to develop his or her own unique potential. Despite
    all of the differences between these concepts of life, they nevertheless
    shared something of a common denominator: the promise of
    reconceptualizing social institutions and the fundamentals of
    coexistence, with the aim of reformulating them in such a way as to
    allow everyone\'s personal potential to develop fully in the here and
    now.

    According to critics of such alternatives, bureaucracy was necessary in
    order to organize social life, for it radically reduced the world\'s
    complexity by forcing it through the bottleneck of official procedures.
    However, the price paid for such efficiency involved the atrophying of
    human relationships, which had to be subordinated to rigid processes
    that were incapable of registering unique characteristics and
    differences and were unable to react in a timely manner to changing
    circumstances.

    In the 1960s, many countercultural attempts to find new forms of
    organization placed personal and open communication at the center of
    their efforts. Each individual was understood as a singular person with
    untapped potential rather than a carrier of abstract and clearly defined
    functions. It was soon realized, however, that every common activity and
    every common decision entailed processes that were time-intensive and
    communication-intensive. As soon as a group exceeded a certain size, it
    became practically impossible for it to reach any consensus. As a result
    of these experiences, an entire worldview emerged that propagated
    "smallness" as a central []{#Page_49 type="pagebreak" title="49"}value
    ("small is beautiful"). It was thought that in this way society might
    escape from bureaucracy with its ostensibly disastrous consequences for
    humanity and the environment.[^81^](#c1-note-0081){#c1-note-0081a} But
    this belief did not last for long. For, unlike the majority of European
    alternative movements, the counterculture in the United States was not
    overwhelmingly critical of technology. On the contrary, many actors
    there sought suitable technologies for solving the practical problems of
    social organization. At the end of the 1960s, a considerable amount of
    attention was devoted to the field of basic technological research. This
    field brought together the interests of the military, academics,
    businesses, and activists from the counterculture. The common ground for
    all of them was a cybernetic vision of institutions, or, in the words of
    the historian Fred Turner:

    ::: {.extract}
    a picture of humans and machines as dynamic, collaborating elements in a
    single, highly fluid, socio-technical system. Within that system,
    control emerged not from the mind of a commanding officer, but from the
    complex, probabilistic interactions of humans, machines and events
    around them. Moreover, the mechanical elements of the system in question
    -- in this case, the predictor -- enabled the human elements to achieve
    what all Americans would agree was a worthwhile goal. \[...\] Over the
    coming decades, this second vision of benevolent man-machine systems, of
    circular flows of information, would emerge as a driving force in the
    establishment of the military--industrial--academic complex and as a
    model of an alternative to that
    complex.[^82^](#c1-note-0082){#c1-note-0082a}
    :::

    This convergence of interests was possible because, as a theory,
    cybernetics was formulated in extraordinarily abstract terms, so much
    so that a whole
    variety of competing visions could be associated with
    it.[^83^](#c1-note-0083){#c1-note-0083a} With cybernetics as a
    meta-science, it was possible to investigate the common features of
    technical, social, and biological
    processes.[^84^](#c1-note-0084){#c1-note-0084a} They were analyzed as
    open, interactive, and information-processing systems. It was especially
    consequential that cybernetics defined control and communication as the
    same thing, namely as activities oriented toward informational
    feedback.[^85^](#c1-note-0085){#c1-note-0085a} The heterogeneous legacy
    of cybernetics and its synonymous treatment of the terms "communication"
    and "control" continue to influence information technology and the
    internet today.[]{#Page_50 type="pagebreak" title="50"}
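
    Because cybernetics defines control and communication as one and the
    same feedback-oriented activity, the core idea fits into a few lines
    of code. The following Python sketch is schematic only; the
    thermostat-like scenario, the gain value, and all numbers are invented
    for illustration:

    ```python
    # A schematic sketch of cybernetic feedback: a system is steered not
    # by commands from above but by continually feeding back the measured
    # deviation from a target. All values here are invented.
    def feedback_loop(target, reading, gain=0.5, steps=20):
        for _ in range(steps):
            error = target - reading   # communication: information about deviation
            reading += gain * error    # control: action derived from that signal
        return reading

    # A thermostat-like example: the reading converges toward the target.
    print(feedback_loop(target=21.0, reading=15.0))
    ```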

    The various actors who contributed to the development of the internet
    shared a common interest in forms of organization based on the
    comprehensive, dynamic, and open exchange of information. On both the
    micro and the macro level (and this is the decisive point),
    decentralized and flexible communication technologies were meant to
    become the foundation of new organizational models. Militaries feared
    attacks on their command and communication centers; academics wanted to
    broaden their culture of autonomy, collaboration among peers, and the
    free exchange of information; businesses were looking for new areas of
    activity; and countercultural activists were longing for new forms of
    peaceful coexistence.[^86^](#c1-note-0086){#c1-note-0086a} They all
    rejected the bureaucratic model, and the counterculture provided them
    with the central catchword for their alternative vision: community.
    Though rather difficult to define, it was a powerful and positive term
    that somehow promised the opposite of bureaucracy: humanity,
    cooperation, horizontality, mutual trust, and consensus. Now, however,
    humanity was expected to be reconfigured as a community in cooperation
    with and inseparable from machines. What was yearned for was a
    liberating symbiosis of man and machine, an idea that the author
    Richard Brautigan was quick to mock in his poem "All Watched Over by
    Machines of Loving Grace" from 1967:

    ::: {.poem}
    ::: {.lineGroup}
    I like to think (and

    the sooner the better!)

    of a cybernetic meadow

    where mammals and computers

    live together in mutually

    programming harmony

    like pure water

    touching clear sky.[^87^](#c1-note-0087){#c1-note-0087a}
    :::
    :::

    Here, Brautigan is ridiculing both the impatience (*the sooner the
    better!*) and the naïve optimism (*harmony, clear sky*) of the
    countercultural activists. Above all, he regarded the underlying vision
    as an innocent but amusing fantasy and not as a potential threat against
    which something had to be done. And there were also reasons to believe
    that, ultimately, the new communities would be free from the coercive
    nature that []{#Page_51 type="pagebreak" title="51"}had traditionally
    characterized the downside of community experiences. It was thought that
    the autonomy and freedom of the individual could be regained in and by
    means of the community. The conditions for this were that participation
    in the community had to be voluntary and that the rules of participation
    had to be self-imposed. I will return to this topic in greater detail
    below.

    In line with their solution-oriented engineering culture and the
    results-focused military funders who by and large set the agenda, a
    relatively small group of computer scientists now took it upon
    themselves to establish the technological foundations for new
    institutions. This was not an abstract goal for the distant future;
    rather, they wanted to change everyday practices as soon as possible. It
    was around this time that advanced technology became the basis of social
    communication, which now adopted forms that would have been
    inconceivable (not to mention impracticable) without these
    preconditions. Of course, effective communication technologies already
    existed at the time. Large corporations had begun long before then to
    operate their own computing centers. In contrast to the latter, however,
    the new infrastructure could also be used by individuals outside of
    established institutions and could be implemented for all forms of
    communication and exchange. This idea gave rise to a pragmatic culture
    of horizontal, voluntary cooperation. The clearest summary of this early
    ethos -- which originated at the unusual intersection of military,
    academic, and countercultural interests -- was offered by David D.
    Clark, a computer scientist who for some time coordinated the
    development of technical standards for the internet: "We reject: kings,
    presidents and voting. We believe in: rough consensus and running
    code."[^88^](#c1-note-0088){#c1-note-0088a}

    All forms of classical, formal hierarchies and their methods for
    resolving conflicts -- commands (by kings and presidents) and votes --
    were dismissed. Implemented in their place was a pragmatics of open
    cooperation that was oriented around two guiding principles. The first
    was that different views should be discussed without a single individual
    being able to block any final decisions. Such was the meaning of the
    expression "rough consensus." The second was that, in accordance with
    the classical engineering tradition, the focus should remain on concrete
    solutions that had to be measured against one []{#Page_52
    type="pagebreak" title="52"}another on the basis of transparent
    criteria. Such was the meaning of the expression "running code." In
    large part, this method was possible because the group oriented around
    these principles was, internally, relatively homogeneous: it consisted
    of top-notch computer scientists -- all of them men -- at respected
    American universities and research centers. For this very reason, many
    potential and fundamental conflicts were avoided, at least at first.
    This internal homogeneity lends rather dark undertones to their sunny
    vision, but this was hardly recognized at the time. Today these
    undertones are far more apparent, and I will return to them below.

    Not only were technical protocols developed on the basis of these
    principles, but organizational forms as well. The Internet Engineering
    Task Force, whose standards process Clark coordinated for a time,
    organized its work through the so-called Request-for-Comments
    documents, with which ideas could be presented to interested members
    of the community and feedback could be collected in order to work
    through the ideas in question and thus reach a rough consensus. If
    such a consensus could not be reached -- if, for
    instance, an idea failed to resonate with anyone or was too
    controversial -- then the matter would be dropped. The feedback was
    organized as a form of many-to-many communication through email lists,
    newsgroups, and online chat systems. This proved to be so effective that
    horizontal communication within large groups or between multiple groups
    could take place without resulting in chaos. This invalidated the
    traditional assumption that social units, once they reach a certain
    size, necessarily have to introduce hierarchical structures in order
    to reduce complexity and the need for communication. In other words, the foundations
    were laid for larger numbers of (changing) people to organize flexibly
    and with the aim of building an open consensus. For Manuel Castells,
    this combination of organizational flexibility and scalability in size
    is the decisive innovation that was enabled by the rise of the network
    society.[^89^](#c1-note-0089){#c1-note-0089a} At the same time, however,
    this meant that forms of organization spread that could only be possible
    on the basis of technologies that have formed (and continue to form)
    part of the infrastructure of the internet. Digital technology and the
    social activity of individual users were linked together to an
    unprecedented extent. Social and cultural agendas were now directly
    related []{#Page_53 type="pagebreak" title="53"}to and entangled with
    technical design. Each of the four original interest groups -- the
    military, scientists, businesses, and the counterculture -- implemented
    new technologies to pursue their own projects, which partly complemented
    and partly contradicted one another. As we know today, the first three
    groups still cooperate closely with each other. To a great extent, this
    has allowed the military and corporations, which are willingly supported
    by researchers in need of funding, to determine the technology and thus
    aspects of the social and cultural agendas that depend on it.

    The software developers\' immediate environment experienced its first
    major change in the late 1970s. Software, which for many had been a mere
    supplement to more expensive and highly specialized hardware, became a
    marketable good with stringent licensing restrictions. A new generation
    of businesses, led by Bill Gates, suddenly began to label cooperation
    among programmers as theft.[^90^](#c1-note-0090){#c1-note-0090a}
    Previously it had been par for the course, and above all necessary, for
    programmers to share software with one another. The former culture of
    horizontal cooperation between developers transformed into a
    hierarchical and commercially oriented relation between developers and
    users (many of whom, at least at the beginning, had developed programs
    of their own). For the first time, copyright came to play an important
    role in digital culture. In order to survive in this environment, the
    practice of open cooperation had to be placed on a new legal foundation.
    Copyright law, which served to separate programmers (producers) from
    users (consumers), had to be neutralized or circumvented. The first step
    in this direction was taken in 1984 by the activist and programmer
    Richard Stallman. Composed by Stallman, the GNU General Public License
    was and remains a brilliant hack that uses the letter of copyright law
    against its own spirit. This happens in the form of a license that
    defines "four freedoms":

    1. The freedom to run the program as you wish, for any purpose (freedom
    0).
    2. The freedom to study how the program works and change it so it does
    your computing as you wish (freedom 1).
    3. The freedom to redistribute copies so you can help your neighbor
    (freedom 2).[]{#Page_54 type="pagebreak" title="54"}
    4. The freedom to distribute copies of your modified versions to others
    (freedom 3). By doing this you can give the whole community a chance
    to benefit from your changes.[^91^](#c1-note-0091){#c1-note-0091a}

    Thanks to this license, people who were personally unacquainted and did
    not share a common social environment could now cooperate (freedoms 2
    and 3) and simultaneously remain autonomous and unrestricted (freedoms 0
    and 1). For many, the tension between the need to develop complex
    software in large teams and the desire to maintain one\'s own autonomy
    represented an incentive to try out new forms of
    cooperation.[^92^](#c1-note-0092){#c1-note-0092a}
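
    In practice, these freedoms travel with the code itself: each source
    file opens with a copyright notice and the license grant. As a
    concrete illustration, the following header follows the Free Software
    Foundation\'s recommended notice for version 3 of the GPL (the license
    has gone through several versions since its beginnings); the program
    name, year, and author are placeholders:

    ```python
    # <one line to give the program's name and a brief idea of what it does.>
    # Copyright (C) <year> <name of author>
    #
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    #
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
    # GNU General Public License for more details.
    #
    # You should have received a copy of the GNU General Public License
    # along with this program. If not, see <https://www.gnu.org/licenses/>.
    ```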

    Stallman\'s influence was at first limited to a small circle of
    programmers. In the middle of the 1980s, the goal of developing a
    completely free operating system seemed a distant one. Communication
    between those interested in doing so was often slow and complicated. In
    part, program code still had to be sent by physical mail. It was not until the
    beginning of the 1990s that students in technical departments at many
    universities could access the
    internet.[^93^](#c1-note-0093){#c1-note-0093a} One of the first to use
    these new opportunities in an innovative way was a Finnish student named
    Linus Torvalds. He built upon Stallman\'s work and programmed a kernel,
    which, as the most important module of an operating system, governs the
    interaction between hardware and software. He published the first free
    version of this in 1991 and encouraged anyone interested to give him
    feedback.[^94^](#c1-note-0094){#c1-note-0094a} And it poured in.
    Torvalds reacted promptly and issued new versions of his software in
    quick succession. Instead of understanding his software as a finished
    product, he treated it like an open-ended process. This, in turn,
    motivated even more developers to participate, because they saw that
    their contributions were being adopted swiftly, which led to the
    formation of an open community of interested programmers who swapped
    ideas over the internet and continued writing software. In order to
    maintain an overview of the different versions of the program, which
    appeared in parallel with one another, it soon became necessary to
    employ specialized platforms. The fusion of social processes --
    horizontal and voluntary cooperation among developers -- and
    technological platforms, which enabled this form of cooperation
    []{#Page_55 type="pagebreak" title="55"}by providing archives, filter
    functions, and search capabilities that made it possible to organize
    large amounts of data, was thus advanced even further. The programmers
    were no longer primarily working on the development of the internet
    itself, which by then was functioning quite reliably, but were rather
    using the internet to apply their cooperative principles to other
    arenas. By the end of the 1990s, the free-software movement had
    established a new, internet-based form of organization and had
    demonstrated its efficiency in practice: horizontal, informal
    communities of actors -- voluntary, autonomous, and focused on a common
    interest -- that, on the basis of high-tech infrastructure, could
    include thousands of people without having to create formal hierarchies.
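
    Before dedicated platforms took over this role, such cooperation ran
    largely on patches: textual differences between two versions of a file
    that could be mailed around, inspected, and applied. The following is
    a minimal sketch of producing such a patch with Python\'s standard
    difflib module; the file name and contents are invented:

    ```python
    # A minimal sketch of the patch-based exchange of changes: a "diff"
    # records only what was altered between two versions of a file.
    import difflib

    original = [
        "int main(void) {\n",
        "    return 0;\n",
        "}\n",
    ]
    modified = [
        "int main(void) {\n",
        "    printf(\"hello\\n\");\n",
        "    return 0;\n",
        "}\n",
    ]

    patch = difflib.unified_diff(
        original, modified, fromfile="a/main.c", tofile="b/main.c"
    )
    print("".join(patch))  # the textual patch one would send to the list
    ```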
    :::
    :::

    ::: {.section}
    From the Margins to the Center of Society {#c1-sec-0013}
    -----------------------------------------

    It was around this same time that the technologies in question, which
    were already no longer very new, entered mainstream society. Within a
    few years, the internet became part of everyday life. Three years before
    the turn of the millennium, only about 6 percent of the entire German
    population used the internet, often only occasionally. Three years after
    the millennium, the number of users already exceeded 53 percent. Since
    then, this share has increased even further. In 2014, it was more than
    97 percent for people under the age of
    40.[^95^](#c1-note-0095){#c1-note-0095a} Parallel to these developments,
    data transfer rates increased considerably, broadband connections
    displaced dial-up modems, and the internet was suddenly \"here\" and no
    longer "there." With the spread of mobile devices, especially since the
    year 2007 when the first iPhone was introduced, digital communication
    became available both extensively and continuously. Since then, the
    internet has been ubiquitous. The amount of time that users spend online
    has increased and, with the rapid ascent of social mass media such as
    Facebook, people have been online in almost every situation and
    circumstance in life.[^96^](#c1-note-0096){#c1-note-0096a} The internet,
    like water or electricity, has become for many people a utility that is
    simply taken for granted.

    In a BBC survey from 2010, 80 percent of those polled believed that
    internet access -- a precondition for participating []{#Page_56
    type="pagebreak" title="56"}in the now dominant digital condition --
    should be regarded as a fundamental human right. This idea was most
    popular in South Korea (96 percent) and Mexico (94 percent), while in
    Germany at least 72 percent were of the same
    opinion.[^97^](#c1-note-0097){#c1-note-0097a}

    On the basis of this new infrastructure, which is now relevant in all
    areas of life, the cultural developments described above have been
    severed from the specific historical conditions from which they emerged
    and have permeated society as a whole. Expressivity -- the ability to
    communicate something "unique" -- is no longer a trait of artists and
    knowledge workers alone, but rather something that is required by an
    increasingly broader stratum of society and is already being taught in
    schools. Users of social mass media must produce (themselves). The
    development of specific, differentiated identities and the demand that
    each be treated equally are no longer promoted exclusively by groups who
    have to struggle against repression, existential threats, and
    marginalization, but have penetrated deeply into the former mainstream,
    not least because the present forms of capitalism have learned to profit
    from the spread of niches and segmentation. When even conservative
    parties have abandoned the idea of a "leading culture," then cultural
    differences can no longer be classified by enforcing an absolute and
    indisputable hierarchy, the top of which is occupied by specific
    (geographical and cultural) centers. Rather, a space has been opened up
    for endless negotiations, a space in which -- at least in principle --
    everything can be called into question. This is not, of course, a
    peaceful and egalitarian process. In addition to the practical hurdles
    that exist in polarizing societies, there are also violent backlashes
    and new forms of fundamentalism that are attempting once again to remove
    certain religious, social, cultural, or political dimensions of
    existence from the discussion. Yet these can only be understood in light
    of a sweeping cultural transformation that has already reached
    mainstream society.[^98^](#c1-note-0098){#c1-note-0098a} In other words,
    the digital condition has become quotidian and dominant. It forms a
    cultural constellation that determines all areas of life, and its
    characteristic features are clearly recognizable. These will be the
    focus of the next chapter.[]{#Page_57 type="pagebreak" title="57"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c1-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c1-note-0001a){#c1-note-0001}  Kathrin Passig and Sascha Lobo,
    *Internet: Segen oder Fluch* (Berlin: Rowohlt, 2012) \[--trans.\].

    [2](#c1-note-0002a){#c1-note-0002}  The expression "heteronormatively
    behaving" is used here to mean that, while in the public eye, the
    behavior of the people []{#Page_177 type="pagebreak" title="177"}in
    question conformed to heterosexual norms regardless of their personal
    sexual orientations.

    [3](#c1-note-0003a){#c1-note-0003}  No order is ever entirely closed
    off. In this case, too, there was also room for exceptions and for
    collective moments of greater cultural multiplicity. That said, the
    social openness of the end of the 1920s, for instance, was restricted to
    particular milieus within large cities and was accordingly short-lived.

    [4](#c1-note-0004a){#c1-note-0004}  Fritz Machlup, *The Political
    Economy of Monopoly: Business, Labor and Government Policies*
    (Baltimore, MD: The Johns Hopkins University Press, 1952).

    [5](#c1-note-0005a){#c1-note-0005}  Machlup was a student of Ludwig von
    Mises, the most influential representative of this radically
    individualist school. See Hans-Hermann Hoppe, "Die Österreichische
    Schule und ihre Bedeutung für die moderne Wirtschaftswissenschaft," in
    Karl-Dieter Grüske (ed.), *Die Gemeinwirtschaft: Kommentarband zur
    Neuauflage von Ludwig von Mises' "Die Gemeinwirtschaft"* (Düsseldorf:
    Verlag Wirtschaft und Finanzen, 1996), pp. 65--90.

    [6](#c1-note-0006a){#c1-note-0006}  Fritz Machlup, *The Production and
    Distribution of Knowledge in the United States* (New York: John Wiley &
    Sons, 1962).

    [7](#c1-note-0007a){#c1-note-0007}  The term "knowledge worker" had
    already been introduced to the discussion a few years before; see Peter
    Drucker, *Landmarks of Tomorrow: A Report on the New \"Post-Modern\"
    World* (New York: Harper,
    1959).

    [8](#c1-note-0008a){#c1-note-0008}  Peter Ecker, "Die
    Verwissenschaftlichung der Industrie: Zur Geschichte der
    Industrieforschung in den europäischen und amerikanischen
    Elektrokonzernen 1890--1930," *Zeitschrift für Unternehmensgeschichte*
    35 (1990): 73--94.

    [9](#c1-note-0009a){#c1-note-0009}  Edward Bernays was the son of
    Sigmund Freud\'s sister Anna and Ely Bernays, the brother of Freud\'s
    wife, Martha Bernays.

    [10](#c1-note-0010a){#c1-note-0010}  Edward L. Bernays, *Propaganda*
    (New York: Horace Liveright, 1928).

    [11](#c1-note-0011a){#c1-note-0011}  James Beniger, *The Control
    Revolution: Technological and Economic Origins of the Information
    Society* (Cambridge, MA: Harvard University Press, 1986), p. 350.

    [12](#c1-note-0012a){#c1-note-0012}  Norbert Wiener, *Cybernetics: Or
    Control and Communication in the Animal and the Machine* (New York: J.
    Wiley, 1948).

    [13](#c1-note-0013a){#c1-note-0013}  Daniel Bell, *The Coming of
    Post-Industrial Society: A Venture in Social Forecasting* (New York:
    Basic Books, 1973).

    [14](#c1-note-0014a){#c1-note-0014}  Simon Nora and Alain Minc, *The
    Computerization of Society: A Report to the President of France*
    (Cambridge, MA: MIT Press, 1980).

    [15](#c1-note-0015a){#c1-note-0015}  Manuel Castells, *The Rise of the
    Network Society* (Oxford: Blackwell, 1996).

    [16](#c1-note-0016a){#c1-note-0016}  Hans-Dieter Kübler, *Mythos
    Wissensgesellschaft: Gesellschaftlicher Wandel zwischen Information,
    Medien und Wissen -- Eine Einführung* (Wiesbaden: Verlag für
    Sozialwissenschaften, 2009).[]{#Page_178 type="pagebreak" title="178"}

    [17](#c1-note-0017a){#c1-note-0017}  Luc Boltanski and Ève Chiapello,
    *The New Spirit of Capitalism*, trans. Gregory Elliott (London: Verso,
    2005).

    [18](#c1-note-0018a){#c1-note-0018}  Michael Piore and Charles Sabel,
    *The Second Industrial Divide: Possibilities of Prosperity* (New York:
    Basic Books, 1984).

    [19](#c1-note-0019a){#c1-note-0019}  Castells, *The Rise of the Network
    Society*. For a critical evaluation of Castells\'s work, see Felix
    Stalder, *Manuel Castells and the Theory of the Network Society*
    (Cambridge: Polity, 2006).

    [20](#c1-note-0020a){#c1-note-0020}  "UK Creative Industries Mapping
    Documents" (1998); quoted from Terry Flew, *The Creative Industries:
    Culture and Policy* (Los Angeles, CA: Sage, 2012), pp. 9--10.

    [21](#c1-note-0021a){#c1-note-0021}  The rise of the creative
    industries, and the hope that they inspired among politicians, did not
    escape criticism. Among the first works to draw attention to the
    precarious nature of working in such industries was Angela McRobbie\'s
    *British Fashion Design: Rag Trade or Image Industry?* (New York:
    Routledge, 1998).

    [22](#c1-note-0022a){#c1-note-0022}  This definition is not without a
    degree of tautology, given that economic growth is based on talent,
    which itself is defined by its ability to create new jobs; that is,
    economic growth. At the same time, he employs the term "talent" in an
    extremely narrow sense. Apparently, if something has nothing to do with
    job creation, it also has nothing to do with talent or creativity. All
    forms of creativity are thus measured and compared according to a common
    criterion.

    [23](#c1-note-0023a){#c1-note-0023}  Richard Florida, *Cities and the
    Creative Class* (New York: Routledge, 2005), p. 5.

    [24](#c1-note-0024a){#c1-note-0024}  One study has reached the
    conclusion that, despite mass participation, "a new form of
    communicative elite has developed, namely digitally and technically
    versed actors who inform themselves in this way, exchange ideas and thus
    gain influence. For them, the possibilities of platforms mainly
    represent an expansion of useful tools. Above all, the dissemination of
    digital technology makes it easier for versed and highly networked
    individuals to convey their news more simply -- and, for these groups of
    people, it lowers the threshold for active participation." Michael
    Bauer, "Digitale Technologien und Partizipation," in Clara Landler et
    al. (eds), *Netzpolitik in Österreich: Internet, Macht, Menschenrechte*
    (Krems: Donau-Universität Krems, 2013), pp. 219--24, at 224
    \[--trans.\].

    [25](#c1-note-0025a){#c1-note-0025}  Boltanski and Chiapello, *The New
    Spirit of Capitalism*.

    [26](#c1-note-0026a){#c1-note-0026}  According to Wikipedia,
    "Heteronormativity is the belief that people fall into distinct and
    complementary genders (man and woman) with natural roles in life. It
    assumes that heterosexuality is the only sexual orientation or only
    norm, and states that sexual and marital relations are most (or only)
    fitting between people of opposite sexes."[]{#Page_179 type="pagebreak"
    title="179"}

    [27](#c1-note-0027a){#c1-note-0027}  Jannis Plastargias, *RotZSchwul:
    Der Beginn einer Bewegung (1971--1975)* (Berlin: Querverlag, 2015).

    [28](#c1-note-0028a){#c1-note-0028}  Helmut Ahrens et al. (eds),
    *Tuntenstreit: Theoriediskussion der Homosexuellen Aktion Westberlin*
    (Berlin: Rosa Winkel, 1975), p. 4.

    [29](#c1-note-0029a){#c1-note-0029}  Susanne Regener and Katrin Köppert
    (eds), *Privat/öffentlich: Mediale Selbstentwürfe von Homosexualität*
    (Vienna: Turia + Kant, 2013).

    [30](#c1-note-0030a){#c1-note-0030}  Such, for instance, was the
    assessment of Manfred Bruns, the spokesperson for the Lesbian and Gay
    Association in Germany, in his text "Schwulenpolitik früher" (link no
    longer active). From today\'s perspective, however, the main problem
    with this event was the unclear position of the Green Party with respect
    to pedophilia. See Franz Walter et al. (eds), *Die Grünen und die
    Pädosexualität: Eine bundesdeutsche Geschichte* (Göttingen: Vandenhoeck
    & Ruprecht, 2014).

    [31](#c1-note-0031a){#c1-note-0031}  "AIDS: Tödliche Seuche," *Der
    Spiegel* 23 (1983) \[--trans.\].

    [32](#c1-note-0032a){#c1-note-0032}  Quoted from Frank Niggemeier, "Gay
    Pride: Schwules Selbstbewußtsein aus dem Village,\" in Bernd Polster
    (ed.), *West-Wind: Die Amerikanisierung Europas* (Cologne: Dumont,
    1995), pp. 179--87, at 184 \[--trans.\].

    [33](#c1-note-0033a){#c1-note-0033}  Quoted from Regener and Köppert,
    *Privat/öffentlich*, p. 7 \[--trans.\].

    [34](#c1-note-0034a){#c1-note-0034}  Hans-Peter Buba and László A.
    Vaskovics, *Benachteiligung gleichgeschlechtlich orientierter Personen
    und Paare: Studie im Auftrag des Bundesministerium der Justiz* (Cologne:
    Bundesanzeiger, 2001).

    [35](#c1-note-0035a){#c1-note-0035}  This process of internal
    differentiation has not yet reached its conclusion, and thus the
    acronyms have become longer and longer: LGBPTTQQIIAA+ stands for
    lesbian, gay, bisexual, pansexual, transgender, transsexual, queer,
    questioning, intersex, intergender, asexual, ally.

    [36](#c1-note-0036a){#c1-note-0036}  Judith Butler, *Gender Trouble:
    Feminism and the Subversion of Identity* (New York: Routledge, 1990).

    [37](#c1-note-0037a){#c1-note-0037}  Andreas Krass, "Queer Studies: Eine
    Einführung," in Krass (ed.), *Queer denken: Gegen die Ordnung der
    Sexualität* (Frankfurt am Main: Suhrkamp, 2003), pp. 7--27.

    [38](#c1-note-0038a){#c1-note-0038}  Edward W. Said, *Orientalism* (New
    York: Vintage Books, 1978).

    [39](#c1-note-0039a){#c1-note-0039}  Karl August Wittfogel, *Oriental
    Despotism: A Comparative Study of Total Power* (New Haven, CT: Yale
    University Press, 1957).

    [40](#c1-note-0040a){#c1-note-0040}  Silke Förschler, *Bilder des Harem:
    Medienwandel und kultureller Austausch* (Berlin: Reimer, 2010).

    [41](#c1-note-0041a){#c1-note-0041}  The selection and effectiveness of
    these images is not a coincidence. Camel was one of the first brands of
    cigarettes for []{#Page_180 type="pagebreak" title="180"}which
    advertising, in the sense described above, was used in a systematic
    manner.

    [42](#c1-note-0042a){#c1-note-0042}  This would not exclude feelings of
    regret about the loss of an exotic and romantic way of life, such as
    those of T. E. Lawrence, whose activities in the Near East during the
    First World War were memorialized in the film *Lawrence of Arabia*
    (1962).

    [43](#c1-note-0043a){#c1-note-0043}  Said has often been criticized,
    however, for portraying orientalism so dominantly that there seems to be
    no way out of the existing dependent relations. For an overview of the
    debates that Said has instigated, see María do Mar Castro Varela and
    Nikita Dhawan, *Postkoloniale Theorie: Eine kritische Einführung*
    (Bielefeld: Transcript, 2005), pp. 37--46.

    [44](#c1-note-0044a){#c1-note-0044}  "Migration führt zu 'hybrider'
    Gesellschaft" (an interview with Homi K. Bhabha), *ORF Science*
    (November 9, 2007), online \[--trans.\].

    [45](#c1-note-0045a){#c1-note-0045}  Homi K. Bhabha, *The Location of
    Culture* (New York: Routledge, 1994), p. 4.

    [46](#c1-note-0046a){#c1-note-0046}  Elisabeth Bronfen and Benjamin
    Marius, "Hybride Kulturen: Einleitung zur anglo-amerikanischen
    Multikulturismusdebatte," in Bronfen et al. (eds), *Hybride Kulturen*
    (Tübingen: Stauffenburg, 1997), pp. 1--30, at 8 \[--trans.\].

    [47](#c1-note-0047a){#c1-note-0047}  "What Is Postcolonial Thinking? An
    Interview with Achille Mbembe," *Eurozine* (December 2006), online.

    [48](#c1-note-0048a){#c1-note-0048}  Migrants have always created their
    own culture, which deals in various ways with the experience of
    migration itself, but non-migrant populations have long tended to ignore
    this. Things have now begun to change in this regard, for instance
    through Imra Ayata and Bülent Kullukcu\'s compilation of songs by the
    Turkish diaspora of the 1970s and 1980s: *Songs of Gastarbeiter*
    (Munich: Trikont, 2013).

    [49](#c1-note-0049a){#c1-note-0049}  The conference programs can be
    found at: \<\>.

    [50](#c1-note-0050a){#c1-note-0050}  "Deutschland entwickelt sich zu
    einem attraktiven Einwanderungsland für hochqualifizierte Zuwanderer,"
    press release by the CDU/CSU Alliance in the German Parliament (June 4,
    2014), online \[--trans.\].

    [51](#c1-note-0051a){#c1-note-0051}  Andreas Reckwitz, *Die Erfindung
    der Kreativität: Zum Prozess gesellschaftlicher Ästhetisierung* (Berlin:
    Suhrkamp, 2011), p. 180 \[--trans.\]. An English translation of this
    book is forthcoming: *The Invention of Creativity: Modern Society and
    the Culture of the New*, trans. Steven Black (Cambridge: Polity, 2017).

    [52](#c1-note-0052a){#c1-note-0052}  Gert Selle, *Geschichte des Design
    in Deutschland* (Frankfurt am Main: Campus, 2007).

    [53](#c1-note-0053a){#c1-note-0053}  "Less Is More: The Design Ethos of
    Dieter Rams," *SFMOMA* (June 29, 2011), online.[]{#Page_181
    type="pagebreak" title="181"}

    [54](#c1-note-0054a){#c1-note-0054}  The cybernetic perspective was
    introduced to the field of design primarily by Buckminster Fuller. See
    Diedrich Diederichsen and Anselm Franke, *The Whole Earth: California
    and the Disappearance of the Outside* (Berlin: Sternberg, 2013).

    [55](#c1-note-0055a){#c1-note-0055}  Clive Dilnot, "Design as a Socially
    Significant Activity: An Introduction," *Design Studies* 3/3 (1982):
    139--46.

    [56](#c1-note-0056a){#c1-note-0056}  Victor J. Papanek, *Design for the
    Real World: Human Ecology and Social Change* (New York: Pantheon, 1972),
    p. 2.

    [57](#c1-note-0057a){#c1-note-0057}  Reckwitz, *Die Erfindung der
    Kreativität*.

    [58](#c1-note-0058a){#c1-note-0058}  B. Joseph Pine and James H.
    Gilmore, *The Experience Economy: Work Is Theater and Every Business Is
    a Stage* (Boston, MA: Harvard Business School Press, 1999), p. ix (the
    emphasis is original).

    [59](#c1-note-0059a){#c1-note-0059}  Mona El Khafif, *Inszenierter
    Urbanismus: Stadtraum für Kunst, Kultur und Konsum im Zeitalter der
    Erlebnisgesellschaft* (Saarbrücken: VDM Verlag Dr. Müller, 2013).

    [60](#c1-note-0060a){#c1-note-0060}  Konrad Becker and Martin Wassermair
    (eds), *Phantom Kulturstadt* (Vienna: Löcker, 2009).

    [61](#c1-note-0061a){#c1-note-0061}  See, for example, Andres Bosshard,
    *Stadt hören: Klangspaziergänge durch Zürich* (Zurich: NZZ Libro,
    2009).

    [62](#c1-note-0062a){#c1-note-0062}  \"An alternate reality game (ARG),\"
    according to Wikipedia, "is an interactive networked narrative that uses
    the real world as a platform and employs transmedia storytelling to
    deliver a story that may be altered by players\' ideas or actions."

    [63](#c1-note-0063a){#c1-note-0063}  Eric von Hippel, *Democratizing
    Innovation* (Cambridge, MA: MIT Press, 2005).

    [64](#c1-note-0064a){#c1-note-0064}  It is often the case that the
    involvement of users simply serves to increase the efficiency of
    production processes and customer service. Many activities that were
    once undertaken at the expense of businesses now have to be carried out
    by the customers themselves. See Günter Voss, *Der arbeitende Kunde:
    Wenn Konsumenten zu unbezahlten Mitarbeitern werden* (Frankfurt am Main:
    Campus, 2005).

    [65](#c1-note-0065a){#c1-note-0065}  Beniger, *The Control Revolution*,
    pp. 411--16.

    [66](#c1-note-0066a){#c1-note-0066}  Louis Althusser, "Ideology and
    Ideological State Apparatuses (Notes towards an Investigation)," in
    Althusser, *Lenin and Philosophy and Other Essays*, trans. Ben Brewster
    (New York: Monthly Review Press, 1971), pp. 127--86.

    [67](#c1-note-0067a){#c1-note-0067}  Florian Becker et al. (eds),
    *Gramsci lesen! Einstiege in die Gefängnis­hefte* (Hamburg: Argument,
    2013), pp. 20--35.

    [68](#c1-note-0068a){#c1-note-0068}  Guy Debord, *The Society of the
    Spectacle*, trans. Fredy Perlman and Jon Supak (Detroit: Black & Red,
    1977).

    [69](#c1-note-0069a){#c1-note-0069}  Derrick de Kerckhove, "McLuhan and
    the Toronto School of Communication," *Canadian Journal of
    Communication* 14/4 (1989): 73--9.[]{#Page_182 type="pagebreak"
    title="182"}

    [70](#c1-note-0070a){#c1-note-0070}  Marshall McLuhan, *Understanding
    Media: The Extensions of Man* (New York: McGraw-Hill, 1964).

    [71](#c1-note-0071a){#c1-note-0071}  Nam June Paik, "Exposition of Music
    -- Electronic Television" (leaflet accompanying the exhibition). Quoted
    from Zhang Ga, "Sounds, Images, Perception and Electrons," *Douban*
    (March 3, 2016), online.

    [72](#c1-note-0072a){#c1-note-0072}  Laura R. Linder, *Public Access
    Television: America\'s Electronic Soapbox* (Westport, CT: Praeger,
    1999).

    [73](#c1-note-0073a){#c1-note-0073}  Hans Magnus Enzensberger,
    "Constituents of a Theory of the Media," in Noah Wardrip-Fruin and Nick
    Montfort (eds), *The New Media Reader* (Cambridge, MA: MIT Press, 2003),
    pp. 259--75.

    [74](#c1-note-0074a){#c1-note-0074}  Paul Groot, "Rabotnik TV,"
    *Mediamatic* 2/3 (1988), online.

    [75](#c1-note-0075a){#c1-note-0075}  Inke Arns, "Social Technologies:
    Deconstruction, Subversion and the Utopia of Democratic Communication,"
    *Medien Kunst Netz* (2004), online.

    [76](#c1-note-0076a){#c1-note-0076}  The term was coined at a series of
    conferences titled The Next Five Minutes (N5M), which were held in
    Amsterdam from 1993 to 2003. See \<\>.

    [77](#c1-note-0077a){#c1-note-0077}  Mark Dery, *Culture Jamming:
    Hacking, Slashing and Sniping in the Empire of Signs* (Westfield: Open
    Media, 1993); Luther Blissett et al., *Handbuch der
    Kommunikationsguerilla*, 5th edn (Berlin: Assoziationen A, 2012).

    [78](#c1-note-0078a){#c1-note-0078}  Critical Art Ensemble, *Electronic
    Civil Disobedience and Other Unpopular Ideas* (New York: Autonomedia,
    1996).

    [79](#c1-note-0079a){#c1-note-0079}  Today this method is known as a
    "distributed denial of service attack" (DDOS).

    [80](#c1-note-0080a){#c1-note-0080}  Max Weber, *Economy and Society: An
    Outline of Interpretive Sociology*, trans. Guenther Roth and Claus
    Wittich (Berkeley, CA: University of California Press, 1978), pp. 26--8.

    [81](#c1-note-0081a){#c1-note-0081}  Ernst Friedrich Schumacher, *Small
    Is Beautiful: Economics as if People Mattered*, 8th edn (New York:
    Harper Perennial, 2014).

    [82](#c1-note-0082a){#c1-note-0082}  Fred Turner, *From Counterculture
    to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of
    Digital Utopianism* (Chicago, IL: University of Chicago Press, 2006), p.
    21. In this regard, see also the documentary films *Das Netz* by Lutz
    Dammbeck (2003) and *All Watched Over by Machines of Loving Grace* by
    Adam Curtis (2011).

    [83](#c1-note-0083a){#c1-note-0083}  It was possible to understand
    cybernetics as a language of free markets or also as one of centralized
    planned economies. See Slava Gerovitch, *From Newspeak to Cyberspeak: A
    History of Soviet Cybernetics* (Cambridge, MA: MIT Press, 2002). The
    great interest of Soviet scientists in cybernetics rendered the term
    rather suspicious in the West, where research of this kind came to be
    pursued instead under the label of artificial intelligence.[]{#Page_183
    type="pagebreak" title="183"}

    [84](#c1-note-0084a){#c1-note-0084}  Claus Pias, "The Age of
    Cybernetics," in Pias (ed.), *Cybernetics: The Macy Conferences
    1946--1953* (Zurich: Diaphanes, 2016), pp. 11--27.

    [85](#c1-note-0085a){#c1-note-0085}  Norbert Wiener, one of the
    cofounders of cybernetics, explained this as follows in 1950: "In giving
    the definition of Cybernetics in the original book, I classed
    communication and control together. Why did I do this? When I
    communicate with another person, I impart a message to him, and when he
    communicates back with me he returns a related message which contains
    information primarily accessible to him and not to me. When I control
    the actions of another person, I communicate a message to him, and
    although this message is in the imperative mood, the technique of
    communication does not differ from that of a message of fact.
    Furthermore, if my control is to be effective I must take cognizance of
    any messages from him which may indicate that the order is understood
    and has been obeyed." Norbert Wiener, *The Human Use of Human Beings:
    Cybernetics and Society*, 2nd edn (London: Free Association Books,
    1989), p. 16.

    [86](#c1-note-0086a){#c1-note-0086}  Though presented here as distinct,
    these interests could in fact be held by one and the same person. In
    *From Counterculture to Cyberculture*, for instance, Turner discusses
    countercultural entrepreneurs.

    [87](#c1-note-0087a){#c1-note-0087}  Richard Brautigan, "All Watched
    Over by Machines of Loving Grace," in *All Watched Over by Machines of
    Loving Grace*, by Brautigan (San Francisco: The Communication Company,
    1967).

    [88](#c1-note-0088a){#c1-note-0088}  David D. Clark, "A Cloudy Crystal
    Ball: Visions of the Future," *Internet Engineering Task Force* (July
    1992), online.

    [89](#c1-note-0089a){#c1-note-0089}  Castells, *The Rise of the Network
    Society*.

    [90](#c1-note-0090a){#c1-note-0090}  Bill Gates, "An Open Letter to
    Hobbyists," *Homebrew Computer Club Newsletter* 2/1 (1976): 2.

    [91](#c1-note-0091a){#c1-note-0091}  Richard Stallman, "What Is Free
    Software?", *GNU Operating System*, online.

    [92](#c1-note-0092a){#c1-note-0092}  The fundamentally cooperative
    nature of programming was recognized early on. See Gerald M. Weinberg,
    *The Psychology of Computer Programming*, rev. edn (New York: Dorset
    House, 1998 \[originally published in 1971\]).

    [93](#c1-note-0093a){#c1-note-0093}  On the history of free software,
    see Volker Grassmuck, *Freie Software: Zwischen Privat- und
    Gemeineigentum* (Berlin: Bundeszentrale für politische Bildung, 2002).

    [94](#c1-note-0094a){#c1-note-0094}  In his first email on the topic, he
    wrote: "Hello everybody out there \[...\]. I'm doing a (free) operating
    system (just a hobby, won\'t be big and professional like gnu) \[...\].
    This has been brewing since April, and is starting to get ready. I\'d
    like any feedback on things people like/dislike." Linus Torvalds, "What
    []{#Page_184 type="pagebreak" title="184"}Would You Like to See Most in
    Minix," *Usenet Group* (August 1991), online.

    [95](#c1-note-0095a){#c1-note-0095}  ARD/ZDF, "Onlinestudie" (2015),
    online.

    [96](#c1-note-0096a){#c1-note-0096}  From 1997 to 2003, the average use
    of online media in Germany climbed from 76 to 138 minutes per day, and
    by 2013 it reached 169 minutes. Over the same span of time, the average
    frequency of use increased from 3.3 to 4.4 days per week, and by 2013 it
    was 5.8. From 2007 to 2013, the percentage of people who were members of
    private social networks like Facebook grew from 15 percent to 46
    percent. Of these, nearly 60 percent -- around 19 million people -- used
    such services on a daily basis. The source of this information is the
    article cited in the previous note.

    [97](#c1-note-0097a){#c1-note-0097}  "Internet Access Is 'a Fundamental
    Right'," *BBC News* (8 March 2010), online.

    [98](#c1-note-0098a){#c1-note-0098}  Manuel Castells, *The Power of
    Identity* (Oxford: Blackwell, 1997), pp. 7--22.
    :::
    :::

    [II]{.chapterNumber} [Forms]{.chapterTitle} {#c2}

    ::: {.section}
    With the emergence of the internet around the turn of the millennium as
    an omnipresent infrastructure for communication and coordination,
    previously independent cultural developments began to spread beyond
    their specific original contexts, mutually influencing and enhancing one
    another, and becoming increasingly intertwined. Out of a disconnected
    conglomeration of more or less marginalized practices, a new and
    specific cultural environment thus took shape, usurping or marginalizing
    an ever greater variety of cultural constellations. The following
    discussion will focus on three *forms* of the digital condition; that
    is, on those formal qualities that (notwithstanding all of its internal
    conflicts and contradictions) lend a particular shape to this cultural
    environment as a whole: *referentiality*, *communality*, and
    *algorithmicity*. It is only because most of the cultural processes
    operating under the digital condition are characterized by common formal
    features such as these that it is reasonable to speak of the digital
    condition in the singular.

    "Referentiality" is a method with which individuals can inscribe
    themselves into cultural processes and constitute themselves as
    producers. Understood as shared social meaning, the arena of culture
    entails that such an undertaking cannot be limited to the individual.
    Rather, it takes place within a larger framework whose existence and
    development depend on []{#Page_58 type="pagebreak" title="58"}communal
    formations. "Algorithmicity" denotes those aspects of cultural processes
    that are (pre-)arranged by the activities of machines. Algorithms
    transform the vast quantities of data and information that characterize
    so many facets of present-day life into dimensions and formats that can
    be registered by human perception. It is impossible to read the content
    of billions of websites. Therefore we turn to services such as Google\'s
    search algorithm, which reduces the data flood ("big data") to a
    manageable amount and translates it into a format that humans can
    understand ("small data"). Without them, human beings could not
    comprehend or do anything within a culture built around digital
    technologies, but they influence our understanding and activity in an
    ambivalent way. They create new dependencies by pre-sorting and making
    the (informational) world available to us, yet simultaneously ensure our
    autonomy by providing the preconditions that enable us to act.
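
    A minimal sketch can make this reduction tangible. The ranking function
    below has, of course, nothing to do with Google\'s actual (and
    unpublished) search algorithm; its crude term-frequency score merely
    illustrates the formal gesture of algorithmicity: an unreadably large
    corpus ("big data") is reduced to a short, humanly readable list
    ("small data").

    ```python
    # Hypothetical sketch: reducing "big data" to "small data".
    # The scoring is deliberately crude; the point is the reduction itself.
    import heapq

    def top_k(documents, query_terms, k=10):
        """Return the k highest-scoring documents for a query."""
        def score(text):
            words = text.lower().split()
            return sum(words.count(term) for term in query_terms)
        return heapq.nlargest(k, documents, key=score)

    corpus = [
        "the printing press made writing digital",
        "remix culture and referentiality",
        "algorithms pre-sort the informational world",
        # ... imagine billions of further entries
    ]
    print(top_k(corpus, ["digital", "algorithms"], k=2))
    ```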
    :::

    ::: {.section}
    Referentiality {#c2-sec-0002}
    --------------

    In the digital condition, one of the methods (if not *the* most
    fundamental method) enabling humans to participate -- alone or in groups
    -- in the collective negotiation of meaning is the system of creating
    references. In a number of arenas, referential processes play an
    important role in the assignment of both meaning and form. According to
    the art historian André Rottmann, for instance, "one might claim that
    working with references has in recent years become the dominant
    production-aesthetic model in contemporary
    art."[^1^](#c2-note-0001){#c2-note-0001a} This burgeoning engagement
    with references, however, is hardly restricted to the world of
    contemporary art. Referentiality is a feature of many processes that
    encompass the operations of various genres of professional and everyday
    culture. In its essence, it is the use of materials that are already
    equipped with meaning -- as opposed to so-called raw material -- to
    create new meanings. The referential techniques used to achieve this are
    extremely diverse, a fact reflected in the numerous terms that exist to
    describe them: re-mix, re-make, re-enactment, appropriation, sampling,
    meme, imitation, homage, tropicália, parody, quotation, post-production,
    re-performance, []{#Page_59 type="pagebreak" title="59"}camouflage,
    (non-academic) research, re-creativity, mashup, transformative use, and
    so on.

    These processes have two important aspects in common: the
    recognizability of the sources and the freedom to deal with them however
    one likes. The first creates an internal system of references from which
    meaning and aesthetics are derived in an essential
    manner.[^2^](#c2-note-0002){#c2-note-0002a} The second is the
    precondition enabling the creation of something that is both new and on
    the same level as the re-used material. This represents a clear
    departure from the historical--critical method, which endeavors to embed
    a source in its original context in order to re-determine its meaning,
    but also a departure from classical forms of rendition such as
    translations, adaptations (for instance, adapting a book for a film), or
    cover versions, which, though they translate a work into another
    language or medium, still attempt to preserve its original meaning.
    Re-mixes produced by DJs are one example of the referential treatment of
    source material. In his book on the history of DJ culture, the
    journalist Ulf Poschardt notes: "The remixer isn\'t concerned with
    salvaging authenticity, but with creating a new
    authenticity."[^3^](#c2-note-0003){#c2-note-0003a} For instead of
    distancing themselves from the past, which would follow the (Western)
    logic of progress or the spirit of the avant-garde, these processes
    refer explicitly to precursors and to existing material. In one and the
    same gesture, both one\'s own new position and the context and cultural
    tradition that is being carried on in one\'s own work are constituted
    performatively; that is, through one\'s own activity in the moment. I
    will discuss this phenomenon in greater depth below.

    To work with existing cultural material is, in itself, nothing new. In
    modern montages, artists likewise drew upon available texts, images, and
    treated materials. Yet there is an important difference: montages were
    concerned with bringing together seemingly incongruous but stable
    "finished pieces" in a more or less unmediated and fragmentary manner.
    This is especially clear in the collages by the Dadaists or in
    Expressionist literature such as Alfred Döblin\'s *Berlin
    Alexanderplatz*. In these works, the experience of Modernity\'s many
    fractures -- its fragmentation and turmoil -- was given a new aesthetic
    form. In his reference to montages, Adorno thus observed that the
    "negation of synthesis becomes a principle []{#Page_60 type="pagebreak"
    title="60"}of form."[^4^](#c2-note-0004){#c2-note-0004a} At least for a
    brief moment, he considered them an adequate expression for the
    impossibility of reconciling the contradictions of capitalist culture.
    Influenced by Adorno, the literary theorist Peter Bürger went so far as
    to call the montage the true "paradigm of
    modernity."[^5^](#c2-note-0005){#c2-note-0005a} In today\'s referential
    processes, on the contrary, pieces are not brought together as much as
    they are integrated into one another by being altered, adapted, and
    transformed. Unlike the older arrangement, it is not the fissures
    between elements that are foregrounded but rather their synthesis in the
    present. Conchita Wurst, the bearded diva, is not torn between two
    conflicting poles. Rather, she represents a successful synthesis --
    something new and harmonious that distinguishes itself by showcasing
    elements of the old order (man/woman) and simultaneously transcending
    them.

    This synthesis, however, is usually just temporary, for at any time it
    can itself serve as material for yet another rendering. Of course, this
    is far easier to pull off with digital objects than with analog objects,
    though these categories have become increasingly porous and thus
    increasingly problematic as opposites. More and more objects exist both
    in an analog and in a digital form. Think of photographs and slides,
    which have become so easy to digitalize. Even three-dimensional objects
    can now be scanned and printed. In the future, programmable materials
    with controllable and reversible features will cause the difference
    between the two domains to vanish: analog is becoming more and more
    digital.

    Montages and referential processes can only become widespread methods
    if, in a given society, cultural objects are available in three
    different respects. The first is economic and organizational: they must
    be affordable and easily accessible. Whoever is unable to afford books
    or get hold of them by some other means will not be able to reconfigure
    any texts. The second is cultural: working with cultural objects --
    which can always create deviations from the source in unpredictable ways
    -- must not be treated as taboo or illegal, but rather as an everyday
    activity without any special preconditions. It is much easier to
    manipulate a text from a secular newspaper than one from a religious
    canon. The third is material: it must be possible to use the material
    and to change it.[^6^](#c2-note-0006){#c2-note-0006a}[]{#Page_61
    type="pagebreak" title="61"}

    In terms of this third form of availability, montages differ from
    referential processes, for cultural objects can be integrated into one
    another -- instead of simply being placed side by side -- far more
    readily when they are digitally coded. Information is digitally coded
    when it is stored by means of a limited system of discrete (that is,
    separated by finite intervals or distances) signs that are meaningless
    in themselves. This allows information to be copied from one carrier to
    another without any loss and it allows the respective signs, whether
    individually or in groups, to be arranged freely. Seen in this way,
    digital coding is not necessarily bound to computers but can rather be
    realized with all materials: a mosaic is a digital process in which
    information is coded by means of variously colored tiles, just as a
    digital image consists of pixels. In the case of the mosaic, of course,
    the resolution is far lower. Alphabetic writing is a form of coding
    linguistic information by means of discrete signs that are, in
    themselves, meaningless. Consequently, Florian Cramer has argued that
    "every form of literature that is recorded alphabetically and not based
    on analog parameters such as ideograms or orality is already digital in
    that it is stored in discrete
    signs."[^7^](#c2-note-0007){#c2-note-0007a} However, the specific
    features of the alphabet, as Marshall McLuhan repeatedly underscored,
    did not fully develop until the advent of the printing
    press.[^8^](#c2-note-0008){#c2-note-0008a} It was the printing press, in
    other words, that first abstracted written signs from analog handwriting
    and transformed them into standardized symbols that could be repeated
    without any loss of information. In this practical sense, the printing
    press made writing digital, with the result that dealing with texts soon
    became radically different.
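
    The two properties just described -- lossless copying and the free
    rearrangement of discrete signs -- can be made concrete in a few lines
    of code. The sketch below is my own illustration, not the author\'s;
    the simulated "analog" degradation in particular is a toy model.

    ```python
    # Illustration (hypothetical): discrete signs can be copied without
    # loss and rearranged freely; an analog copy, by contrast, drifts.
    import hashlib
    import random

    original = "every alphabetically recorded text is stored in discrete signs"

    # A digital copy reproduces the discrete signs exactly: the hashes match.
    digital_copy = str(original)
    assert (hashlib.sha256(digital_copy.encode()).hexdigest()
            == hashlib.sha256(original.encode()).hexdigest())

    # The same signs can be freely rearranged into a new configuration.
    print(" ".join(random.sample(original.split(), k=5)))

    # Simulated analog copying: every generation perturbs one character,
    # so the text degrades step by step instead of staying identical.
    def analog_copy(text):
        i = random.randrange(len(text))
        return text[:i] + chr(ord(text[i]) + 1) + text[i + 1:]

    generation = original
    for _ in range(10):
        generation = analog_copy(generation)
    print(generation)  # no longer identical to the original
    ```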

    ::: {.section}
    ### Information overload 1.0 {#c2-sec-0003}

    The printing press made texts available in the three respects mentioned
    above. For one thing, their number increased rapidly, while their price
    sank significantly. During the first two generations after Gutenberg\'s
    invention -- that is, between 1450 and 1500 -- more books were produced
    than during the thousand years
    before.[^9^](#c2-note-0009){#c2-note-0009a} And that was just the
    beginning. Dealing with books and their content changed from the ground
    up. In manuscript culture, every new copy represented a potential
    degradation of the original, and therefore []{#Page_62 type="pagebreak"
    title="62"}the oldest sources (those that had undergone as little
    corruption as possible) were valued above all. With the advent of print
    culture, the idea took hold that texts could be improved by the process
    of editing, not least because the availability of old sources, through
    reprints and facsimiles, had also improved dramatically. Pure
    reproduction was mechanized and overcome as a cultural challenge.

    According to the historian Elizabeth Eisenstein, one of the first
    consequences of the greatly increased availability of the printed book
    was that it overcame the "tyranny of major authorities, which was common
    in small libraries."[^10^](#c2-note-0010){#c2-note-0010a} Scientists
    were now able to compare texts with one another and critique them to an
    unprecedented extent. Their general orientation turned around: instead
    of looking back in order to preserve what they knew, they were now
    looking ahead toward what they might not (yet) know.

    In order to organize this flood of rapidly accumulating texts,
    it was necessary to create new conventions: books were now specified by
    their author, publisher, and date of publication, not to mention
    furnished with page numbers. This enabled large numbers of texts to be
    catalogued and every individual text -- indeed, every single passage --
    to be referenced.[^11^](#c2-note-0011){#c2-note-0011a} Scientists could
    legitimize the pursuit of new knowledge by drawing attention to specific
    mistakes or gaps in existing texts. In the scientific culture that was
    developing at the time, the close connection between old and new
    material was not simply regarded as something positive; it was also
    urgently prescribed as a method of argumentation. Every text had to
    contain an internal system of references, and this was the basis for the
    development of schools, disciplines, and specific discourses.

    The digital character of printed writing also made texts available in
    the third respect mentioned above. Because discrete signs could be
    reproduced without any loss of information, it was possible not only to
    make perfect copies but also to remove content from one carrier and
    transfer it to another. Materials were no longer simply arranged
    sequentially, as in medieval compilations and almanacs, but manipulated
    to give rise to a new and independent fluid text. A set of conventions
    was developed -- one that remains in use today -- for modifying embedded
    or quoted material in order for it []{#Page_63 type="pagebreak"
    title="63"}to fit into its new environment. In this manner, quotations
    could be altered in such a way that they could be integrated seamlessly
    into a new text while remaining recognizable as direct citations.
    Several of these conventions, for instance the use of square brackets to
    indicate additions ("\[ \]") or ellipses to indicate omissions ("..."),
    are also used in this very book. At the same time, the conventions for
    making explicit references led to the creation of an internal reference
    system that made the singular position of the new text legible within a
    collective field of work. "Printing," to quote Elizabeth Eisenstein once
    again, "encouraged forms of combinatory activity which were social as
    well as intellectual. It changed relationships between men of learning
    as well as between systems of
    ideas."[^12^](#c2-note-0012){#c2-note-0012a} Exchange between scholars,
    in the form of letters and visits, intensified. The seventeenth century
    saw the formation of the *respublica literaria* or the "Republic of
    Letters," a loose network of scholars devoted to promoting the ideas of
    the Enlightenment. Beginning in the eighteenth century, the rapidly
    growing number of scientific fields was arranged and institutionalized
    into clearly distinct disciplines. In the nineteenth and twentieth
    centuries, diverse media-technical innovations made images, sounds, and
    moving images available, though at first only in analog formats. These
    created the preconditions that enabled the montage in all of its forms
    -- film cuts, collages, readymades, *musique concrète*, found-footage
    films, literary cut-ups, and artistic assemblages (to name only the
    best-known genres) -- to become the paradigm of Modernity.
    :::

    ::: {.section}
    ### Information overload 2.0 {#c2-sec-0004}

    It was not until new technical possibilities for recording, storing,
    processing, and reproduction appeared over the course of the 1990s that
    it also became increasingly possible to code and edit images, audio, and
    video digitally. Through the networking that followed close behind,
    society was flooded with an unprecedented amount of digitally
    coded information *of every sort*, and the circulation of this
    information accelerated. This was not, however, simply a quantitative
    change but also and above all a qualitative one. Cultural materials
    became available in a comprehensive []{#Page_64 type="pagebreak"
    title="64"}sense -- economically and organizationally, culturally
    (despite legal problems), and materially (because digitalized). Today it
    would not be bold to predict that nearly every text, image, or sound
    will soon exist in a digital form. Most of the new reproducible works
    are already "born digital" and digit­ally distributed, or they are
    physically produced according to digital instructions. Many initiatives
    are working to digitalize older, analog works. We are now anchored in
    the digital.

    Among the numerous digitalization projects currently under way, the most
    ambitious is that of Google Books, which, since its launch in 2004, has
    digitalized around 20 million books from the collections of large
    libraries and prepared them for full-text searches. Right from the
    start, a fierce debate arose about the legal and cultural acceptability
    of this project. One concern was whether Google\'s process infringed
    upon the rights of the authors and publishers of the scanned books or
    whether, according to American law, it qualified as "fair use," in which
    case there would be no obligation for the company to seek authorization
    or offer compensation. The second main concern was whether it would be
    culturally or politically appropriate for a private corporation to hold
    a de facto monopoly over the digital heritage of book culture. The first
    issue incited a complex legal battle that, in 2013, was decided in
    Google\'s favor by a judge on the United States District Court in New
    York.[^13^](#c2-note-0013){#c2-note-0013a} At the heart of the second
    issue was the question of how a public library should look in the
    twenty-first century.[^14^](#c2-note-0014){#c2-note-0014a} In November
    of 2008, the European Commission and the cultural ministers of the
    European Union launched the virtual Europeana library, after a number
    of European countries had already invested hundreds of
    millions of euros in various digitalization
    initiatives.[^15^](#c2-note-0015){#c2-note-0015a} Today, Europeana
    serves as a common access point to the online archives of around 2,500
    European cultural institutions. By the end of 2015, its digital holdings
    had grown to include more than 40 million objects. This is still,
    however, a relatively small number, for it has been estimated that
    European archives and museums contain more than 220 million
    natural-historical and more than 260 million cultural-historical
    objects. In the United States, discussions about the future of libraries
    []{#Page_65 type="pagebreak" title="65"}led to the 2013 launch of the
    Digital Public Library of America (DPLA), which, like Europeana,
    provides common access to the digitalized holdings of archives, museums,
    and libraries. By now, more than 14 million items can be viewed there.

    In one way or another, however, both the private and the public projects
    of this sort have been limited by binding copyright laws. The librarian
    and book historian Robert Darnton, one of the most prominent advocates
    of the Digital Public Library of America, has accordingly stated: "The
    main impediment to the DPLA\'s growth is legal, not financial. Copyright
    laws could exclude everything published after 1964, most works published
    after 1923, and some that go back as far as
    1873."[^16^](#c2-note-0016){#c2-note-0016a} The legal situation in
    Europe is similar to that in the United States. It, too, massively
    obstructs the work of public
    institutions.[^17^](#c2-note-0017){#c2-note-0017a} In many cases, this
    has had the absurd consequence that certain materials, though they have
    been fully digitalized, may only be accessed in part or exclusively
    inside the facilities of a particular institution. Whereas companies
    such as Google can afford to wage long legal battles, and in the
    meantime create precedents, public institutions must proceed with great
    caution, not least to avoid the accusation of using public funds to
    violate copyright laws. Thus, they tend to fade into the background and
    leave users, who are unfamiliar with the complex legal situation, with
    the impression that they are even more out-of-date than they often are.

    Informal actors, who explicitly operate beyond the realm of copyright
    law, are not faced with such restrictions. UbuWeb, for instance, which
    is the largest online archive devoted to the history of
    twentieth-century avant-garde art, was not created by an art museum but
    rather on the initiative of an individual artist, Kenneth Goldsmith.
    Since 1996, he has been collecting historically relevant materials that
    were no longer in distribution and placing them online for free and
    without any stipulations. He forgoes the process of obtaining the rights
    to certain works of art because, as he remarks on the website, "Let\'s
    face it, if we had to get permission from everyone on UbuWeb, there
    would be no UbuWeb."[^18^](#c2-note-0018){#c2-note-0018a} It would
    simply be too demanding to do so. Because he pursues the project without
    any financial interest and has saved so much []{#Page_66
    type="pagebreak" title="66"}from oblivion, his efforts have provoked
    hardly any legal difficulties. On the contrary, UbuWeb has become so
    important that Goldsmith has begun to receive more and more material
    directly from artists and their heirs, who would like certain works not
    to be forgotten. Nevertheless, or perhaps for this very reason,
    Goldsmith repeatedly stresses the instability of his archive, which
    could disappear at any moment if he loses interest in maintaining it or
    if something else happens. Users are therefore able to download works
    from UbuWeb and archive, on their own, whatever items they find most
    important. Of course, this fragility contradicts the idea of an archive
    as a place for long-term preservation. Yet such a task could only be
    undertaken by an institution that is oriented toward the long term.
    Because of the existing legal conditions, however, it is hardly likely
    that such an institution will come about.

    Whereas Goldsmith is highly adept at operating within a niche that not
    only tolerates but also accepts the violation of formal copyright
    claims, large websites responsible for the uncontrolled dissemination of
    digital content do not bother with such niceties. Their purpose is
    rather to ensure that all popular content is made available digitally
    and for free, whether legally or not. These sites, too, have experienced
    uninterrupted growth. By the end of 2015, tens of millions of people
    were simultaneously using the BitTorrent tracker The Pirate Bay -- the
    largest nodal point for file-sharing networks during the last decade --
    to exchange several million digital files with one
    another.[^19^](#c2-note-0019){#c2-note-0019a} And this was happening
    despite protracted attempts to block or close down the file-sharing site
    by legal means and despite a variety of competing services. Even when
    the founders of the website were sentenced in Sweden to pay large fines
    (around €3 million) and to serve time in prison, the site still did not
    disappear from the internet.[^20^](#c2-note-0020){#c2-note-0020a} At the
    same time, new providers have entered the market of free access; their
    method is not to facilitate distributed downloads but rather to offer,
    on account of the drastically reduced cost of data transfers, direct
    streaming. Although some of these services are relatively easy to locate
    and some have been legally banned -- the best-known case in Germany
    being that of the popular site kino.to -- more of them continue to
    appear.[^21^](#c2-note-0021){#c2-note-0021a} Moreover, this phenomenon
    []{#Page_67 type="pagebreak" title="67"}is not limited to music and
    films, but encompasses all media formats. For instance, it is
    foreseeable that the number of freely available plans for 3D objects
    will increase along with the popularity of 3D printing. It has almost
    escaped notice, however, that so-called "shadow libraries" have been
    popping up everywhere; the latter are not accessible to the public but
    rather to members, for instance, of closed exchange platforms or of
    university intranets. Few seminars take place any more without a corpus
    of scanned texts, regardless of whether this practice is legal or
    not.[^22^](#c2-note-0022){#c2-note-0022a}

    The lines between these different mechanisms of access are highly
    permeable. Content acquired legally can make its way to file-sharing
    networks as an illegal copy; content available for free can be sold in
    special editions; content from shadow libraries can make its way to
    publicly accessible sites; and, conversely, content that was once freely
    available can disappear into shadow libraries. As regards free access,
    the details of this rapidly changing landscape are almost
    inconsequential, for the general trend that has emerged from these
    various dynamics -- legal and illegal, public and private -- is
    unambiguous: in a comprehensive and practical sense, cultural works of
    all sorts will become freely available despite whatever legal and
    technical restrictions might be in place. Whether absolutely all
    material will be made available in this way is not the decisive factor,
    at least not for the individual, for, as the German Library Association
    has stated, "it is foreseeable that non-digitalized material will
    increasingly escape the awareness of users, who have understandably come
    to appreciate the ubiquitous availability and more convenient
    processability of the digital versions of analog
    objects."[^23^](#c2-note-0023){#c2-note-0023a} In this context of excess
    information, it is difficult to determine whether a particular work or a
    crucial reference is missing, given that a multitude of other works and
    references can be found in their place.

    At the same time, prodigious amounts of new material are being produced
    that, before the era of digitalization and networks, never could have
    existed at all or never would have left the private sphere. An example
    of this is amateur photography. This is nothing new in itself; as early
    as 1888, Kodak was marketing its films and apparatus with the slogan
    "You press the button, we do the rest," and ever since, []{#Page_68
    type="pagebreak" title="68"}drawers and albums have been overflowing
    with photographs. With the advent of digitalization, however, certain
    economic and material limitations ceased to exist that, until then, had
    caused most private photographers to think twice about how many shots
    they wanted to take. After all, they had to pay for the film to be
    developed and then store the pictures somewhere. Cameras also became
    increasingly "intelligent," which improved the technical quality of
    photo­graphs. Even complex procedures such as increasing the level of
    detail or the contrast ratio -- the difference between an image\'s
    brightest and darkest points -- no longer require any specialized
    knowledge of photochemical processes in the darkroom. Today, such
    features are often pre-installed in many cameras as an option (high
    dynamic range). Ever since the introduction of built-in digital cameras
    for smartphones, anyone with such a device can take pictures everywhere
    and at any time and then store them digitally. Images can then be posted
    on online platforms and shared with others. By the middle of 2015,
    Flickr -- the largest but certainly not the only specialized platform of
    this sort -- had more than 112 million registered users participating in
    more than 2 million groups. Every user has access to free storage space
    for about half a million of his or her own pictures. At that point, in
    other words, the platform was equipped to manage more than 55 trillion
    photographs. Around 3.5 million images were being uploaded every day,
    many of which could be accessed by anyone. This may seem like a lot, but
    in reality it is just a small portion of the pictures that are posted
    online on a daily basis. Around that same time -- again, the middle of
    2015 -- approximately 350 million pictures were being posted on Facebook
    *every day*. The total number of photographs saved there has been
    estimated to be 250 billion. In addition, there are also large platforms
    for professional "stock photos" (supplies of pre-produced images that
    are supposed to depict generic situations) and the databanks of
    professional agencies such as Getty Images or Corbis. All of these images
    can be found easily and acquired quickly (though not always for free).
    Yet photography is not unique in this regard. In all fields, the number
    of cultural artifacts available to the public on specialized platforms
    has been increasing rapidly in recent years.[]{#Page_69 type="pagebreak"
    title="69"}
    :::

    ::: {.section}
    ### The great disorder {#c2-sec-0005}

    The old orders that had been responsible for filtering, organizing, and
    publishing cultural material -- culture industries, mass media,
    libraries, museums, archives, etc. -- are incapable of managing almost
    any aspect of this deluge. They can barely function as gatekeepers any
    more between those realms that, with their help, were once defined as
    "private" and "public." Their decisions about what is or is not
    important matter less and less. Moreover, having already been subjected
    to a decades-long critique, their rules, which had been relatively
    binding and formative over long periods of time, are rapidly losing
    practical significance.

    Even Europeana, a relatively small project based on traditional museums
    and archives and with a mandate to make the European cultural heritage
    available online, has contributed to the disintegration of established
    orders: it indiscriminately brings together 2,500 previously separated
    institutions. The specific semantic contexts that formerly shaped the
    history and orientation of institutions have been dissolved or reduced
    to dry meta-data, and millions upon millions of cultural artifacts are
    now equidistant from one another. Instead of certain artifacts being
    firmly anchored in a location, for instance in an ethnographic
    collection devoted to the colonial history of France, it is now possible
    for everything to exist side by side. Europeana is not an archive in the
    traditional sense, or even a museum with a fixed and meaningful order;
    rather, it is just a standard database. Everything in it is just one
    search request away, and every search generates a unique order in the
    form of a sequence of visible artifacts. As a result, individual objects
    are freed from those meta-narratives, created by the museums and
    archives that preserve them, which situate them within broader contexts
    and assign more or less clear meanings to them. They consequently become
    more open to interpretation. A search result does not articulate an
    interpretive field of reference but merely a connection, created by
    constantly changing search algorithms, between a request and the corpus
    of material, which is likewise constantly changing.

    Precisely because it offers so many different approaches to more or less
    freely combinable elements of information, []{#Page_70 type="pagebreak"
    title="70"}the order of the database no longer really provides a
    framework for interpreting search results in a meaningful way.
    Altogether, the meaning of many objects and signs is becoming even more
    uncertain. On the one hand, this is because the connection to their
    original context is becoming fragile; on the other hand, it is because
    they can appear in every possible combination and in the greatest
    variety of reception contexts. In less official archives and in less
    specialized search engines, the dissolution of context is far more
    pronounced than it is in the case of the Europeana project. For the sake
    of orienting its users, for instance, YouTube provides the date when a
    video has been posted, but there is no indication of when a video was
    actually produced. Further information provided about a video, for
    example in the comments section, is essentially unreliable. It might be
    true -- or it might not. The internet researcher David Weinberger has
    called this the "new digital disorder," which, at least for many users,
    is an entirely apt description.[^24^](#c2-note-0024){#c2-note-0024a} For
    individuals, this disorder has created both the freedom to establish
    their own orders and the obligation of doing so, regardless of whether
    or not they are ready for the task.

    This tension between freedom and obligation is at its strongest online,
    where the excess of culture and its more or less free availability are
    immediate and omnipresent. In fact, everything that can be retrieved
    online is culture in the sense that everything -- from the deepest layer
    of hardware to the most superficial tweet -- has been made by someone
    with a particular intention, and everything has been made to fit a
    particular order. And it is precisely this excess of often contradictory
    meanings and limited, regional, and incompatible orders that leads to
    disorder and meaninglessness. This is not limited to the online world,
    however, because the latter is not self-contained. In an essential way,
    digital media also serve to organize the material world. On the basis of
    extremely complex and opaque yet highly efficient logistical and
    production processes, people are also confronted with constantly
    changing material things about whose origins and meanings they have
    little idea. Even something as simple to produce as yoghurt has usually
    traveled a thousand kilometers before it ends up on a shelf in the
    supermarket. The logistics that enable this are oriented toward
    flexibility; []{#Page_71 type="pagebreak" title="71"}they bring elements
    together as efficiently as possible. It is nearly impossible for final
    customers to find out anything about the ingredients. Customers are
    merely supposed to be oriented by signs and notices such as "new" or "as
    before," "natural," and "healthy," which are written by specialists and
    meant to manipulate shoppers as much as the law allows. Even here, in
    corporeal everyday life, every individual has to deal with a surge of
    excess and disorder that threatens to erode the original meaning
    conferred on every object -- even where such meaning was once entirely
    unproblematic, as in the case of
    yoghurt.[^25^](#c2-note-0025){#c2-note-0025a}
    :::

    ::: {.section}
    ### Selecting and organizing {#c2-sec-0006}

    In this situation, the creation of one\'s own system of references has
    become a ubiquitous and generally accessible method for organizing all
    of the ambivalent things that one encounters on a given day. Such things
    are thus arranged within a specific context of meaning that also
    (co)determines one\'s own relation to the world and subjective position
    in it. Referentiality takes place through three types of activity, the
    first being simply to attract attention to certain things, which affirms
    (at least implicitly) that they are important. With every single picture
    posted on Flickr, every tweet, every blog post, every forum post, and
    every status update, the user is doing exactly that; he or she is
    communicating to others: "Look over here! I think this is important!" Of
    course, there is nothing new to filtering and allocating meaning. What
    is new, however, is that these processes are no longer being carried out
    primarily by specialists at editorial offices, museums, or archives, but
    have become daily requirements for a large portion of the population,
    regardless of whether they possess the material and cultural resources
    that are necessary for the task.
    :::

    ::: {.section}
    ### The loop through the body {#c2-sec-0007}

    Given the flood of information that perpetually surrounds everyone, the
    act of focusing attention and reducing vast numbers of possibilities
    into something concrete has become a productive achievement, however
    banal each of these micro-activities might seem on its own, and even if,
    at first, []{#Page_72 type="pagebreak" title="72"}the only concern might
    be to focus the attention of the person doing it. The value of this
    (often very brief) activity is that it singles out elements from the
    uniform sludge of unmanageable complexity. Something plucked out in this
    way gains value because it has required the use of a resource that
    cannot be reproduced, that exists outside of the world of information
    and that is invariably limited for every individual: our own lifetime.
    Every status update that is not machine-generated means that someone has
    invested time, be it only a second, in order to point to this and not to
    something else. Thus, a process of validating what exists in the excess
    takes place in connection with the ultimate scarcity -- our own
    lifetimes, our own bodies. Even if the value generated by this act is
    minimal or diffuse, it is still -- to borrow from Gregory Bateson\'s
    famous definition of information -- a difference that makes a difference
    in this stream of equivalencies and
    meaninglessness.[^26^](#c2-note-0026){#c2-note-0026a} This singling out
    -- this use of one\'s own body to generate meaning -- does not, however,
    take place by means of mere micro-activities throughout the day; it is
    also a defining aspect of complex cultural strategies. In recent years,
    re-enactment (that is, the re-staging of historical situations and
    events) has established itself as a common practice in contemporary art.
    Unlike traditional re-enactments, such as those of historically
    significant battles, which attempt to represent the past as faithfully
    as possible, "artistic re-enactments," according to the curator Inke
    Arns, "are not an affirmative confirmation of the past; rather, they are
    *questionings* of the present through reaching back to historical
    events," especially as they are represented in images and other forms of
    documentation. Thanks to search engines and databases, such
    representations are more or less always present, though in the form of
    indeterminate images, ambivalent documents, and contentious
    interpretations. Artists in this situation, as Arns explains,

    ::: {.extract}
    do not ask the naïve question about what really happened outside of the
    history represented in the media -- the "authenticity" beyond the images
    -- instead, they ask what the images we see might mean concretely to us,
    if we were to experience these situations personally. In this way the
    artistic reenactment confronts the general feeling of insecurity about
    the meaning []{#Page_73 type="pagebreak" title="73"}of images by using a
    paradoxical approach: through erasing distance to the images and at the
    same time distancing itself from the
    images.[^27^](#c2-note-0027){#c2-note-0027a}
    :::

    This paradox manifests itself in that the images are appropriated and
    sublated through the use of one\'s own body in the re-enactments. They
    simultaneously refer to the past and create a new reality in the
    present. In perhaps the best-known re-enactment of this type, the artist
    Jeremy Deller revived, in 2001, the Battle of Orgreave, one of the
    central episodes of the British miners\' strike of 1984 and 1985. This
    historical event is regarded as a turning point in the protracted
    conflict between Margaret Thatcher\'s government and the labor unions --
    a key moment in the implementation of Great Britain\'s neoliberal
    regime, which is still in effect today. In Deller\'s re-enactment, the
    heart of the matter is not historical accuracy, which is always
    controversial in such epoch-changing events. Rather, he focuses on the
    former participants -- the miners and police officers alike, who, along
    with non-professional actors, lived through the situation again -- in
    order to explore both the distance from the events and their
    representation in the media, as well as their ongoing biographical and
    societal presence.[^28^](#c2-note-0028){#c2-note-0028a}

    Elaborate practices of embodying media images through processes of
    appropriation and distancing have also found their way into popular
    culture, for instance in so-called "cosplay." The term, which is a
    contraction of the words "costume" and "play," was coined by a Japanese
    man named Nobuyuki Takahashi. In 1984, while attending the World Science
    Fiction Convention in Los Angeles, he used the word to describe how
    certain attendees dressed up as their favorite characters.
    Participants in cosplay embody fictitious figures -- mostly from the
    worlds of science fiction, comics/manga, or computer games -- by donning
    home-made costumes and striking characteristic
    poses.[^29^](#c2-note-0029){#c2-note-0029a} The often considerable
    effort that goes into this is mostly reflected in the costumes, not in
    the choreography or dramaturgy of the performance. What is significant
    is that these costumes are usually not exact replicas but are rather
    freely adapted by each player to represent the character as he or she
    interprets it to be. Accordingly, "Cosplay is a form of appropriation
    []{#Page_74 type="pagebreak" title="74"}that transforms, actualizes and
    performs an existing story in close connection to the fan\'s own
    identity."[^30^](#c2-note-0030){#c2-note-0030a} This practice,
    admittedly, goes back quite far in the history of fan culture, but it
    has experienced a striking surge through the opportunity for fans to
    network with one another around the world, to produce costumes and
    images of professional quality, and to place themselves on the same
    level as their (fictitious) idols. By now it has become a global
    subculture whose members are active not only online but also at hundreds
    of conventions throughout the world. In Germany, an annual cosplay
    competition has been held since 2007 (it is organized by the Frankfurt
    Book Fair and Animexx, the country\'s largest manga and anime
    community). The scene, which has grown and branched out considerably
    over the past few years, has slowly begun to professionalize, with
    shops, books, and players who make paid appearances. Even in fan
    culture, stars are born. As soon as the subculture has exceeded a
    certain size, this gradual onset of commercialization will undoubtedly
    lead to tensions within the community. For now, however, two of its
    noteworthy features remain: the power of the desire to appropriate, in a
    bodily manner, characters from vast cultural universes, and the
    widespread combination of free interpretation and meticulous attention
    to detail.
    :::

    ::: {.section}
    ### Lineages and transformations {#c2-sec-0008}

    Because of the great effort that they require, re-enactment and cosplay
    are somewhat extreme examples of singling out, appropriating, and
    referencing. As everyday activities, however, these three practices take
    place almost incidentally and usually do not make any significant or
    lasting difference. Yet they do not happen just once, but over and over
    again. They accumulate and thus constitute referentiality\'s second type
    of activity: the creation of connections between the many things that
    have attracted attention. In such a way, paths are forged through the
    vast complexity. These paths, which can be formed, for instance, by
    referring to different things one after another, likewise serve to
    produce and filter meaning. Things that can potentially belong in
    multiple contexts are brought into a single, specific context. For the
    individual []{#Page_75 type="pagebreak" title="75"}producer, this is how
    fields of attention, reference systems, and contexts of meaning are
    first established. In the third step, the things that have been selected
    and brought together are changed. Perhaps something is removed to modify
    the meaning, or perhaps something is added that was previously absent or
    unavailable. Either way, referential culture is always producing
    something new.

    These processes are applied both within individual works (referentiality
    in a strict sense) and within currents of communication that consist of
    numerous molecular acts (referentiality in a broader sense). This latter
    sort of compilation is far more widespread than the creation of new
    re-mix works. Consider, for example, the billionfold sequences of status
    updates, which sometimes involve a link to an interesting video,
    sometimes a post of a photograph, then a short list of favorite songs, a
    top 10 chart from one\'s own feed, or anything else. Such methods of
    inscribing oneself into the world by means of references, combinations,
    or alterations are used to create meaning through one\'s own activity in
    the world and to constitute oneself in it, both for one\'s self and for
    others. In a culture that manifests itself to a great extent through
    mediatized communication, people have to constitute themselves through
    such acts, if only by posting
    "selfies."[^31^](#c2-note-0031){#c2-note-0031a} Not to do so would be to
    risk invisibility and being forgotten.

    On this basis, a genuine digital folk culture of re-mixing and mashups
    has formed in recent years on online platforms, in game worlds, but also
    through cultural-economic productions of individual pieces or short
    series. It is generated and maintained by innumerable people with
    varying degrees of intensity and ambition. Its common feature with
    traditional folk culture, in choirs or elsewhere, is that production
    and reception (but also reproduction and creation) largely coincide.
    Active participation admittedly requires a certain degree of
    proficiency, interest, and engagement, but usually not any extraordinary
    talent. Many classical institutions such as museums and archives have
    been attempting to take part in this folk culture by setting up their
    own re-mix services. They know that the "public" is no longer able or
    willing to limit its engagement with works of art and cultural history
    to one of quiet contemplation. At the end of 2013, even []{#Page_76
    type="pagebreak" title="76"}the Deutsches Symphonie-Orchester Berlin
    initiated a re-mix competition. A year earlier, the Rijksmuseum in
    Amsterdam launched its so-called "Rijksstudio." Since then, the museum has
    made available on its website more than 200,000 high-resolution images
    from its collection. Users are free to use these to create their own
    re-mixes online and share them with others. Interestingly, the
    Rijksmuseum does not distinguish between the work involved in
    transforming existing pieces and that involved in curating its own
    online gallery.

    Referential processes have no beginning and no end. Any material that is
    used to make something new has a pre-history of its own, even if its
    traces are lost in clouds of uncertainty. Upon closer inspection, this
    cloud might clear a little bit, but it is extremely uncommon for a
    genuine beginning -- a *creatio ex nihilo* -- to be revealed. This
    raises the question of whether there can really be something like
    originality in the emphatic sense.[^32^](#c2-note-0032){#c2-note-0032a}
    Regardless of the answer to this question, the fact that by now many
    people select, combine, and alter objects on a daily basis has led to a
    slow shift in our perception and sensibilities. In light of the
    experiences that so many people are creating, the formerly exotic
    theories of deconstruction suddenly seem anything but outlandish. Nearly
    half a century ago, Roland Barthes defined the text as a fabric of
    quotations, and this incited vehement
    opposition.[^33^](#c2-note-0033){#c2-note-0033a} "But of course," one
    would be inclined to say today, "that can be statistically proven
    through software analysis!" Amazon identifies books by means of their
    "statistically improbable phrases"; that is, by means of textual
    elements that are highly unlikely to occur elsewhere. This implies, of
    course, that books contain many textual elements that are highly likely
    to be found in other texts, without suggesting that such elements would
    have to be regarded as plagiarism.
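
    The mechanics behind such a claim are simple enough to sketch. The
    following is a minimal illustration in Python, not a description of
    Amazon\'s actual (proprietary) system: a two-word phrase counts as
    "statistically improbable" when it occurs in a given text far more
    often than its frequency in a background corpus would predict. The
    threshold `min_ratio` and the add-one smoothing are assumptions chosen
    purely for demonstration.

    ```python
    from collections import Counter

    def bigrams(text):
        """Split a text into lowercase two-word phrases."""
        words = text.lower().split()
        return [" ".join(pair) for pair in zip(words, words[1:])]

    def improbable_phrases(text, background, min_ratio=5.0):
        """Return bigrams strongly over-represented in `text` relative
        to a background corpus (an illustrative stand-in for
        "statistically improbable phrases"). Add-one smoothing keeps
        phrases unseen in the background from dividing by zero."""
        text_counts = Counter(bigrams(text))
        bg_counts = Counter(bigrams(background))
        text_total = max(sum(text_counts.values()), 1)
        bg_total = max(sum(bg_counts.values()), 1)
        flagged = []
        for phrase, count in text_counts.items():
            p_text = count / text_total
            p_bg = (bg_counts[phrase] + 1) / (bg_total + 1)
            if p_text / p_bg >= min_ratio:
                flagged.append((phrase, round(p_text / p_bg, 1)))
        return sorted(flagged, key=lambda item: -item[1])
    ```

    Run over an entire book, only a handful of phrases typically clears
    such a threshold; the overwhelming majority of word combinations are,
    as noted above, highly likely to be found in other texts as well.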

    In the Gutenberg Galaxy, with its fixation on writing, the earliest
    textual document is usually understood to represent a beginning. If no
    references to anything before can be identified, the text is then
    interpreted as a closed entity, as a new text. Thus, fairy tales and
    sagas, which are typical elements of oral culture, are still more
    strongly associated with the names of those who recorded them than with
    the names of those who narrated them. This does not seem very convincing
    today. In recent years, literary historians have made strong []{#Page_77
    type="pagebreak" title="77"}efforts to shift the focus of attention to
    the people (mostly women) who actually told certain fairy tales. In
    doing so, they have been able to work out to what extent the respective
    narrators gave shape to specific stories, which were written down as
    common versions, and to what extent these stories reflect their
    narrators\' personal histories.[^34^](#c2-note-0034){#c2-note-0034a}

    Today, after more than 40 years of deconstructionist theory and a change
    in our everyday practices, it is no longer controversial to read works
    -- even by canonical figures like Wagner or Mozart -- in such a way as
    to highlight the other works, either by the artists in question or by
    other artists, that are contained within
    them.[^35^](#c2-note-0035){#c2-note-0035a} This is not an expression of
    decreased appreciation but rather an indication that, as Zygmunt Bauman
    has stressed, "The way human beings understand the world tends to be at
    all times *praxeomorphic*: it is always shaped by the know-how of the
    day, by what people can do and how they usually go about doing
    it."[^36^](#c2-note-0036){#c2-note-0036a} And the everyday practice of
    today is one of singling out, bringing together, altering, and adding.
    Accordingly, not only has our view of current cultural production
    shifted; our view of cultural history has shifted as well. As always,
    the past is made to suit the sensibilities of the present.

    As a rule, however, things that have no beginning also have no end. This
    is not only because they can in turn serve as elements for other new
    contexts of meaning, but also because the attention paid to the context
    in which they take on specific meaning is sensitive to the work that has
    to be done to maintain the context itself. Even timelessness is an
    elaborate everyday business. The attempt to rescue works of art from the
    ravages of time -- to preserve them forever -- means that they regularly
    need to be restored. Every restoration inevitably stirs a debate about
    whether the planned interventions are appropriate and about how to deal
    with the traces of previous interventions, which, from the current
    perspective, often seem to be highly problematic. Whereas, just a
    generation ago, preservationists ensured that such interventions
    remained visible (as articulations of the historical fissures that are
    typical of Modernity), today greater emphasis is placed on reducing
    their visibility and re-creating the illusion of an "original condition"
    (without, however, impeding any new functionality that a piece might
    have in the present). []{#Page_78 type="pagebreak" title="78"}The
    historically faithful restoration of the Berlin City Palace, combined
    with its repurposing as a museum and meeting place, is typical of this
    new attitude toward our historical heritage.

    In everyday activity, too, the never-ending necessity of this work can
    be felt at all times. Here the issue is not timelessness, but rather
    that the established contexts of meaning quickly become obsolete and
    therefore have to be continuously affirmed, expanded, and changed in
    order to maintain the relevance of the field that they define. This
    lends referentiality a performative character that combines productive
    and reproductive dimensions. That which is not constantly used and
    renewed simply disappears. Often, however, this only means that it will
    sink into an endless archive and become unrealized potential until
    someone reactivates it, breathes new life into it, rouses it from its
    slumber, and incorporates it into a newly relevant context of meaning.
    "To be relevant," according to the artist Eran Schaerf, "things must be
    recyclable."[^37^](#c2-note-0037){#c2-note-0037a}

    Alone, everyone is overwhelmed by the task of having to generate meaning
    against this backdrop of all-encompassing meaninglessness. First, the
    challenge is too great for any individual to overcome; second, meaning
    itself is only created intersubjectively. While it can admittedly be
    asserted by a single person, others have to confirm it before it can
    become a part of culture. For this reason, the actual subject of
    cultural production under the digital condition is not the individual
    but rather the next-largest unit.
    :::
    :::

    ::: {.section}
    Communality {#c2-sec-0009}
    -----------

    On one\'s own, it is impossible to orient oneself within a complex
    environment. Meaning -- as well as the ability to act -- can only be
    created, reinforced, and altered in exchange with others. This is
    nothing noteworthy; biologically and culturally, people are social
    beings. What has changed historically is how people are integrated into
    larger contexts, how processes of exchange are organized, and what every
    individual is expected to do in order to become a fully fledged
    participant in these processes. For nearly 50 years, traditional
    []{#Page_79 type="pagebreak" title="79"}institutions -- that is,
    hierarchically and bureaucratically organized civic institutions such
    as established churches, labor unions, and political parties -- have
    continuously been losing members.[^38^](#c2-note-0038){#c2-note-0038a}
    In tandem with this, the overall commitment to the identities, family
    values, and lifestyles promoted by these institutions has likewise been
    in decline. The great mechanisms of socialization from the late stages
    of the Gutenberg Galaxy have been losing more and more of their
    influence, though at different speeds and to different extents. All
    told, however, explicitly and collectively normative impulses are
    decreasing, while others (implicitly economic, above all) are on the
    rise. According to mainstream sociology, a cause or consequence of this
    is the individualization and atomization of society. As early as the
    middle of the 1980s, Ulrich Beck claimed: "In the individualized society
    the individual must therefore learn, on pain of permanent disadvantage,
    to conceive of himself or herself as the center of action, as the
    planning office with respect to his/her own biography, abilities,
    orientations, relationships and so
    on."[^39^](#c2-note-0039){#c2-note-0039a} Over the past three decades,
    the dominant neoliberal political orientation, with its strong stress on
    the freedom of the individual -- to realize oneself as an individual
    actor in the allegedly open market and in opposition to allegedly
    domineering collective mechanisms -- has radicalized these tendencies
    even further. The ability to act, however, is not only a question of
    one\'s personal attitude but also of material resources. And it is this
    same neoliberal politics that deprives so many people of the resources
    needed to take advantage of these new freedoms in their own lives. As a
    result they suffer, in Ulrich Beck\'s terms, "permanent disadvantage."

    Under the digital condition, this process has permeated the finest
    structures of social life. Individualization, commercialization, and the
    production of differences (through design, for instance) are ubiquitous.
    Established civic institutions are not alone in being hollowed out;
    relatively new collectives are also becoming more differentiated, a
    development that I outlined above with reference to the transformation
    of the gay movement into the LGBT community. Yet nevertheless, or
    perhaps for this very reason, new forms of communality are being formed
    in these offshoots -- in the small activities of everyday life. And
    these new communal formations -- rather []{#Page_80 type="pagebreak"
    title="80"}than individual people -- are the actual subjects who create
    the shared meaning that we call culture.

    ::: {.section}
    ### The problem of the "community" {#c2-sec-0010}

    I have chosen the rather cumbersome expression "communal formation" in
    order to avoid the term "community" (*Gemeinschaft*), although the
    latter is used increasingly often in discussions of digital cultures and
    has played an important role, from the beginning, in conceptions of
    networking. Viewed analytically, however, "community" is a problematic
    term because it is almost hopelessly overloaded. Particularly in the
    German-speaking tradition, Ferdinand Tönnies\'s polar distinction
    between "community" (*Gemeinschaft*) and "society" (*Gesellschaft*),
    which he introduced in 1887, remains
    influential.[^40^](#c2-note-0040){#c2-note-0040a} Tönnies contrasted
    two fundamentally different and mutually exclusive types of social
    relations. Whereas
    community is characterized by the overlapping multidimensional nature of
    social relationships, society is defined by the functional separation of
    its sectors and spheres. Community embeds every individual into complex
    social relationships, all of which tend to be simultaneously present. In
    the traditional village community ("communities of place," in Tönnies\'s
    terms), neighbors are involved with one another, for better or for
    worse, both on a familiar basis and economically or religiously. Every
    activity takes place on several different levels at the same time.
    Communities are comprehensive social institutions that penetrate all
    areas of life, endowing them with meaning. Through mutual dependency,
    they create stability and security, but they also obstruct change and
    hinder social mobility. Because everyone is connected with everyone
    else, no one can leave his or her place without calling into question
    the arrangement as a whole. Communities are thus structurally conservative.
    Because every human activity is embedded in multifaceted social
    relationships, every change requires adjustments across the entire
    interrelational web -- a task that is not easy to accomplish.
    Accordingly, the traditional communities of the eighteenth and
    nineteenth centuries fiercely opposed the establishment of capitalist
    society. In order to impose the latter, the old community structures
    were broken apart with considerable violence. This is what Marx
    []{#Page_81 type="pagebreak" title="81"}and Engels were referring to in
    that famous passage from *The Communist Manifesto*: "All the settled,
    age-old relations with their train of time-honoured preconceptions and
    viewpoints are dissolved. \[...\] Everything feudal and fixed goes up in
    smoke, everything sacred is
    profaned."[^41^](#c2-note-0041){#c2-note-0041a}

    The defining feature of society, on the contrary, is that it frees the
    individual from such multifarious relationships. Society, according to
    Tönnies, separates its members from one another. Although they
    coordinate their activity with others, they do so in order to pursue
    partial, short-term, and personal goals. Not only are people separated,
    but so too are different areas of life. In a market-oriented society,
    for instance, the economy is conceptualized as an independent sphere. It
    can therefore break away from social connections to be organized simply
    by limited formal or legal obligations between actors who, beyond these
    obligations, have nothing else to do with one another. Costs or benefits
    that inadvertently affect people who are uninvolved in a given market
    transaction are referred to by economists as "externalities," and market
    participants do not need to care about these because they are strictly
    pursuing their own private interests. One of the consequences of this
    form of social relationship is a heightened social dynamic, for now it
    is possible to introduce changes into one area of life without
    considering its effects on other areas. In the end, the dissolution of
    mutual obligations, increased uncertainty, and the reduction of many
    social connections go hand in hand with what Marx and Engels referred to
    in *The Communist Manifesto* as "unfeeling hard cash."

    From this perspective, the historical development looks like an
    ambivalent process of modernization in which society (dynamic, but cold)
    is erected over the ruins of community (static, but warm). This is an
    unusual combination of romanticism and progress-oriented thinking, and
    the problems with this influential perspective are numerous. There is,
    first, the matter of its dichotomy; that is, its assumption that there
    can only be these two types of arrangement, community and society. Or
    there is the notion that the one form can be completely ousted by the
    other, even though aspects of community and aspects of society exist at
    the same time in specific historical situations, be it in harmony or in
    conflict.[^42^](#c2-note-0042){#c2-note-0042a} []{#Page_82
    type="pagebreak" title="82"}These impressions, however, which are so
    firmly associated with the German concept of *Gemeinschaft*, make it
    rather difficult to comprehend the new forms of communality that have
    developed in the offshoots of networked life. This is because, at least
    for now, these latter forms do not represent a genuine alternative to
    societal types of social
    connectedness.[^43^](#c2-note-0043){#c2-note-0043a} The English word
    "community" is somewhat more open. The opposition between community and
    society resonates with it as well, although the dichotomy is not as
    clear-cut. American communitarianism, for instance, considers the
    difference between community and society to be gradual and not
    categorical. Its primary aim is to strengthen civic institutions and
    mechanisms, and it regards community as an intermediary level between
    the individual and society.[^44^](#c2-note-0044){#c2-note-0044a} But
    there is a related English term, which seems even more productive for my
    purposes, namely "community of practice," a concept that is more firmly
    grounded in the empirical observation of concrete social relationships.
    The term was introduced at the beginning of the 1990s by the social
    researchers Jean Lave and Étienne Wenger. They observed that, in most
    cases, professional learning (for instance, in their case study of
    midwives) does not take place as a one-sided transfer of knowledge or
    proficiency, but rather as an open exchange, often outside of the formal
    learning environment, between people with different levels of knowledge
    and experience. In this sense, learning is an activity that, though
    distinguishable, cannot easily be separated from other "normal"
    activities of everyday life. As Lave and Wenger stress, however, the
    community of practice is not only a social space of exchange; it is
    rather, and much more fundamentally, "an intrinsic condition for the
    existence of knowledge, not least because it provides the interpretive
    support necessary for making sense of its
    heritage."[^45^](#c2-note-0045){#c2-note-0045a} Communities of practice
    are thus always epistemic communities that form around certain ways of
    looking at the world and one\'s own activity in it. What constitutes a
    community of practice is thus the joint acquisition, development, and
    preservation of a specific field of practice that contains abstract
    knowledge, concrete proficiencies, the necessary material and social
    resources, guidelines, expectations, and room to interpret one\'s own
    activity. All members are active participants in the constitution of
    this field, and this reinforces the stress on []{#Page_83
    type="pagebreak" title="83"}practice. Each of them, however, brings
    along different presuppositions and experiences, for each of them is
    embedded within numerous and specific situations of life or work.
    The processes within the community are mostly informal, and yet they are
    thoroughly structured, for authority is distributed unequally and is
    based on the extent to which the members value each other\'s (and their
    own) levels of knowledge and experience. At first glance, then, the term
    "community of practice" seems apt to describe the meaning-generating
    communal formations that are at issue here. It is also somewhat
    problematic, however, because, having since been subordinated to
    management strategies, its use is now narrowly applied to professional
    learning and managing knowledge.[^46^](#c2-note-0046){#c2-note-0046a}

    From these various notions of community, it is possible to develop the
    following way of looking at new types of communality: they are formed in
    a field of practice, characterized by informal yet structured exchange,
    focused on the generation of new ways of knowing and acting, and
    maintained through the reflexive interpretation of their own activity.
    This last point in particular -- the communal creation, preservation,
    and alteration of the interpretive framework in which actions,
    processes, and objects acquire a firm meaning and connection -- can be
    seen as the central role of communal formations.

    Communication is especially significant to them. Individuals must
    continuously communicate in order to constitute themselves within the
    fields and practices, or else they will remain invisible. The mass of
    tweets, updates, emails, blogs, shared pictures, texts, posts on
    collaborative platforms, and databases (etc.) that are necessary for
    this can only be produced and processed by means of digital
    technologies. In this act of incessant communication, which is a
    constitutive element of social existence, the personal desire for
    self-constitution and orientation becomes enmeshed with the outward
    pressure of having to be present and available to form a new and binding
    set of requirements. This relation between inward motivation and outward
    pressure can vary highly, depending on the character of the communal
    formation and the position of the individual within it (although it is
    not the individual who determines what successful communication is, what
    represents a contribution to the communal formation, or in which form
    one has to be present). []{#Page_84 type="pagebreak" title="84"}Such
    decisions are made by other members of the formation in the form of
    positive or negative feedback (or none at all), and they are made with
    recourse to the interpretive framework that has been developed in
    common. These communal and continuous acts of learning, practicing, and
    orientation -- the exchange, that is, between "novices" and "experts" on
    the same field, be it concerned with internet politics, illegal street
    racing, extreme right-wing music, body modification, or a free
    encyclopedia -- serve to maintain the framework of shared meaning,
    expand the constituted field, recruit new members, and adapt the
    framework of interpretation and activity to changing conditions. Such
    communal formations constitute themselves; they preserve and modify
    themselves by constantly working out the foundations of their
    constitution. This may sound circular, for the process of reflexive
    self-constitution -- "autopoiesis" in the language of systems theory --
    is circular in the sense that control is maintained through continuous,
    self-generating feedback. Self-referentiality is a structural feature of
    these formations.
    :::

    ::: {.section}
    ### Singularity and communality {#c2-sec-0011}

    The new communal formations are informal forms of organization that are
    based on voluntary action. No one is born into them, and no one
    possesses the authority to force anyone else to join or remain against
    his or her will, or to assign anyone with tasks that he or she might be
    unwilling to do. Such a formation is not an enclosed disciplinary
    institution in Foucault\'s sense,[^47^](#c2-note-0047){#c2-note-0047a}
    and, within it, power is not exercised through commands, as in the
    classical sense formulated by Max
    Weber.[^48^](#c2-note-0048){#c2-note-0048a} The condition of not being
    locked up and not being subordinated can, at least at first, represent
    for the individual a gain in freedom. Under a given set of conditions,
    everyone can (and must) choose which formations to participate in, and
    he or she, in doing so, will have a better or worse chance to influence
    the communal field of reference.

    On the everyday level of communicative self-constitution and creating a
    personal cognitive horizon -- in innumerable streams, updates, and
    timelines on social mass media -- the most important resource is the
    attention of others; that is, their feedback and the mutual recognition
    that results from it. []{#Page_85 type="pagebreak" title="85"}And this
    recognition may simply be in the form of a quickly clicked "like," which
    is the smallest unit that can assure the sender that, somewhere out
    there, there is a receiver. Without the latter, communication has no
    meaning. The situation is somewhat menacing if no one clicks the "like"
    button beneath a post or a photo. It is a sign that communication has
    broken down, and the result is the dissolution of one\'s own communicatively
    constituted social existence. In this context, the boundaries are
    blurred between the categories of information, communication, and
    activity. Making information available always involves the active --
    that is, communicating -- person, and not only in the case of ubiquitous
    selfies, for in an overwhelming and chaotic environment, as discussed
    above, selection itself is of such central importance that the
    differences between the selected and the selecting become fluid,
    particularly when the goal of the latter is to experience confirmation
    from others. In this back-and-forth between one\'s own presence and the
    validation of others, one\'s own motives and those of the community are
    not in opposition but rather mutually depend on one another. Condensed
    to simple norms and to a basic set of guidelines within the context of
    an image-oriented social mass media service, the rule (or better:
    friendly tip) that one need not but probably ought to follow is this:

    ::: {.extract}
    Be an active member of the Instagram community to receive likes and
    comments. Take time to comment on a friend\'s photo, or to like photos.
    If you do this, others will reciprocate. If you never acknowledge your
    followers\' photos, then they won\'t acknowledge
    you.[^49^](#c2-note-0049){#c2-note-0049a}
    :::

    The context of this widespread and highly conventional piece of advice
    is not, for instance, a professional marketing campaign; it is simply
    about personally positioning oneself within a social network. The goal
    is to establish one\'s own, singular, identity. The process required to
    do so is not primarily inward-oriented; it is not based on questions
    such as: "Who am I really, apart from external influences?" It is rather
    outward-oriented. It takes place through making connections with others
    and is concerned with questions such as: "Who is in my network, and what
    is my position within it?" It is []{#Page_86 type="pagebreak"
    title="86"}revealing that none of the tips in the collection cited above
    offers advice about achieving success within a community of
    photographers; there are no suggestions, for instance, about how to
    take high-quality photographs. With smart cameras and built-in filters
    for post-production, this is not especially challenging any more,
    especially because individual pictures, to be examined closely and on
    their own terms, have become less important gauges of value than streams
    of images that are meant to be quickly scrolled through. Moreover, the
    function of the critic, who once monopolized the right to interpret and
    evaluate an image for everyone, is no longer of much significance.
    Instead, the quality of a picture is primarily judged according to
    whether "others like it"; that is, according to its performance in the
    ongoing popularity contest within a specific niche. But users do not
    rely on communal formations and the feedback they provide just for the
    sharing and evaluation of pictures. Rather, this dynamic has come to
    determine more and more facets of life. Users experience the
    constitution of singularity and communality, in which a person can be
    perceived as such, as simultaneous and reciprocal processes. A million
    times over and nearly subconsciously (because it is so commonplace),
    they engage in a relationship between the individual and others that no
    longer really corresponds to the liberal opposition between
    individuality and society, between personal and group identity. Instead
    of viewing themselves as exclusive entities (either in terms of the
    emphatic affirmation of individuality or its dissolution within a
    homogeneous group), the new formations require that the production of
    difference and commonality takes place
    simultaneously.[^50^](#c2-note-0050){#c2-note-0050a}
    :::

    ::: {.section}
    ### Authenticity and subjectivity {#c2-sec-0012}

    Because members have decided to participate voluntarily in the
    community, their expressions and actions are regarded as authentic, for
    it is implicitly assumed that, in making these gestures, they are not
    following anyone else\'s instructions but rather their own motivations.
    The individual does not act as a representative or functionary of an
    organization but rather as a private and singular (that is, unique)
    person. At a gathering of the Occupy movement, for instance, a sure way
    to be kicked out is to stick stubbornly to a party line, even if this way
    []{#Page_87 type="pagebreak" title="87"}of thinking happens to agree
    with that of the movement. Not only at Occupy gatherings, however, but
    in all new communal formations it is expected that everyone there is
    representing his or her own interests. As most people are aware, this
    assumption is theoretically naïve and often proves to be false in
    practice. Even spontaneity can be calculated, and in many cases it is.
    Nevertheless, the expectation of authenticity is relevant because it
    creates a minimum of trust. As the basis of social trust, such
    contra-factual expectations exist elsewhere as well. Critical readers of
    newspapers, for instance, must assume that what they are reading has
    been well researched and is presented as objectively as possible, even
    though they know that objectivity is theoretically a highly problematic
    concept -- to this extent, postmodern theory has become common knowledge
    -- and that newspapers often pursue (hidden) interests or lead
    campaigns. Yet without such contra-factual assumptions, the respective
    orders of knowledge and communication would not function, for they
    provide the normative framework within which deviations can be
    perceived, criticized, and sanctioned.

    In a seemingly traditional manner, the "authentic self" is formulated
    with reference to one\'s inner world, for instance to personal
    knowledge, interests, or desires. As the core of personality, however,
    this inner world no longer represents an immutable and essential
    characteristic but rather a temporary position. Today, even someone\'s
    radical reinvention can be regarded as authentic. This is the central
    difference from the classical, bourgeois conception of the subject. The
    self is no longer understood in essentialist terms but rather
    performatively. Accordingly, the main demand on the individual who
    voluntarily opts to participate in a communal formation is no longer to
    be self-aware but rather to be
    self-motivated.[^51^](#c2-note-0051){#c2-note-0051a} Nor is it necessary
    any more for one\'s core self to be coherent. It is not a contradiction
    to appear in various communal formations, each different from the next,
    as a different "I myself," for every formation is comprehensive, in that
    it appeals to the whole person, and simultaneously partial, in that it
    is oriented toward a particular goal and not toward all areas of life.
    As in the case of re-mixes and other referential processes, the concern
    here is not to preserve authenticity but rather to create it in the
    moment. The success or failure []{#Page_88 type="pagebreak"
    title="88"}of these efforts is determined by the continuous feedback of
    others -- one like after another.

    These practices have led to a modified form of subject constitution for
    which some sociologists, engaged in empir­ical research, have introduced
    the term "networked individualism."[^52^](#c2-note-0052){#c2-note-0052a}
    The idea is based on the observation that people in Western societies
    (the case studies were mostly in North America) are defining their
    identity less and less by their family, profession, or other stable
    collective, but rather increasingly in terms of their personal social
    networks; that is, according to the communal formations in which they
    are active as individuals and in which they are perceived as singular
    people. In this regard, individualization and atomization no longer
    necessarily go hand in hand. On the contrary, the intertwined nature of
    personal identity and communality can be experienced on an everyday
    level, given that both are continuously created, adapted, and affirmed
    by means of personal communication. This makes the networks in question
    simultaneously fragile and stable. Fragile because they require the
    ongoing presence of every individual and because communication can break
    down quickly. Stable because the networks of relationships that can
    support a single person -- as regards the number of those included,
    their geograph­ical distribution, and the duration of their cohesion --
    have expanded enormously by means of digital communication technologies.

    Here the issue is not that of close friendships, whose number remains
    relatively constant for most people and over long periods of
    time,[^53^](#c2-note-0053){#c2-note-0053a} but rather so-called "weak
    ties"; that is, more or less loose acquaintances that can be tapped for
    new information and resources that do not exist within one\'s close
    circle of friends.[^54^](#c2-note-0054){#c2-note-0054a} The more they
    are expanded, the more sustainable and valuable these networks become,
    for they bring together a large number of people and thus multiply the
    material and organizational resources that are (potentially) accessible
    to the individual. It is impossible to make a sweeping statement as to
    whether these formations actually represent communities in a
    comprehensive sense and how stable they really are, especially in times
    of crisis, for this is something that can only be found out on a
    case-by-case basis. It is relevant that the development of personal
    networks []{#Page_89 type="pagebreak" title="89"}has not taken place in
    a vacuum. The disintegration of institutions that were formerly
    influential in the formation of identity and meaning began long before
    the large-scale spread of networks. For most people, there is no other
    choice but to attempt to orient and organize oneself, regardless of how
    provisional or uncertain this may be. Or, as Manuel Castells somewhat
    melodramatically put it, "At the turn of the millennium, the king and
    the queen, the state and civil society, are both naked, and their
    children-citizens are wandering around a variety of foster
    homes."[^55^](#c2-note-0055){#c2-note-0055a}
    :::

    ::: {.section}
    ### Space and time as a communal practice {#c2-sec-0013}

    Although participation in a communal formation is voluntary, it is not
    unselfish. Quite the contrary: an important motivation is to gain access
    to a formation\'s constitutive field of practice and to the resources
    associated with it. A communal formation ultimately does more than
    simply steer the attention of its members toward one another. Through
    the common production of culture, it also structures how the members
    perceive the world and how they are able to design themselves and their
    potential actions in it. It is thus a cooperative mechanism of
    filtering, interpretation, and constitution. Through the everyday
    referential work of its members, the community selects a manageable
    amount of information from the excess of potentially available
    information and brings it into a meaningful context, whereby it
    validates the selection itself and orients the activity of each of its
    members.

    The new communal formations consist of self-referential worlds whose
    constructive common practice affects the foundations of social activity
    itself -- the constitution of space and time. How? The spatio-temporal
    horizon of digital communication is a global (that is, placeless) and
    ongoing present. The technical vision of digital communication is always
    the here and now. With the instant transmission of information,
    everything that is not "here" is inaccessible and everything that is not
    "now" has disappeared. Powerful infrastructure has been built to achieve
    these effects: data centers, intercontinental networks of cables,
    satellites, high-performance nodes, and much more. Through globalized
    high-frequency trading, actors in the financial markets have realized
    this []{#Page_90 type="pagebreak" title="90"}technical vision to its
    broadest extent by creating a never-ending global present whose expanse
    is confined to milliseconds. This process is far from coming to an end,
    for massive amounts of investment are allocated to accomplish even the
    smallest steps toward this goal. On November 3, 2015, a 4,600-kilometer,
    300-million-dollar transatlantic telecommunications cable (Hibernia
    Express) was put into operation between London and New York -- the first
    in more than 10 years -- with the single goal of accelerating automated
    trading between the two places by 5.2 milliseconds.
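
    The order of magnitude can be checked with a rough back-of-the-envelope
    estimate. Light in optical fiber travels at about two-thirds of its
    vacuum speed (assuming a typical refractive index of n ≈ 1.47), so the
    one-way propagation delay over the new cable is approximately

    $$ t = \frac{d \cdot n}{c} \approx \frac{4{,}600\ \text{km} \times 1.47}{300{,}000\ \text{km/s}} \approx 22.5\ \text{ms} $$

    Against a round trip on the order of 45 milliseconds, a gain of 5.2
    milliseconds, achievable only by laying a physically shorter route, is
    considerable; for automated trading it is the difference between
    arriving first and arriving too late.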

    For social and biological processes, this technical horizon of space and
    time is neither achievable nor desirable. Such processes, on the
    contrary, are existentially dependent on other spatial and temporal
    orders. Yet because of the existence of this non-geographical and
    atemporal horizon, the need -- as well as the possibility -- has arisen
    to redefine the parameters of space and time themselves in order to
    counteract the mire of technically defined spacelessness and
    timelessness. If space and time are not simply to vanish in this
    spaceless, ongoing present, how then should they be defined? Communal
    formations create spaces for action not least by determining their own
    geographies and temporal rhythms. They negotiate what is near and far
    and also which places are disregarded (that is, not even perceived). If
    every place is communicatively (and physically) reachable, every person
    must decide which place he or she would like to reach in practice. This,
    however, is not an individual decision but rather a task that can only
    be approached collectively. Those places which are important and thus
    near are determined by communal formations. This takes place in the form
    of a rough consensus through the blogs that "one" has to read, the
    exhibits that "one" has to see, the events and conferences that "one"
    has to attend, the places that "one" has to visit before they are
    overrun by tourists, the crises in which "the West" has to intervene,
    the targets that "lend themselves" to a terrorist attack, and so on. On
    its own, however, selection is not enough. Communal formations are
    especially powerful when they generate the material and organizational
    resources that are necessary for their members to implement their shared
    worldview through actions -- to visit, for instance, the places that
    have been chosen as important. This can happen if they enable access
    []{#Page_91 type="pagebreak" title="91"}to stipends, donations, price
    reductions, ride shares, places to stay, tips, links, insider knowledge,
    public funds, airlifts, explosives, and so on. It is in this way that
    each formation creates its respective spatial constructs, which define
    distances in a great variety of ways. At the same time that war-torn
    Syria is unreachably distant even for seasoned reporters and their
    staff, veritable travel agencies are being set up in order to bring
    Western jihadists there in large numbers.

    Things are similar for the temporal dimensions of social and biological
    processes. Permanent presence is a temporality that is inimical to life
    but, under its influence, temporal rhythms have to be redefined as well.
    What counts as fast? What counts as slow? In what order should things
    proceed? On the everyday level, for instance, the matter can be as
    simple as how quickly to respond to an email. Because the transmission
    of information hardly takes any time, every delay is a purely social
    creation. But how much is acceptable? There can be no uniform answer to
    this. The members of each communal formation have to negotiate their own
    rules with one another, even in areas of life that are otherwise highly
    formalized. In an interview with the magazine *Zeit*, for instance, a
    lawyer with expertise in labor law was asked whether a boss may require
    employees to be reachable at all times. Instead of answering by
    referring to any binding legal standards, the lawyer casually advised
    that this was a matter of flexible negotiation: "Express your misgivings
    openly and honestly about having to be reachable after hours and,
    together with your boss, come up with an agreeable rule to
    follow."[^56^](#c2-note-0056){#c2-note-0056a} If only it were that easy.

    Temporalities that, in many areas, were once simply taken for granted by
    everyone on account of the factuality of things now have to be
    culturally determined -- that is, explicitly negotiated -- in a greater
    number of contexts. Under the conditions of capitalism, which is always
    creating new competitions and incentives, one consequence is the
    often-lamented "acceleration of time." We are asked to produce, consume,
    or accomplish more and more in less and less
    time.[^57^](#c2-note-0057){#c2-note-0057a} This change in the
    structuring of time is not limited to linear acceleration. It reaches
    deep into the foundations of life and has even reconfigured biological
    processes themselves. Today there is an entire industry that specializes
    in freezing the stem []{#Page_92 type="pagebreak" title="92"}cells of
    newborns in liquid nitrogen -- that is, in suspending cellular
    biological time -- in case they might be needed later on in life for a
    transplant or for the creation of artificial organs. Children can be
    born even if their physical mothers are already dead. Or they can be
    "produced" from ova that have been stored for many years at minus 196
    degrees.[^58^](#c2-note-0058){#c2-note-0058a} At the same time,
    questions now have to be addressed every day whose grand temporal
    dimensions were once the matter of myth. In the case of atomic energy,
    for instance, there is the issue of permanent disposal. Where can we
    deposit nuclear waste for the next hundred thousand years without it
    causing catastrophic damage? How can the radioactive material even be
    transported there, wherever that is, within the framework of everyday
    traffic laws?[^59^](#c2-note-0059){#c2-note-0059a}

    The construction of temporal dimensions and sequences has thus become an
    everyday cultural question. Whereas throughout Europe, for example,
    committees of experts and ethicists still meet to discuss reproductive
    medicine and offer their various recommendations, many couples are
    concerned with the specific question of whether or how they can fulfill
    their wish to have children. Without a coherent set of rules, questions
    such as these have to be answered by each individual with recourse to
    his or her personally relevant communal formation. If there is no
    cultural framework that at least claims to be binding for everyone, then
    the individual must negotiate independently within each communal
    formation with the goal of acquiring the resources necessary to act
    according to communal values and objectives.
    :::

    ::: {.section}
    ### Self-generating orders {#c2-sec-0014}

    These three functions -- selection, interpretation, and the constitutive
    ability to act -- make communal formations the true subject of the
    digital condition. In principle, these functions are nothing new;
    rather, they are typical of fields that are organized without reference
    to external or irrefutable authorities. The state of scholarship, for
    instance, is determined by what is circulated in refereed publications.
    In this case, "refereed" means that scientists at the same professional
    rank mutually evaluate each other\'s work. The scientific community (or
    better: the sub-community of a specialized discourse) []{#Page_93
    type="pagebreak" title="93"}evaluates the contributions of individual
    scholars. They decide what should be considered valuable, and this
    consensus can theoretically be revised at any time. It is based on a
    particular catalog of criteria, on an interpretive framework that
    provides lines of inquiry, methods, appraisals, and conventions of
    presentation. With every article, this framework is confirmed and
    reconstituted. If the framework changes, this can lead in the most
    extreme case to a paradigm shift, which overturns fundamental
    orientations, assumptions, and
    certainties.[^60^](#c2-note-0060){#c2-note-0060a} The result of this is
    not only a change in how scientific contributions are evaluated but also
    a change in how the external world is perceived and what activities are
    possible in it. Precisely because the sciences claim to define
    themselves, they have the ability to revise their own foundations.

    The sciences were the first large sphere of society to achieve
    comprehensive cultural autonomy; that is, the ability to determine its
    own binding meaning. Art was the second that began to organize itself on
    the basis of internal feedback. It was during the era of Romanticism
    that artists first laid claim to autonomy. They demanded "to absolve art
    from all conditions, to represent it as a realm -- indeed as the only
    realm -- in which truth and beauty are expressed in their pure form, a
    realm in which everything truly human is
    transcended."[^61^](#c2-note-0061){#c2-note-0061a} With the spread of
    photography in the second half of the nineteenth century, art also
    liberated itself from its final task, which was foisted upon it from the
    outside, namely the need to represent external reality. Instead of
    having to represent the external world, artists could now focus on their
    own subjectivity. This gave rise to a radical individualism, which found
    its clearest summation in Marcel Duchamp\'s assertion that only the
    artist could determine what is art. This he claimed in 1917 by way of
    explaining how an industrially produced urinal, exhibited as a signed
    piece with the title "Fountain," could be considered a work of art.

    With the rise of the knowledge economy and the expansion of cultural
    fields, including the field of art and the artists active within it,
    this individualism quickly swelled to unmanageable levels. As a
    consequence, the task of defining what should be regarded as art shifted
    from the individual artist to the curator. It now fell upon the latter
    to select a few works from the surplus of competing scenes and thus
    bring temporary []{#Page_94 type="pagebreak" title="94"}order to the
    constantly diversifying and changing world of contemporary art. This
    order was then given expression in the form of exhibits, which were
    intended to be more than the sum of their parts. The beginning of this
    practice can be traced to the 1969 exhibition *When Attitudes Become
    Form*, which was curated by Harald Szeemann for the Kunsthalle Bern (it
    was also sponsored by Philip Morris). The works were not neatly
    separated from one another and presented without reference to their
    environment, but were connected with each other both spatially and in
    terms of their content. The effect of the exhibition could be felt at
    least as much through the collection of works as a whole as it could
    through the individual pieces, many of which had been specially
    commissioned for the exhibition itself. It not only cemented Szeemann\'s
    reputation as one of the most significant curators of the twentieth
    century; it also completely redefined the function of the curator as a
    central figure within the art system.

    This was more than 40 years ago and in a system that functioned
    differently from that of today. The distance from this exhibition, but
    also its ongoing relevance, was negotiated, significantly, in a
    re-enactment at the 2013 Biennale in Venice. For this, the old rooms at
    the Kunsthalle Bern were reconstructed in the space of the Fondazione
    Prada in such a way that both could be seen simultaneously. As is
    typical with such re-enactments, the curators of the project described
    its goals in terms of appropriation and distancing: "This was the
    challenge: how could we find and communicate a limit to a non-limit,
    creating a place that would reflect exactly the architectural structures
    of the Kunsthalle, but also an asymmetrical space with respect to our
    time and imbued with an energy and tension equivalent to that felt at
    Bern?"[^62^](#c2-note-0062){#c2-note-0062a}

    Curation -- that is, selecting works and associating them with one
    another -- has become an omnipresent practice in the art system. No
    exhibition takes place any more without a curator. Nevertheless,
    curators have lost their extraordinary
    position,[^63^](#c2-note-0063){#c2-note-0063a} with artists taking on
    more of this work themselves, not only because the boundaries between
    artistic and curatorial activities have become fluid but also because
    many artists explicitly co-produce the context of their work by
    incorporating a multitude of references into their pieces. It is with
    precisely this in mind that André Rottmann, in the []{#Page_95
    type="pagebreak" title="95"}quotation cited at the beginning of this
    chapter, can assert that referentiality has become the dominant
    production-aesthetic model in contemporary art. This practice enables
    artists to objectify themselves by explicitly placing themselves into a
    historical and social context. At the same time, it also enables them to
    subjectify the historical and social context by taking the liberty to
    select and arrange the references
    themselves.[^64^](#c2-note-0064){#c2-note-0064a}

    Such strategies are no longer specific to art. Self-generated spaces of
    reference and agency are now deeply embedded in everyday life. The
    reason for this is that a growing number of questions can no longer be
    answered in a generally binding way (such as those about what
    constitutes fine art), while the enormous expansion of the cultural
    requires explicit decisions to be made in more aspects of life. The
    reaction to this dilemma has been radical subjectivation. This has not,
    however, been taking place at the level of the individual but rather at
    that of communal formations. There is now a patchwork of answers to
    large questions and a multitude of reactions to large challenges, all of
    which are limited in terms of their reliability and scope.
    :::

    ::: {.section}
    ### Ambivalent voluntariness {#c2-sec-0015}

    Even though participation in new formations is voluntary and serves the
    interests of their members, it is not without preconditions. The most
    important of these is acceptance, the willing adoption of the
    interpretive framework that is generated by the communal formation. The
    latter is formed from the social, cultural, legal, and technical
    protocols that lend to each of these formations its concrete
    constitution and specific character. Protocols are common sets of rules;
    they establish, according to the network theorist Alexander Galloway,
    "the essential points necessary to enact an agreed-upon standard of
    action." They provide, he goes on, "etiquette for autonomous
    agents."[^65^](#c2-note-0065){#c2-note-0065a} Protocols are
    simultaneously voluntary and binding; they allow actors to meet
    eye-to-eye instead of entering into hierarchical relations with one
    another. If everyone voluntarily complies with the protocols, then it is
    not necessary for one actor to give instructions to another. Whoever
    accepts the relevant protocols can interact with others who do the same;
    whoever opts not to []{#Page_96 type="pagebreak" title="96"}accept them
    will remain on the outside. Protocols establish, for example, common
    languages, technical standards, or social conventions. The fundamental
    protocol for the internet is the Transmission Control Protocol/Internet
    Protocol (TCP/IP). This suite of protocols defines the common language
    for exchanging data. Every device that exchanges information over the
    internet -- be it a smartphone, a supercomputer in a data center, or a
    networked thermostat -- has to use these protocols. In a growing number
    of social contexts, the common language is English. Whoever wishes to
    belong has to speak it increasingly often. In the natural sciences,
    communication now takes place almost exclusively in English. Non-native
    speakers who accept this norm may pay a high price: they have to learn a
    new language and continually improve their command of it or else resign
    themselves to being unable to articulate things as they would like --
    not to mention losing the possibility of expressing something for which
    another language would perhaps be more suitable, or forfeiting
    traditions that cannot be expressed in English. But those who refuse to
    go along with these norms pay an even higher price, risking
    self-marginalization. Those who "voluntarily" accept conventions gain
    access to a field of practice, even though within this field they may be
    structurally disadvantaged. But unwillingness to accept such
    conventions, with subsequent denial of access to this field, might have
    even greater disadvantages.[^66^](#c2-note-0066){#c2-note-0066a}
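
    What a protocol is can be made concrete in a few lines of code. The
    sketch below (in Python, with host and port chosen arbitrarily for a
    local demonstration) performs a minimal TCP exchange: neither side
    gives the other any commands, yet communication succeeds only because
    both voluntarily follow the same conventions of address, transport,
    and encoding.

    ```python
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 9999  # illustrative values for a local demo

    # The "agreement" happens here: both endpoints use the same address
    # family (AF_INET) and the same transport (SOCK_STREAM, i.e. TCP).
    # Nobody enforces this; whoever deviates simply cannot take part.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)

    def serve_once():
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)         # receive bytes as TCP delivers them
            conn.sendall(b"ack: " + data)  # answer in the same agreed-upon form

    t = threading.Thread(target=serve_once)
    t.start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))          # the handshake follows TCP's fixed rules
        cli.sendall(b"hello")
        print(cli.recv(1024).decode())     # prints: ack: hello

    t.join()
    srv.close()
    ```

    The point is sociological rather than technical: the two endpoints meet
    eye-to-eye, in Galloway\'s sense, precisely because both have submitted
    to the shared standard in advance.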

    In everyday life, the factors involved with this trade-off are often
    presented in the form of subtle cultural codes. For instance, in order
    to participate in a project devoted to the development of free software,
    it is not enough for someone to possess the necessary technical
    knowledge; he or she must also be able to fit into a wide-ranging
    informal culture with a characteristic style of expression, humor, and
    preferences. Ultimately, software developers do not form a professional
    corps in the traditional sense -- in which functionaries meet one
    another in the narrow and regulated domain of their profession -- but
    rather a communal formation in which the engagement of the whole person,
    both one\'s professional and social self, is scrutinized. The
    abolishment of the separation between different spheres of life,
    requiring interaction of a more holistic nature, is in fact a key
    attraction of []{#Page_97 type="pagebreak" title="97"}these communal
    formations and is experienced by some as a genuine gain in freedom. In
    this situation, one is no longer subjected to rules imposed from above
    but rather one is allowed to -- and indeed ought to -- be authentically
    pursuing his or her own interests.

    But for others the experience can be quite the opposite because the
    informality of the communal formation also allows forms of exclusion and
    discrimination that are no longer acceptable in formally organized
    realms of society. Discrimination is more difficult to identify when it
    takes place within the framework of voluntary togetherness, for no one
    is forced to participate. If you feel uncomfortable or unwelcome, you
    are free to leave at any time. But this is a specious argument. The
    areas of free software or Wikipedia are difficult places for women. In
    these clubby atmospheres of informality, they are often faced with
    blatant sexism, and this is one of the reasons why many women choose to
    stay away from such projects.[^67^](#c2-note-0067){#c2-note-0067a} In
    2007, according to estimates by the American National Center for Women &
    Information Technology, whereas approximately 27 percent of all jobs
    related to computer science were held by women, their representation at
    the same time was far lower in the field of free software -- on average
    less than 2 percent. And for years, the proportion of women who edit
    texts on Wikipedia has hovered at around 10
    percent.[^68^](#c2-note-0068){#c2-note-0068a}

    The consequences of such widespread, informal, and elusive
    discrimination are not limited to the fact that certain values and
    prejudices of the shared culture are included in these products, while
    different viewpoints and areas of knowledge are
    excluded.[^69^](#c2-note-0069){#c2-note-0069a} What is more, those who
    are excluded or do not wish to expose themselves to discrimination (and
    thus do not even bother to participate in any communal formations) do
    not receive access to the resources that circulate there (attention and
    support, valuable and timely knowledge, or job offers). Many people are
    thus faced with the choice of either enduring the discrimination within
    a community or remaining on the outside and thus invisible. That this
    decision is made on a voluntary basis and on one\'s own responsibility
    hardly mitigates the coercive nature of the situation. There may be a
    choice, but it would be misleading to call it a free one.[]{#Page_98
    type="pagebreak" title="98"}
    :::

    ::: {.section}
    ### The power of sociability {#c2-sec-0016}

    In order to explain the peculiar coercive nature of the (nominally)
    voluntary acceptance of protocols, rules, and norms, the political
    scientist David Singh Grewal, drawing on the work of Max Weber and
    Michel Foucault, has distinguished between the "power of sovereignty"
    and the "power of sociabil­ity."[^70^](#c2-note-0070){#c2-note-0070a}
    The former develops on the basis of dominance and subordination, as
    imposed by authorities, police officers, judges, or other figures within
    formal hierarchies. Their power is anchored in disciplinary
    institutions, and the dictum of this sort of power is: "You must!" The
    power of sociability, on the contrary, functions by prescribing the
    conditions or protocols under which people are able to enter into an
    exchange with one another. The dictum of this sort of power is: "You
    can!" The more people accept certain protocols and standards, the more
    powerful these become. Accordingly, the sociability that they structure
    also becomes more comprehensive, and those not yet involved have to ask
    themselves all the more urgently whether they can afford not to accept
    these protocols and standards. Whereas the first type of power is
    ultimately based on the monopoly of violence and on repression, the
    second is founded on voluntary submission. When the entire internet
    speaks TCP/IP, then an individual\'s decision to use it may be voluntary
    in nominal terms, but at the same time it is an indispensable
    precondition for existing within the network at all. Protocols exert
    power without there having to be anyone present to possess the power in
    question. Whereas the sovereign can be located, the effects of
    sociability\'s power are diffuse and omnipresent. They are not
    repressive but rather constitutive. No one forces a scientist to publish
    in English or a woman editor to tolerate disparaging remarks on
    Wikipedia. People accept these often implicit behavioral norms (sexist
    comments are permitted, for instance) out of their own interests in
    order to acquire access to the resources circulating within the networks
    and to constitute themselves within them. In this regard, Grewal
    distinguishes between the "intrinsic" and "extrinsic" reasons for
    abiding by certain protocols.[^71^](#c2-note-0071){#c2-note-0071a} In
    the first case, the motivation is based on a new protocol being better
    suited than existing protocols for carrying out []{#Page_99
    type="pagebreak" title="99"}a specific objective. People thus submit
    themselves to certain rules because they are especially efficient,
    transparent, or easy to use. In the second case, a protocol is accepted
    not because but in spite of its features. It is simply a precondition
    for gaining access to a space of agency in which resources and
    opportunities are available that cannot be found anywhere else. In the
    first case, it is possible to speak subjectively of voluntariness,
    whereas the second involves some experience of impersonal compulsion.
    One is forced to do something that might potentially entail grave
    disadvantages in order to have access, at least, to another level of
    opportunities or to create other advantages for oneself.
    :::

    ::: {.section}
    ### Homogeneity, difference and authority {#c2-sec-0017}

    Protocols are present on more than a technical level; as interpretive
    frameworks, they structure viewpoints, rules, and patterns of behavior
    on all levels. Thus, they provide a degree of cultural homogeneity, a
    set of commonalities that lend these new formations their communal
    nature. Viewed from the outside, these formations therefore seem
    inclined toward consensus and uniformity, for their members have already
    accepted and internalized certain aspects in common -- the protocols
    that enable exchange itself -- whereas everyone on the outside has not
    done so. When everyone is speaking in English, the conversation sounds
    quite monotonous to someone who does not speak the language.

    Viewed from the inside, the experience is something different: in order
    to constitute oneself within a communal formation, not only does one
    have to accept its rules voluntarily and in a self-motivated manner; one
    also has to make contributions to the reproduction and development of
    the field. Everyone is urged to contribute something; that is, to
    produce, on the basis of commonalities, differences that simultaneously
    affirm, modify, and enhance these commonalities. This leads to a
    pronounced and occasionally highly competitive internal differentiation
    that can only be understood, however, by someone who has accepted the
    commonalities. To an outsider, this differentiation will seem
    irrelevant. Whoever is not well versed in the universe of *Star Wars*
    will not understand why the various character interpretations at
    []{#Page_100 type="pagebreak" title="100"}cosplay conventions, which I
    discussed above, might be brilliant or even controversial. To such a
    person, they will all seem equally boring and superficial.

    These formations structure themselves internally through the production
    of differences; that is, by constantly changing their common ground.
    Those who are able to add many novel aspects to the common resources
    gain a degree of authority. They assume central positions and they
    influence, through their behavior, the development of the field more
    than others do. However, their authority, influence, and de facto power
    are not based on any means of coercion. As Niklas Luhmann noted, "In the
    end, one participant\'s achievements in making selections \[...\] are
    accepted by another participant \[...\] as a limitation of the latter\'s
    potential experiences and activities without him having to make the
    selection on his own."[^72^](#c2-note-0072){#c2-note-0072a} Even this is
    a voluntary and self-interested act: the members of the formation
    recognize that this person has contributed more to the common field and
    to the resources within it. This, in turn, is to everyone\'s advantage,
    for each member would ultimately like to make use of the field\'s
    resources to achieve his or her own goals. This arrangement, which can
    certainly take on hierarchical qualities, is experienced as something
    meritocratically legitimized and voluntarily
    accepted.[^73^](#c2-note-0073){#c2-note-0073a} In the context of free
    software, there has therefore been some discussion of "benevolent
    dictators."[^74^](#c2-note-0074){#c2-note-0074a} The matter of
    "dictators" is raised because projects are often led by charismatic
    figures without a formal mandate. They are "benevolent" because their
    position of authority is based on the fact that a critical mass of
    participating producers has voluntarily subordinated itself for its own
    self-interest. If the consensus breaks down over whose contributions have
    been carrying the most weight, then the formation will be at risk of
    losing its internal structure and splitting apart ("forking," in the
    jargon of free software).
    :::
    :::

    ::: {.section}
    Algorithmicity {#c2-sec-0018}
    --------------

    Through personal communication, referential processes in communal
    formations create cultural zones of various sizes and scopes. They
    expand into the empty spaces that have been created by the erosion of
    established institutions and []{#Page_101 type="pagebreak"
    title="101"}processes, and once these new processes have been
    established the process of erosion intensifies. Multiple processes of
    exchange take place alongside one another, creating a patchwork of
    interconnected, competing, or entirely unrelated spheres of meaning,
    each with specific goals and resources and its own preconditions and
    potentials. The structures of knowledge, order, and activity that are
    generated by this are holistic as well as partial and limited. The
    participants in such structures are simultaneously addressed on many
    levels that were once functionally separated; previously independent
    spheres, such as work and leisure, are now mixed together, but usually
    only with respect to the subdivisions of one\'s own life. And, at first,
    the structures established in this way are binding only for active
    participants.

    ::: {.section}
    ### Exiting the "Library of Babel" {#c2-sec-0019}

    For one person alone, however, these new processes would not be able to
    generate more than a local island of meaning from the enormous clamor of
    chaotic spheres of information. In his 1941 story "The Library of
    Babel," Jorge Luis Borges fashioned a fitting image for such a
    situation. He depicts the world as a library of unfathomable and
    possibly infinite magnitude. The characters in the story do not know
    whether there is a world outside of the library. There are reasons to
    believe that there is, and reasons that suggest otherwise. The library
    houses the complete collection of all possible books that can be written
    on exactly 410 pages. Contained in these volumes is the promise that
    there is "no personal or universal problem whose eloquent solution
    \[does\] not exist," for every possible combination of letters, and thus
    also every possible pronouncement, is recorded in one book or another.
    No catalog has yet been found for the library (though it must exist
    somewhere), and it is impossible to identify any order in its
    arrangement of books. The "men of the library," according to Borges,
    wander round in search of the one book that explains everything, but
    their actual discoveries are far more modest. Only once in a while are
    books found that contain more than haphazard combinations of signs. Even
    small regularities within excerpts of texts are heralded as sensational
    discoveries, and it is around these discoveries that competing
    []{#Page_102 type="pagebreak" title="102"}schools of interpretation
    develop. Despite much labor and effort, however, the knowledge gained is
    minimal and fragmentary, so the prevailing attitude in the library is
    bleak. By the time of the narrator\'s generation, "nobody expects to
    discover anything."[^75^](#c2-note-0075){#c2-note-0075a}

    Although this vision has now been achieved from a quantitative
    perspective -- no one can survey the "library" of digital information,
    which in practical terms is infinitely large, and all of the growth
    curves continue to climb steeply -- today\'s cultural reality is
    nevertheless entirely different from that described by Borges. Our
    ability to deal with massive amounts of data has radically improved, and
    thus our faith in the utility of information is not only unbroken but
    rather gaining strength. What is new is precisely such large quantities
    of data ("big data"), which, as we are promised or forewarned, will lead
    to new knowledge, to a comprehensive understanding of the world, indeed
    even to "omniscience."[^76^](#c2-note-0076){#c2-note-0076a} This faith
    in data is based above all on the fact that the two processes described
    above -- referentiality and communality -- are not the only new
    mechanisms for filtering, sorting, aggregating, and evaluating things.
    Beneath or ahead of the social mechanisms of decentralized and networked
    cultural production, there are algorithmic processes that pre-sort the
    immeasurably large volumes of data and convert them into a format that
    can be apprehended by individuals, evaluated by communities, and
    invested with meaning.

    Strictly speaking, it is impossible to maintain a categorical
    distinction between social processes that take place in and by means of
    technological infrastructures and technical processes that are socially
    constructed. In both cases, social actors attempt to realize their own
    interests with the resources at their disposal. The methods of
    (attempted) realization, the available resources, and the formulation of
    interests mutually influence one another. The technological resources
    are inscribed in the formulation of goals. These open up fields of
    imagination and desire, which in turn inspire technical
    development.[^77^](#c2-note-0077){#c2-note-0077a} Although it is
    impossible to draw clear theoretical lines, the attempt to make such a
    distinction can nevertheless be productive in practice, for in this way
    it is possible to gain different perspectives about the same object of
    investigation.[]{#Page_103 type="pagebreak" title="103"}
    :::

    ::: {.section}
    ### The rise of algorithms {#c2-sec-0020}

    An algorithm is a set of instructions for converting a given input into
    a desired output by means of a finite number of steps: algorithms are
    used to solve predefined problems. For a set of instructions to become
    an algorithm, it has to be determined in three different respects.
    First, the necessary steps -- individually and as a whole -- have to be
    described unambiguously and completely. To do this, it is usually
    necessary to use a formal language, such as mathematics, or a
    programming language, in order to avoid the characteristic imprecision
    and ambiguity of natural language and to ensure that the instructions
    can be followed without interpretation. Second, it must actually be
    possible to execute each individual step in practice. For this reason, every
    algorithm is tied to the context of its realization. If the context
    changes, so do the operating processes that can be formalized as
    algorithms and thus also the ways in which algorithms can partake in the
    constitution of the world. Third, it must be possible to execute an
    operating instruction mechanically so that, under fixed conditions, it
    always produces the same result.
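
    A classic example that satisfies all three conditions is Euclid\'s
    algorithm for finding the greatest common divisor of two numbers. A
    minimal sketch in Python:

    ```python
    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm: a finite, unambiguous, repeatable procedure
        that converts an input (two integers) into an output (their gcd)."""
        while b != 0:
            a, b = b, a % b   # each step is mechanically executable
        return a

    assert gcd(48, 36) == 12  # fixed conditions always produce the same result
    ```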

    Defined in such general terms, it would also be possible to understand
    the instruction manual for a typical piece of Ikea furniture as an
    algorithm. It is a set of instructions for creating, with a finite
    number of steps, a specific and predefined piece of furniture (output)
    from a box full of individual components (input). The instructions are
    composed in a formal language, pictograms, which define each step as
    unambiguously as possible, and they can be executed by a single person
    with simple tools. The process can be repeated, for the final result is
    always the same: a Billy box will always yield a Billy shelf. In this
    case, a person takes over the role of a machine, which (the supposedly
    unambiguous pictograms notwithstanding) can lead to problems, be it
    that scratches and other
    traces on the finished piece of furniture testify to the unique nature
    of the (unsuccessful) execution, or that, inspired by the micro-trend of
    "Ikea hacking," the official instructions are intentionally ignored.

    Because such imprecision is supposed to be avoided, the most important
    domain of algorithms in practice is mathematics and its implementation
    on the computer. The term []{#Page_104 type="pagebreak"
    title="104"}"algorithm" derives from the Persian mathematician,
    astronomer, and geographer Muḥammad ibn Mūsā al-Khwārizmī. His book *On
    the Calculation with Hindu Numerals*, which was written in Baghdad in
    825, was known widely in the Western Middle Ages through a Latin
    translation and made the essential contribution of introducing
    Indo-Arabic numerals and the number zero to Europe. The work begins
    with the formula *dixit algorizmi* ... ("Algorismi said ..."). During
    the Middle Ages, *algorizmi* or *algorithmi* soon became a general term
    for advanced methods of
    calculation.[^78^](#c2-note-0078){#c2-note-0078a}

    The modern effort to build machines that could mechanically carry out
    instructions achieved its first breakthrough with Gottfried Wilhelm
    Leibniz. He has often been credited with making the following remark:
    "It is unworthy of excellent men to lose hours like slaves in the labour
    of calculation which could be done by any peasant with the aid of a
    machine."[^79^](#c2-note-0079){#c2-note-0079a} This vision already
    contains a distinction between higher cognitive and interpretive
    activities, which are regarded as being truly human, and lower processes
    that involve pure execution and can therefore be mechanized. To this
    end, Leibniz himself developed the first calculating machine, which
    could carry out all four of the basic types of arithmetic. He was not
    motivated to do this by the practical necessities of production and
    business (although conceptually groundbreaking, Leibniz\'s calculating
    machine remained, on account of its mechanical complexity, a unique item
    and was never used).[^80^](#c2-note-0080){#c2-note-0080a} In the
    estimation of the philosopher Sybille Krämer, calculating machines "were
    rather speculative masterpieces of a century that, like none before it,
    was infatuated by the idea of mechanizing 'intellectual'
    processes."[^81^](#c2-note-0081){#c2-note-0081a} Long before machines
    were implemented on a large scale to increase the efficiency of material
    production, Leibniz had already speculated about using them to enhance
    intellectual labor. And this vision has never since disappeared. Around
    a century and a half later, the English polymath Charles Babbage
    formulated it anew, now in direct connection with industrial
    mechanization and its imperative of time-saving
    efficiency.[^82^](#c2-note-0082){#c2-note-0082a} Yet he, too, failed to
    overcome the problem of practically realizing such a machine.

    The decisive step that turned the vision of calculating machines into
    reality was made by Alan Turing in 1937. With []{#Page_105
    type="pagebreak" title="105"}a theoretical model, he demonstrated that
    every algorithm could be executed by a machine as long as it could read
    a set of signs one step at a time, manipulate them according to established
    rules, and then write them out again. The validity of his model did not
    depend on whether the machine would be analog or digital, mechanical or
    electronic, for the rules of manipulation were not at first conceived as
    being a fixed component of the machine itself (that is, as being
    implemented in its hardware). The electronic and digital approach came
    to be preferred because it was hoped that even the instructions could be
    read by the machine itself, so that the machine would be able to execute
    not only one but (theoretically) every written algorithm. The
    Hungarian-born mathematician John von Neumann made it his goal to
    implement this idea. In 1945, he published a model in which the program
    (the algorithm) and the data (the input and output) were housed in a
    common storage device. Thus, both could be manipulated simultaneously
    without having to change the hardware. In this way, he converted the
    "Turing machine" into the "universal Turing machine"; that is, the
    modern computer.[^83^](#c2-note-0083){#c2-note-0083a}
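
    The principle can be made concrete with a toy model -- a deliberately
    simplified sketch, not Turing\'s formalism in full. The machine below
    reads one sign at a time from a tape, manipulates it according to fixed
    rules, and writes it back; the rule table implements the increment of a
    binary number. That the table is itself ordinary data, which could be
    read and replaced like any other input, is precisely the idea behind
    von Neumann\'s architecture:

    ```python
    # Toy Turing machine: (state, symbol) -> (new symbol, head move, new state).
    # The rule table increments a binary number; the head starts at the
    # rightmost digit. The rules are plain data -- a stored program.
    RULES = {
        ("carry", "1"): ("0", -1, "carry"),  # 1 + carry = 0, carry moves left
        ("carry", "0"): ("1",  0, "halt"),   # absorb the carry and stop
        ("carry", " "): ("1",  0, "halt"),   # extend the tape on overflow
    }

    def run(tape: str) -> str:
        cells = list(" " + tape)        # one blank cell on the left for overflow
        head, state = len(cells) - 1, "carry"
        while state != "halt":
            symbol, move, state = RULES[(state, cells[head])]
            cells[head] = symbol
            head += move
        return "".join(cells).strip()

    assert run("1011") == "1100"        # 11 + 1 = 12
    ```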

    Gordon Moore, the co-founder of the chip manufacturer Intel,
    prognosticated 20 years later that the complexity of integrated circuits
    and thus the processing power of computer chips would double every 18 to
    24 months. Since the 1970s, his prediction has been known as Moore\'s
    Law and has essentially been correct. This technical development has
    indeed taken place exponentially, not least because the semiconductor
    industry has been oriented around
    it.[^84^](#c2-note-0084){#c2-note-0084a} An IBM 360/40 mainframe
    computer, which was one of the first of its kind to be produced on a
    large scale, could make approximately 40,000 calculations per second and
    its cost, when it was introduced to the market in 1965, was \$1.5
    million per unit. Just 40 years later, a standard server (with a
    quad-core Intel processor) could make more than 40 billion calculations
    per second, and this at a price of little more than \$1,500. This
    amounts to an increase in performance by a factor of a million and a
    corresponding price reduction by a factor of a thousand; that is, an
    improvement in the price-to-performance ratio by a factor of a billion.
    With inflation taken into consideration, this factor would be even
    higher. No less dramatic were the increases in performance -- or rather
    []{#Page_106 type="pagebreak" title="106"}the price reductions -- in the
    area of data storage. In 1980, it cost more than \$400,000 to store a
    gigabyte of data, whereas 30 years later it would cost just 10 cents to
    do the same -- a price reduction by a factor of 4 million. And in both
    areas, this development has continued without pause.
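
    The factors mentioned here follow directly from the cited figures; a
    short calculation makes them explicit:

    ```python
    # The performance and price factors implied by the figures in the text.
    ops_1965, ops_2005   = 40_000, 40_000_000_000       # calculations per second
    cost_1965, cost_2005 = 1_500_000, 1_500             # US dollars per unit

    performance_factor = ops_2005 // ops_1965           # 1,000,000
    price_factor       = cost_1965 // cost_2005         # 1,000
    price_performance  = performance_factor * price_factor   # 1,000,000,000

    storage_1980, storage_2010 = 40_000_000, 10         # cents per gigabyte
    storage_factor = storage_1980 // storage_2010       # 4,000,000
    ```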

    These increases in performance have formed the material basis for the
    rapidly growing number of activities carried out by means of algorithms.
    We have now reached a point where Leibniz\'s distinction between
    creative mental functions and "simple calculations" is becoming
    increasingly fuzzy. Recent discussions about the allegedly threatening
    "domination of the computer" have been kindled less by the increased use
    of algorithms as such than by the gradual blurring of this distinction,
    as ever more areas of creative thinking can be formalized and
    mechanized.[^85^](#c2-note-0085){#c2-note-0085a} Activities that
    not long ago were reserved for human intelligence, such as composing
    texts or analyzing the content of images, are now frequently done by
    machines. As early as 2010, a program called Stats Monkey was introduced
    to produce short reports about baseball games. All that the program
    needs for this is comprehensive data about the games, which can be
    accumulated mechanically and which have since become more detailed due
    to improved image recognition and sensors. From these data, the program
    extracts the decisive moments and players of a game, recognizes
    characteristic patterns throughout the course of play (such as
    "extending an early lead," "a dramatic comeback," etc.), and on this
    basis generates its own report. Regarding the reports themselves, a
    number of variables can be determined in advance, for instance whether
    the story should be written from the perspective of a neutral observer
    or from the standpoint of one of the two teams. If writing about little
    league games, the program can be instructed to ignore the errors made by
    children -- because no parent wants to read about those -- and simply
    focus on their heroics. The algorithm was soon patented, and a start-up
    business was created from the original interdisciplinary research
    project: Narrative Science. In addition to sport reports it now offers
    texts of all sorts, but above all financial reports -- another field for
    which there is a great deal of available data. These texts have been
    published by reputable media outlets such as the business magazine
    *Forbes*, in which their authorship []{#Page_107 type="pagebreak"
    title="107"}is credited to "Narrative Science." Although these
    contributions are still limited to relatively simple topics, this will
    not remain the case for long. When asked about the percentage of news
    that would be written by computers 15 years from now, Narrative
    Science\'s chief technology officer and co-founder Kristian Hammond
    confidently predicted "\[m\]ore than 90 percent." He added that, within
    the next five years, an algorithm could even win a Pulitzer
    Prize.[^86^](#c2-note-0086){#c2-note-0086a} This may be blatant hype and
    self-promotion but, as a general estimation, Hammond\'s assertion is not
    entirely beyond belief. It remains to be seen whether algorithms will
    replace or simply supplement traditional journalism. Yet because media
    companies are now under strong financial pressure, it is certainly
    reasonable to predict that many journalistic texts will be automated in
    the future. Entirely different applications, however, have also been
    conceived. Alexander Pschera, for instance, foresees a new age in the
    relationship between humans and nature, for, as soon as animals are
    equipped with transmitters and sensors and are thus able to tell their
    own stories through the appropriate software, they will be regarded as
    individuals and not merely as generic members of a
    species.[^87^](#c2-note-0087){#c2-note-0087a}
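
    The basic mechanics of such text generation -- as in the Stats Monkey
    example above -- can be imagined as pattern recognition plus template
    filling. The following sketch is purely hypothetical: the data fields,
    pattern labels, and phrasings are invented for illustration and do not
    reproduce Narrative Science\'s actual method:

    ```python
    # Hypothetical sketch of template-based report generation. All data
    # fields, pattern labels, and wording are invented for illustration.
    GAME = {"home": "Cubs", "away": "Sox", "star": "J. Romero",
            "home_runs": [0, 1, 4, 0], "away_runs": [2, 0, 0, 1]}

    def pattern(game: dict) -> str:
        """Recognize a characteristic pattern in the course of play."""
        home, away = sum(game["home_runs"]), sum(game["away_runs"])
        trailed_early = game["home_runs"][0] < game["away_runs"][0]
        if home > away and trailed_early:
            return "a dramatic comeback"
        return "a steady win" if home > away else "a defeat"

    def report(game: dict, perspective: str = "neutral") -> str:
        """Fill a template; the perspective is a preset variable."""
        home, away = sum(game["home_runs"]), sum(game["away_runs"])
        verb = "beat" if home > away else "lost to"
        opening = {
            "neutral": f"The {game['home']} {verb} the {game['away']}",
            "home": f"Our {game['home']} battled the {game['away']}",
        }[perspective]
        return f"{opening} {home}-{away} in {pattern(game)}, led by {game['star']}."

    print(report(GAME, perspective="home"))
    ```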

    We have not yet reached this point. However, given that the CIA has also
    expressed interest in Narrative Science and has invested in it through
    its venture-capital firm In-Q-Tel, there are indications that
    applications are being developed beyond the field of journalism. For the
    purpose of spreading propaganda, for instance, algorithms can easily be
    used to create a flood of entries on online forums and social mass
    media.[^88^](#c2-note-0088){#c2-note-0088a} Narrative Science is only
    one of many companies offering automated text analysis and production.
    As implemented by IBM and other firms, so-called e-discovery software
    promises to reduce dramatically the amount of time and effort required
    to analyze the constantly growing numbers of files that are relevant to
    complex legal cases. Without such software, it would be impossible in
    practice for lawyers to deal with so many documents. Numerous bots
    (automated editing programs) are active in the production of Wikipedia
    as well. Whereas, in the German edition, bots are forbidden from writing
    their own articles, this is not the case in the Swedish version.
    Measured by the number of entries, the latter is now the second-largest
    edition of the online encyclopedia in the []{#Page_108 type="pagebreak"
    title="108"}world, for, in the summer of 2013, a single bot contributed
    more than 200,000 articles to it.[^89^](#c2-note-0089){#c2-note-0089a}
    Since 2013, moreover, the company Epagogix has offered software that
    uses histor­ical data to evaluate the market potential of film scripts.
    At least one major Hollywood studio uses this software behind the backs
    of scriptwriters and directors, for, according to the company\'s CEO,
    the latter would be "nervous" to learn that their creative work was
    being analyzed in such a way.[^90^](#c2-note-0090){#c2-note-0090a}
    Think, too, of the typical statement that is made at the beginning of a
    call to a telephone hotline -- "This call may be recorded for training
    purposes." Increasingly, this training is not intended for the employees
    of the call center but rather for algorithms. The latter are expected to
    learn how to recognize the personality type of the caller and, on that
    basis, to produce an appropriate script to be read by its poorly
    educated and part-time human
    co-workers.[^91^](#c2-note-0091){#c2-note-0091a} Another example is the
    use of algorithms to grade student
    essays,[^92^](#c2-note-0092){#c2-note-0092a} or ... But there is no need
    to expand this list any further. Even without additional references to
    comparable developments in the fields of image, sound, language, and
    film analysis, it is clear by now that, on many fronts, the borders
    between the creative and the mechanical have
    shifted.[^93^](#c2-note-0093){#c2-note-0093a}
    :::

    ::: {.section}
    ### Dynamic algorithms {#c2-sec-0021}

    The algorithms used for such tasks, however, are no longer simple
    sequences of static instructions. They are no longer repeated unchanged,
    over and over again, but are dynamic and adaptive to a high degree. The
    computing power available today is used to write programs that modify
    and improve themselves semi-automatically and in response to feedback.

    What this means can be illustrated by the example of evolutionary and
    self-learning algorithms. An evolutionary algorithm is developed in an
    iterative process that continues to run until the desired result has
    been achieved. In most cases, the values of the variables of the first
    generation of algorithms are chosen at random in order to diminish the
    influence of the programmer\'s presuppositions on the results. These
    cannot be avoided entirely, however, because the type of variables
    (independent of their value) has to be determined in the first place. I
    will return to this problem later on. This is []{#Page_109
    type="pagebreak" title="109"}followed by a phase of evaluation: the
    output of every tested algorithm is evaluated according to how close it
    is to the desired solution. The best are then chosen and combined with
    one another. In addition, mutations (that is, random changes) are
    introduced. These steps are then repeated as often as necessary until,
    according to the specifications in question, the algorithm is
    "sufficient" or cannot be improved any further. By means of intensive
    computational processes, algorithms are thus "cultivated"; that is,
    large numbers of these are tested instead of a single one being designed
    analytically and then implemented. At the heart of this pursuit is a
    functional solution that proves itself experimentally and in practice,
    but about which it might no longer be possible to know why it functions
    or whether it actually is the best possible solution. The fundamental
    methods behind this process largely derive from the 1970s (the first
    stage of artificial intelligence), the difference being that today they
    can be carried out far more effectively. One of the best-known examples
    of an evolutionary algorithm is that of Google Flu Trends. In order to
    predict which regions will be especially struck by the flu in a given
    year, it evaluates the geographic distribution of internet searches for
    particular terms ("cold remedies," for instance). To develop the
    program, Google tested 450 million different models until one emerged
    that could reliably identify local flu epidemics one to two weeks ahead
    of the national health authorities.[^94^](#c2-note-0094){#c2-note-0094a}
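
    Reduced to its essentials, the loop described above -- random
    initialization, evaluation, selection, recombination, mutation -- fits
    into a few lines. The sketch below "cultivates" a vector of numbers
    toward a known target; the fitness function, population size, and
    mutation rate are arbitrary assumptions, and the example has nothing to
    do with Google\'s actual flu model:

    ```python
    import random

    TARGET = [3.0, -1.5, 2.2]              # stands in for the desired output

    def fitness(candidate: list) -> float:
        """Closeness to the desired solution (higher is better)."""
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    def evolve(pop_size: int = 50, generations: int = 200, rate: float = 0.1):
        # First generation: random values, to limit the influence of the
        # programmer's presuppositions on the result.
        population = [[random.uniform(-10, 10) for _ in TARGET]
                      for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            best = population[: pop_size // 5]            # selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(best, 2)             # recombination
                child = [random.choice(genes) for genes in zip(a, b)]
                child = [g + random.gauss(0, rate) for g in child]  # mutation
                children.append(child)
            population = children
        return max(population, key=fitness)

    print(evolve())   # approaches TARGET without being designed analytically
    ```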

    In pursuits of this magnitude, the necessary processes can only be
    administered by computer programs. The series of tests are no longer
    conducted by programmers but rather by algorithms. In short, algorithms
    are implemented in order to write new algorithms or determine their
    variables. If this reflexive process, in turn, is built into an
    algorithm, then the latter becomes "self-learning": the programmers do
    not set the rules for its execution but rather the rules according to
    which the algorithm is supposed to know how to accomplish a particular
    goal. In many cases, the solution strategies are so complex that they
    are incomprehensible in retrospect. They can no longer be tested
    logically, only experimentally. Such algorithms are essentially black
    boxes -- objects that can only be understood by their outer behavior but
    whose internal structure cannot be known.[]{#Page_110 type="pagebreak"
    title="110"}

    Automatic facial recognition, as used in surveillance technologies and
    for authorizing access to certain things, is based on the fact that
    computers can evaluate large numbers of facial images, first to produce
    a general model for a face, then to identify the variables that make a
    face unique and therefore recognizable. With so-called "unsupervised" or
    "deep-learning" algorithms, some developers and companies have even
    taken this a step further: computers are expected to extract faces from
    unstructured images -- that is, from volumes of images that contain
    images both with faces and without them -- and to do so without
    possessing in advance any model of the face in question. So far, the
    extraction and evaluation of unknown patterns from unstructured material
    has only been achieved in the case of very simple patterns -- with edges
    or surfaces in images, for instance -- for it is extremely complex and
    computationally intensive to program such learning processes. In recent
    years, however, there have been enormous leaps in available computing
    power, and both the data inputs and the complexity of the learning
    models have increased exponentially. Today, on the basis of simple
    patterns, algorithms are developing improved recognition of the complex
    content of images. They are refining themselves on their own. The term
    "deep learning" is meant to denote this very complexity. In 2012, Google
    was able to demonstrate the performance capacity of its new programs in
    an impressive manner: from a collection of randomly chosen YouTube
    videos, analyzed in a cluster by 1,000 computers with 16,000 processors,
    it was possible to create a model in just three days that increased
    facial recognition in unstructured images by 70
    percent.[^95^](#c2-note-0095){#c2-note-0095a} Of course, the algorithm
    does not "know" what a face is, but it reliably recognizes a class of
    forms that humans refer to as a face. One advantage of a model that is
    not created on the basis of prescribed parameters is that it can also
    identify faces in non-standard situations (for instance if a person is
    in the background, if a face is half-concealed, or if it has been
    recorded at a sharp angle). Thanks to this technique, it is possible to
    search the content of images directly and not, as before, primarily by
    searching their descriptions. Such algorithms are also being used to
    identify people in images and to connect them in social networks with
    the profiles of the people in question, and this []{#Page_111
    type="pagebreak" title="111"}without any cooperation from the users
    themselves. Such algorithms are also expected to assist in directly
    controlling activity in "unstructured" reality, for instance in
    self-driving cars or other autonomous mobile applications that are of
    great interest to the military in particular.
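
    Deep learning itself is far more complex, but the basic idea of
    extracting patterns from unlabeled material can be hinted at with the
    simplest unsupervised method, k-means clustering. In the following
    sketch, all data and parameters are invented: two groups are recovered
    from a mixture of points without any prior model of what the groups are.

    ```python
    import random

    def kmeans(points: list, k: int = 2, iterations: int = 20) -> list:
        """Group unlabeled points around k centers; structure emerges
        without any prior model of the groups."""
        centers = random.sample(points, k)
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:              # assign each point to its nearest center
                i = min(range(k),
                        key=lambda j: (p[0] - centers[j][0]) ** 2
                                      + (p[1] - centers[j][1]) ** 2)
                clusters[i].append(p)
            for i, cluster in enumerate(clusters):   # move centers to the mean
                if cluster:
                    centers[i] = (sum(p[0] for p in cluster) / len(cluster),
                                  sum(p[1] for p in cluster) / len(cluster))
        return centers

    # Two invented "patterns" mixed together, with no labels attached.
    data = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
            + [(random.gauss(8, 1), random.gauss(8, 1)) for _ in range(100)])
    print(kmeans(data))   # recovers two centers near (0, 0) and (8, 8)
    ```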

    Algorithms of this sort can react and adjust themselves directly to
    changes in the environment. This feedback, however, also shortens the
    timeframe within which they are able to generate repetitive and
    therefore predictable results. Thus, algorithms and their predictive
    powers can themselves become unpredictable. Stock markets have
    frequently experienced so-called "sub-second extreme events"; that is,
    price fluctuations that happen in less than a
    second.[^96^](#c2-note-0096){#c2-note-0096a} Even dramatic "flash
    crashes" perceptible to humans, such as that of May 6, 2010, when the
    Dow Jones Index dropped almost a thousand points within a few minutes,
    have not been terribly
    uncommon.[^97^](#c2-note-0097){#c2-note-0097a} With the introduction of
    voice commands on mobile phones (Apple\'s Siri, for example, which came
    out in 2011), programs based on self-learning algorithms have now
    reached the public at large and have infiltrated increased areas of
    everyday life.
    :::

    ::: {.section}
    ### Sorting, ordering, extracting {#c2-sec-0022}

    Orders generated by algorithms are a constitutive element of the digital
    condition. On the one hand, the mechanical pre-sorting of the
    (informational) world is a precondition for managing immense and
    unstructured amounts of data. On the other hand, these large amounts of
    data and the computing centers in which they are stored and processed
    provide the material precondition for developing increasingly complex
    algorithms. Necessities and possibilities are mutually motivating one
    another.[^98^](#c2-note-0098){#c2-note-0098a}

    Perhaps the best-known algorithms that sort the digital infosphere and
    make it usable in its present form are those of search engines, above
    all Google\'s PageRank. Thanks to these, we can find our way around in a
    world of unstructured information and transfer ever larger parts
    of the (informational) world into the order of unstructuredness without
    giving rise to the "Library of Babel." Here, "unstructured" means that
    there is no prescribed order such as (to stick []{#Page_112
    type="pagebreak" title="112"}with the image of the library) a cataloging
    system that assigns to each book a specific place on a shelf. Rather,
    the books are spread all over the place and are dynamically arranged,
    each according to a search, so that the appropriate books for each
    visitor are always standing ready at the entrance. Yet the metaphor of
    books being strewn all about is problematic, for "unstructuredness" does
    not simply mean the absence of any structure but rather the presence of
    another type of order -- a meta-structure, a potential for order -- out
    of which innumerable specific arrangements can be generated on an ad hoc
    basis. This meta-structure is created by algorithms. They subsequently
    derive from it an actual order, which the user encounters, for instance,
    when he or she scrolls through a list of hits produced by a search
    engine. What the user does not see are the complex preconditions for
    assembling the search results. By the middle of 2014, according to the
    company\'s own information, the Google index alone included more than a
    hundred million gigabytes of data.

    Originally (that is, in the second half of the 1990s), PageRank
    functioned in such a way that the algorithm analyzed the structure of
    links on the World Wide Web, first by noting the number of links that
    referred to a given document, and second by evaluating the "relevance"
    of the site that linked to the document in question. The relevance of a
    site, in turn, was determined by the number of links that led to it.
    From these two variables, every document registered by the search engine
    was assigned a value, the PageRank. The latter served to present the
    documents found with a given search term as a hierarchical list (search
    results), whereby the document with the highest value was listed
    first.[^99^](#c2-note-0099){#c2-note-0099a} This algorithm was extremely
    successful because it reduced the unfathomable chaos of the World Wide
    Web to a task that could be managed without difficulty by an individual
    user: inputting a search term and selecting from one of the presented
    "hits." The simplicity of the user\'s final choice, together with the
    quality of the algorithmic pre-selection, quickly pushed Google past its
    competition.
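
    The logic of this early version can be stated compactly: a document\'s
    value is the sum of the values of the documents linking to it, each
    divided by the number of their outgoing links, plus a damping term. The
    following minimal sketch runs over a hypothetical four-page web; the
    link graph and the damping factor of 0.85 are textbook conventions, not
    Google\'s production system:

    ```python
    # Minimal PageRank by power iteration over a hypothetical link graph.
    LINKS = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    DAMPING = 0.85

    def pagerank(links: dict, iterations: int = 50) -> dict:
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}      # uniform start
        for _ in range(iterations):
            new = {p: (1 - DAMPING) / len(pages) for p in pages}
            for page, outgoing in links.items():
                share = rank[page] / len(outgoing)       # rank is split over links
                for target in outgoing:
                    new[target] += DAMPING * share       # a link confers relevance
            rank = new
        return rank

    print(sorted(pagerank(LINKS).items(), key=lambda kv: -kv[1]))
    ```

    Page "C" ends up on top because the most, and the most highly valued,
    links refer to it; no knowledge of any page\'s content is required.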

    Underlying this process is the assumption that every link is an
    indication of relevance, and that links from frequently linked (that is,
    popular) sources are more important than those from less frequently
    linked (that is, unpopular) sources. []{#Page_113 type="pagebreak"
    title="113"}The advantage of this assumption is that it can be
    understood in terms of purely quantitative variables and it is not
    necessary to have any direct understanding of a document\'s content or
    of the context in which it exists.

    In the middle of the 1990s, when the first version of the PageRank
    algorithm was developed, the problem of judging the relevance of
    documents whose content could only partially be evaluated was not a new
    one. Science administrators at universities and funding agencies had
    been facing this difficulty since the 1950s. During the rise of the
    knowledge economy, the number of scientific publications increased
    rapidly. Scientific fields, perspectives, and methods also multiplied
    and diversified during this time, so that even experts could not survey
    all of the work being done in their own areas of
    research.[^100^](#c2-note-0100){#c2-note-0100a} Thus, instead of reading
    and evaluating the content of countless new publications, they shifted
    their analysis to a higher level of abstraction. They began to count how
    often an article or book was cited and applied this information to
    assess the value of a given author or
    publication.[^101^](#c2-note-0101){#c2-note-0101a} The underlying
    assumption was (and remains) that only important things are referenced,
    and therefore every citation and every reference can be regarded as an
    indirect vote for something\'s relevance.

    In both cases -- classifying a chaotic sphere of information and
    administering an expanding industry of knowledge -- the challenge is to
    develop dynamic orders for rapidly changing fields, enabling the
    evaluation of the importance of individual documents without knowledge
    of their content. Because the analysis of citations or links operates on
    a purely quantitative basis, large amounts of data can be quickly
    structured with them, and especially relevant positions can be
    determined. The second advantage of this approach is that it does not
    require any assumptions about the contours of different fields or their
    relationships to one another. This enables the organization of
    disordered or dynamic content. In both cases, references made by the
    actors themselves are used: citations in a scientific text, links on
    websites. Their value for establishing the order of a field as a whole,
    however, is only visible in the aggregate, for instance in the frequency
    with which a given article is
    cited.[^102^](#c2-note-0102){#c2-note-0102a} In both cases, the shift
    from analyzing "data" (the content of documents in the traditional
    sense) to []{#Page_114 type="pagebreak" title="114"}analyzing
    "meta-data" (describing documents in light of their relationships to one
    another) is a precondition for being able to make any use at all of
    growing amounts of information.[^103^](#c2-note-0103){#c2-note-0103a}
    This shift introduced a new level of abstraction. Information is no
    longer understood as a representation of external reality; its
    significance is not evaluated with regard to the relation between
    "information" and "the world," for instance with a qualitative criterion
    such as "true"/"false." Rather, the sphere of information is treated as
    a self-referential, closed world, and documents are accordingly only
    evaluated in terms of their position within this world, though with
    quantitative criteria such as "central"/"peripheral."

    Even though the PageRank algorithm was highly effective and assisted
    Google\'s rapid ascent to a market-leading position, at the beginning it
    was still relatively simple and its mode of operation was at least
    partially transparent. It followed the classical statistical model of an
    algorithm. A document or site referred to by many links was considered
    more important than one to which fewer links
    referred.[^104^](#c2-note-0104){#c2-note-0104a} The algorithm analyzed
    the given structural order of information and determined the position of
    every document therein, and this was largely done independently of the
    context of the search and without making any assumptions about it. This
    approach functioned relatively well as long as the volume of information
    did not exceed a certain size, and as long as the users and their
    searches were somewhat similar to one another. In both respects, this is
    no longer the case. The amount of information to be pre-sorted is
    increasing, and users are searching in all possible situations and
    places for everything under the sun. At the time Google was founded, no
    one would have thought to check the internet, quickly and while on
    one\'s way, for today\'s menu at the restaurant round the corner. Now,
    thanks to smartphones, this is an obvious thing to do.
    :::

    ::: {.section}
    ### Algorithm clouds {#c2-sec-0023}

    In order to react to such changes in user behavior -- and simultaneously
    to advance it further -- Google\'s search algorithm is constantly being
    modified. It has become increasingly complex and has assimilated a
    greater amount of contextual []{#Page_115 type="pagebreak"
    title="115"}information, which influences the value of a site within
    PageRank and thus the order of search results. The algorithm is no
    longer a fixed object or unchanging recipe but is transforming into a
    dynamic process, an opaque cloud composed of multiple interacting
    algorithms that are continuously refined (between 500 and 600 times a
    year, according to some estimates). These ongoing developments are so
    extensive that, since 2003, several new versions of the algorithm cloud
    have appeared each year with their own names. In 2014 alone, Google
    carried out 13 large updates, more than ever
    before.[^105^](#c2-note-0105){#c2-note-0105a}

    These changes continue to bring about new levels of abstraction, so that
    the algorithm takes into account additional variables such as the time
    and place of a search, alongside a person\'s previously recorded
    behavior -- but also his or her involvement in social environments, and
    much more. Personalization and contextualization were made part of
    Google\'s search algorithm in 2005. At first it was possible to choose
    whether or not to use these. Since 2009, however, they have been a fixed
    and binding component for everyone who conducts a search through
    Google.[^106^](#c2-note-0106){#c2-note-0106a} By the middle of 2013, the
    search algorithm had grown to include at least 200
    variables.[^107^](#c2-note-0107){#c2-note-0107a} What is relevant is
    that the algorithm no longer determines the position of a document
    within a dynamic informational world that exists for everyone
    externally. Instead, it now assigns a rank to its content within a
    dynamic and singular universe of information that is tailored to every
    individual user. For every person, an entirely different order is
    created instead of just an excerpt from a previously existing order. The
    world is no longer being represented; it is generated uniquely for every
    user and then presented. Google is not the only company that has gone
    down this path. Orders produced by algorithms have become increasingly
    oriented toward creating, for each user, his or her own singular world.
    Facebook, dating services, and other social mass media have been
    pursuing this approach even more radically than Google.
    :::

    ::: {.section}
    ### From the data shadow to the synthetic profile {#c2-sec-0024}

    This form of generating the world requires not only detailed information
    about the external world (that is, the reality []{#Page_116
    type="pagebreak" title="116"}shared by everyone) but also information
    about every individual\'s own relation to the
    latter.[^108^](#c2-note-0108){#c2-note-0108a} To this end, profiles are
    established for every user, and the more extensive they are, the better
    they are for the algorithms. A profile created by Google, for instance,
    identifies the user on three levels: as a "knowledgeable person" who is
    informed about the world (this is established, for example, by recording
    a person\'s searches, browsing behavior, etc.), as a "physical person"
    who is located and mobile in the world (a component established, for
    example, by tracking someone\'s location through a smartphone, sensors
    in a smart home, or body signals), and as a "social person" who
    interacts with other people (a facet that can be determined, for
    instance, by following someone\'s activity on social mass
    media).[^109^](#c2-note-0109){#c2-note-0109a}
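
    These three levels can be pictured as a simple data structure. The field
    names below are invented shorthand for the categories just described,
    not Google\'s internal schema:

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        """Hypothetical three-level user profile (invented field names)."""
        # "knowledgeable person": informed about the world
        searches: list = field(default_factory=list)
        browsing: list = field(default_factory=list)
        # "physical person": located and mobile in the world
        locations: list = field(default_factory=list)
        # "social person": interacting with other people
        contacts: list = field(default_factory=list)
    ```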

    Unlike the situation in the 1990s, however, these profiles are no longer
    simply representations of singular people -- they are not "digital
    personas" or "data shadows." They no longer represent what is
    conventionally referred to as "individuality," in the sense of a
    spatially and temporally uniform identity. On the one hand, profiles
    rather consist of sub-individual elements -- of fragments of recorded
    behavior that can be evaluated on the basis of a particular search
    without promising to represent a person as a whole -- and they consist,
    on the other hand, of clusters of multiple people, so that the person
    being modeled can simultaneously occupy different positions in time.
    This temporal differentiation enables predictions of the following sort
    to be made: a person who has already done *x* will, with a probability
    of *y*, go on to engage in activity *z*. It is in this way that Amazon
    assembles its book recommendations, for the company knows that, within
    the cluster of people that constitutes part of every person\'s profile,
    a certain percentage of them have already gone through this sequence of
    activity. Or, as the data-mining company Science Rockstars (!) once
    pointedly expressed on its website, "Your next activity is a function of
    the behavior of others and your own past."
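
    The prediction that a person who has done *x* will go on to do *z* with
    probability *y* can be estimated from nothing more than co-occurrences
    across many profiles. A minimal sketch with invented activity histories:

    ```python
    # Invented histories; each set is the recorded activity of one user.
    HISTORIES = [{"x", "z"}, {"x", "z"}, {"x"}, {"x", "w"}, {"z"}]

    def probability(x: str, z: str, histories: list) -> float:
        """P(z | x): among users who have done x, the share who also did z."""
        did_x = [h for h in histories if x in h]
        return sum(z in h for h in did_x) / len(did_x)

    print(probability("x", "z", HISTORIES))   # 0.5: half of the x-cluster did z
    ```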

    Google and other providers of algorithmically generated orders have been
    devoting increased resources to the prognostic capabilities of their
    programs in order to make the confusing and potentially time-consuming
    step of the search obsolete. The goal is to minimize a rift that comes
    to light []{#Page_117 type="pagebreak" title="117"}in the act of
    searching, namely that between the world as everyone experiences it --
    plagued by uncertainty, for searching implies "not knowing something" --
    and the world of algorithmically generated order, in which certainty
    prevails, for everything has been well arranged in advance. Ideally,
    questions should be answered before they are asked. The first attempt by
    Google to eliminate this rift is called Google Now, and its slogan is
    "The right information at just the right time." The program, which was
    originally developed as an app but has since been made available on
    Chrome, Google\'s own web browser, attempts to anticipate, on the basis
    of existing data, a user\'s next step, and to provide the necessary
    information before it is searched for in order that such steps take
    place efficiently. Thus, for instance, it draws upon information from a
    user\'s calendar in order to figure out where he or she will have to go
    next. On the basis of real-time traffic data, it will then suggest the
    optimal way to get there. For those driving cars, the amount of traffic
    on the road will be part of the equation. This is ascertained by
    analyzing the motion profiles of other drivers, which will allow the
    program to determine whether the traffic is flowing or stuck in a jam.
    If enough historical data is taken into account, the hope is that it
    will be possible to redirect cars in such a way that traffic jams should
    no longer occur.[^110^](#c2-note-0110){#c2-note-0110a} For those who use
    public transport, Google Now evaluates real-time data about the
    locations of various transport services. With this information, it will
    suggest the optimal route and, depending on the calculated travel time,
    it will send a reminder (sometimes earlier, sometimes later) when it is
    time to go. That which Google is just experimenting with and testing in
    a limited and unambiguous context is already part of Facebook\'s
    everyday operations. With its EdgeRank algorithm, Facebook already
    organizes everyone\'s newsfeed, entirely in the background and without
    any explicit user interaction. On the basis of three variables -- user
    affinity (previous interactions between two users), content weight (the
    rate of interaction between all users and a specific piece of content),
    and currency (the age of a post) -- the algorithm selects content from
    the status updates made by one\'s friends to be displayed on one\'s own
    page.[^111^](#c2-note-0111){#c2-note-0111a} In this way, Facebook
    ensures that the stream of updates remains easy to scroll through, while
    also -- it is safe []{#Page_118 type="pagebreak" title="118"}to assume
    -- leaving enough room for advertising. This potential for manipulation,
    which algorithms possess as they work away in the background, will be
    the topic of my next section.
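
    Before moving on, a score built from exactly these three variables
    might be combined as follows. The multiplication and the decay function
    are my own assumptions, since the algorithm has never been fully
    published:

    ```python
    import math

    def edge_score(affinity: float, weight: float, age_hours: float) -> float:
        """Hypothetical EdgeRank-style score: interactions between two users
        (affinity) times the content's interaction rate (weight), discounted
        by the age of the post (recency)."""
        decay = math.exp(-age_hours / 24.0)   # assumed decay: roughly per day
        return affinity * weight * decay

    posts = [("older post by a close friend", edge_score(0.9, 0.5, 48)),
             ("fresh post with much interaction", edge_score(0.2, 0.9, 1))]
    posts.sort(key=lambda kv: -kv[1])         # the newsfeed order
    print(posts)
    ```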
    :::

    ::: {.section}
    ### Variables and correlations {#c2-sec-0025}

    Every complex algorithm contains a multitude of variables and usually an
    even greater number of ways to make connections between them. Every
    variable and every relation, even if they are expressed in technical or
    mathematical terms, codifies assumptions that express a specific
    position in the world. There can be no purely descriptive variables,
    just as there can be no such thing as "raw
    data."[^112^](#c2-note-0112){#c2-note-0112a} Both -- data and variables
    -- are always already "cooked"; that is, they are engendered through
    cultural operations and formed within cultural
    categories.[^113^](#c2-note-0113){#c2-note-0113a} With every use of
    produced data and with every execution of an algorithm, the assumptions
    embedded in them are activated, and the positions contained within them
    have effects on the world that the algorithm generates and presents.

    As already mentioned, the early version of the PageRank algorithm was
    essentially based on the rather simple assumption that frequently linked
    content is more relevant than content that is only seldom linked to, and
    that links to sites that are themselves frequently linked to should be
    given more weight than those found on sites with fewer links to them.
    Replacing the qualitative criterion of "relevance" with the quantitative
    criterion of "popularity" not only proved to be tremendously practical
    but also extremely consequential, for search engines not only describe
    the world; they create it as well. That which search engines put at the
    top of their lists is not just already popular but will remain so. A third
    of all users click on the first search result, and around 95 percent do
    not look past the first 10.[^114^](#c2-note-0114){#c2-note-0114a} Even
    the earliest version of the PageRank algorithm did not represent
    existing reality but rather (co-)constituted it.

    Popularity, however, is not the only element with which algorithms
    actively give shape to the user\'s world. A search engine can only sort,
    weigh, and make available that portion of information which has already
    been incorporated into its index. Everything else remains invisible. The
    relation between []{#Page_119 type="pagebreak" title="119"}the recorded
    part of the internet (the "surface web") and the unrecorded part (the
    "deep web") is difficult to determine. Estimates have varied between
    ratios of 1:5 and 1:500.[^115^](#c2-note-0115){#c2-note-0115a} There are
    many reasons why content might be inaccessible to search engines.
    Perhaps the information has been saved in formats that search engines
    cannot read or can only poorly read, or perhaps it has been hidden
    behind proprietary barriers such as paywalls. In order to expand the
    realm of things that can be exploited by their algorithms, the operators
    of search engines offer extensive guidance about how providers should
    design their sites so that search tools can find them in an optimal
    manner. It is not necessary to follow this guidance, but given the
    central role of search engines in sorting and filtering information, it
    is clear that they exercise a great deal of power by setting the
    standards.[^116^](#c2-note-0116){#c2-note-0116a}
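
    One long-standing piece of such guidance is the robots.txt
    convention, through which a site operator signals to crawlers what
    they may record. The following sketch uses Python's standard-library
    parser on an invented set of rules:

    ```python
    from urllib.robotparser import RobotFileParser

    # robots.txt is one convention through which a site is pre-formatted
    # for algorithmic recording: it tells crawlers what may be indexed.
    # The rules below are invented for illustration.

    rules = """\
    User-agent: *
    Disallow: /private/
    Allow: /
    """.splitlines()

    rp = RobotFileParser()
    rp.parse(rules)

    print(rp.can_fetch("*", "https://example.org/public/page"))   # True
    print(rp.can_fetch("*", "https://example.org/private/page"))  # False
    ```

    Nobody enforces such conventions directly; observing them is simply
    the precondition for being recorded well -- or at all.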

    That the individual must "voluntarily" submit to this authority is
    typical of the power of networks, which do not give instructions but
    rather constitute preconditions. Yet it is in the interest of (almost)
    every producer of information to optimize its position in a search
    engine\'s index, and thus there is a strong incentive to accept the
    preconditions in question. Considering, moreover, the nearly
    monopolistic character of many providers of algorithmically generated
    orders and the high price that one would have to pay if one\'s own site
    were barely (or not at all) visible to others, the term "voluntary"
    begins to take on a rather foul taste. This is a more or less subtle way
    of pre-formatting the world so that it can be optimally recorded by
    algorithms.[^117^](#c2-note-0117){#c2-note-0117a}

    The providers of search engines usually justify such methods in the name
    of offering "more efficient" services and "more relevant" results.
    Ostensibly technical and neutral terms such as "efficiency" and
    "relevance" do little, however, to conceal the political nature of
    defining variables. Efficient with respect to what? Relevant for whom?
    These are issues that are decided without much discussion by the
    developers and institutions that regard the algorithms as their own
    property. Every now and again such questions incite public debates,
    mostly when the interests of one provider happen to collide with those
    of its competition. Thus, for instance, the initiative known as
    FairSearch has argued that Google abuses its market power as a search
    engine to privilege its []{#Page_120 type="pagebreak" title="120"}own
    content and thus to showcase it prominently in search
    results.[^118^](#c2-note-0118){#c2-note-0118a} FairSearch\'s
    representatives alleged, for example, that Google favors its own map
    service in the case of address searches and its own price comparison
    service in the case of product searches. The argument had an effect. In
    November of 2010, the European Commission initiated an antitrust
    investigation against Google. In 2014, a settlement was proposed that
    would have required the American internet giant to pay certain
    concessions, but the members of the Commission, the EU Parliament, and
    consumer protection agencies were not satisfied with the agreement. In
    April 2015, the antitrust proceedings were recommenced by a newly
    appointed Commission, its reasoning being that "Google does not apply to
    its own comparison shopping service the system of penalties which it
    applies to other comparison shopping services on the basis of defined
    parameters, and which can lead to the lowering of the rank in which they
    appear in Google\'s general search results
    pages."[^119^](#c2-note-0119){#c2-note-0119a} In other words, the
    Commission accused the company of manipulating search results to its own
    advantage and the disadvantage of users.

    This is not the only instance in which the political side of search
    algorithms has come under public scrutiny. In the summer of 2012, Google
    announced that sites with higher numbers of copyright removal notices
    would henceforth appear lower in its
    rankings.[^120^](#c2-note-0120){#c2-note-0120a} The company thus
    introduced explicitly political and economic criteria in order to
    influence what, according to the standards of certain powerful players
    (such as film studios), users were able to
    view.[^121^](#c2-note-0121){#c2-note-0121a} In this case, too, it would
    be possible to speak of the personalization of searching, except that
    the heart of the situation was not the natural person of the user but
    rather the juridical person of the copyright holder. It was according to
    the latter\'s interests and preferences that searching was being
    reoriented. Amazon has employed similar tactics. In 2014, the online
    merchant changed its celebrated recommendation algorithm with the goal
    of reducing the presence of books released by irritating publishers that
    dared to enter into price negotiations with the
    company.[^122^](#c2-note-0122){#c2-note-0122a}

    Controversies over the methods of Amazon or Google, however, are the
    exception rather than the rule. Necessary (but never neutral) decisions
    about recording and evaluating data []{#Page_121 type="pagebreak"
    title="121"}with algorithms are being made almost all the time without
    any discussion whatsoever. The logic of the original PageRank algorithm
    was criticized as early as the year 2000 for essentially representing
    the commercial logic of mass media, systematically disadvantaging
    less-popular though perhaps otherwise relevant information, and thus
    undermining the "substantive vision of the web as an inclusive
    democratic space."[^123^](#c2-note-0123){#c2-note-0123a} The changes to
    the search algorithm that have been adopted since then may have modified
    this tendency, but they have certainly not weakened it. In addition to
    concentrating on what is popular, the new variables privilege recently
    uploaded and constantly updated content. The selection of search results
    is now contingent upon the location of the user, and it takes into
    account his or her social networking. It is oriented toward the average
    of a dynamically modeled group. In other words, Google\'s new algorithm
    favors that which is gaining popularity within a user\'s social network.
    The global village is thus becoming more and more
    provincial.[^124^](#c2-note-0124){#c2-note-0124a}
    :::

    ::: {.section}
    ### Data behaviorism {#c2-sec-0026}

    Algorithms such as Google\'s thus reiterate and reinforce a tendency
    that has already been apparent on both the level of individual users and
    that of communal formations: in order to deal with the vast amounts and
    complexity of information, they direct their gaze inward, which is not
    to say toward the inner being of individual people. As a level of
    reference, the individual person -- with an interior world and with
    ideas, dreams, and wishes -- is irrelevant. For algorithms, people are
    black boxes that can only be understood in terms of their reactions to
    stimuli. Consciousness, perception, and intention do not play any role
    for them. In this regard, the legal philosopher Antoinette Rouvroy has
    written about "data behaviorism."[^125^](#c2-note-0125){#c2-note-0125a}
    With this, she is referring to the gradual return of a long-discredited
    approach to behavioral psychology which postulated that human behavior
    could be explained, predicted, and controlled purely on the basis of
    outwardly observable and measurable actions.[^126^](#c2-note-0126){#c2-note-0126a}
    Psychological dimensions were ignored (and are ignored in this new
    version of behaviorism) because it is difficult to observe them
    empirically. Accordingly, this approach also did away with the need
    []{#Page_122 type="pagebreak" title="122"}to question people directly or
    take into account their subjective experiences, thoughts, and feelings.
    People were regarded (and are so again today) as unreliable, as poor
    judges of themselves, and as only partly honest when disclosing
    information. Any strictly empirical science, or so the thinking went,
    required its practitioners to disregard everything that did not result
    in physical and observable action. From this perspective, it was
    possible to break down even complex behavior into units of stimulus and
    reaction. This led to the conviction that someone observing another\'s
    activity always knows more than the latter does about himself or herself,
    for, unlike the person being observed, whose impressions can be
    inaccurate, the observer is in command of objective and complete
    information. Even early on, this approach faced a wave of critique. It
    was held to be mechanistic, reductionist, and authoritarian because it
    privileged the observing scientist over the subject. In practice, it
    quickly ran into its own limitations: it was simply too expensive and
    complicated to gather data about human behavior.

    Yet that has changed radically in recent years. It is now possible to
    measure ever more activities, conditions, and contexts empirically.
    Algorithms like Google\'s or Amazon\'s form the technical backdrop for
    the revival of a mechanistic, reductionist, and authoritarian approach
    that has resurrected the long-lost dream of an objective view -- the
    view from nowhere.[^127^](#c2-note-0127){#c2-note-0127a} Every critique
    of this positivistic perspective -- that every measurement result, for
    instance, reflects not only the measured but also the measurer -- is
    brushed aside with reference to the sheer amounts of data that are now
    at our disposal.[^128^](#c2-note-0128){#c2-note-0128a} This attitude
    substantiates the claim of those in possession of these new and
    comprehensive powers of observation (which, in addition to Google and
    Facebook, also includes the intelligence services of Western nations),
    namely that they know more about individuals than individuals know about
    themselves, and are thus able to answer our questions before we ask
    them. As mentioned above, this is a goal that Google expressly hopes to
    achieve.

    At issue with this "inward turn" is thus the space of communal
    formations, which is constituted by the sum of all of the activities of
    their interacting participants. In this case, however, a communal
    formation is not consciously created []{#Page_123 type="pagebreak"
    title="123"}and maintained in a horizontal process, but rather
    synthetically constructed as a computational function. Depending on the
    context and the need, individuals can either be assigned to this
    function or removed from it. All of this happens behind the user\'s back
    and in accordance with the goals and positions that are relevant to the
    developers of a given algorithm, be it to optimize profit or
    surveillance, create social norms, improve services, or whatever else.
    The results generated in this way are sold to users as a personalized
    and efficient service that provides a quasi-magical product. Out of the
    enormous haystack of searchable information, results are generated that
    are made to seem like the very needle that we have been looking for. At
    best, it is only partially transparent how these results came about and
    which positions in the world are strengthened or weakened by them. Yet,
    as long as the needle is somewhat functional, most users are content,
    and the algorithm registers this contentedness to validate itself. In
    this dynamic world of unmanageable complexity, users are guided by a
    sort of radical, short-term pragmatism. They are happy to have the world
    pre-sorted for them in order to improve their activity in it. Regarding
    the matter of whether the information being provided represents the
    world accurately or not, they are unable to formulate an adequate
    assessment for themselves, for it is ultimately impossible to answer
    this question without certain resources. Outside of rapidly shrinking
    domains of specialized or everyday knowledge, it is becoming
    increasingly difficult to gain an overview of the world without
    mechanisms that pre-sort it. Users are only able to evaluate search
    results pragmatically; that is, in light of whether or not they are
    helpful in solving a concrete problem. In this regard, it is not
    paramount that they find the best solution or the correct answer but
    rather one that is available and sufficient. This reality lends an
    enormous amount of influence to the institutions and processes that
    provide the solutions and answers.[]{#Page_124 type="pagebreak"
    title="124"}
    :::
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c2-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c2-note-0001a){#c2-note-0001}  André Rottmann, "Reflexive Systems
    of Reference: Approximations to 'Referentialism' in Contemporary Art,"
    trans. Gerrit Jackson, in Dirk Snauwaert et al. (eds), *Rehabilitation:
    The Legacy of the Modern Movement* (Ghent: MER, 2010), pp. 97--106, at
    99.

    [2](#c2-note-0002a){#c2-note-0002}  The recognizability of the sources
    distinguishes these processes from plagiarism. The latter operates with
    the complete opposite aim, namely that of borrowing sources without
    acknowledging them.

    [3](#c2-note-0003a){#c2-note-0003}  Ulf Poschardt, *DJ Culture* (London:
    Quartet Books, 1998), p. 34.

    [4](#c2-note-0004a){#c2-note-0004}  Theodor W. Adorno, *Aesthetic
    Theory*, trans. Robert Hullot-Kentor (Minneapolis, MN: University of
    Minnesota Press, 1997), p. 151.

    [5](#c2-note-0005a){#c2-note-0005}  Peter Bürger, *Theory of the
    Avant-Garde*, trans. Michael Shaw (Minneapolis, MN: University of
    Minnesota Press, 1984).

    [6](#c2-note-0006a){#c2-note-0006}  Felix Stalder, "Neun Thesen zur
    Remix-Kultur," *i-rights.info* (May 25, 2009), online.

    [7](#c2-note-0007a){#c2-note-0007}  Florian Cramer, *Exe.cut(up)able
    Statements: Poetische Kalküle und Phantasmen des selbstausführenden
    Texts* (Munich: Wilhelm Fink, 2011), pp. 9--10 \[--trans.\].

    [8](#c2-note-0008a){#c2-note-0008}  McLuhan stressed that, despite using
    the alphabet, every manuscript is unique because it not only depended on
    the sequence of letters but also on the individual ability of a given
    scribe to []{#Page_185 type="pagebreak" title="185"}lend these letters a
    particular shape. With the rise of the printing press, the alphabet shed
    these last elements of calligraphy and became typography.

    [9](#c2-note-0009a){#c2-note-0009}  Elisabeth L. Eisenstein, *The
    Printing Revolution in Early Modern Europe* (Cambridge: Cambridge
    University Press, 1983), p. 15.

    [10](#c2-note-0010a){#c2-note-0010}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 204.

    [11](#c2-note-0011a){#c2-note-0011}  The fundamental aspects of these
    conventions were formulated as early as the beginning of the sixteenth
    century; see Michael Giesecke, *Der Buchdruck in der frühen Neuzeit:
    Eine historische Fallstudie über die Durchsetzung neuer Informations-
    und Kommunikationstechnologien* (Frankfurt am Main: Suhrkamp, 1991), pp.
    420--40.

    [12](#c2-note-0012a){#c2-note-0012}  Eisenstein, *The Printing
    Revolution in Early Modern Europe*, p. 49.

    [13](#c2-note-0013a){#c2-note-0013}  In April 2014, the Authors Guild --
    the association of American writers that had sued Google -- filed an
    appeal to overturn the decision and made a public statement demanding
    that a new organization be established to license the digital rights of
    out-of-print books. See "Authors Guild: Amazon was Google's Target,"
    *The Authors Guild: Industry & Advocacy News* (April 11, 2014), online.
    In October 2015, however, the next-highest authority -- the United
    States Court of Appeals for the Second Circuit -- likewise decided in
    Google\'s favor. The Authors Guild promptly announced its intention to
    take the case to the Supreme Court.

    [14](#c2-note-0014a){#c2-note-0014}  Jean-Noël Jeanneney, *Google and
    the Myth of Universal Knowledge: A View from Europe*, trans. Teresa
    Lavender Fagan (Chicago, IL: University of Chicago Press, 2007).

    [15](#c2-note-0015a){#c2-note-0015}  Within the framework of the Images
    for the Future project (2007--14), the Netherlands alone invested more
    than €170 million to digitize the collections of the most important
    audiovisual archives. Over 10 years, the cost of digitizing the entire
    cultural heritage of Europe has been estimated to be around €100
    billion. See Nick Poole, *The Cost of Digitising Europe\'s Cultural
    Heritage: A Report for the Comité des Sages of the European Commission*
    (November 2010), online.

    [16](#c2-note-0016a){#c2-note-0016}  Robert Darnton, "The National
    Digital Public Library Is Launched!", *New York Review of Books* (April
    25, 2013), online.

    [17](#c2-note-0017a){#c2-note-0017}  According to estimates by the
    British Library, so-called "orphan works" alone -- that is, works still
    legally protected but whose right holders are unknown -- make up around
    40 percent of the books in its collection that still fall under
    copyright law. In an effort to alleviate this problem, the European
    Parliament and the European Commission issued a directive []{#Page_186
    type="pagebreak" title="186"}in 2012 concerned with "certain permitted
    uses of orphan works." This has allowed libraries and archives to make
    works available online without permission if, "after carrying out
    diligent searches," the copyright holders cannot be found. What
    qualifies as a "diligent search," however, is so strictly formulated
    that the German Library Association has called the directive
    "impracticable." Deutscher Bibliotheksverband, "Rechtlinie über
    bestimmte zulässige Formen der Nutzung verwaister Werke" (February 27,
    2012), online.

    [18](#c2-note-0018a){#c2-note-0018}  UbuWeb, "Frequently Asked
    Questions," online.

    [19](#c2-note-0019a){#c2-note-0019}  The numbers in this area of
    activity are notoriously unreliable, and therefore only rough estimates
    are possible. It seems credible, however, that the Pirate Bay was
    attracting around a billion page views per month by the end of 2013.
    That would make it the seventy-fourth most popular internet destination.
    See Ernesto, "Top 10 Most Popular Torrent Sites of 2014" (January 4,
    2014), online.

    [20](#c2-note-0020a){#c2-note-0020}  See the documentary film *TPB AFK:
    The Pirate Bay Away from Keyboard* (2013), directed by Simon Klose.

    [21](#c2-note-0021a){#c2-note-0021}  In technical terms, there is hardly
    any difference between a "stream" and a "download." In both cases, a
    complete file is transferred to the user\'s computer and played.

    [22](#c2-note-0022a){#c2-note-0022}  The practice is legal in Germany
    but illegal in Austria, though digitized texts are routinely made
    available there in seminars. See Seyavash Amini Khanimani and Nikolaus
    Forgó, "Rechtsgutachten über die Erforderlichkeit einer freien
    Werknutzung im österreichischen Urheberrecht zur Privilegierung
    elektronisch unterstützter Lehre," *Forum Neue Medien Austria* (January
    2011), online.

    [23](#c2-note-0023a){#c2-note-0023}  Deutscher Bibliotheksverband,
    "Digitalisierung" (2015), online \[--trans\].

    [24](#c2-note-0024a){#c2-note-0024}  David Weinberger, *Everything Is
    Miscellaneous: The Power of the New Digital Disorder* (New York: Times
    Books, 2007).

    [25](#c2-note-0025a){#c2-note-0025}  This is not a question of material
    wealth. Those who are economically or socially marginalized are
    confronted with the same phenomenon. Their primary experience of this
    excess is with cheap goods and junk.

    [26](#c2-note-0026a){#c2-note-0026}  See Gregory Bateson, "Form,
    Substance and Difference," in Bateson, *Steps to an Ecology of Mind:
    Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 455--71, at 460:
    "\[I\]n fact, what we mean by information -- the elementary unit of
    information -- is *a difference which makes a difference*" (the emphasis
    is original).

    [27](#c2-note-0027a){#c2-note-0027}  Inke Arns and Gabriele Horn,
    *History Will Repeat Itself* (Frankfurt am Main: Revolver, 2007), p.
    42.[]{#Page_187 type="pagebreak" title="187"}

    [28](#c2-note-0028a){#c2-note-0028}  See the film *The Battle of
    Orgreave* (2001), directed by Mike Figgis.

    [29](#c2-note-0029a){#c2-note-0029}  Theresa Winge, "Costuming the
    Imagination: Origins of Anime and Manga Cosplay," *Mechademia* 1 (2006),
    pp. 65--76.

    [30](#c2-note-0030a){#c2-note-0030}  Nicolle Lamerichs, "Stranger than
    Fiction: Fan Identity in Cosplay," *Transformative Works and Cultures* 7
    (2011), online.

    [31](#c2-note-0031a){#c2-note-0031}  The *Oxford English Dictionary*
    defines "selfie" as a "photographic self-portrait; *esp*. one taken with
    a smartphone or webcam and shared via social media."

    [32](#c2-note-0032a){#c2-note-0032}  Odin Kroeger et al. (eds),
    *Geistiges Eigentum und Originalität: Zur Politik der Wissens- und
    Kulturproduktion* (Vienna: Turia + Kant, 2011).

    [33](#c2-note-0033a){#c2-note-0033}  Roland Barthes, "The Death of the
    Author," in Barthes, *Image -- Music -- Text*, trans. Stephen Heath
    (London: Fontana Press, 1977), pp. 142--8.

    [34](#c2-note-0034a){#c2-note-0034}  Heinz Rölleke and Albert
    Schindehütte, *Es war einmal: Die wahren Märchen der Brüder Grimm und
    wer sie ihnen erzählte* (Frankfurt am Main: Eichborn, 2011); and Heiner
    Boehncke, *Marie Hassenpflug: Eine Märchenerzählerin der Brüder Grimm*
    (Darmstadt: Von Zabern, 2013).

    [35](#c2-note-0035a){#c2-note-0035}  Hansjörg Ewert, "Alles nur
    geklaut?", *Zeit Online* (February 26, 2013), online. This is not a new
    realization but has long been a special area of research for
    musicologists. What is new, however, is that it is no longer
    controversial outside of this narrow disciplinary discourse. See Peter
    J. Burkholder, "The Uses of Existing Music: Musical Borrowing as a
    Field," *Notes* 50 (1994), pp. 851--70.

    [36](#c2-note-0036a){#c2-note-0036}  Zygmunt Bauman, *Liquid Modernity*
    (Cambridge: Polity, 2000), p. 56.

    [37](#c2-note-0037a){#c2-note-0037}  Quoted from Eran Schaerf\'s audio
    installation *FM-Scenario: Reality Race* (2013), online.

    [38](#c2-note-0038a){#c2-note-0038}  The number of members, for
    instance, of the two large political parties in Germany, the Social
    Democratic Party and the Christian Democratic Union, reached its peak at
    the end of the 1970s or the beginning of the 1980s. Both were able to
    increase their absolute numbers for a brief time at the beginning of the
    1990s, when the Christian Democratic Union even reached its absolute
    high point, but this can be explained by a surge in new members after
    reunification. By 2010, both parties already had fewer members than
    Greenpeace, whose 580,000 members make it Germany's largest NGO.
    Parallel to this, between 1970 and 2010, the proportion of people
    without any religious affiliations shrank to approximately 37 percent.
    That there are more churches and political parties today is indicative
    of how difficult []{#Page_188 type="pagebreak" title="188"}it has become
    for any single organization to attract broad strata of society.

    [39](#c2-note-0039a){#c2-note-0039}  Ulrich Beck, *Risk Society: Towards
    a New Modernity*, trans. Mark Ritter (London: SAGE, 1992), p. 135.

    [40](#c2-note-0040a){#c2-note-0040}  Ferdinand Tönnies, *Community and
    Society*, trans. Charles P. Loomis (East Lansing: Michigan State
    University Press, 1957).

    [41](#c2-note-0041a){#c2-note-0041}  Karl Marx and Friedrich Engels,
    "The Manifesto of the Communist Party (1848)," trans. Terrell Carver, in
    *The Cambridge Companion to the Communist Manifesto*, ed. Carver and
    James Farr (Cambridge: Cambridge University Press, 2015), pp. 237--60,
    at 239. For Marx and Engels, this was -- like everything pertaining to
    the dynamics of capitalism -- a thoroughly ambivalent development. For,
    in this case, it finally forced people "to take a down-to-earth view of
    their circumstances, their multifarious relationships" (ibid.).

    [42](#c2-note-0042a){#c2-note-0042}  As early as the 1940s, Karl Polanyi
    demonstrated in *The Great Transformation* (New York: Farrar & Rinehart,
    1944) that the idea of strictly separated spheres, which are supposed to
    be so typical of society, is in fact highly ideological. He argued above
    all that the attempt to implement this separation fully and consistently
    in the form of the free market would destroy the foundations of society
    because both the life of workers and the environment of the market
    itself would be regarded as externalities. For a recent adaptation of
    this argument, see David Graeber, *Debt: The First 5000 Years* (New
    York: Melville House, 2011).

    [43](#c2-note-0043a){#c2-note-0043}  Tönnies's persistent influence can
    be felt, for instance, in Zygmunt Bauman's negative assessment of the
    compunction to strive for community in his *Community: Seeking Safety in
    an Insecure World* (Malden, MA: Blackwell, 2001).

    [44](#c2-note-0044a){#c2-note-0044}  See, for example, Amitai Etzioni,
    *The Third Way to a Good Society* (London: Demos, 2000).

    [45](#c2-note-0045a){#c2-note-0045}  Jean Lave and Étienne Wenger,
    *Situated Learning: Legitimate Peripheral Participation* (Cambridge:
    Cambridge University Press, 1991), p. 98.

    [46](#c2-note-0046a){#c2-note-0046}  Étienne Wenger, *Cultivating
    Communities of Practice: A Guide to Managing Knowledge* (Boston, MA:
    Harvard Business School Press, 2000).

    [47](#c2-note-0047a){#c2-note-0047}  The institutions of the
    disciplinary society -- schools, factories, prisons and hospitals, for
    instance -- were closed. Whoever was inside could not get out.
    Participation was obligatory, and instructions had to be followed. See
    Michel Foucault, *Discipline and Punish: The Birth of the Prison*,
    trans. Alan Sheridan (New York: Pantheon Books, 1977).[]{#Page_189
    type="pagebreak" title="189"}

    [48](#c2-note-0048a){#c2-note-0048}  Weber famously defined power as
    follows: "Power is the probability that one actor within a social
    relationship will be in a position to carry out his own will despite
    resistance, regardless of the basis on which this probability rests."
    Max Weber, *Economy and Society: An Outline of Interpretive Sociology*,
    trans. Guenther Roth and Claus Wittich (Berkeley, CA: University of
    California Press, 1978), p. 53.

    [49](#c2-note-0049a){#c2-note-0049}  For those in complete despair, the
    following tip is provided: "To get more likes, start liking the photos
    of random people." Such a strategy, it seems, is more likely to increase
    than decrease one's hopelessness. The quotations are from "How to Get
    More Likes on Your Instagram Photos," *WikiHow* (2016), online.

    [50](#c2-note-0050a){#c2-note-0050}  Jeremy Gilbert, *Democracy and
    Collectivity in an Age of Individualism* (London: Pluto Books, 2013).

    [51](#c2-note-0051a){#c2-note-0051}  Diedrich Diederichsen,
    *Eigenblutdoping: Selbstverwertung, Künstlerromantik, Partizipation*
    (Cologne: Kiepenheuer & Witsch, 2008).

    [52](#c2-note-0052a){#c2-note-0052}  Harrison Rainie and Barry Wellman,
    *Networked: The New Social Operating System* (Cambridge, MA: MIT Press,
    2012). The term is practical because it is easy to understand, but it is
    also conceptually contradictory. An individual (an indivisible entity)
    cannot be defined in terms of a distributed network. With a nod toward
    Gilles Deleuze, the cumbersome but theoretically more precise term
    "dividual" (the divisible) has also been used. See Gerald Raunig,
    "Dividuen des Facebook: Das neue Begehren nach Selbstzerteilung," in
    Oliver Leistert and Theo Röhle (eds), *Generation Facebook: Über das
    Leben im Social Net* (Bielefeld: Transcript, 2011), pp. 145--59.

    [53](#c2-note-0053a){#c2-note-0053}  Jari Saramäki et al., "Persistence
    of Social Signatures in Human Communication," *Proceedings of the
    National Academy of Sciences of the United States of America* 111
    (2014): 942--7.

    [54](#c2-note-0054a){#c2-note-0054}  The term "weak ties" derives from a
    study of where people find out information about new jobs. As the study
    shows, this information does not usually come from close friends, whose
    level of knowledge often does not differ much from that of the person
    looking for a job, but rather from loose acquaintances, whose living
    environments do not overlap much with one\'s own and who can therefore
    make information available from outside of one\'s own network. See Mark
    Granovetter, "The Strength of Weak Ties," *American Journal of
    Sociology* 78 (1973): 1360--80.

    [55](#c2-note-0055a){#c2-note-0055}  Castells, *The Power of Identity*,
    p. 420.

    [56](#c2-note-0056a){#c2-note-0056}  Ulf Weigelt, "Darf der Chef
    ständige Erreichbarkeit verlangen?" *Zeit Online* (June 13, 2012),
    online \[--trans.\].[]{#Page_190 type="pagebreak" title="190"}

    [57](#c2-note-0057a){#c2-note-0057}  Hartmut Rosa, *Social Acceleration:
    A New Theory of Modernity*, trans. Jonathan Trejo-Mathys (New York:
    Columbia University Press, 2013).

    [58](#c2-note-0058a){#c2-note-0058}  This technique -- "social freezing"
    -- has already become so standard that it is now regarded as a way to help
    women achieve a better balance between work and family life. See Kolja
    Rudzio, "Social Freezing: Ein Kind von Apple," *Zeit Online* (November 6,
    2014), online.

    [59](#c2-note-0059a){#c2-note-0059}  See the film *Into Eternity*
    (2009), directed by Michael Madsen.

    [60](#c2-note-0060a){#c2-note-0060}  Thomas S. Kuhn, *The Structure of
    Scientific Revolutions*, 3rd edn (Chicago, IL: University of Chicago
    Press, 1996).

    [61](#c2-note-0061a){#c2-note-0061}  Werner Busch and Peter Schmoock,
    *Kunst: Die Geschichte ihrer Funktionen* (Weinheim: Quadriga/Beltz,
    1987), p. 179 \[--trans.\].

    [62](#c2-note-0062a){#c2-note-0062}  "'When Attitude Becomes Form' at
    the Fondazione Prada," *Contemporary Art Daily* (September 18, 2013),
    online.

    [63](#c2-note-0063a){#c2-note-0063}  Owing to the hyper-capitalization
    of the art market, which has been going on since the 1990s, this role
    has shifted somewhat from curators to collectors, who, though validating
    their choices more on financial than on argumentative grounds, are
    essentially engaged in the same activity. Today, leading curators
    usually work closely together with collectors and thus deal with more
    money than the first generation of curators ever could have imagined.

    [64](#c2-note-0064a){#c2-note-0064}  Diedrich Diederichsen, "Showfreaks
    und Monster," *Texte zur Kunst* 71 (2008): 69--77.

    [65](#c2-note-0065a){#c2-note-0065}  Alexander R. Galloway, *Protocol:
    How Control Exists after Decentralization* (Cambridge, MA: MIT Press,
    2004), pp. 7, 75.

    [66](#c2-note-0066a){#c2-note-0066}  Even the *Frankfurter Allgemeine
    Zeitung* -- at least in its online edition -- has begun to publish more
    and more articles in English. The newspaper has accepted the
    disadvantage of higher editorial costs in order to remain relevant in
    the increasingly globalized debate.

    [67](#c2-note-0067a){#c2-note-0067}  Joseph Reagle, "'Free as in
    Sexist?' Free Culture and the Gender Gap," *First Monday* 18 (2013),
    online.

    [68](#c2-note-0068a){#c2-note-0068}  Wikipedia\'s own "Editor Survey"
    from 2011 reports a women\'s quota of 9 percent. Other studies have come
    to a slightly higher number. See Benjamin Mako Hill and Aaron Shaw, "The
    Wikipedia Gender Gap Revisited: Characterizing Survey Response Bias with
    Propensity Score Estimation," *PLOS ONE* 8 (July 26, 2013), online. The
    problem is well known, and the Wikipedia Foundation has been making
    efforts to correct matters. In 2011, its goal was to increase the
    participation of women to 25 percent by 2015. This has not been
    achieved.[]{#Page_191 type="pagebreak" title="191"}

    [69](#c2-note-0069a){#c2-note-0069}  Shyong (Tony) K. Lam et al.,
    "WP: Clubhouse? An Exploration of Wikipedia's Gender Imbalance,"
    *WikiSym* 11 (2011), online.

    [70](#c2-note-0070a){#c2-note-0070}  David Singh Grewal, *Network Power:
    The Social Dynamics of Globalization* (New Haven, CT: Yale University
    Press, 2008).

    [71](#c2-note-0071a){#c2-note-0071}  Ibid., p. 29.

    [72](#c2-note-0072a){#c2-note-0072}  Niklas Luhmann, *Macht im System*
    (Berlin: Suhrkamp, 2013), p. 52 \[--trans.\].

    [73](#c2-note-0073a){#c2-note-0073}  Mathieu O\'Neil, *Cyberchiefs:
    Autonomy and Authority in Online Tribes* (London: Pluto Press, 2009).

    [74](#c2-note-0074a){#c2-note-0074}  Eric Steven Raymond, "The Cathedral
    and the Bazaar," *First Monday* 3 (1998), online.

    [75](#c2-note-0075a){#c2-note-0075}  Jorge Luis Borges, "The Library of
    Babel," trans. Anthony Kerrigan, in Borges, *Ficciones* (New York: Grove
    Weidenfeld, 1962), pp. 79--88.

    [76](#c2-note-0076a){#c2-note-0076}  Heinrich Geiselberger and Tobias
    Moorstedt (eds), *Big Data: Das neue Versprechen der Allwissenheit*
    (Berlin: Suhrkamp, 2013).

    [77](#c2-note-0077a){#c2-note-0077}  This is one of the central tenets
    of science and technology studies. See, for instance, Geoffrey C. Bowker
    and Susan Leigh Star, *Sorting Things Out: Classification and Its
    Consequences* (Cambridge, MA: MIT Press, 1999).

    [78](#c2-note-0078a){#c2-note-0078}  Sybille Krämer, *Symbolische
    Maschinen: Die Idee der Formalisierung in geschichtlichem Abriß*
    (Darmstadt: Wissenschaftliche Buchgesellschaft, 1988), pp. 50--69.

    [79](#c2-note-0079a){#c2-note-0079}  Quoted from Doron Swade, "The
    'Unerring Certainty of Mechanical Agency': Machines and Table Making in
    the Nineteenth Century," in Martin Campbell-Kelly et al. (eds), *The
    History of Mathematical Tables: From Sumer to Spreadsheets* (Oxford:
    Oxford University Press, 2003), pp. 145--76, at 150.

    [80](#c2-note-0080a){#c2-note-0080}  The mechanical construction
    suggested by Leibniz was not to be realized as a practically usable (and
    therefore patentable) calculating machine until 1820, by which point it
    was referred to as an "arithmometer."

    [81](#c2-note-0081a){#c2-note-0081}  Krämer, *Symbolische Maschinen*,
    p. 98 \[--trans.\].

    [82](#c2-note-0082a){#c2-note-0082}  Charles Babbage, *On the Economy of
    Machinery and Manufactures* (London: Charles Knight, 1832), p. 153: "We
    have already mentioned what may, perhaps, appear paradoxical to some of
    our readers -- that the division of labour can be applied with equal
    success to mental operations, and that it ensures, by its adoption, the
    same economy of time."

    [83](#c2-note-0083a){#c2-note-0083}  This structure, which is known as
    "Von Neumann architecture," continues to form the basis of almost all
    computers.

    [84](#c2-note-0084a){#c2-note-0084}  "Gordon Moore Says Aloha to
    Moore\'s Law," *The Inquirer* (April 13, 2005), online.[]{#Page_192
    type="pagebreak" title="192"}

    [85](#c2-note-0085a){#c2-note-0085}  Miriam Meckel, *Next: Erinnerungen
    an eine Zukunft ohne uns* (Reinbeck bei Hamburg: Rowohlt, 2011). One
    could also say that this anxiety has been caused by the fact that the
    automation of labor has begun to affect middle-class jobs as well.

    [86](#c2-note-0086a){#c2-note-0086}  Steven Levy, "Can an Algorithm
    Write a Better News Story than a Human Reporter?" *Wired* (April 24,
    2012), online.

    [87](#c2-note-0087a){#c2-note-0087}  Alexander Pschera, *Animal
    Internet: Nature and the Digital Revolution*, trans. Elisabeth Laufer
    (New York: New Vessel Press, 2016).

    [88](#c2-note-0088a){#c2-note-0088}  The American intelligence services
    are not unique in this regard. *Spiegel* has reported that, in Russia,
    entire "bot armies" have been mobilized for the "propaganda battle."
    Benjamin Bidder, "Nemzow-Mord: Die Propaganda der russischen Hardliner,"
    *Spiegel Online* (February 28, 2015), online.

    [89](#c2-note-0089a){#c2-note-0089}  Lennart Guldbrandsson, "Swedish
    Wikipedia Surpasses 1 Million Articles with Aid of Article Creation
    Bot," [blog.wikimedia.org](http://blog.wikimedia.org) (June 17, 2013),
    online.

    [90](#c2-note-0090a){#c2-note-0090}  Thomas Bunnell, "The Mathematics of
    Film," *Boom Magazine* (November 2007): 48--51.

    [91](#c2-note-0091a){#c2-note-0091}  Christopher Steiner, "Automatons
    Get Creative," *Wall Street Journal* (August 17, 2012), online.

    [92](#c2-note-0092a){#c2-note-0092}  "The Hewlett Foundation: Automated
    Essay Scoring," [kaggle.com](http://kaggle.com) (February 10, 2012),
    online.

    [93](#c2-note-0093a){#c2-note-0093}  Ian Ayres, *Super Crunchers: How
    Anything Can Be Predicted* (London: Bookpoint, 2007).

    [94](#c2-note-0094a){#c2-note-0094}  Each of these models was tested on
    the basis of the 50 million most common search terms from the years
    2003--8 and classified according to the time and place of the search.
    The results were compared with data from the health authorities. See
    Jeremy Ginsberg et al., "Detecting Influenza Epidemics Using Search
    Engine Query Data," *Nature* 457 (2009): 1012--4.

    [95](#c2-note-0095a){#c2-note-0095}  In absolute terms, the rate of
    correct hits, at 15.8 percent, was still relatively low. With the same
    dataset, however, random guessing would only have an accuracy of 0.005
    percent. See Quoc V. Le et al., "Building High-Level Features Using
    Large-Scale Unsupervised Learning,"
    [research.google.com](http://research.google.com) (2012), online.

    [96](#c2-note-0096a){#c2-note-0096}  Neil Johnson et al., "Abrupt Rise
    of New Machine Ecology beyond Human Response Time," *Nature: Scientific
    Reports* 3 (2013), online. The authors counted 18,520 of these events
    between January 2006 and February 2011; that is, about 15 per day on
    average.

    [97](#c2-note-0097a){#c2-note-0097}  Gerald Nestler, "Mayhem in Mahwah:
    The Case of the Flash Crash; or, Forensic Re-performance in Deep Time,"
    in Anselm []{#Page_193 type="pagebreak" title="193"}Franke et al. (eds),
    *Forensis: The Architecture of Public Truth* (Berlin: Sternberg Press,
    2014), pp. 125--46.

    [98](#c2-note-0098a){#c2-note-0098}  Another facial recognition
    algorithm by Google provides a good impression of the rate of progress.
    As early as 2011, it was able to identify dogs in images with 80
    percent accuracy. Three years later, this rate had not only increased to
    93.5 percent (which corresponds to human capabilities), but the
    algorithm could also identify more than 200 different types of dog,
    something that hardly any person can do. See Robert McMillan, "This Guy
    Beat Google\'s Super-Smart AI -- But It Wasn\'t Easy," *Wired* (January
    15, 2015), online.

    [99](#c2-note-0099a){#c2-note-0099}  Sergey Brin and Lawrence Page, "The
    Anatomy of a Large-Scale Hypertextual Web Search Engine," *Computer
    Networks and ISDN Systems* 30 (1998): 107--17.

    [100](#c2-note-0100a){#c2-note-0100}  Eugene Garfield, "Citation Indexes
    for Science: A New Dimension in Documentation through Association of
    Ideas," *Science* 122 (1955): 108--11.

    [101](#c2-note-0101a){#c2-note-0101}  Since 1964, the data necessary for
    this has been published as the Science Citation Index (SCI).

    [102](#c2-note-0102a){#c2-note-0102}  The assumption that the subjects
    produce these structures indirectly and without any strategic intention
    has proven to be problematic in both contexts. In the world of science,
    there are so-called citation cartels -- groups of scientists who
    frequently refer to one another\'s work in order to improve their
    respective position in the SCI. Search engines have likewise given rise
    to search engine optimizers, which attempt by various means to optimize
    a website\'s evaluation by search engines.

    [103](#c2-note-0103a){#c2-note-0103}  Regarding the history of the SCI
    and its influence on the early version of Google\'s PageRank, see Katja
    Mayer, "Zur Soziometrik der Suchmaschinen: Ein historischer Überblick
    der Methodik," in Konrad Becker and Felix Stalder (eds), *Deep Search:
    Die Politik des Suchens jenseits von Google* (Innsbruck: Studienverlag,
    2009), pp. 64--83.

    [104](#c2-note-0104a){#c2-note-0104}  A site with zero links to it could
    not be registered by the algorithm at all, for the search engine indexed
    the web by having its "crawler" follow the links itself.

    [105](#c2-note-0105a){#c2-note-0105}  "Google Algorithm Change History,"
    [moz.com](http://moz.com) (2016), online.

    [106](#c2-note-0106a){#c2-note-0106}  Martin Feuz et al., "Personal Web
    Searching in the Age of Semantic Capitalism: Diagnosing the Mechanisms
    of Personalisation," *First Monday* 17 (2011), online.

    [107](#c2-note-0107a){#c2-note-0107}  Brian Dean, "Google\'s 200 Ranking
    Factors," *Search Engine Journal* (May 31, 2013), online.

    [108](#c2-note-0108a){#c2-note-0108}  Thus, it is not only the world of
    advertising that motivates the collection of personal information. Such
    information is also needed for the development of personalized
    algorithms that []{#Page_194 type="pagebreak" title="194"}give order to
    the flood of data. It can therefore be assumed that the rampant
    collection of personal information will not cease or slow down even if
    commercial demands happen to change, for instance to a business model
    that is not based on advertising.

    [109](#c2-note-0109a){#c2-note-0109}  For a detailed discussion of how
    these three levels are recorded, see Felix Stalder and Christine Mayer,
    "Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    112--31.

    [110](#c2-note-0110a){#c2-note-0110}  This raises the question of which
    drivers should be sent on a detour, so that no traffic jam comes about,
    and which should be shown the most direct route, which would now be
    traffic-free.

    [111](#c2-note-0111a){#c2-note-0111}  Pamela Vaughan, "Demystifying How
    Facebook\'s EdgeRank Algorithm Works," *HubSpot* (April 23, 2013),
    online.

    [112](#c2-note-0112a){#c2-note-0112}  Lisa Gitelman (ed.), *"Raw Data"
    Is an Oxymoron* (Cambridge, MA: MIT Press, 2013).

    [113](#c2-note-0113a){#c2-note-0113}  The terms "raw," in the sense of
    unprocessed, and "cooked," in the sense of processed, derive from the
    anthropologist Claude Lévi-Strauss, who introduced them to clarify the
    difference between nature and culture. See Claude Lévi-Strauss, *The Raw
    and the Cooked*, trans. John Weightman and Doreen Weightman (Chicago,
    IL: University of Chicago Press, 1983).

    [114](#c2-note-0114a){#c2-note-0114}  Jessica Lee, "No. 1 Position in
    Google Gets 33% of Search Traffic," *Search Engine Watch* (June 20,
    2013), online.

    [115](#c2-note-0115a){#c2-note-0115}  One estimate that continues to be
    cited quite often is already obsolete: Michael K. Bergman, "White Paper
    -- The Deep Web: Surfacing Hidden Value," *Journal of Electronic
    Publishing* 7 (2001), online. The more content is dynamically generated
    by databases, the more questionable such estimates become. It is
    uncontested, however, that only a small portion of online information is
    registered by search engines.

    [116](#c2-note-0116a){#c2-note-0116}  Theo Röhle, "Die Demontage der
    Gatekeeper: Relationale Perspektiven zur Macht der Suchmaschinen," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    133--48.

    [117](#c2-note-0117a){#c2-note-0117}  The phenomenon of preparing the
    world to be recorded by algorithms is not restricted to digital
    networks. As early as 1994 in Germany, for instance, a new sort of
    typeface was introduced (the *Fälschungserschwerende Schrift*,
    "forgery-impeding typeface") on license plates for the sake of machine
    readability and facilitating automatic traffic control. To the human
    eye, however, it appears somewhat misshapen and
    disproportionate.[]{#Page_195 type="pagebreak" title="195"}

    [118](#c2-note-0118a){#c2-note-0118}  [Fairsearch.org](http://Fairsearch.org)
    was officially supported by several of Google\'s competitors, including
    Microsoft, TripAdvisor, and Oracle.

    [119](#c2-note-0119a){#c2-note-0119}  "Antitrust: Commission Sends
    Statement of Objections to Google on Comparison Shopping Service,"
    *European Commission: Press Release Database* (April 15, 2015), online.

    [120](#c2-note-0120a){#c2-note-0120}  Amit Singhal, "An Update to Our
    Search Algorithms," *Google Inside Search* (August 10, 2012), online. By
    the middle of 2014, according to some sources, Google had received
    around 20 million requests to remove links from its index on account of
    copyright violations.

    [121](#c2-note-0121a){#c2-note-0121}  Alexander Wragge, "Google-Ranking:
    Herabstufung ist 'Zensur light'," *iRights.info* (August 23, 2012),
    online.

    [122](#c2-note-0122a){#c2-note-0122}  Farhad Manjoo, "Amazon\'s Tactics
    Confirm Its Critics\' Worst Suspicions," *New York Times: Bits Blog*
    (May 23, 2014), online.

    [123](#c2-note-0123a){#c2-note-0123}  Lucas D. Introna and Helen
    Nissenbaum, "Shaping the Web: Why the Politics of Search Engines
    Matters," *Information Society* 16 (2000): 169--85, at 181.

    [124](#c2-note-0124a){#c2-note-0124}  Eli Pariser, *The Filter Bubble:
    How the New Personalized Web Is Changing What We Read and How We Think*
    (New York: Penguin, 2012).

    [125](#c2-note-0125a){#c2-note-0125}  Antoinette Rouvroy, "The End(s) of
    Critique: Data-Behaviourism vs. Due-Process," in Katja de Vries and
    Mireille Hildebrandt (eds), *Privacy, Due Process and the Computational
    Turn: The Philosophy of Law Meets the Philosophy of Technology* (New
    York: Routledge, 2013), pp. 143--65.

    [126](#c2-note-0126a){#c2-note-0126}  See B. F. Skinner, *Science and
    Human Behavior* (New York: The Free Press, 1953), p. 35: "We undertake
    to predict and control the behavior of the individual organism. This is
    our 'dependent variable' -- the effect for which we are to find the
    cause. Our 'independent variables' -- the causes of behavior -- are the
    external conditions of which behavior is a function."

    [127](#c2-note-0127a){#c2-note-0127}  Nathan Jurgenson, "View from
    Nowhere: On the Cultural Ideology of Big Data," *New Inquiry* (October
    9, 2014), online.

    [128](#c2-note-0128a){#c2-note-0128}  danah boyd and Kate Crawford,
    "Critical Questions for Big Data: Provocations for a Cultural,
    Technological and Scholarly Phenomenon," *Information, Communication &
    Society* 15 (2012): 662--79.
    :::
    :::

    [III]{.chapterNumber} [Politics]{.chapterTitle} {#c3}

    ::: {.section}
    Referentiality, communality, and algorithmicity have become the
    characteristic forms of the digital condition because more and more
    people -- in more and more segments of life and by means of increasingly
    complex technologies -- are actively (or compulsorily) participating in
    the negotiation of social meaning. They are thus reacting to the demands
    of a chaotic, overwhelming sphere of information and thereby
    contributing to its greater expansion. It is the ubiquity of these forms
    that makes it possible to speak of the digital condition in the
    singular. The goals pursued in these cultural forms, however, are as
    diverse, contradictory, and conflicted as society itself. It would
    therefore be equally false to assume uniformity or an absence of
    alternatives in the unfolding of social and political developments. On
    the contrary, the idea of a lack of alternatives is an ideological
    assertion that is itself part of a specific political agenda.

    In order to resolve this ostensible contradiction between developments
    that take place in a manner that is uniform and beyond influence and
    those that are characterized by the variable and open-ended
    implementation of diverse interests, it is necessary to differentiate
    between two levels. One possibility for doing so is presented by Marxist
    political economy. It distinguishes between *productive forces*, which
    are defined as the technical infrastructure, the state of knowledge, and
    the []{#Page_125 type="pagebreak" title="125"}organization of labor, and
    the *relations of production*, which are defined as the institutions,
    laws, and practices in which people are able to realize the
    techno-cultural possibilities of their time. Both are related to one
    another, though each develops with a certain degree of autonomy. The
    relation between them is essential for the development of society. The
    closer they correspond to one another, the more smoothly this
    development will run its course; the more contradictions happen to exist
    between them, the more this course will suffer from unrest and
    conflicts. One of many examples of a current contradiction between these
    two levels is the development that has occurred in the area of cultural
    works. Whereas radical changes have taken place in their production,
    processing, and reproduction (that is, on the level of productive
    forces), copyright law (that is, the level of the relations of
    production) has remained almost unchanged. In Marxist theory, such
    contradictions are interpreted as a starting point for political
    upheavals, indeed as a precondition for revolution. As Marx wrote:

    ::: {.extract}
    At a certain stage of development, the material productive forces of
    society come into conflict with the existing relations of production or
    -- this merely expresses the same thing in legal terms -- with the
    property relations within the framework of which they have operated
    hitherto. From forms of development of the productive forces these
    relations turn into their fetters. Then begins an era of social
    revolution.[^1^](#c3-note-0001){#c3-note-0001a}
    :::

    Many theories aiming to overcome capitalism proceed on the basis of this
    dynamic.[^2^](#c3-note-0002){#c3-note-0002a} The distinction between
    productive forces and the relations of production, however, is not
    unproblematic. On the one hand, no one has managed to formulate an
    entirely convincing theory concerning the reciprocal relation between
    the two. What does it mean, exactly, that they are related to one
    another and yet are simultaneously autonomous? When does the moment
    arrive in which they come into conflict with one another? And what,
    exactly, happens then? For the most part, these are unsolved questions.
    On the other hand, because of the blending of work and leisure already
    mentioned, as well as the general economization of social activity (as
    is happening on social []{#Page_126 type="pagebreak" title="126"}mass
    media and in the creative economy, for instance), it is hardly possible
    now to draw a line between production and reproduction. Thus, this set
    of concepts, which is strictly oriented toward economic production
    alone, is more problematic than ever. My decision to use these concepts
    is therefore limited to clarifying the conceptual transition from the
    previous chapter to the chapter at hand. The concern of the last chapter
    was to explain the forms that cultural processes have adopted under the
    present conditions -- ubiquitous telecommunication, general expressivity
    (referentiality), flexible cooperation (communality), and informational
    automation (algorithmicity). In what follows, on the contrary, my focus
    will turn to the political dynamics that have emerged from the
    realization of "productive forces" as concrete "relations of production"
    or, in more general terms, as social relations. Without claiming to be
    comprehensive, I have assigned the confusing and conflicting
    multiplicity of actors, projects, and institutions to two large
    political developments: post-democracy and commons. The former is moving
    toward an essentially authoritarian society, while the latter is moving
    toward a radical renewal of democracy by broadening the scope of
    collective decision-making. Both cases involve more than just a few
    minor changes to the existing order. Rather, both are ultimately leading
    to a new political constellation beyond liberal representative
    democracy.
    :::

    ::: {.section}
    Post-democracy {#c3-sec-0002}
    --------------

    The current dominant political development is the spread and
    entrenchment of post-democracy. The term was coined in the middle of the
    1990s by Jacques Rancière. "Post-democracy," as he defined it, "is the
    government practice and conceptual legitimization of a democracy *after*
    the demos, a democracy that has eliminated the appearance, miscount and
    dispute of the people."[^3^](#c3-note-0003){#c3-note-0003a} Rancière
    argued that the immediate presence of the people (the demos) has been
    abolished and replaced by processes of simulation and modeling such as
    opinion polls, focus groups, and plans for various scenarios -- all
    guided by technocrats. Thus, he believed that the character of political
    processes has changed, namely from disputes about how we []{#Page_127
    type="pagebreak" title="127"}ought to face a principally open future to
    the administration of predefined necessities and fixed constellations.
    As early as the 1980s, Margaret Thatcher justified her radical reforms
    with the expression "There is no alternative!" Today, this form of
    argumentation remains part of the core vocabulary of post-democratic
    politics. Even Angela Merkel is happy to call her political program
    *alternativlos* ("without alternatives"). According to Rancière, this
    attitude is representative of a government practice that operates
    without the unpredictable presence of the people and their dissent
    concerning fundamental questions. All that remains is "police logic," in
    which everything is already determined, counted, and managed.

    Ten years after Rancière\'s ruminations, Colin Crouch revisited the
    concept and defined it anew. His notion of post-democracy is as follows:

    ::: {.extract}
    Under this model, while elections certainly exist and can change
    governments, public electoral debate is a tightly controlled spectacle,
    managed by rival teams of professionals expert in the technique of
    persuasion, and considering a small range of issues selected by those
    teams. The mass of citizens plays a passive, quiescent, even apathetic
    part, responding only to the signals given them. Behind this spectacle
    of the electoral game, politics is really shaped in private by
    interaction between elected governments and elites that overwhelmingly
    represent business interests.[^4^](#c3-note-0004){#c3-note-0004a}
    :::

    He goes on:

    ::: {.extract}
    My central contentions are that, while the forms of democracy remain
    fully in place and today in some respects are actually strengthened --
    politics and government are increasingly slipping back into the control
    of privileged elites in the manner characteristic of predemocratic
    times; and that one major consequence of this process is the growing
    impotence of egalitarian causes.[^5^](#c3-note-0005){#c3-note-0005a}
    :::

    In his analysis, Crouch focused on the Western political system in the
    strict sense -- parties, parliaments, governments, eligible voters --
    and in particular on the British system under Tony Blair. He described
    the development of representative democracy as a rising and declining
    curve, and he diagnosed []{#Page_128 type="pagebreak" title="128"}not
    only an erosion of democratic institutions but also a shift in the
    legitimation of public activity. In this regard, according to Crouch,
    the participation of citizens in political decision-making (input
    legitimation) has become far less important than the quality of the
    achievements that are produced for the citizens (output legitimation).
    Out of democracy -- the "dispute of the people," in Rancière\'s sense --
    emerges governance. As Crouch maintains, however, this shift was
    accompanied by a sustained weakening of public institutions, because it
    was simultaneously postulated that private actors are fundamentally more
    efficient than the state. This argument was used (and continues to be
    used) to justify taking an increasing number of services away from
    public actors and entrusting them instead to the private sphere, which
    has accordingly become more influential and powerful. One consequence of
    this has been, according to Crouch, "the collapse of self-confidence on
    the part of the state and the meaning of public authority and public
    service."[^6^](#c3-note-0006){#c3-note-0006a} Ultimately, the threat at
    hand is the abolishment of democratic institutions in the name of
    efficiency. These institutions are then replaced by technocratic
    governments without a democratic mandate, as has already happened in
    Greece, Portugal, or Ireland, where external overseers have been
    directly or indirectly determining the political situation.

    ::: {.section}
    ### Social mass media as an everyday aspect of post-democratic life {#c3-sec-0003}

    For my purposes, it is of little interest whether the concept of "public
    authority" really ought to be revived or whether and in what
    circumstances the parable of rising and declining will help us to
    understand the development of liberal
    democracy.[^7^](#c3-note-0007){#c3-note-0007a} Rather, it is necessary
    to supplement Crouch\'s approach in order to make it fruitful for our
    understanding of the digital condition, which extends greatly beyond
    democratic processes in the classical sense -- that is, with
    far-reaching decisions about issues concerning society in a formalized
    and binding manner that is legitimized by citizen participation. I will
    therefore designate as "post-democratic" all of those developments --
    wherever they are taking place -- that, although admittedly preserving
    or even providing new []{#Page_129 type="pagebreak"
    title="129"}possibilities for participation, simultaneously also
    strengthen the capacity for decision-making on levels that preclude
    co-determination. This has brought about a lasting separation between
    social participation and the institutional exertion of power. These
    developments, the everyday instances of which may often be harmless and
    banal, create as a whole the cultural preconditions and experiences that
    make post-democracy -- both in Crouch\'s strict sense and the broader
    sense of Rancière -- seem normal and acceptable.

    In an almost ideal-typical form, the developments in question can be
    traced alongside the rise of commercially driven social mass media.
    Their shape, however, is not a matter of destiny (it is not the result
    of any technological imperative) but rather the consequence of a
    specific political, economic, and technical constellation that realized
    the possibilities of the present (productive forces) in particular
    institutional forms (relations of production) and was driven to do so in
    the interest of maximizing profit and control. A brief look at the
    history of digital communication will be enough to clarify this. In the
    middle of the 1990s, the architecture of the internet was largely
    decentralized and based on open protocols. The attempts of America
    Online (AOL) and CompuServe to run a closed network (an intranet, as we
    would call it today) to compete with the open internet were
    unsuccessful. The large providers never really managed to address the
    need or desire of users to become active producers of meaning. Even the
    most popular elements of these closed worlds -- the forums in which
    users could interact relatively directly with one another -- lacked the
    diversity and multiplicity of participatory options that made the open
    internet so attractive.

    One of the most popular and radical services on the open internet was
    email. The special thing about it was that electronic messages could be
    used both for private (one-to-one) and for communal (many-to-many)
    communication of all sorts, and thus it helped to merge the previously
    distinct domains of the private and the communal. By the middle of the
    1980s, and with the help of specialized software, it was possible to
    create email lists with which one could send messages efficiently and
    reliably to small and large groups. Users could join these groups
    without much effort. From the beginning, email has played a significant
    role in the creation []{#Page_130 type="pagebreak" title="130"}of
    communal formations. Email was one of the first technologies that
    enabled the horizontal coordination of large and dispersed groups, and
    it was often used to that end. Linus Torvalds\'s famous call for people
    to collaborate with him on his operating system -- which was then "just
    a hobby" but today, as Linux, makes up part of the infrastructure of the
    internet -- was issued on August 25, 1991, via email (and news groups).

    One of the most important features of email was due to the service
    being integrated into an infrastructure that was decentralized by
    means of open protocols. And so it has remained. The fundamental
    Simple Mail Transfer Protocol (SMTP), which is still in use today, is
    based on a so-called Request for Comments (RFC 821) from 1982. In this document, which
    sketched out the new protocol and made it open to discussion, it was
    established from the outset that communication should be enabled between
    independent networks.[^8^](#c3-note-0008){#c3-note-0008a} On the basis
    of this standard, it is thus possible today for different providers to
    create an integrated space for communication. Even though they are in
    competition with one another, they nevertheless cooperate on the level
    of the technical protocol and allow users to send information back and
    forth regardless of which providers are used. Switching providers
    thus does not mean forfeiting one\'s address book or any other data.
    Those who put convenience first can use one of the large
    commercial providers, or they can choose one of the many small
    commercial or non-commercial services that specialize in certain niches.
    It is even possible to set up one\'s own server in order to control this
    piece of infrastructure independently. In short, thanks to the
    competition between providers or because they themselves command the
    necessary technical know-how, users continue to have the opportunity to
    influence the infrastructure directly and thus to co-determine the
    essential (technical) parameters that allow for specific courses of
    action. Admittedly, modern email services are set up in such a way that
    most of their users remain on the surface, while the essential decisions
    about how they are able to act are made on the "back side"; that is, in
    the program code, in databases, and in configuration files. Yet these
    two levels are not structurally (that is, organizationally and
    technically) separated from one another. Whoever is willing and ready to
    []{#Page_131 type="pagebreak" title="131"}appropriate the corresponding
    and freely available technical knowledge can shift back and forth
    between them. Before the internet was made suitable for the masses, it
    had been necessary to possess such knowledge in order to use the often
    complicated and error-prone infrastructure at all.
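
    To make this openness tangible: the sketch below (in Python, with
    hypothetical host names, addresses, and credentials) sends a message
    through an arbitrary standards-compliant SMTP server. Nothing in it is
    specific to any one provider; switching providers amounts to changing
    the host name and login data, while the protocol stays the same.

    ```python
    import smtplib
    from email.message import EmailMessage

    # Hypothetical values: any standards-compliant provider, or a
    # self-hosted server, can stand in here, because SMTP is open.
    SMTP_HOST = "smtp.example.org"
    SMTP_PORT = 587

    msg = EmailMessage()
    msg["From"] = "alice@example.org"   # account at one provider
    msg["To"] = "bob@example.net"       # recipient at a different provider
    msg["Subject"] = "Interoperability test"
    msg.set_content("Delivered across independent networks via SMTP.")

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
        server.starttls()               # encrypt the connection in transit
        server.login("alice@example.org", "app-password")  # hypothetical
        server.send_message(msg)
    ```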

    Over the last 10 to 15 years, these structures have been radically
    changed by commercially driven social mass media, which have been
    dominated by investors. They began to offer a variety of services in a
    user-friendly form and thus enabled the great majority of the population
    to make use of complex applications on an everyday basis. This, however,
    has gone hand in hand with the centralization of applications and user
    information. In the case of email, this happened through the
    introduction of Webmail, which stores all of a user\'s messages on
    the provider\'s servers, where they can be read and composed via a
    web browser.[^9^](#c3-note-0009){#c3-note-0009a} From that point on,
    providers have been able to follow everything that users write in their
    emails. Thanks to nearly comprehensive internet connectivity, Webmail is
    very widespread today, and the large providers -- above all Google,
    whose Gmail service had more than 500 million users in 2014 -- dominate
    the market. The gap has thus widened between user interfaces and the
    processes that take place behind them on servers and in data centers,
    and this has expanded what Crouch referred to as "the influence of the
    privileged elite." In this case, the elite are the engineers and
    managers employed by the large providers, and everyone else with access
    to the underbelly of the infrastructure, including the British
    Government Communications Headquarters (GCHQ) and the US National
    Security Agency (NSA), both of which employ programs such as MUSCULAR
    to record data transfers between the computer centers operated by large
    American providers.[^10^](#c3-note-0010){#c3-note-0010a}

    Nevertheless, email essentially remains an open application, for the
    SMTP protocol forces even the largest providers to cooperate. Small
    providers are able to collaborate with the latter and establish new
    services with them. And this creates options. Since Edward Snowden\'s
    revelations, most people are aware that all of their online activities
    are being monitored, and this has spurred new interest in secure email
    services. In the meantime, there has been a whole series of projects
    aimed at combining simple usability with complex []{#Page_132
    type="pagebreak" title="132"}encryption in order to strengthen the
    privacy of normal users. This same goal has led to a number of
    successful crowd-funding campaigns, which indicates that both the
    interest and the resources are available to accomplish
    it.[^11^](#c3-note-0011){#c3-note-0011a} For users, however, these
    offers are only attractive if they are able to switch providers without
    great effort. Moreover, such new competition has motivated established
    providers to modify their own
    infrastructure.[^12^](#c3-note-0012){#c3-note-0012a} In the case of
    email, the level on which new user options are created is still
    relatively closely linked to that on which generally binding decisions
    are made and implemented. In this sense, email is not a post-democratic
    technology.
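
    The principle behind these projects can be illustrated with a
    deliberately simplified sketch. Real secure-email services typically
    build on public-key schemes such as OpenPGP; the symmetric example
    below (using the widely available Python `cryptography` package, with
    an invented message) merely shows why a provider that relays only
    ciphertext can no longer read the correspondence.

    ```python
    from cryptography.fernet import Fernet

    # Simplification: sender and recipient are assumed to already share
    # a secret key; real systems negotiate keys via public-key exchange.
    shared_key = Fernet.generate_key()
    f = Fernet(shared_key)

    plaintext = b"Meet at noon."
    ciphertext = f.encrypt(plaintext)   # all the provider ever relays

    # Anyone tapping the server sees only opaque ciphertext:
    print(ciphertext)

    # Only the holders of the key can recover the message:
    assert f.decrypt(ciphertext) == plaintext
    ```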
    :::

    ::: {.section}
    ### Centralization and the power of networks {#c3-sec-0004}

    Things are entirely different in the case of new social mass media such
    as Facebook, Twitter, LinkedIn, WhatsApp, or most of the other
    commercial services that were developed after the year 2000. Almost all
    of them are based on standards that are closed and controlled by the
    network operators, and these standards prevent users from communicating
    beyond the boundaries defined by the providers. Through Facebook, it is
    only possible to be in touch with other users of the platform, and
    whoever leaves the platform will have to give up all of his or her
    Facebook friends.

    As with email, these services also rely on people producing their own
    content. By now, Facebook has more than a billion users, and each of
    them has produced at least a rudimentary personal profile and a few
    likes. Thanks to networking opportunities, which make up the most
    important service offered by all of these providers, communal formations
    can be created with ease. Every day, groups are formed that organize
    information, knowledge, and resources in order to establish self-defined
    practices (both online and offline). The immense amounts of data,
    information, and cultural references generated by this are pre-sorted by
    algorithms that operate in the background to ensure that users never
    lose their orientation.[^13^](#c3-note-0013){#c3-note-0013a} Viewed from
    the perspective of output legitimation -- that is, in terms of what
    opportunities these services provide and at what cost -- such offers are
    extremely attractive. Examined from the perspective of input
    legitimation -- that is, in terms []{#Page_133 type="pagebreak"
    title="133"}of how essential decisions are made -- things look rather
    different. By means of technical, organizational, and legal standards,
    Facebook and other operators of commercially driven social mass media
    have created structures in which the level of user interaction is
    completely separated from the level on which essential decisions are
    made that concern the community of users. Users have no way to influence
    the design or development of the conditions under which they (have to)
    act. At best, it remains possible to choose one aspect or another from a
    predetermined offer; that is, to use certain options or not. Take it or
    leave it. As to which options and features are available, users can
    neither determine this nor have any direct influence over the matter. In
    short, commercial social networks have institutionalized a power
    imbalance between those engaged with the user interface and those who
    operate the services behind the scenes. The possibility of users to
    organize themselves and exert influence -- over the way their data are
    treated, for instance -- is severely limited.

    One (nominal) exception to this was Facebook itself. From 2009 to
    2012, the company allowed users to vote on any proposed change to its
    terms and conditions that attracted more than 7,000 comments. If at
    least 30 percent of all registered members participated, the result
    would be binding. In practice, however, this rule did not have
    any consequences, for the quorum was never achieved. This is no
    surprise, because Facebook did not make any effort to increase
    participation. In fact, the opposite was true. As the privacy activist
    Max Schrems has noted, without mincing words, "After grand promises of
    user participation, the ballot box was then hidden away for
    safekeeping."[^14^](#c3-note-0014){#c3-note-0014a} With reference to the
    apparent lack of interest on the part of its users, Facebook did away
    with the possibility to vote and replaced it with the option of
    directing questions to management.[^15^](#c3-note-0015){#c3-note-0015a}
    Since then, and even in the case of fundamental decisions that concern
    everyone involved, there has been no way for users to participate in the
    discussion. This new procedure, which was used to implement a
    comprehensive change in Facebook\'s privacy policy, was described by the
    company\'s founder Mark Zuckerberg as follows: "We decided that these
    would be the social norms now, and we just went for
    it."[^16^](#c3-note-0016){#c3-note-0016a} It is not exactly clear whom
    he meant by "we." What is clear, []{#Page_134 type="pagebreak"
    title="134"}however, is that the number of people involved with
    decision-making is minute in comparison with the number of people
    affected by the decisions to be made.

    It should come as no surprise that, with the introduction of every new
    feature, providers such as Facebook have further tilted the balance of
    power between users and operators. With every new version and with every
    new update, the possibilities of interaction are changed in such a way
    that, within closed networks, more data can be produced in a more
    uniform format. Thus, it becomes easier to make connections between
    them, which is their only real source of value. Facebook\'s compulsory
    "real-name" policy, for instance, which no longer permits users to
    register under a pseudonym, makes it easier for the company to create
    comprehensive user profiles. Another standard allows the companies to
    assemble, in the background, a uniform profile out of the activities of
    users on sites or applications that seem at first to have nothing to do
    with one another.[^17^](#c3-note-0017){#c3-note-0017a} Google, for
    instance, connects user data from its search function with information
    from YouTube and other online services, but also with data from Nest, a
    networked thermostat. Facebook connects data from its social network
    with those from WhatsApp, Instagram, and the virtual-reality service
    Oculus.[^18^](#c3-note-0018){#c3-note-0018a} This trend is far from
    over. Many services are offering more and more new functions for
    generating data, and entire new areas of recording data are being
    developed (think, for instance, of Google\'s self-driving car). Yet
    users have access to just a minuscule portion of the data that they
    themselves have generated and with which they are being described. This
    information is fully available to the programmers and analysts alone.
    All of this is done -- as the sanctimonious argument goes -- in the name
    of data protection.
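
    Technically, such profile assembly is unspectacular: it is a join
    across datasets on a shared identifier. The sketch below uses invented
    services and fields, but the pattern is the general one.

    ```python
    # Records from two hypothetical services, keyed by the same user ID.
    search_log = {"user-42": {"queries": ["asthma inhaler", "pollen forecast"]}}
    thermostat = {"user-42": {"home_at": "18:30", "avg_temp_c": 22.5}}

    # Merging them yields a single, far more revealing profile.
    profiles = {}
    for source in (search_log, thermostat):
        for user_id, attributes in source.items():
            profiles.setdefault(user_id, {}).update(attributes)

    print(profiles["user-42"])
    # {'queries': ['asthma inhaler', 'pollen forecast'],
    #  'home_at': '18:30', 'avg_temp_c': 22.5}
    ```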
    :::

    ::: {.section}
    ### Selling, predicting, modifying {#c3-sec-0005}

    Unequal access to information has resulted in an imbalance of power, for
    the evaluation of data opens up new possibilities for action. Such data
    can be used, first, to earn revenue from personalized advertisements;
    second, to predict user behavior with greater accuracy; and third, to
    adjust the parameters of interaction in such a way that preferred
    patterns of []{#Page_135 type="pagebreak" title="135"}behavior become
    more likely. Almost all commercially driven social mass media are
    financed by advertising. In 2014, Facebook, Google, and Twitter earned
    90 percent of their revenue through such means. It is thus important for
    these companies to learn as much as possible about their users in order
    to optimize access to them and sell this access to
    advertisers.[^19^](#c3-note-0019){#c3-note-0019a} Google and Facebook
    justify the price for advertising on their sites by claiming that they
    are able to direct the messages of advertisers precisely to those people
    who would be most susceptible to them.

    Detailed knowledge about users, moreover, also provides new
    possibilities for predicting human
    behavior.[^20^](#c3-note-0020){#c3-note-0020a} In 2014, Facebook made
    headlines by claiming that it could predict a future romantic
    relationship between two of its members, and even that it could do so
    about a hundred days before the new couple changed their profile status
    to "in a relationship." The basis of this sort of prognosis is the
    changing frequency with which two people exchange messages over the
    social network. In this regard, it does not matter whether these
    messages are private (that is, only for the two of them), semi-public
    (only for friends), or public (visible to
    everyone).[^21^](#c3-note-0021){#c3-note-0021a} Facebook and other
    social mass media are set up in such a way that those who control the
    servers are always able to see everything. All of this information,
    moreover, is formatted in such a way as to optimize its statistical
    analysis. As the amounts of data increase, even the smallest changes in
    frequencies and correlations begin to gain significance. In its study of
    romantic relationships, for instance, Facebook discovered that the
    number of online interactions reaches its peak 12 days before a
    relationship begins and hits its low point 85 days after the status
    update (probably because of an increasing number of offline
    interactions).[^22^](#c3-note-0022){#c3-note-0022a} The difference in
    the frequency of online interactions between the high point and the low
    point was just 0.14 updates per day. In other words, Facebook\'s
    statisticians could recognize and evaluate when users would post, over
    the course of seven days, one more message than they might usually
    exchange. With traditional methods of surveillance, which focus on
    individual people, such a small deviation would not have been detected.
    To do so, it is necessary to have immense numbers of users generating
    immense volumes of data. Accordingly, these new []{#Page_136
    type="pagebreak" title="136"}analytic possibilities do not mean that
    Facebook can accurately predict the behavior of a single user. The
    unique person remains difficult to calculate, for all that could be
    ascertained from this information would be a minimally different
    probability of future behavior. As regards a single person, this gain in
    knowledge would not be especially useful, for a slight change in
    probability has no predictive power on a case-by-case basis. If, in the
    case of a unique person, the probability of a particular future action
    climbs from, say, 30 to 31 percent, then not much is gained with respect
    to predicting this one person\'s behavior. If vast numbers of similar
    people are taken into account, however, then the power of prediction
    increases enormously. If, in the case of 1 million people, the
    probability of a future action increases by 1 percent, this means that,
    in the future, around 10,000 more people will act in a certain way.
    Although it may be impossible to say for sure which member of a "group"
    this might be, this is not relevant to the value of the prediction (to
    an advertising agency, for instance).
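
    The arithmetic behind this asymmetry fits in a few lines; the numbers
    below simply restate the example from the text.

    ```python
    def expected_extra_actions(population: int, uplift: float) -> float:
        """Expected number of additional people acting, given a small
        uniform increase in the probability of the action."""
        return population * uplift

    # For a single person, a one-point shift has no predictive value:
    print(expected_extra_actions(1, 0.01))          # 0.01

    # Across a million people, the same shift becomes a reliable figure:
    print(expected_extra_actions(1_000_000, 0.01))  # 10000.0 extra actors
    ```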

    It is also possible to influence large groups by changing the parameters
    of their informational environment. Many online news portals, for
    instance, simultaneously test multiple headlines during the first
    minutes after the publication of an article (that is, different groups
    are shown different titles for the same article). These so-called A/B
    tests are used to measure which headlines attract the most clicks. The
    most successful headline is then adopted and shown to larger
    groups.[^23^](#c3-note-0023){#c3-note-0023a}
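
    The mechanism is simple enough to sketch in a few lines. The
    hash-based group assignment and the click counts below are invented
    for illustration; they show the general pattern, not the system of any
    particular portal.

    ```python
    import hashlib

    HEADLINES = ["Variant A: sober title", "Variant B: dramatic title"]

    def assign_variant(user_id: str) -> int:
        """Deterministically split users into equally sized test groups."""
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return int(digest, 16) % len(HEADLINES)

    # Each visitor is deterministically assigned a variant on arrival:
    print(HEADLINES[assign_variant("reader-1001")])

    # During the test window, clicks are tallied per variant (invented):
    impressions = {0: 10_000, 1: 10_000}
    clicks = {0: 480, 1: 730}

    # The variant with the highest click-through rate is shown to everyone:
    winner = max(clicks, key=lambda v: clicks[v] / impressions[v])
    print("Adopted headline:", HEADLINES[winner])
    ```

    This, however, is just the beginning.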
    All services are constantly changing their features for
    select focus groups without any notification, and this is happening both
    on the level of the user interface and on that of their hidden
    infrastructure. In this way, reactions can be tested in order to
    determine whether a given change should be implemented more broadly or
    rejected. If these experiments and interventions are undertaken with
    commercial intentions -- to improve the placement of advertisements, for
    instance -- then they hardly trigger any special reactions. Users will
    grumble when their customary procedures are changed, but this is
    usually a matter of short-term irritation, for users know that they can
    hardly do anything about it beyond expressing their discontent. A
    greater stir was caused by an experiment conducted in the middle of
    2014, []{#Page_137 type="pagebreak" title="137"}for which Facebook
    manipulated the timelines of 689,003 of its users, approximately 0.04
    percent of all members. The selected members were divided into two
    groups, one of which received more "positive" messages from their circle
    of friends while the other received more "negative" messages. For a
    control group, the filter settings were left unchanged. The goal was to
    investigate whether, without any direct interaction and non-verbal cues
    (mimicry, for example), the mood of a user could be influenced by the
    mood that he or she perceives in others -- that is, whether so-called
    "emotional contagion," which had hitherto only been demonstrated in the
    case of small and physically present groups, also took place online. The
    answer, according to the results of the study, was a resounding
    "yes."[^24^](#c3-note-0024){#c3-note-0024a} Another conclusion, though
    one that the researchers left unexpressed, is that Facebook can
    influence this process in a controlled manner. Here, it is of little
    interest whether it is genuinely possible to manipulate the emotional
    condition of someone posting on Facebook by increasing the presence of
    certain key words, or whether the presence of these words simply
    increases the social pressure for someone to appear in a better or worse
    mood.[^25^](#c3-note-0025){#c3-note-0025a} What is striking is rather
    the complete disregard of one of the basic ethical principles of
    scientific research, namely that human subjects must be informed about
    and agree to any experiments performed on or with them ("informed
    consent"). This disregard was not a mere oversight; the authors of the
    study were alerted to the issue before publication, and the methods were
    subjected to an internal review. The result: Facebook\'s terms of use
    allow such methods, no legal claims could be made, and the modulation of
    the newsfeed by changing filter settings is so common that no one at
    Facebook could see anything especially wrong with the
    experiment.[^26^](#c3-note-0026){#c3-note-0026a}

    Why would they? All commercially driven social mass media conduct
    manipulative experiments. From the perspective of "data behaviorism,"
    this is the best way to acquire feedback from users -- far better than
    direct surveys.[^27^](#c3-note-0027){#c3-note-0027a} Facebook had also
    already conducted experiments in order to intervene directly in
    political processes. On November 2, 2010, the social mass medium tested,
    by manipulating timelines, whether it might be possible to increase
    voter turnout for the American midterm elections that were taking place
    []{#Page_138 type="pagebreak" title="138"}on that day. An application
    was surreptitiously loaded into the timelines of more than 10 million
    people that contained polling information and a list of friends who had
    already voted. It was possible to collect this data because the
    application had a built-in function that enabled people to indicate
    whether they had already cast a vote. A control group received a message
    that encouraged them to vote but lacked any personalization or the
    possibility of social interaction. This experiment, too, relied on the
    principle of "contagion." By the end of the day, those who saw that
    their friends had already voted were 0.39 percent more likely to go to
    the polls than those in the control group. In relation to a single
    person, the extent of this influence was thus extremely weak and barely
    relevant. Indeed, it would be laughable even to speak of influence at
    all if only one person in roughly 250 altered his or her behavior.
    Personal experience suggests that one cannot be manipulated by such
    things. It would be
    false to conclude, however, that such interventions are irrelevant, for
    matters are entirely different where large groups are concerned. On
    account of Facebook\'s small experiment, approximately 60,000 people
    voted who otherwise would have stayed at home, and around 340,000 extra
    votes were cast (because most people do not go to vote alone but rather
    bring along friends and family members, who vote at the same
    time).[^28^](#c3-note-0028){#c3-note-0028a} These are relevant numbers
    if the margins are narrow between the competing parties or candidates,
    especially if the people who receive the extra information and incentive
    are not -- as they were for this study -- chosen at
    random.[^29^](#c3-note-0029){#c3-note-0029a} Facebook already possesses,
    in excess, the knowledge necessary to focus on a particular target
    group, for instance on people whose sympathies lie with one party or
    another.[^30^](#c3-note-0030){#c3-note-0030a}
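
    As a back-of-envelope check on these figures (the size of the treated
    group below is back-calculated from the numbers in the text, not taken
    from the study itself):

    ```python
    # Reported effect: treated users were 0.39 percentage points more
    # likely to vote, yielding roughly 60,000 additional direct voters.
    uplift = 0.0039
    extra_direct_voters = 60_000

    # Implied size of the treated group (an inference, not a study figure;
    # consistent with the "more than 10 million" stated in the text):
    print(f"{extra_direct_voters / uplift:,.0f} treated users")  # ~15 million

    # The text attributes around 340,000 extra votes in total, i.e. some
    # 280,000 further votes from friends and family voting alongside.
    print(340_000 - extra_direct_voters, "indirect extra votes")
    ```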
    :::

    ::: {.section}
    ### The dark shadow of cybernetics {#c3-sec-0006}

    Far from being unusual, the manipulation of information behind the backs
    of users is rather something that is done every day by commercially
    driven social mass media, which are not primarily channels for
    transmitting content but rather -- and above all -- environments in
    which we live. Both of the examples discussed above illustrate what is
    possible when these environments, which do not represent the world but
    []{#Page_139 type="pagebreak" title="139"}rather generate it, are
    centrally controlled, as is presently the case. Power is being exercised
    not by directly stipulating what each individual ought to do, but rather
    by altering the environment in which everyone is responsible for finding
    his or her way. The baseline of facts can be slightly skewed in order to
    increase the probability that this modified facticity will, as a sort
    of social gravity, guide things in a certain direction. At work here is
    the fundamental insight of cybernetics, namely that the "target" to be
    met -- be it an enemy bomber,[^31^](#c3-note-0031){#c3-note-0031a} a
    citizen, or a customer -- orients its behavior to its environment, to
    which it is linked via feedback. From this observation, cybernetically
    oriented social planners soon drew the conclusion that the best (because
    indirect and hardly perceptible) method for influencing the "target"
    would be to alter its environment. As early as the beginning of the
    1940s, the anthropologist and cyberneticist Gregory Bateson posed the
    following question: "How would we rig the maze or problem-box so that
    the anthropomorphic rat shall obtain a repeated and reinforced
    impression of his own free will?"[^32^](#c3-note-0032){#c3-note-0032a}
    Though Bateson\'s formulation is somewhat flippant, there was a serious
    backdrop to this problem. The electoral success of the Nazis during the
    1930s seemed to have indicated that the free expression of will can have
    catastrophic political consequences. In response to this, the American
    planners of the post-war order made it their objective to steer the
    population toward (or keep it on) the path of liberal, market-oriented
    democracy without obviously undermining the legitimacy of liberal
    democracy itself, namely its basis in the individual\'s free will and
    freedom of choice. According to the French author collective Tiqqun,
    this paradox was resolved by the introduction of "a new fable that,
    after the Second World War, definitively \[...\] supplanted the liberal
    hypothesis. Contrary to the latter, it proposes to conceive biological,
    physical and social behaviors as something integrally programmed and
    re-programmable."[^33^](#c3-note-0033){#c3-note-0033a} By the term
    "liberal hypothesis," Tiqqun meant the assumption, stemming from the
    time of the Enlightenment, that people could improve themselves by
    applying their own reason and exercising their own moral faculties, and
    could free themselves from ignorance through education and reflection.
    Thus, they could become autonomous individuals and operate as free
    actors (both as market []{#Page_140 type="pagebreak"
    title="140"}participants and as citizens). The liberal hypothesis is
    based on human understanding. The cybernetic hypothesis is not. Its
    conception of humans is analogous to its conception of animals, plants,
    and machines; like the latter, people are organisms that react to
    stimuli from their environment. The hypothesis is thus associated with
    the theories of "instrumental conditioning," which had been formulated
    by behaviorists during the 1940s. In the case of both humans and other
    animals, as it was argued, learning is not a process of understanding
    but rather one of executing a pattern of stimulus and response. To learn
    is thus to adopt a pattern of behavior with which one\'s own activity
    elicits the desired reaction. In this model, understanding does not play
    any role; all that matters is
    behavior.[^34^](#c3-note-0034){#c3-note-0034a}

    And this behavior, according to the cybernetic hypothesis, can be
    programmed not by directly accessing people (who are conceived as
    impenetrable black boxes) but rather by indirectly altering the
    environment, with which organisms and machines are linked via feedback.
    These interventions are usually so subtle as to not be perceived by the
    individual, and this is because there is no baseline against which it is
    possible to measure the extent to which the "baseline of facts" has been
    tilted. Search results and timelines are always being filtered and,
    owing to personalization, a search will hardly ever generate the same
    results twice. On a case-by-case basis, the effects of this are often
    minimal for the individual. In aggregate and over long periods of time,
    however, the effects can be substantial without the individual even
    being able to detect them. Yet the practice of controlling behavior by
    manipulating the environment is not limited to the environment of
    information. In their enormously influential book from 2008, *Nudge*,
    Richard Thaler and Cass Sunstein even recommended this as a general
    method for "nudging" people, almost without their notice, in the
    direction desired by central planners. To accomplish this, it is
    necessary for the environment to be redesigned by the "choice architect"
    -- by someone, for instance, who can organize the groceries in a store
    in such a way as to increase the probability that shoppers will reach
    for healthier options. They refer to this system of control as
    "libertarian paternalism" because it combines freedom of choice
    (libertarianism) with obedience []{#Page_141 type="pagebreak"
    title="141"}to an -- albeit invisible -- authority figure
    (paternalism).[^35^](#c3-note-0035){#c3-note-0035a} The ideal sought by
    the authors is a sort of unintrusive caretaking. In the spirit of
    cybernetics and in line with the structures of post-democracy, the
    expectation is for people to be moved in the experts\' chosen direction
    by means of a change to their environment, while simultaneously
    maintaining the impression that they are behaving in a free and
    autonomous manner. The compatibility of this approach with agendas on
    both sides of the political spectrum is evident in the fact that the
    Democratic president Barack Obama regularly sought Cass Sunstein\'s
    advice and, in 2009, made him the director of the Office of Information
    and Regulatory Affairs, while Richard Thaler, in 2010, was appointed to
    the advisory board of the so-called Behavioural Insights Team, which,
    known as the "nudge unit," had been founded by the Conservative prime
    minister David Cameron.

    In the case of social mass media, the ability to manipulate the
    environment is highly one-sided. It is reserved exclusively for those on
    the inside, and the latter are concerned with maximizing the profit of a
    small group and expanding their power. It is possible to regard this
    group as the inner core of the post-democratic system, consisting of
    leading figures from business, politics, and the intelligence agencies.
    Users typically experience this power, which determines the sphere of
    possibility within which their everyday activity can take place, in its
    soft form, for instance when new features are introduced that change the
    information environment. The hard form of this power only becomes
    apparent in extreme cases, for instance when a profile is suddenly
    deleted or a group is removed. This can happen on account of a rule
    whose existence does not necessarily have to be public or
    transparent,[^36^](#c3-note-0036){#c3-note-0036a} or because of an
    external intervention that will only be communicated if it is in the
    providers\' interest to do so. Such cases make it clear that, at any
    time, service providers can take away the possibilities for action that
    they offer. This results in a paradoxical experience on the part of
    users: the very environments that open up new opportunities for them in
    their personal lives prove to be entirely beyond influence when it comes
    to fundamental decisions that affect everyone. And, as the majority of
    people gradually lose the ability to co-determine how the "big
    questions" are answered, a very []{#Page_142 type="pagebreak"
    title="142"}small number of actors is becoming stronger than ever. This
    paradox of new opportunities for action and simultaneous powerlessness
    has been reflected in public debate, where there has also been much
    (one-sided) talk about empowerment and the loss of
    control.[^37^](#c3-note-0037){#c3-note-0037a} It would be better to
    discuss a shift in power that has benefited the elite at the expense of
    the vast majority of people.
    :::

    ::: {.section}
    ### Networks as monopolies {#c3-sec-0007}

    Whereas the dominance of output legitimation is new in the realm of
    politics, it is normal and seldom regarded as problematic in the world
    of business.[^38^](#c3-note-0038){#c3-note-0038a} For, at least in
    theory (that is, under the conditions of a functioning market),
    customers are able to deny the legitimacy of providers and ultimately
    choose between competing products. In the case of social mass media,
    however, there is hardly any competition, despite all of the innovation
    that is allegedly taking place. Facebook, Twitter, and many other
    platforms use closed protocols that greatly hinder the ability of their
    members to communicate with the users of competing providers. This has
    led to a situation in which the so-called *network effect* -- the fact
    that the more a network connects people with one another, the more
    useful and attractive it becomes -- has given rise to a *monopoly
    effect*: the entire network can only consist of a single provider. This
    connection between the network effect and the monopoly effect, however,
    is not inevitable, but rather fabricated. It is the closed standards
    that make it impossible to switch providers without losing access to the
    entire network and thus also to the communal formations that were
    created on its foundation. From the perspective of the user, this
    represents an extremely high barrier against leaving the network -- for,
    as discussed above, these formations now play an essential role in the
    creation of both identity and opportunities for action. From the user\'s
    standpoint, this is an all-or-nothing decision with severe consequences.
    Formally, this is still a matter of individual and free choice, for no
    one is being forced, in the classical sense, to use a particular
    provider.[^39^](#c3-note-0039){#c3-note-0039a} Yet the options for
    action are already pre-structured in such a way that free choice is no
    longer free. The majority of American teens, for example, despite
    []{#Page_143 type="pagebreak" title="143"}no longer being very
    enthusiastic about Facebook, continue using the network for fear of
    missing out on something.[^40^](#c3-note-0040){#c3-note-0040a} This
    contradiction -- voluntarily doing something that one does not really
    want to do -- and the resulting experience of failing to shape one\'s
    own activity in a coherent manner are ideal-typical manifestations of
    the power of networks.
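
    One rough way to see how the network effect hardens into a monopoly
    effect is the common heuristic (often attributed to Metcalfe) that the
    usefulness of a network grows with the number of possible connections
    among its members. The heuristic is an assumption of this sketch, not
    a claim made in the text.

    ```python
    def possible_connections(members: int) -> int:
        """Pairwise links among members -- a simple proxy for network value."""
        return members * (members - 1) // 2

    # One interoperable network of 1,000 people:
    print(possible_connections(1000))        # 499,500 possible links

    # The same people split across two closed, non-communicating platforms:
    print(2 * possible_connections(500))     # 249,500 -- barely half

    # Open protocols (as with email) let separate providers form a single
    # network, so switching providers does not sever any connections.
    ```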

    The problem experienced by the unwilling-willing users of Facebook has
    not been caused by the transformation of communication into data as
    such. This is necessary to provide input for algorithms, which turn the
    flood of information into something usable. To this extent, the general
    complaint about the domination of algorithms is off the mark. The
    problem is not the algorithms themselves but rather the specific
    capitalist and post-democratic setting in which they are implemented.
    They only become an instrument of domination when open and
    decentralized activities are transferred into closed and centralized
    structures in which far-reaching, fundamental decision-making powers and
    possibilities for action are embedded that legitimize themselves purely
    on the basis of their output. Or, to adapt the title of Rosa von
    Praunheim\'s film, which I discussed in my first chapter: it is not the
    algorithm that is perverse, but the situation in which it lives.
    :::

    ::: {.section}
    ### Political surveillance {#c3-sec-0008}

    In June 2013, Edward Snowden exposed an additional and especially
    problematic aspect of the expansion of post-democratic structures: the
    comprehensive surveillance of the internet by government intelligence
    agencies. The latter do not use collected data primarily for commercial
    ends (although they do engage in commercial espionage) but rather for
    political repression and the protection of central power interests --
    or, to put it in more neutral terms, in the service of general security.
    Yet the NSA and other intelligence agencies also record decentralized
    communication and transform it into (meta-)data, which are centrally
    stored and analyzed.[^41^](#c3-note-0041){#c3-note-0041a} This process
    is used to generate possible courses of action, from intensifying the
    surveillance of individuals and manipulating their informational
    environment[^42^](#c3-note-0042){#c3-note-0042a} to launching military
    drones for the purpose of
    assassination.[^43^](#c3-note-0043){#c3-note-0043a} The []{#Page_144
    type="pagebreak" title="144"}great advantage of meta-data is that they
    can be standardized and thus easily evaluated by machines. This is
    especially important for intelligence agencies because, unlike social
    mass media, they do not analyze uniformly formatted and easily
    processable streams of communication.
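
    What "standardized" means here can be seen from any email: the
    envelope follows a fixed header format that machines can parse without
    touching the body. A minimal sketch with an invented message:

    ```python
    from email import message_from_string

    raw = (
        "From: alice@example.org\n"
        "To: bob@example.net\n"
        "Date: Mon, 02 Nov 2015 09:13:00 +0100\n"
        "Subject: lunch?\n"
        "\n"
        "The body is irrelevant for traffic analysis.\n"
    )

    msg = message_from_string(raw)
    # Who talked to whom, and when: uniform fields, trivially
    # machine-readable without ever reading the message body.
    print({field: msg[field] for field in ("From", "To", "Date")})
    ```

    That said, the boundaries between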
    post-democratic social mass media and government intelligence services
    are fluid. As is well known by now, the two realms share a number of
    continuities in personnel and commonalities with respect to their
    content.[^44^](#c3-note-0044){#c3-note-0044a} In 2010, for instance,
    Facebook\'s chief security officer left his job for a new position at
    the NSA. Personnel swapping of this sort takes place at all levels and
    is facilitated by the fact that the two sectors are engaged in nearly
    the same activity: analyzing social interactions in real time by means
    of their exclusive access to immense volumes of data. The lines of
    inquiry and the applied methods are so similar that universities,
    companies, and security organizations are able to cooperate closely with
    one another. In many cases, certain programs or analytic methods are
    just as suitable for commercial purposes as they are for intelligence
    agencies and branches of the military. This is especially apparent in
    the research that is being conducted. Scientists, businesses, and
    militaries share a common interest in discovering collective social
    dynamics as early as possible, isolating the relevant nodes (machines,
    individual people, or groups) through which these dynamics can be
    influenced, and developing strategies for specific interventions to
    achieve one goal or another. Aspects of this cooperation are publicly
    documented. Since 2011, for instance, the Defense Advanced Research
    Projects Agency (DARPA) -- the American agency that, in the 1960s,
    initiated and financed the development of the internet -- has been
    running its own research program on social mass media with the name
    Social Media in Strategic Communication. Within the framework of this
    program, more than 160 scientific studies have already been published,
    with titles such as "Automated Leadership Analysis" or "Interplay
    between Social and Topical
    Structure."[^45^](#c3-note-0045){#c3-note-0045a} Since 2009, the US
    military has been coordinating research in this field through a program
    called the Minerva Initiative, which oversees more than 70 individual
    projects.[^46^](#c3-note-0046){#c3-note-0046a} Since 2009, too, the
    European Union has been working together []{#Page_145 type="pagebreak"
    title="145"}with universities and security agencies within the framework
    of the so-called INDECT program, the goal of which is "to involve
    European scientists and researchers in the development of solutions to
    and tools for automatic threat
    detection."[^47^](#c3-note-0047){#c3-note-0047a} Research, however, is
    just one area of activity. As regards the collection of data and the
    surveillance of communication, there is also a high degree of
    cooperation between private and government actors, though it is not
    always without tension. Snowden\'s revelations have done little to
    change this. The public outcry of large internet companies over the fact
    that the NSA has been monitoring their services might be an act of
    showmanship more than anything else. Such bickering, according to the
    security expert Bruce Schneier, is "mostly role-playing designed to keep
    us blasé about what\'s really going
    on."[^48^](#c3-note-0048){#c3-note-0048a}

    Like the operators of social mass media, intelligence agencies also
    argue that their methods should be judged according to their output;
    that is, the extent to which they ensure state security. Outsiders,
    however, are hardly able to make such a judgment. Input legitimation --
    that is, the question of whether government security agencies are
    operating within the bounds of the democratically legitimized order of
    law -- seems to be playing a less significant role in the public
    discussion. In somewhat exaggerated terms, one could say that the
    disregard for fundamental rights is justified by the quality of the
    "security" that these agencies have created. Perhaps the similarity of
    the general methods and self-justifications with which service providers
    of social production, consumption, and security are constantly
    "optimized" is one reason why there has yet to be widespread public
    protest against comprehensive surveillance programs. We have been warned
    of the establishment of a "police state in reserve," which can be
    deployed at any time, but these warnings seem to have fallen on deaf
    ears.[^49^](#c3-note-0049){#c3-note-0049a}
    :::

    ::: {.section}
    ### The normalization of post-democracy {#c3-sec-0009}

    At best, it seems as though the reflex of many people is to respond to
    even fundamental political issues by considering only what might be
    useful or pleasant for themselves in the short term. Apparently, many
    people consider it normal to []{#Page_146 type="pagebreak"
    title="146"}be excluded from decisions that affect broad and significant
    areas of their life. The post-democracy of social mass media, which has
    deeply permeated the constitution of everyday life and the constitution
    of subjects, is underpinned by the ever advancing post-democracy of
    politics. It changes the expectations that citizens have for democratic
    institutions, and it makes their increasing erosion seem expected and
    normal to broad strata of society. The violation of fundamental and
    constitutional civil rights, such as those concerning the protection of
    data, is increasingly regarded as unavoidable and -- from the pragmatic
    perspective of the individual -- not so bad. This has of course
    benefited political decision-makers, who have shown little desire to
    change the situation, safeguard basic rights, and establish democratic
    control over all areas of executive
    authority.[^50^](#c3-note-0050){#c3-note-0050a}

    The spread of "smart" technologies is enabling such post-democratic
    processes and structures to permeate all areas of life. Within one\'s
    private living space, this happens through smart homes, which are still
    limited to the high end of the market, and smart meters, which have been
    implemented across all social
    strata.[^51^](#c3-note-0051){#c3-note-0051a} The latter provide
    electricity companies with detailed real-time data about a household\'s
    usage behavior and are supposed to enhance energy efficiency, but it
    remains unclear exactly how this new efficiency will be
    achieved.[^52^](#c3-note-0052){#c3-note-0052a} The concept of the "smart
    city" extends this process to entire municipalities. Over the course of
    the next few decades, for instance, Siemens predicts that "cities will
    have countless autonomous, intelligently functioning IT systems that
    will have perfect knowledge of users\' habits and energy consumption,
    and provide optimum service. \[...\] The goal of such a city is to
    optimally regulate and control resources by means of autonomous IT
    systems."[^53^](#c3-note-0053){#c3-note-0053a} According to this vision,
    the city will become a cybernetic machine, but if everything is
    "optimally" regulated and controlled, who will be left to ask in whose
    interests these autonomous systems are operating?

    Such dynamics, however, not only reorganize physical space on a small
    and a large scale; they also infiltrate human beings. Adherents of the
    Quantified Self movement work diligently to record digital information
    about their own bodies. The number of platforms that incite users to
    stay fit (and []{#Page_147 type="pagebreak" title="147"}share their data
    with companies) with competitions, point systems, and similar incentives
    has been growing steadily. It is just a small step from this hobby
    movement to a disciplinary regime that is targeted at the
    body.[^54^](#c3-note-0054){#c3-note-0054a} Imagine the possibilities of
    surveillance and sanctioning that will come about when data from
    self-optimizing applications are combined with the data available to
    insurance companies, hospitals, authorities, or employers. It does not
    take too much imagination to do so, because this is already happening in
    part today. At the end of 2014, for instance, the Generali Insurance
    Company announced a new set of services that is marketed under the name
    Vitality. People insured in Germany, France, and Austria are supposed to
    send their health information to the company and, as a reward for
    leading a "proper" lifestyle, receive a rebate on their premium. The
    long-term goal of the program is to develop "behavior-dependent tariff
    models," which would undermine the solidarity model of health
    insurance.[^55^](#c3-note-0055){#c3-note-0055a}

    According to the legal scholar Frank Pasquale, the sum of all these
    developments has led to a black-box society: ever more social
    processes are being controlled by algorithms whose operations are not
    transparent because they are shielded from the outside world and thus
    from democratic control.[^56^](#c3-note-0056){#c3-note-0056a} This
    ever-expanding "post-democracy" is not simply liberal democracy with a
    few problems that can be eliminated through well-intentioned reforms.
    Rather, a new social system has emerged in which allegedly relaxed
    control over social activity is compensated for by a heightened level of
    control over the data and structural conditions pertaining to the
    activity itself. In this system, both the virtual and the physical world
    are altered to achieve particular goals -- goals determined by just a
    few powerful actors -- without the inclusion of those affected by these
    changes and often without them being able to notice the changes at all.
    Whoever refuses to share his or her data freely comes to look suspicious
    and, regardless of the motivations behind this anonymity, might even be
    regarded as a potential enemy. In July 2014, for instance, the following
    remarks were included in Facebook\'s terms of use: "On Facebook people
    connect using their real names and identities. \[...\] Claiming to be
    another person \[...\] or creating multiple accounts undermines
    community []{#Page_148 type="pagebreak" title="148"}and violates
    Facebook\'s terms."[^57^](#c3-note-0057){#c3-note-0057a} For the police
    and the intelligence agencies in particular, all activities that attempt
    to evade comprehensive surveillance are generally suspicious. Even in
    Germany, people are labeled "extremists" by the NSA for the sole reason
    that they have supported the Tor Project\'s anonymity
    software.[^58^](#c3-note-0058){#c3-note-0058a} In a 2014 trial in
    Vienna, the use of a foreign pre-paid telephone was introduced as
    evidence that the defendant had attempted to conceal a crime, even
    though this is a harmless and common method for avoiding roaming charges
    while abroad.[^59^](#c3-note-0059){#c3-note-0059a} This is a sort of
    anti-mask law 2.0, and every additional terrorist attack is used to
    justify extending its reach.

    It is clear that Zygmunt Bauman\'s bleak assessment of freedom in what
    he calls "liquid modernity" -- "freedom comes when it no longer
    matters"[^60^](#c3-note-0060){#c3-note-0060a} -- can easily be modified
    to suit the digital condition: everyone can participate in cultural
    processes, because culture itself has become irrelevant. Disputes about
    shared meaning, in which negotiations are made about what is important
    to people and what ought to be achieved, have less and less influence
    over the way power is exercised. Politics has been abandoned for an
    administrative management that oscillates between paternalism and
    authoritarianism. Issues that concern the common good have been
    delegated to "autonomous IT systems" and removed from public debate. By
    now, the exercise of power, which shapes society, is based less on basic
    consensus and cultural hegemony than it is on the technocratic argument
    that "there is no alternative" and that the (informational) environment
    in which people have to orient themselves should be optimized through
    comprehensive control and manipulation -- whether they agree with this
    or not.
    :::

    ::: {.section}
    ### Forms of resistance {#c3-sec-0010}

    As far as the circumstances outlined above are concerned, Bauman\'s
    conclusion may seem justified. But as an overarching assessment of
    things, it falls somewhat short, for every form of power provokes its
    own forms of resistance.[^61^](#c3-note-0061){#c3-note-0061a} In the
    context of post-democracy under the digital condition, these forms have
    likewise shifted to the level of data, and an especially innovative and
    effective means of resistance []{#Page_149 type="pagebreak"
    title="149"}has been the "leak"; that is, the unauthorized publication
    of classified documents, usually in the form of large datasets. The most
    famous platform for this is WikiLeaks, which since 2006 has attracted
    international attention to this method with dozens of spectacular
    publications -- on corruption scandals, abuses of authority, corporate
    malfeasance, environmental damage, and war crimes. As a form of
    resistance, however, leaking entire databases is not limited to just one
    platform. In recent years and through a variety of channels, large
    amounts of data (from banks and accounting firms, for instance) have
    been made public or have been handed over to tax investigators by
    insiders. Thus, in 2014, for instance, the *Süddeutsche Zeitung*
    (operating as part of the International Consortium of Investigative
    Journalists based in Washington, DC) was not only able to analyze the
    so-called "Offshore Leaks" -- a database concerning approximately
    122,000 shell companies registered in tax
    havens[^62^](#c3-note-0062){#c3-note-0062a} -- but also the "Luxembourg
    Leaks," which consisted of 28,000 pages of documents demonstrating the
    existence of secret and extensive tax deals between national authorities
    and multinational corporations and which caused a great deal of
    difficulty for Jean-Claude Juncker, the newly elected president of the
    European Commission and former prime minister of
    Luxembourg.[^63^](#c3-note-0063){#c3-note-0063a}

    The reasons why employees or government workers have become increasingly
    willing to hand over large amounts of information to journalists or
    whistle-blowing platforms are to be sought in the contradictions of the
    current post-democratic regime. Over the past few years, the discrepancy
    in Western countries between the self-representation of democratic
    institutions and their frequently post-democratic practices has become
    even more obvious. For some people, including the former CIA employee
    Edward Snowden, this discrepancy created a moral conflict. He claimed
    that his work consisted in the large-scale investigation and monitoring
    of law-abiding citizens, thus systematically violating the Constitution,
    which he was supposed to be protecting. He resolved this inner conflict
    by gathering material about his own activity, then releasing it, with
    the help of journalists, to the public, so that the latter could
    understand and judge what was taking
    place.[^64^](#c3-note-0064){#c3-note-0064a} His leaks benefited from
    technical []{#Page_150 type="pagebreak" title="150"}advances, including
    the new forms of cooperation which have resulted from such advances.
    Even institutions that depend on keeping secrets, such as banks and
    intelligence agencies, have to "share" their information internally and
    rely on a large pool of technical personnel to record and process the
    massive amounts of data. To accomplish these tasks, employees need the
    fullest possible access to this information, for even the most secret
    databases have to be maintained by someone, and this also involves
    copying data. Thus, it is far easier today than it was just a few
    decades ago to smuggle large volumes of data out of an
    institution.[^65^](#c3-note-0065){#c3-note-0065a}

    This new form of leaking, however, did not become an important method of
    resistance on account of technical developments alone. In the era of big
    data, databases are the central resource not only for analyzing how the
    world is described by digital communication, but also for generating
    that communication. The power of networks in particular is organized
    through the construction of environmental conditions that operate
    simultaneously in many places. On their own, the individual commands and
    instructions are often banal and harmless, but as a whole they
    contribute to a dynamic field that is meant to produce the results
    desired by the planners who issue them. In order to reconstruct this
    process, it is necessary to have access to these large amounts of data.
    With such information at hand, it is possible to relocate the
    surreptitious operations of post-democracy into the sphere of political
    debate -- the public sphere in its emphatic, liberal sense -- and this
    needs to be done in order to strengthen democratic forces against their
    post-democratic counterparts. Ten years after WikiLeaks and three years
    after Edward Snowden\'s revelations, it remains highly questionable
    whether democratic actors are strong enough or able to muster the
    political will to use this information to tip the balance in their favor
    for the long term. Despite the forms of resistance that have arisen in
    response to these new challenges, one could be tempted to concur with
    Bauman\'s pessimistic conclusion about the irrelevance of freedom,
    especially if post-democracy were the only concrete political tendency
    of the digital condition. But it is not. There is a second political
    trend taking place, though it is not quite as well
    developed.[]{#Page_151 type="pagebreak" title="151"}
    :::
    :::

    ::: {.section}
    Commons {#c3-sec-0011}
    -------

    The digital condition not only entrenches post-democratic structures in
    more areas of life; it is also characterized by the development of a new
    manner of production. As early as 2002, the legal scholar Yochai Benkler
    coined the term "commons-based peer production" to describe the
    development in question.[^66^](#c3-note-0066){#c3-note-0066a} Together,
    Benkler\'s peers form what I have referred to as "communal formations":
    people joining forces voluntarily and on a fundamentally even playing
    field in order to pursue common goals. Benkler enhances this idea with
    reference to the constitutive role of the commons for many of these
    communal formations.

    As such, commons are neither new nor specifically Western. They exist in
    many cultural traditions, and thus the term is used in a wide variety of
    ways.[^67^](#c3-note-0067){#c3-note-0067a} In what follows, I will
    distinguish between three different dimensions. The first of these
    involves "common pool resources"; that is, *goods* that can be used
    communally. The second dimension is that these goods are administered by
    the "commoners"; that is, by members of *communities* who produce, use,
    and cultivate the resources. Third, this activity gives rise to forms of
    "commoning"; that is, to *practices*, *norms*, and *institutions* that
    are developed by the communities
    themselves.[^68^](#c3-note-0068){#c3-note-0068a}

    In the commons, efforts are focused on the long-term utility of goods.
    This does not mean that commons cannot also be used for the production
    of commercial products -- cheese from the milk of cows that graze on a
    common pasture, for instance, or books based on the content of Wikipedia
    articles. The relationships between the people who use a certain
    resource communally, however, are not structured through money but
    rather through direct social cooperation. Commons are thus
    fundamentally different from classical market-oriented institutions,
    which orient their activity primarily in response to price signals.
    Commons are also fundamentally distinct from bureaucracies -- whether in
    the form of public administration or private industry -- which are
    organized according to hierarchical chains of command. And they differ,
    too, from public institutions. Whereas the latter are concerned with
    society as a whole -- or at least that is []{#Page_152 type="pagebreak"
    title="152"}their democratic mandate -- commons are inwardly oriented
    forms that primarily exist by means and for the sake of their members.

    ::: {.section}
    ### The organization of the commons {#c3-sec-0012}

    Commoners create institutions when they join together for the sake of
    using a resource in a long-term and communal manner. In this, the
    separation of producers and consumers, which is otherwise ubiquitous,
    does not play a significant role: to different and variable extents, all
    commoners are producers and consumers of the common resources. It is an
    everyday occurrence for someone to take something from the common pool
    of resources for his or her own use, but it is understood that something
    will be created from this that, in one form or another, will flow back
    into the common pool. This process -- the reciprocal relationship
    between singular appropriation and communal provisions -- is one of the
    central dynamics within commons.

    Because commoners orient their activity neither according to price
    signals (markets) nor according to instructions or commands
    (hierarchies), social communication among the members is the most
    important means of self-organization. This communication is intended to
    achieve consensus and the voluntary acceptance of negotiated rules, for
    only in such a way is it possible to maintain the voluntary nature of
    the arrangement and to keep internal controls at a minimum. Voting,
    which is meant to legitimize the preferences of a majority, is thus
    somewhat rare, and when it does happen, it is only of subordinate
    significance. The main issue is to build consensus, and this is usually
    a complex process requiring intensive communication. One of the reasons
    why the very old practice of the commons is now being readopted and
    widely discussed is that communication-intensive and horizontal
    processes can be organized far more effectively with digital
    technologies. Thus, the idea of collective participation and
    organization beyond small groups is no longer just a utopian vision.

    The absence of price signals and chains of command causes the social
    institutions of the commons to develop complex structures for
    comprehensively integrating their members. []{#Page_153 type="pagebreak"
    title="153"}This typically involves weaving together a variety of
    economic, social, cultural, and technical dimensions. Commons realize an
    alternative to the classical separation of spheres that is so typical of
    our modern economy and society. The economy is not understood here as an
    independent realm that functions according to its own set of rules and
    generates externalities, but rather as one facet of a complex and
    comprehensive phenomenon with intertwining commercial, social, ethical,
    ecological, and cultural dimensions.

    There is no general way to determine how the interplay between these
    three dimensions solidifies into concrete institutions.
    Historically, many different commons-based institutions were developed,
    and their number and variety have only increased under the digital
    condition. Elinor Ostrom, who was awarded the 2009 Nobel Prize in
    Economics for her work on the commons, has thus refrained from
    formulating a general model for
    them.[^69^](#c3-note-0069){#c3-note-0069a} Instead, she has identified a
    series of fundamental challenges for which all commoners have to devise
    their own solutions.[^70^](#c3-note-0070){#c3-note-0070a} For example,
    the membership of a group that communally uses a particular resource
    must be defined and, if necessary, limited. Especially in the case of
    material resources, such as pastures on which several people keep their
    animals, it is important to limit the number of members for the simple
    reason that the resource in question might otherwise be over-utilized
    (this is allegedly the "tragedy of the
    commons").[^71^](#c3-note-0071){#c3-note-0071a} Things are different
    with so-called non-rival goods, which can be consumed by one person
    without excluding their use by others. When I download and use a freely
    available word-processing program, for instance, I do not take away
    another person\'s chance to do the same. But even in the case of digital
    common goods, access is often tied to certain conditions. Whoever uses
    free software has to accept its licensing agreement.

    Internally, commons are often meritocratically oriented. Those who
    contribute more are also able to make greater use of the common good (in
    the case of material goods) or more strongly influence its development
    (in the case of informational goods). In the latter case, the
    meritocratic element takes into account the fact that the challenge does
    not lie in avoiding the over-utilization of a good, but rather in
    generating new contributions to its further development. Those who
    []{#Page_154 type="pagebreak" title="154"}contribute most to the
    provision of resources should also be able to determine their further
    course of development, and this represents an important incentive for
    these members to remain in the group. This is in the interest of all
    participants, and thus the authority of the most active members is
    seldom called into question. This does not mean, however, that there are
    no differences of opinion within commons. Here, too, reaching consensus
    can be a time-consuming process. Among the most important
    characteristics of all commons are thus mechanisms for decision-making
    that involve members in a variety of ways. The rules that govern the
    commons are established by the members themselves. This goes far beyond
    choosing between two options presented by a third party. Commons are not
    simply markets without money. All relevant decisions are made
    collectively within the commons, and they do not simply aggregate as the
    sum of individual decisions. Here, unlike the case of post-democratic
    structures, the levels of participation and decision-making are not
    separated from one another. On the contrary, they are directly and
    explicitly connected.

    The implementation of rules and norms, even if they are the result of
    consensus, is never an entirely smooth process. It is therefore
    necessary, as Ostrom has stressed, to monitor rule compliance within
    commons and to develop a system of graded sanctions. Minor infractions
    are punished with social disapproval or small penalties, while graver
    infractions warrant stiffer penalties that can lead to a person\'s
    exclusion from the group. In order for conflicts or rule violations not
    to escalate in the commons to the extent that expulsion is the only
    option, mechanisms for conflict resolution have to be put in place. In
    the case of Wikipedia, for instance, conflicts are usually resolved
    through discussions. This is not always productive, however, for
    occasionally the "solution" turns out to be that one side or the other
    has simply given up out of exhaustion.

    A final important point is that commons do not exist in isolation from
    society. They are always part of larger social systems, which are
    normally governed by the principles of the market or subject to state
    control, and are thus in many cases oppositional to the practice of
    commoning. Political resistance is often incited by the very claim that
    a particular []{#Page_155 type="pagebreak" title="155"}good can be
    communally administered and does not belong to a single owner, but
    rather to a group that governs its own affairs. Yet without the
    recognition of the right to self-organization and without the
    corresponding legal conditions allowing this right to be perceived as
    such, commons are barely able to form at all, and existing commons are
    always at risk of being expropriated and privatized by a third party.
    This is the true "tragedy of the commons," and it happens all the
    time.[^72^](#c3-note-0072){#c3-note-0072a}
    :::

    ::: {.section}
    ### Informational common goods: free software and free culture {#c3-sec-0013}

    The term "commons" was first applied to informational goods during the
    second half of the 1990s.[^73^](#c3-note-0073){#c3-note-0073a} The
    practice of creating digital common goods, however, goes back to the
    origins of free software around the middle of the 1980s. Since then, a
    complex landscape has developed in which software code is cooperatively
    and sustainably managed as a common resource, available to everyone who
    accepts the relevant licensing terms. This can best be explained with an
    example. One of the oldest projects in the area of free software -- and
    one that continues to be of relevance today -- is Debian, a so-called
    "distribution" (that is, a compilation of software components) that has
    existed since 1993. According to its own website:

    ::: {.extract}
    The Debian Project is an association of individuals who have made common
    cause to create a free operating system. \[...\] An operating system is
    the set of basic programs and utilities that make your computer run.
    \[...\] Debian comes with over 43000 packages (precompiled software that
    is bundled up in a nice format for easy installation on your machine).
    \[...\] All of it free.[^74^](#c3-note-0074){#c3-note-0074a}
    :::

    The special thing about Unix-like operating systems is that they are
    composed of a very large number of independent yet interacting programs.
    The task of a distribution -- and this task is hardly trivial -- is to
    combine this modular variety into a whole that provides, in an
    integrated manner, all of the functions of a contemporary computer.
    Debian is particularly []{#Page_156 type="pagebreak"
    title="156"}important because the community sets extremely high
    standards for itself, and it is for this reason that the distribution is
    not only used by many server administrators but is also the foundation
    of numerous end-user-oriented services, including Ubuntu and Linux Mint.
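
    To make this modular structure concrete, here is a minimal sketch --
    assuming a Debian or Ubuntu system on which the python3-apt bindings
    are installed -- of how the package metadata that a distribution
    integrates into a whole can be inspected programmatically. The chosen
    package name is an arbitrary example:

    ```python
    # A sketch, not part of Debian's own tooling: inspecting the package
    # metadata of a distribution via the python3-apt bindings.
    import apt

    cache = apt.Cache()  # reads the local package index
    print(len(cache), "packages known to this system")

    pkg = cache["bash"]        # any package name will do
    candidate = pkg.candidate  # the version that would be installed
    print(pkg.name, candidate.version)

    for dep in candidate.dependencies:
        # a dependency may be satisfiable by alternative packages ("a | b")
        print("  depends on:", " | ".join(base.name for base in dep))
    ```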

    The Debian Project has developed a complex form of organization that is
    based on a set of fundamental principles defined by the members
    themselves. These are delineated in the Debian Social Contract, which
    was first formulated in 1997 and subsequently revised in
    2004.[^75^](#c3-note-0075){#c3-note-0075a} It stipulates that the
    software has to remain "100% free" at all times, in the sense that the
    software license guarantees the freedom of unlimited use, modification,
    and distribution. The developers understand this primarily as an ethical
    obligation. They explicitly regard the project as a contribution "to the
    free software community." The social contract demands transparency on
    the level of the program code: "We will keep our entire bug report
    database open for public view at all times. Reports that people file
    online will promptly become visible to others." There are both technical
    and ethical considerations behind this. The contract makes no mention at
    all of classical production goals, such as competitive products or a
    schedule for future developments. To put it
    in Colin Crouch\'s terms, input legitimation comes before output
    legitimation. The initiators silently assume that the project\'s basic
    ethical, technical, and social orientations will result in high quality,
    but they do not place this goal above any other.

    The Debian Social Contract is the basis for cooperation and the central
    reference point for dealing with conflicts. It forms the normative core
    of a community that is distinguished by its equal treatment of ethical,
    political, technical, and economic issues. The longer the members have
    been cooperating on this basis, the more binding this attitude
    has become for each of them, and the more sustainable the community has
    become as a whole. In other words, it has taken on a concrete form that
    is relevant to the activities of everyday
    life.[^76^](#c3-note-0076){#c3-note-0076a} Today, Debian is a global
    project with a stable core of about a thousand developers, most of whom
    live in Europe, the United States, and Latin
    America.[^77^](#c3-note-0077){#c3-note-0077a} The Debian commons is a
    highly collaborative organization, []{#Page_157 type="pagebreak"
    title="157"}the necessary cooperation for which is enabled by a complex
    infrastructure that automates many routine tasks. This is the only
    efficient way to manage the program code, which has grown to more than a
    hundred million lines. Yet not everything takes place online.
    International and local meetings and conferences have long played an
    important role. These have not only been venues for exchanging
    information and planning the coordination of the project; they have also
    helped to create a sense of mutual trust, without which this form of
    voluntary collaboration would not be possible.

    Despite the considerable size of the Debian Project, it is just one part
    of a much larger institutional ecology that includes other communities,
    universities, and businesses. Most of the 43,000 software packages of the
    Debian distribution are programmed by groups of developers that do not
    belong to the Debian Project. Debian is "just" a compilation of these
    many individual programs. One of these programs written by outsiders is
    the Linux kernel, which in many respects is the central and most complex
    program within a GNU/Linux operating system. Governing the organization
    of processes and data, it thus forms the interface between hardware and
    software. An entire institutional subsystem has been built up around
    this complex program, upon which everything else depends. The community
    of developers was initiated by Linus Torvalds, who wrote the first
    rudimentary kernel in 1991. Even though most of the kernel developers
    since then have been paid for their work, their cooperation then and now
    has been voluntary and, for the vast majority of contributors, has
    functioned without monetary exchange. In order to improve collaboration,
    a specialized technological infrastructure has been used -- above all
    Git, the version-control system developed by Torvalds himself, which
    automates many steps in managing distributed revisions of the code. In
    all of this, an important
    role is played by the Linux Foundation, a non-profit organization that
    takes over administrative, legal, and financial tasks for the community.
    The foundation is financed by its members, which include large software
    companies that contribute as much as \$500,000 a year. This money is
    used, for instance, to pay the most important programmers and to
    organize working groups, thus ensuring that the development and
    distribution of Linux will continue on a long-term basis. The
    []{#Page_158 type="pagebreak" title="158"}businesses that finance the
    Linux Foundation may be profit-oriented institutions, but the main work
    of the developers -- the program code -- flows back into the common pool
    of resources, which the explicitly non-profit Debian Project can then
    use to compile its distribution. The freedoms guaranteed by the free
    license render this transfer from commercial to non-commercial use not
    only legally unproblematic but even desirable to the for-profit service
    providers, as they themselves also need entire operating systems and not
    just the kernel.
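
    The distributed workflow that Git automates can be illustrated with a
    brief sketch: every contributor holds a complete local copy of the
    repository and records changes locally; publishing them is a separate,
    voluntary step. The commands below are standard Git, driven from
    Python; the clone URL is the kernel\'s public stable mirror, and the
    commit is an empty placeholder so that the script runs without any
    files being edited:

    ```python
    # A schematic sketch of Git's distributed revision management.
    import subprocess

    def git(*args, cwd=None):
        subprocess.run(["git", *args], cwd=cwd, check=True)

    # 1. Clone: a complete, independent copy of the project history
    #    (shallow here, to keep the download small).
    git("clone", "--depth", "1",
        "https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
        "linux")

    # 2. Work on a local branch, invisible to others until it is shared.
    git("checkout", "-b", "example-change", cwd="linux")

    # 3. Record a change locally; publishing it (a push, or a patch sent
    #    by email, as kernel development does) is a separate step.
    git("commit", "--allow-empty", "-m", "example: placeholder commit",
        cwd="linux")
    ```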

    The Debian Project draws from this pool of resources and is at the same
    time a part of it. Others can therefore use Debian\'s software code in
    turn, and this happens to a large extent, for instance in other Linux
    distributions. This is not understood as competition for market share
    but rather as an expression of the community\'s vitality, which for
    Debian represents a central and normative point of pride. As the Debian
    Social Contract explicitly states, "We will allow others to create
    distributions containing both the Debian system and other works, without
    any fee."

    Thus, over the years, a multifaceted institutional landscape has been
    created in which collaboration can take place between for-profit and
    non-profit entities -- between formal organizations and informal
    communal formations. Together, they form the software commons.
    Communally, they strive to ensure that high-quality free software will
    continue to exist for the long term. The coordination necessary for this
    is not tension-free. Within individual communities, on the contrary,
    there are many conflicts and competitive disputes about people, methods,
    and strategic goals. Tensions can also run high between the communities,
    foundations, and companies that cooperate and compete with one another
    (sometimes more directly, sometimes less directly). To cite one example,
    the relationship between the Debian Project and Canonical, the company
    that produces the Ubuntu operating system, was strained for several
    years. At the heart of the conflict was the issue of whether Ubuntu\'s
    developers were giving enough back to the Debian Project or whether they
    were simply exploiting it. Although the Debian Social Contract expressly
    allows the commercial use of its operating system, Canonical was and
    remains dependent on the software commons functioning as []{#Page_159
    type="pagebreak" title="159"}a whole, because, after all, the company
    needs to be able to make use of the latest developments in the Debian
    system. It took years to defuse the conflict, and this was only achieved
    when forums were set up to guarantee that information and codes could
    flow in both directions. The Debian community, for example, introduced
    something called a "derivatives front desk" to improve its communication
    with programmers of distributions that, like Ubuntu, derive from Debian.
    For its part, Canonical improved its internal processes so that code
    could flow back into the Debian Project, and their systems for
    bug-tracking were partially integrated to avoid duplicates. After
    several years of strife, Raphaël Hertzog, a prominent member of the
    Debian community, was able to summarize matters as follows:

    ::: {.extract}
    The Debian--Ubuntu relationship used to be a hot topic, but that\'s no
    longer the case thanks to regular efforts made on both sides. Conflicts
    between individuals still happen, but there are multiple places where
    they can be reported and discussed \[...\]. Documentation and
    infrastructure are in place to make it easier for volunteers to do the
    right thing. Despite all those process improvements, the best results
    still come out when people build personal relationships by discussing
    what they are doing. It often leads to tight cooperation, up to commit
    rights to the source repositories. Regular contacts help build a real
    sense of cooperation that no automated process can ever hope to
    achieve.[^78^](#c3-note-0078){#c3-note-0078a}
    :::

    In all successful commons, diverse social relations, mutual trust, and a
    common culture play an important role as preconditions for the
    consensual resolution of conflicts. This is not a matter of achieving an
    ideal -- as Hertzog stressed, not every conflict can be set aside -- but
    rather of reaching pragmatic solutions that allow actors to pursue, on
    equal terms, their own divergent goals within the common project.

    The immense commons of the Debian Project encompasses a nearly
    unfathomable number of variations. The distribution is available in over
    70 languages (in comparison, Apple\'s operating system is sold in 22
    languages), and diverse versions exist to suit different application
    contexts, aesthetic preferences, hardware needs, and stability
    requirements. Within each of these versions, in turn, there are
    innumerable []{#Page_160 type="pagebreak" title="160"}variations that
    have been created by individual users with different sets of technical
    or creative skills. The final result is a continuously changing service
    that can be adapted for countless special requirements, desires, and
    other features. To outsiders, this internal differentiation is often
    difficult to comprehend, and it can soon leave the impression that there
    is little more to it than a tedious variety of essentially the same
    thing. What user would ever need 60 different text
    editors?[^79^](#c3-note-0079){#c3-note-0079a} For those who would like
    to use free software without having to join a group, a greater number of
    simple and standardized products have been made available. For
    commoners, however, this diversity is enormously important, for it is an
    expression of their fundamental freedom to work precisely on those
    problems that are closest to their hearts -- even if that means creating
    another text editor.

    With the success of free software toward the end of the 1990s, producers
    in other areas of culture, who were just starting to use the internet,
    also began to take an interest in this new manner of production. It
    seemed to be a good fit with the vibrant do-it-yourself culture that was
    blooming online, and all the more so because there were hardly any
    attractive commercial alternatives at the time. This movement was
    sustained by the growing stratum of professional and non-professional
    makers of culture that had emerged over the course of the aforementioned
    transformations of the labor market. At first, many online sources were
    treated as "quasi-common goods." It was considered normal and desirable
    to appropriate them and pass them on to others without first having to
    develop a proper commons for such activity. This necessarily led to
    conflicts. Unlike free software, which on account of its licensing was
    on secure legal ground from the beginning, copyright violations were
    rampant in the new do-it-yourself culture. For the sake of engaging in
    the referential processes discussed in the previous chapter,
    copyright-protected content was (and continues to be) used, reproduced,
    and modified without permission. Around the turn of the millennium, the
    previously latent conflict between "quasi-commoners" and the holders of
    traditional copyrights became an open dispute, which in many cases was
    resolved in court. Founded in June 1999, the file-sharing service
    Napster gained, over the course of just 18 months, 25 million users
    []{#Page_161 type="pagebreak" title="161"}worldwide who simply took the
    distribution of music into their own hands without the authorization of
    copyright owners. This incited a flood of litigation that managed to
    shut the service down in July 2001. This did not, however, put an end to
    the large-scale practice of unauthorized data sharing. New services and
    technologies, many of which used (the file-sharing protocol) BitTorrent,
    quickly filled in the gap. The number of court cases skyrocketed, not
    least because new legal standards expanded the jurisdiction of copyright
    law and enabled it to be applied more
    aggressively.[^80^](#c3-note-0080){#c3-note-0080a} These conflicts
    forced a critical mass of cultural producers to deal with copyright law
    and to reconsider how the practices of sharing and modifying could be
    perpetuated in the long term. One of the first results of these
    considerations was to develop, following the model of free software,
    numerous licenses that were tailored to cultural
    production.[^81^](#c3-note-0081){#c3-note-0081a} In the cultural
    context, free licenses achieved widespread distribution after 2001 with
    the arrival of Creative Commons (CC), a California-based foundation that
    began to provide easily understandable and adaptable licensing kits and
    to promote its services internationally through a network of partner
    organizations. This set of licenses made it possible to transfer user
    rights to the community (defined by the acceptance of the license\'s
    terms and conditions) and thus to create a freely accessible pool of
    cultural resources. Works published under a CC license can always be
    consumed and distributed free of charge (though not necessarily freely).
    Some versions of the license allow works to be altered; others permit
    their commercial use; while some, in turn, only allow non-commercial use
    and distribution. In comparison with free software licenses, this
    greater emphasis on the rights of individual producers over those of the
    community, whose freedoms of use can be doubly restricted (regarding the
    right to alter works or to use them for commercial ends), gave rise to
    the long-standing critique that, with respect to freedom and
    communality, CC licenses in fact represent a
    regression.[^82^](#c3-note-0082){#c3-note-0082a} A combination of good
    timing, user-friendly implementations, and powerful support from leading
    American universities, however, resulted in CC licenses becoming the de
    facto legal standard of free culture.
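
    The modular logic of these licensing kits can be sketched in a few
    lines of code: four building blocks combine into the six standard CC
    licenses, with the share-alike and no-derivatives modules mutually
    exclusive, since share-alike presupposes that derivatives are allowed
    at all:

    ```python
    # A small sketch of the Creative Commons licensing kit: four modules,
    # six standard combinations (BY is part of every standard license).
    from itertools import product

    MODULES = {
        "BY": "attribution required",
        "NC": "non-commercial use only",
        "ND": "no derivative works",
        "SA": "derivatives must carry the same license",
    }

    for nc, nd_sa in product([None, "NC"], [None, "ND", "SA"]):
        parts = ["BY"] + [m for m in (nc, nd_sa) if m]
        name = "CC " + "-".join(parts)
        meanings = "; ".join(MODULES[m] for m in parts)
        print(name, "->", meanings)
    ```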

    Based on a solid legal foundation and thus protected from rampant
    copyright conflicts, large and well-structured []{#Page_162
    type="pagebreak" title="162"}cultural commons were established, for
    instance around the online reference work Wikipedia (which was then,
    however, using a different license). As much as the latter is now taken
    for granted as an everyday component of informational
    life,[^83^](#c3-note-0083){#c3-note-0083a} the prospect of a
    commons-generated encyclopedia hardly seemed realistic at the beginning.
    Even the founders themselves had little faith in it, and thus Wikipedia
    began as a side project. Their primary goal was to develop an
    encyclopedia called Nupedia, for which only experts would be allowed to
    write entries, which would then have to undergo a seven-stage
    peer-review process before being published for free use. From its
    beginning, on the contrary, Wikipedia was open for anyone to edit, and
    any changes made to it were published without review or delay. By the
    time that Nupedia was abandoned in September 2003 (with only 25
    published articles), the English-language version of Wikipedia already
    consisted of more than 160,000 entries, and the German version, which
    came online in May 2001, already had 30,000. The former version reached
    1 million entries by March 2006, the latter by December 2009, and by
    the beginning of 2015 they had 4.7 million and 1.8 million entries,
    respectively. In the meantime (by August 2015), versions have been made
    available in 289 other languages, 48 of which have at least 100,000
    entries. Both its successes -- its enormous breadth of up-to-date
    content, along with its high level of acceptance and quality -- and its
    failures, with its low percentage of women editors (around 10 percent),
    exhausting discussions, complex rules, lack of young personnel, and
    systematic attempts at manipulation, have been well documented because
    Wikipedia also guarantees free access to the data generated by the
    activities of users, and thus makes the development of the commons
    fairly transparent for outsiders.[^84^](#c3-note-0084){#c3-note-0084a}

    One of the most fundamental and complex decisions in the history of
    Wikipedia was to change its license. The process behind this is
    indicative of how thoroughly the community of a commons can be involved
    in its decision-making. When Wikipedia was founded in 2001, there was no
    established license for free cultural works. The best option available
    was the GNU Free Documentation License (GFDL), which had been
    developed, however, for software documentation. In the following years,
    the CC license became the standard, and this []{#Page_163
    type="pagebreak" title="163"}gave rise to the legal problem that content
    from Wikipedia could not be combined with CC-licensed works, even though
    this would have aligned with the intentions of those who had published
    content under either of these licenses. To alleviate this problem and
    thus facilitate exchange between Wikipedia and other cultural commons,
    the Wikimedia Foundation (which holds the rights to Wikipedia) proposed
    to place older content retroactively under both licenses, the GFDL and
    the equivalent CC license. In strictly legal terms, the foundation would
    have been able to make this decision without consulting the community.
    However, it would have lacked legitimacy and might have even caused
    upheavals within it. In order to avoid this, an elaborate discussion
    process was initiated that led to a membership-wide vote. This process
    lasted from December 2007 (when the Wikimedia Foundation resolved to
    change the license) to the end of May 2009, when the voting period
    concluded. All told, 17,462 votes were cast, of which only 10.5 percent
    rejected the proposed changes. More important than the result, however,
    was the way it had come about: through a long, consensus-building
    process of discussion, for which the final vote served above all to make
    the achieved consensus unambiguously
    clear.[^85^](#c3-note-0085){#c3-note-0085a} All other decisions that
    concern the project as a whole were and continue to be reached in a
    similar way. Here, too, input legitimation is at least on an equal
    footing with output legitimation.

    With Wikipedia, a great deal happens voluntarily and without cost, but
    that does not mean that no financial resources are needed to organize
    and maintain such a commons on a long-term basis. In particular, it is
    necessary to raise funds for infrastructure (hardware, administration,
    bandwidth), the employees of the Wikimedia Foundation, conferences, and
    its own project initiatives -- networking with schools, universities,
    and cultural institutions, for example, or increasing the diversity of
    the Wikipedia community. In light of the number of people who use the
    encyclopedia, it would be possible to finance the project, which accrued
    costs of around 45 million dollars during the 2013--14 fiscal year,
    through advertising (in the same manner, that is, as commercial mass
    media). Yet there has always been a consensus against this. Instead,
    Wikipedia is financed through donations. In 2013--14, the website was
    able to raise \$51 million, \$37 million of []{#Page_164 type="pagebreak"
    title="164"}which came from approximately 2.5 million contributors, each
    of whom donated just a small sum.[^86^](#c3-note-0086){#c3-note-0086a}
    These small contributions are especially interesting because, to a large
    extent, they come from people who consider themselves part of the
    community but do not do much editing. This suggests that donating is
    understood as an opportunity to make a contribution without having to
    invest much time in the project. In this case, donating money is thus
    not an expression of charity but rather of communal spirit; it is just
    one of a diverse number of ways to remain active in a commons. Precisely
    because its economy is not understood as an independent sphere with its
    own logic (maximizing individual resources), but rather as an integrated
    aspect of cultivating a common resource, non-financial and financial
    contributions can be treated equally. Both types of contribution
    ultimately derive from the same motivation: they are expressions of
    appreciation for the meaning that the common resource possesses for
    one\'s own activity.
    :::

    ::: {.section}
    ### At the interface with physical space: open data {#c3-sec-0014}

    Wikipedia, however, is an exception. None of the other new commons have
    managed to attract such large financial contributions. The project known
    as OpenStreetMap (OSM), which was founded in 2004 by Steve Coast,
    happens to be the most important commons for
    geodata.[^87^](#c3-note-0087){#c3-note-0087a} By the beginning of 2016,
    it had collected and identified around 5 billion GPS coordinates and
    linked them to more than 273 million routes. This work was accomplished
    by about half a million people, who surveyed their neighborhoods with
    hand-held GPS devices or, where that was not a possibility, extracted
    data from satellite images or from public land registries. The project,
    which is organized through specialized infrastructure and by local and
    international communities, also utilizes a number of automated
    processes. These are so important that not only was a "mechanical edit
    policy" developed to govern the use of algorithms for editing; it was
    also supplemented by an "automated edits code of conduct," which defines
    further rules of behavior. Regarding the
    implementation of a new algorithm, for instance, the code states: "We do
    not require or recommend a formal vote, but if there []{#Page_165
    type="pagebreak" title="165"}is significant objection to your plan --
    and even minorities may be significant! -- then change it or drop it
    altogether."[^88^](#c3-note-0088){#c3-note-0088a} Here, again, there is
    the typical objection to voting and a focus on building a consensus that
    does not have to be perfect but simply good enough for the overwhelming
    majority of the community to acknowledge it (a "rough consensus").
    Today, the coverage and quality of the maps that can be generated from
    these data are so good for so many areas that they now represent serious
    competition to commercial digital alternatives. OSM data are used not
    only by Wikipedia and other non-commercial projects but also
    increasingly by large commercial services that need geographical
    information and suitable maps but do not want to rely on a commercial
    provider whose terms and conditions can change at any time. To the
    extent that these commercial applications provide their users with the
    opportunity to improve the maps, their input flows back through the
    commercial level and into the common pool.
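
    Because the pool of geodata is itself openly accessible, anyone can
    query it directly. The following minimal sketch uses the Overpass API,
    a widely used public read-only interface to OSM data; the coordinates
    and the chosen tag are arbitrary examples:

    ```python
    # A sketch of querying the OpenStreetMap commons via the Overpass API:
    # drinking-water fountains within 500 m of a point in central Berlin.
    import requests

    OVERPASS_URL = "https://overpass-api.de/api/interpreter"

    query = """
    [out:json][timeout:25];
    node["amenity"="drinking_water"](around:500,52.5200,13.4050);
    out body;
    """

    response = requests.post(OVERPASS_URL, data={"data": query})
    response.raise_for_status()

    for element in response.json()["elements"]:
        print(element["lat"], element["lon"], element.get("tags", {}))
    ```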

    Despite its immense community and its regular requests for donations,
    the financial resources of the OSM Foundation, which functions as the
    legal entity and supporting organization behind the project, cannot be
    compared to those of the Wikimedia Foundation. The OSM Foundation has no
    employees, and in 2014 it generated just £88,000 in revenue, half of
    which was obtained from donations and half from holding
    conferences.[^89^](#c3-note-0089){#c3-note-0089a} That said, OSM is
    nevertheless a socially, technologically, and financially robust
    commons, though one with a model entirely different from Wikipedia\'s.
    Because data are at the heart of the project, its needs for hardware and
    bandwidth are negligible compared to Wikipedia\'s, and its servers can
    be housed at universities or independently operated by individual
    groups. Around this common resource, a global network of companies has
    formed that offer services on the basis of complex geodata. In doing so,
    they allow improvements to go back into the pool or, if financed by
    external sources, they can work directly on the common
    infrastructure.[^90^](#c3-note-0090){#c3-note-0090a} Here, too, we find
    the characteristic juxtaposition of paid and unpaid work, of commercial
    and non-commercial orientations that depend on the same common resource
    to pursue their divergent goals. If this goes on for a long time, then
    there will be an especially strong (self-)interest among everyone
    involved for their own work, []{#Page_166 type="pagebreak"
    title="166"}or at least part of it, to benefit the long-term development
    of the resource in question. Functioning commons, especially the new
    informational ones, are distinguished by the heterogeneity of their
    motivations and actors. Just as the Wikipedia project successfully and
    transformatively extended the experience of working with free software
    to the generation of large bases of knowledge, the community responsible
    for OpenStreetMap succeeded in making the experiences of the Wikipedia
    project useful for the creation of a commons based on large datasets,
    and managed to adapt these experiences according to the specific needs
    of such a project.[^91^](#c3-note-0091){#c3-note-0091a}

    It is of great political significance that informational commons have
    expanded into the areas of data recording and data use. Control over
    data, which specify and describe the world in real time, is an essential
    element of the contemporary constitution of power. From large volumes
    of data, new types of insight can be gained and new strategies for
    action can be derived. The more one-sided access to data becomes, the
    greater the resulting imbalances of power.

    In this regard, the commons model offers an alternative, for it allows
    various groups equal and unobstructed access to this potential resource
    of power. This, at least, is how the Open Data movement sees things.
    Data are considered "open" if they are available to everyone without
    restriction to be used, distributed, and developed freely. For this to
    occur, it is necessary to provide data in a standardized,
    machine-readable format. Only in such a way can they be read and further
    processed by algorithms. Open data are an important
    precondition for implementing the power of algorithms in a democratic
    manner. They ensure that there can be an effective diversity of
    algorithms, for anyone can write his or her own algorithm or commission
    others to process data in various ways and in light of various
    interests. Because algorithms cannot be neutral, their diversity -- and
    the resulting ability to compare the results of different methods -- is
    an important precondition for them not becoming an uncontrollable
    instrument of power. This can be achieved most dependably through free
    access to data, which are maintained and cultivated as a commons.
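
    What machine-readability means in practice can be shown with a small
    sketch: given any open dataset published as CSV, anyone can compute
    their own reading of it with a few lines of standard code. The file
    name and column names below are hypothetical placeholders, not a
    reference to an actual dataset:

    ```python
    # A minimal sketch: one of arbitrarily many possible analyses of the
    # same machine-readable open data -- here, the mean NO2 concentration
    # per measuring station. File and columns are hypothetical.
    import csv
    from collections import defaultdict

    totals = defaultdict(float)
    counts = defaultdict(int)

    with open("air_quality.csv", newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):  # assumed columns: station, no2
            totals[row["station"]] += float(row["no2"])
            counts[row["station"]] += 1

    for station in sorted(totals):
        print(station, round(totals[station] / counts[station], 1))
    ```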

    Motivated by the conviction that free access to data represents a
    necessary condition for autonomous activity in the []{#Page_167
    type="pagebreak" title="167"}digital condition, many new initiatives
    have formed that are devoted to the decentralized collection,
    networking, and communal organization of data. For several years, for
    instance, there has been a global community of people who observe
    airplanes in their field of vision, share this information with one
    another, and make it generally accessible. Outside of this tight-knit
    community, these data are typically of little interest. Yet it was
    through his targeted analysis of this information that the geographer
    and artist Trevor Paglen succeeded in mapping out the secret rendition
    flights operated by American intelligence services. Ultimately, even the
    CIA\'s
    clandestine airplanes have to take off and land like any others, and
    thus they can be observed.[^92^](#c3-note-0092){#c3-note-0092a} Around
    the collection of environmental data, a movement has formed whose
    adherents enter measurements themselves. To cite just one example:
    thanks to a successful crowdfunding campaign that raised more than
    \$144,000 (just \$39,000 was needed), it was possible to finance the
    development of a simple set of sensors called the Air Quality Egg. This
    device can measure the concentration of carbon dioxide or nitrogen
    dioxide in the air and send its findings to a public database. It
    involves the use of relatively simple technologies that are likewise
    freely licensed (open hardware). How to build and use it is documented
    in such a detailed and user-friendly manner -- in instructional videos
    on YouTube, for instance -- that anyone so inclined can put one together
    on his or her own, and it would also be easy to have them made on a
    large scale as a commercial product. Over time, this has brought about a
    network of stations that is able to measure the quality of the air
    exactly, locally, and in places that are relevant to users. All of this
    information is stored in a global and freely accessible database, from
    which it is possible to look up and analyze hyper-local data in real
    time and without restrictions.[^93^](#c3-note-0093){#c3-note-0093a}
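
    The underlying pattern -- a local sensor that periodically posts its
    readings to a public, openly accessible database -- can be sketched
    schematically as follows. The endpoint, the identifier, and the payload
    fields are hypothetical placeholders, not the Air Quality Egg\'s actual
    protocol:

    ```python
    # A schematic sketch of a networked environmental sensor. All names
    # and the endpoint are hypothetical; reading the sensor is stubbed out.
    import time
    import requests

    PUBLIC_DB = "https://example.org/api/measurements"  # hypothetical

    def read_no2_ppb() -> float:
        """Stand-in for reading the NO2 sensor over I2C or serial."""
        return 21.5  # dummy value

    while True:
        payload = {
            "sensor_id": "egg-0001",         # hypothetical identifier
            "timestamp": int(time.time()),
            "no2_ppb": read_no2_ppb(),
        }
        requests.post(PUBLIC_DB, json=payload, timeout=10)
        time.sleep(300)  # one reading every five minutes
    ```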

    A list of examples of data commons, both the successful and the
    unsuccessful, could go on and on. It will suffice, however, to point out
    that many new commons have come about that are redefining the interface
    between physical and informational space and creating new strategies for
    actions in both directions. The Air Quality Egg, which is typical in
    this regard, also demonstrates that commons can develop cumulatively.
    Free software and free hardware are preconditions for []{#Page_168
    type="pagebreak" title="168"}producing and networking such an object. No
    less import­ant are commercial and non-commercial infrastructures for
    communal learning, compiling documentation, making infor­mation
    available, and thus facilitating access for those interested and
    building up the community. All of this depends on free knowledge, from
    Wikipedia to scientific databases. This enables a great variety of
    actors -- in this case environmental scientists, programmers,
    engineers, and interested citizens -- to come together and create a
    common frame of reference in which everyone can pursue his or her own
    goals and yet do so on the basis of communal resources. This, in turn,
    has given rise to a new commons, namely that of environmental data.

    Not all data can or must be collected by individuals, for a great deal
    of data already exists. That said, many scientific and state
    institutions face the problem of having data that, though nominally
    public (or at least publicly funded), are in fact extremely difficult
    for third parties to use. Such information may exist, but it is kept in
    institutions to which there is little or no public access, or it exists
    only in analog or non-machine-readable formats (as PDFs of scanned
    documents, for instance), or its use is tied to high license fees. One
    of the central demands of the Open Data and Open Access movements is
    thus to have free access to these collections. Yet there has been a
    considerable amount of resistance. Whether for political or economic
    reasons, many public and scientific institutions do not want their data
    to be freely accessible. In many cases, moreover, they also lack the
    competence, guidelines, budgets, and internal processes that would be
    necessary to make their data available to begin with. But public
    pressure has been mounting, not least through initiatives such as the
    global Open Data Index, which compares countries according to the
    accessibility of their information.[^94^](#c3-note-0094){#c3-note-0094a}
    In Germany, the Digital Openness Index evaluates states and communities
    in terms of open data, the use of open-source software, the availability
    of open infrastructures (such as free internet access in public places),
    open policies (the licensing of public information,
    freedom-of-information laws, the transparency of budget planning, etc.),
    and open education (freely accessible educational resources, for
    instance).[^95^](#c3-note-0095){#c3-note-0095a} The results are rather
    sobering. The Open Data Index has identified 10 []{#Page_169
    type="pagebreak" title="169"}different datasets that ought to be open,
    including election results, company registries, maps, and national
    statistics. A study of 97 countries revealed that, by the middle of
    2015, only 11 percent of these datasets were entirely freely accessible
    and usable.

    Although public institutions are generally slow and resistant in making
    their data freely available, important progress has nevertheless been
    made. Such progress indicates not only that the new commons have
    developed their own structures in parallel with traditional
    institutions, but also that the commoners have begun to make new demands
    on established institutions. These are intended to change their internal
    processes and their interaction with citizens in such a way that they
    support the creation and growth of commons. This is not something that
    can be achieved overnight, for the institutions in question need to
    change at a fundamental level with respect to their procedures,
    self-perception, and relation to citizens. This is easier said than
    done.
    :::

    ::: {.section}
    ### Municipal infrastructures as commons: citizen networks {#c3-sec-0015}

    The demands for open access to data, however, are not exhausted by
    attempts to redefine public institutions and civic participation. In
    fact, they go far beyond that. In Germany, for instance, there has been
    a recent movement toward (re-)communalizing the basic provision of water
    and energy. Its goal is not merely to shift the ownership structure from
    private to public. Rather, its intention is to reorient the present
    institutions so that, instead of operating entirely on the basis of
    economic criteria, they also take into account democratic, ecological,
    and social factors. These efforts reached a high point in November 2013,
    when the population of Berlin was called upon to vote on the
    communalization of the power supply. Formed in 2011, a non-partisan
    coalition of NGOs and citizens known as the Berlin Energy Roundtable had
    mobilized to take over the local energy grid, whose license was due to
    become available in 2014. The proposal was for the network to be
    administered neither entirely privately nor entirely by the public.
    Instead, the license was to be held by a newly formed municipal utility
    that would not only []{#Page_170 type="pagebreak" title="170"}organize
    the efficient operation of the grid but also pursue social causes, such
    as the struggles against energy poverty and power cuts, and support
    ecological causes, including renewable energy sources and energy
    conservation. It was intended, moreover, for the utility to be
    democratically organized; that is, for it to offer expanded
    opportunities for civic participation on the basis of the complete
    transparency of its internal processes in order to increase -- and
    ensure for the long term -- the acceptance and identification of
    citizens.

    Yet it did not get that far. Even though the result was extremely close,
    the referendum failed to pass. While 83 percent voted in favor of the
    new utility, the necessary quorum of 25 percent of all eligible voters
    was not quite achieved (the voter turnout was 24.71 percent).
    Nevertheless, the vote represented a milestone. For the first time ever
    in a large European metropolis, a specific model "beyond the market and
    the state" had been proposed for an essential aspect of everyday life
    and put before the people. A central component of infrastructure, the
    reliability of which is absolutely indispensable for life in any modern
    city, was close to being treated as a common good, supported by a new
    institution, and governed according to a statute that explicitly
    formulated economic, social, ecological, and democratic goals on equal
    terms. This would not have resulted in a commons in the strict sense,
    but rather in a new public institution that would have adopted and
    embodied the values and orientations that, because of the activity of
    commons, have increasingly become everyday phenomena in the digital
    condition.

    In its effort to develop institutional forms beyond the market and the
    state, the Berlin Energy Roundtable is hardly unique. It is rather part
    of a movement that is striving for fundamental change and is in many
    respects already quite advanced. In Denmark, for example, not only does
    a comparatively large amount of energy come from renewable sources (27.2
    percent of total use, as of 2014), but 80 percent of the country\'s
    wind-generated electricity is produced by self-administered cooperatives
    or by individual people and
    households.[^96^](#c3-note-0096){#c3-note-0096a} The latter, as is
    typical of commons, function simultaneously as producers and consumers.

    It is not a coincidence that commons have begun to infiltrate the energy
    sector. As Jeremy Rifkin has remarked:[]{#Page_171 type="pagebreak"
    title="171"}

    ::: {.extract}
    The generation that grew up on the Communication Internet and that takes
    for granted its right to create value in distributed, collaborative,
    peer-to-peer virtual commons has little hesitation about generating
    their own green electricity and sharing it on an Energy Internet. They
    find themselves living through a deepening global economic crisis and an
    even more terrifying shift in the earth\'s climate, caused by an
    economic system reliant on fossil fuel energy and managed by
    centralized, top-down command and control systems. If they fault the
    giant telecommunications, media and entertainment companies for blocking
    their right to collaborate freely with their peers in an open
    Information Commons, they are no less critical of the world\'s giant
    energy, power, and utility companies, which they blame, in part, for the
    high price of energy, a declining economy and looming environmental
    crisis.[^97^](#c3-note-0097){#c3-note-0097a}
    :::

    It is not necessary to see in this, as Rifkin and a few others have
    done, the ineluctable demise of
    capitalism.[^98^](#c3-note-0098){#c3-note-0098a} Yet, like the influence
    of post-democratic institutions over social mass media and beyond, the
    commons are also shaping new expectations about possible courses of
    action and about the institutions that might embody these possibilities.
    :::

    ::: {.section}
    ### Eroding the commons: cloud software and the sharing economy {#c3-sec-0016}

    Even if the commons have recently enjoyed a renaissance, their continued
    success is far from guaranteed. This is not only because legal
    frameworks, then and now, are not oriented toward them. Two movements
    currently stand out that threaten to undermine the commons from within
    before they can properly establish themselves. These movements have been
    exploiting certain aspects of the commons while pursuing goals that are
    harmful to them. Thus, there are ways of using communal resources in
    order to offer, on their basis, closed and centralized services. An
    example of this is so-called cloud software; that is, applications that
    no longer have to be installed on the computer of the user but rather
are centrally run on the providers\' servers. Such programs are no
longer distributed in the traditional sense, and thus they are exempt from
    the obligations mandated by free licenses. They do not, []{#Page_172
    type="pagebreak" title="172"}in other words, have to make their readable
    source code available along with their executable program code. Cloud
    providers are thus able to make wide use of free software, but they
    contribute very little to its further development. The changes that they
    make are implemented exclusively on their own computers and therefore do
    not have to be made public. They therefore follow the letter of the
    license, but not its spirit. Through the control of services, it is also
    possible for nominally free and open-source software to be centrally
    controlled. Google\'s Android operating system for smartphones consists
    largely of free software, but by integrating it so deeply with its
    closed applications (such as Google Maps and Google Play Store), the
    company ensures that even modified versions of the system will supply
    data in which Google has an
    interest.[^99^](#c3-note-0099){#c3-note-0099a}

    The idea of the communal use and provision of resources is eroded most
    clearly by the so-called sharing economy, especially by companies such
    as the short-term lodging service Airbnb or Uber, which began as a taxi
    service but has since expanded into other areas of business. In such
    cases, terms like "open" or "sharing" do little more than give a trendy
    and positive veneer to hyper-capitalistic structures. Instead of
    supporting new forms of horizontal cooperation, the sharing economy is
    forcing more and more people into working conditions in which they have
    to assert themselves on their own, without insurance and with complete
flexibility, all the while being coordinated by centralized,
    internet-based platforms.[^100^](#c3-note-0100){#c3-note-0100a} Although
    the companies in question take a significant portion of overall revenue
    for their "intermediary" services, they act as though they merely
    facilitate such work and thus take no responsibility for their "newly
    self-employed" freelance
    workforce.[^101^](#c3-note-0101){#c3-note-0101a} The risk is passed on
    to individual providers, who are in constant competition with one
    another, and this only heightens the precariousness of labor relations.
    As is typical of post-democratic institutions, the sharing economy has
    allowed certain disparities to expand into broader sectors of society,
    namely the power and income gap that exists between those who
    "voluntarily" use these services and the providers that determine the
    conditions imposed by the platforms in question.[]{#Page_173
    type="pagebreak" title="173"}
    :::
    :::

    ::: {.section}
    Against a Lack of Alternatives {#c3-sec-0017}
    ------------------------------

    For now, the digital condition has given rise to two highly divergent
    political tendencies. The tendency toward "post-democracy" is
    essentially leading to an authoritarian society. Although this society
    may admittedly contain a high degree of cultural diversity, and although
    its citizens are able to (or have to) lead their lives in a
    self-responsible manner, they are no longer able to exert any influence
    over the political and economic structures in which their lives are
    unfolding. On the basis of data-intensive and comprehensive
surveillance, these structures are instead shaped disproportionately by
    an influential few. The resulting imbalance of power has been growing
    steadily, as has income inequality. In contrast to this, the tendency
    toward commons is leading to a renewal of democracy, based on
    institutions that exist outside of the market and the state. At its core
    this movement involves a new combination of economic, social, and
    (ever-more pressing) ecological dimensions of everyday life on the basis
    of data-intensive participatory processes.

What these two developments have in common is their comprehensive
    realization of the infrastructural possibilities of the present. Both of
    them develop new relations of production on the basis of new productive
    forces (to revisit the terminology introduced at the beginning of this
    chapter) or, in more general terms, they create suitable social
    institutions for these new opportunities. In this sense, both
    developments represent coherent and comprehensive answers to the
    Gutenberg Galaxy\'s long-lasting crisis of cultural forms and social
    institutions.

    It remains to be seen whether one of these developments will prevail
    entirely or whether and how they will coexist. Despite all of the new
    and specialized methods for making predictions, the future is still
    largely unpredictable. Too many moving variables are at play, and they
    are constantly influencing one another. This is not least the case
    because everyone\'s activity -- at times singularly aggregated, at times
    collectively organized -- is contributing directly and indirectly to
    these contradictory developments. And even though an individual or
    communal contribution may seem small, it is still exactly []{#Page_174
    type="pagebreak" title="174"}that: a contribution to a collective
    movement in one direction or the other. This assessment should not be
    taken as some naïve appeal along the lines of "Be the change you want to
    see!" The issue here is not one of personal attitudes but rather of
    social structures. Effective change requires forms of organization that
    are able to implement it for the long term and in the face of
    resistance. In this regard, the side of the commons has a great deal
    more work to do.

    Yet if, despite all of the simplifications that I have made, this
    juxtaposition of post-democracy and the commons has revealed anything,
    it is that even rapid changes, whose historical and structural
    dimensions cannot be controlled on account of their overwhelming
    complexity, are anything but fixed in their concrete social
    formulations. Even if it is impossible to preserve the old institutions
    and cultural forms in their traditional roles -- regardless of all the
    historical achievements that may be associated with them -- the dispute
    over what world we want to live in and the goals that should be achieved
    by the available potential of the present is as open as ever. And such
    is the case even though post-democracy wishes to abolish the political
    itself and subordinate everything to a technocratic lack of
    alternatives. The development of the commons, after all, has shown that
    genuine, fundamental, and cutting-edge alternatives do indeed exist. The
    contradictory nature of the present is keeping the future
    open.[]{#Page_175 type="pagebreak" title="175"}
    :::

    ::: {.section .notesSet type="rearnotes"}
    []{#notesSet}Notes {#c3-ntgp-9999}
    ------------------

    ::: {.section .notesList}
    [1](#c3-note-0001a){#c3-note-0001}  Karl Marx, *A Contribution to the
    Critique of Political Economy*, trans. S. W. Ryazanskaya (London:
    Lawrence and Wishart, 1971), p. 21.[]{#Page_196 type="pagebreak"
    title="196"}

    [2](#c3-note-0002a){#c3-note-0002}  See, for instance, Tomasz Konicz and
    Florian Rötzer (eds), *Aufbruch ins Ungewisse: Auf der Suche nach
    Alternativen zur kapitalistischen Dauerkrise* (Hanover: Heise
    Zeitschriften Verlag, 2014).

    [3](#c3-note-0003a){#c3-note-0003}  Jacques Rancière, *Disagreement:
    Politics and Philosophy*, trans. Julie Rose (Minneapolis, MN: University
    of Minnesota Press, 1999), p. 102 (the emphasis is original).

    [4](#c3-note-0004a){#c3-note-0004}  Colin Crouch, *Post-Democracy*
    (Cambridge: Polity, 2004), p. 4.

    [5](#c3-note-0005a){#c3-note-0005}  Ibid., p. 6.

    [6](#c3-note-0006a){#c3-note-0006}  Ibid., p. 96.

    [7](#c3-note-0007a){#c3-note-0007}  These questions have already been
    discussed at length, for instance in a special issue of the journal
*Neue Soziale Bewegungen* (vol. 4, 2006) and in the first two issues of
    the journal *Aus Politik und Zeitgeschichte* (2011).

    [8](#c3-note-0008a){#c3-note-0008}  See Jonathan B. Postel, "RFC 821,
    Simple Mail Transfer Protocol," *Information Sciences Institute:
    University of Southern California* (August 1982), online: "An important
    feature of SMTP is its capability to relay mail across transport service
    environments."

    [9](#c3-note-0009a){#c3-note-0009}  One of the first providers of
    Webmail was Hotmail, which became available in 1996. Just one year
    later, the company was purchased by Microsoft.

[10](#c3-note-0010a){#c3-note-0010}  Barton Gellman and Ashkan Soltani,
    "NSA Infiltrates Links to Yahoo, Google Data Centers Worldwide, Snowden
    Documents Say," *Washington Post* (October 30, 2013), online.

    [11](#c3-note-0011a){#c3-note-0011}  Initiated by hackers and activists,
    the Mailpile project raised more than \$160,000 in September 2013 (the
    fundraising goal had been just \$100,000). In July 2014, the rather
    business-oriented project ProtonMail raised \$400,000 (its target, too,
    had been just \$100,000).

[12](#c3-note-0012a){#c3-note-0012}  In June 2014, for instance, Google
    announced that it would support "end-to-end" encryption for emails. See
    "Making End-to-End Encryption Easier to Use," *Google Security Blog*
    (June 3, 2014), online.

    [13](#c3-note-0013a){#c3-note-0013}  Not all services use algorithms to
    sort through data. Twitter does not filter the news stream of individual
    users but rather allows users to create their own lists or to rely on
    external service providers to select and configure them. This is one of
    the reasons why Twitter is regarded as "difficult." The service is so
    centralized, however, that this can change at any time, which indeed
    happened at the beginning of 2016.

    [14](#c3-note-0014a){#c3-note-0014}  Quoted from "Schrems:
    'Facebook-Abstimmung ist eine Farce'," *Futurezone.at* (July 4, 2012),
    online \[--trans.\].

    [15](#c3-note-0015a){#c3-note-0015}  Elliot Schrage, "Proposed Updates
    to Our Governing Documents," [Facebook.com](http://Facebook.com)
    (November 21, 2011), online.[]{#Page_197 type="pagebreak" title="197"}

    [16](#c3-note-0016a){#c3-note-0016}  Quoted from the documentary film
    *Terms and Conditions May Apply* (2013), directed by Cullen Hoback.

    [17](#c3-note-0017a){#c3-note-0017}  Felix Stalder and Christine Mayer,
    "Der zweite Index: Suchmaschinen, Personalisierung und Überwachung," in
    Konrad Becker and Felix Stalder (eds), *Deep Search: Die Politik des
    Suchens jenseits von Google* (Innsbruck: Studienverlag, 2009), pp.
    112--31.

    [18](#c3-note-0018a){#c3-note-0018}  Thus, in 2012, Google announced
    under a rather generic and difficult-to-Google headline that, from now
    on, "we may combine information you\'ve provided from one service with
    information from other services." See "Updating Our Privacy Policies and
    Terms of Service," *Google Official Blog* (January 24, 2012), online.

    [19](#c3-note-0019a){#c3-note-0019}  Wolfie Christl, "Kommerzielle
    digitale Überwachung im Alltag," *Studie im Auftrag der
    Bundesarbeitskammer* (November 2014), online.

    [20](#c3-note-0020a){#c3-note-0020}  Viktor Mayer-Schönberger and
    Kenneth Cukier, *Big Data: A Revolution That Will Change How We Live,
    Work and Think* (Boston, MA: Houghton Mifflin Harcourt, 2013).

    [21](#c3-note-0021a){#c3-note-0021}  Carlos Diuk, "The Formation of
    Love," *Facebook Data Science Blog* (February 14, 2014), online.

    [22](#c3-note-0022a){#c3-note-0022}  Facebook could have determined this
    simply by examining the location data that were transmitted by its own
    smartphone app. The study in question, however, did not take such
    information into account.

    [23](#c3-note-0023a){#c3-note-0023}  Dan Lyons, "A Lot of Top
    Journalists Don\'t Look at Traffic Numbers: Here\'s Why," *Huffington
    Post* (March 27, 2014), online.

    [24](#c3-note-0024a){#c3-note-0024}  Adam Kramer et al., "Experimental
    Evidence of Massive-Scale Emotional Contagion through Social Networks,"
    *Proceedings of the National Academy of Sciences* 111 (2014): 8788--90.

    [25](#c3-note-0025a){#c3-note-0025}  In all of these studies, it was
    presupposed that users present themselves naïvely and entirely
    truthfully. If someone writes something positive ("I\'m doing great!"),
    it is assumed that this person really is doing well. This, of course, is
a highly problematic assumption. See John M. Grohol, "Emotional Contagion
    on Facebook? More Like Bad Research Methods," *PsychCentral* (June 23,
    2014), online.

    [26](#c3-note-0026a){#c3-note-0026}  See Adrienne LaFrance, "Even the
    Editor of Facebook\'s Mood Study Thought It Was Creepy," *The Atlantic*
    (June 29, 2014), online: "\[T\]he authors \[...\] said their local
    institutional review board had approved it -- and apparently on the
    grounds that Facebook apparently manipulates people\'s News Feeds all
    the time."

    [27](#c3-note-0027a){#c3-note-0027}  In a rare moment of openness, the
    founder of a large dating service made the following remark: "But guess
    what, everybody: []{#Page_198 type="pagebreak" title="198"}if you use
    the Internet, you\'re the subject of hundreds of experiments at any
    given time, on every site. That\'s how websites work." See Christian
    Rudder, "We Experiment on Human Beings!" *OKtrends* (July 28, 2014),
    online.

    [28](#c3-note-0028a){#c3-note-0028}  Zoe Corbyn, "Facebook Experiment
    Boosts US Voter Turnout," *Nature* (September 12, 2012), online. Because
    of the relative homogeneity of social groups, it can be assumed that a
    large majority of those who were indirectly influenced to vote have the
    same political preferences as those who were directly influenced.

    [29](#c3-note-0029a){#c3-note-0029}  In the year 2000, according to the
    official count, George W. Bush won the decisive state of Florida by a
    mere 537 votes.

    [30](#c3-note-0030a){#c3-note-0030}  Jonathan Zittrain, "Facebook Could
    Decide an Election without Anyone Ever Finding Out," *New Republic*
    (June 1, 2014), online.

    [31](#c3-note-0031a){#c3-note-0031}  This was the central insight that
    Norbert Wiener drew from his experiments on air defense during World War
    II. Although it could never be applied during the war itself, it would
    nevertheless prove of great importance to the development of
    cybernetics.

    [32](#c3-note-0032a){#c3-note-0032}  Gregory Bateson, "Social Planning
    and the Concept of Deutero-learning," in Bateson, *Steps to an Ecology
    of Mind: Collected Essays in Anthropology, Psychiatry, Evolution and
    Epistemology* (London: Jason Aronson, 1972), pp. 166--82, at 177.

    [33](#c3-note-0033a){#c3-note-0033}  Tiqqun, "The Cybernetic
    Hypothesis," p. 4 (online).

    [34](#c3-note-0034a){#c3-note-0034}  B. F. Skinner, *The Behavior of
    Organisms: An Experimental Analysis* (New York: Appleton Century, 1938).

    [35](#c3-note-0035a){#c3-note-0035}  Richard H. Thaler and Cass
    Sunstein, *Nudge: Improving Decisions about Health, Wealth and
    Happiness* (New York: Penguin, 2008).

    [36](#c3-note-0036a){#c3-note-0036}  It happened repeatedly, for
    instance, that pictures of breastfeeding mothers would be removed
    because they apparently violated Facebook\'s rule against sharing
    pornography. After a long protest, Facebook changed its "community
    standards" in 2014. Under the term "Nudity," it now reads as follows:
    "We also restrict some images of female breasts if they include the
    nipple, but we always allow photos of women actively engaged in
    breastfeeding or showing breasts with post-mastectomy scarring. We also
    allow photographs of paintings, sculptures and other art that depicts
    nude figures." See "Community Standards,"
    [Facebook.com](http://Facebook.com) (2017), online.

    [37](#c3-note-0037a){#c3-note-0037}  Michael Seemann, *Digital Tailspin:
    Ten Rules for the Internet after Snowden* (Amsterdam: Institute for
    Network Cultures, 2015).

    [38](#c3-note-0038a){#c3-note-0038}  The exception to this is fairtrade
    products, in which case it is attempted to legitimate their higher
    prices with reference to []{#Page_199 type="pagebreak" title="199"}the
    input -- that is, to the social and ecological conditions of their
    production.

    [39](#c3-note-0039a){#c3-note-0039}  This is only partially true,
    however, as more institutions (universities, for instance) have begun to
    outsource their technical infrastructure (to Google Mail, for example).
    In such cases, people are indeed being coerced, in the classical sense,
    to use these services.

    [40](#c3-note-0040a){#c3-note-0040}  Mary Madden et al., "Teens, Social
    Media and Privacy," *Pew Research Center: Internet, Science & Tech* (May
    21, 2013), online.

    [41](#c3-note-0041a){#c3-note-0041}  Meta-data are data that provide
    information about other data. In the case of an email, the header lines
    (the sender, recipient, date, subject, etc.) form the meta-data, while
    the data are made up of the actual content of communication. In
    practice, however, the two categories cannot always be sharply
    distinguished from one another.

    [42](#c3-note-0042a){#c3-note-0042}  By manipulating online polls, for
    instance, or flooding social mass media with algorithmically generated
propaganda. See Glenn Greenwald, "Hacking Online Polls and Other Ways
    British Spies Seek to Control the Internet," *The Intercept* (July 14,
    2014), online.

    [43](#c3-note-0043a){#c3-note-0043}  Jeremy Scahill and Glenn Greenwald,
    "The NSA\'s Secret Role in the US Assassination Program," *The
    Intercept* (February 10, 2014), online.

    [44](#c3-note-0044a){#c3-note-0044}  Regarding the interconnections
    between Google and the US State Department, see Julian Assange, *When
    Google Met WikiLeaks* (New York: O/R Books, 2014).

    [45](#c3-note-0045a){#c3-note-0045}  For a catalog of these
    publications, see the DARPA website:
    \<[opencatalog.darpa.mil/SMISC.html](http://opencatalog.darpa.mil/SMISC.html)\>.

    [46](#c3-note-0046a){#c3-note-0046}  See the military\'s own description
    of the project at:
    \<[minerva.dtic.mil/funded.html](http://minerva.dtic.mil/funded.html)\>.

    [47](#c3-note-0047a){#c3-note-0047}  Such is the goal stated on the
    project\'s homepage: \<\>.

    [48](#c3-note-0048a){#c3-note-0048}  Bruce Schneier, "Don\'t Listen to
    Google and Facebook: The Public--Private Surveillance Partnership Is
    Still Going Strong," *The Atlantic* (March 25, 2014), online.

    [49](#c3-note-0049a){#c3-note-0049}  See the documentary film *Low
    Definition Control* (2011), directed by Michael Palm.

    [50](#c3-note-0050a){#c3-note-0050}  Felix Stalder, "In der zweiten
    digitalen Phase: Daten versus Kommunikation," *Le Monde Diplomatique*
    (February 14, 2014), online.

    [51](#c3-note-0051a){#c3-note-0051}  In 2009, the European Parliament
    and the European Council ratified Directive 2009/72/EC, which stipulates
    that, by the year 2020, 80 percent of all households in the EU will have
    to be equipped with an intelligent metering system.[]{#Page_200
    type="pagebreak" title="200"}

    [52](#c3-note-0052a){#c3-note-0052}  There is no consensus about how or
    whether smart meters will contribute to the more efficient use of
    energy. On the contrary, one study commissioned by the German Federal
    Ministry for Economic Affairs and Energy concluded that the
    comprehensive implementation of smart metering would have negative
    economic effects for consumers. See Helmut Edelmann and Thomas Kästner,
    "Cost--Benefit Analysis for the Comprehensive Use of Smart Metering,"
    *Ernst & Young* (June 2013), online.

    [53](#c3-note-0053a){#c3-note-0053}  Quoted from "United Nations Working
    towards Urbanization," *United Nations Urbanization Agenda* (July 7,
    2015), online. For a comprehensive critique of such visions, see Adam
    Greenfield, *Against the Smart City* (New York City: Do Projects, 2013).

    [54](#c3-note-0054a){#c3-note-0054}  Stefan Selke, *Lifelogging: Warum
    wir unser Leben nicht digitalen Technologien überlassen sollten*
    (Berlin: Econ, 2014).

    [55](#c3-note-0055a){#c3-note-0055}  Rainer Schneider, "Rabatte für
    Gesundheitsdaten: Was die deutschen Krankenversicherer planen," *ZDNet*
    (December 18, 2014), online \[--trans.\].

    [56](#c3-note-0056a){#c3-note-0056}  Frank Pasquale, *The Black Box
    Society: The Secret Algorithms that Control Money and Information*
    (Cambridge, MA: Harvard University Press, 2015).

    [57](#c3-note-0057a){#c3-note-0057}  "Facebook Gives People around the
    World the Power to Publish Their Own Stories," *Facebook Help Center*
    (2017), online.

    [58](#c3-note-0058a){#c3-note-0058}  Lena Kampf et al., "Deutsche im
    NSA-Visier: Als Extremist gebrandmarkt," *Tagesschau.de* (July 3, 2014),
    online.

    [59](#c3-note-0059a){#c3-note-0059}  Florian Klenk, "Der Prozess gegen
    Josef S.," *Falter* (July 8, 2014), online.

    [60](#c3-note-0060a){#c3-note-0060}  Zygmunt Bauman, *Liquid Modernity*
    (Cambridge: Polity, 2000), p. 35.

    [61](#c3-note-0061a){#c3-note-0061}  This is so regardless of whether
    the dominant regime, eager to seem impervious to opposition, represents
    itself as the one and only alternative. See Byung-Chul Han, "Why
    Revolution Is No Longer Possible," *Transformation* (October 23, 2015),
    online.

    [62](#c3-note-0062a){#c3-note-0062}  See the *Süddeutsche Zeitung*\'s
    special website devoted to the "Offshore Leaks":
\<\>.

    [63](#c3-note-0063a){#c3-note-0063}  The *Süddeutsche Zeitung*\'s
    website devoted to the "Luxembourg Leaks" can be found at:
\<\>.

    [64](#c3-note-0064a){#c3-note-0064}  See the documentary film
*Citizenfour* (2014), directed by Laura Poitras.

    [65](#c3-note-0065a){#c3-note-0065}  Felix Stalder, "WikiLeaks und die
neue Ökologie der Nachrichtenmedien," in Heinrich Geiselberger (ed.),
    *WikiLeaks und die Folgen* (Berlin: Suhrkamp, 2011), pp.
    96--110.[]{#Page_201 type="pagebreak" title="201"}

    [66](#c3-note-0066a){#c3-note-0066}  Yochai Benkler, "Coase\'s Penguin,
    or, Linux and the Nature of the Firm," *Yale Law Journal* 112 (2002):
    369--446.

    [67](#c3-note-0067a){#c3-note-0067}  For an overview of the many commons
    traditions, see David Bollier and Silke Helfrich, *The Wealth of the
    Commons: A World beyond Market and State* (Amherst: Levellers Press,
    2012).

    [68](#c3-note-0068a){#c3-note-0068}  Massimo De Angelis and Stavros
    Stavrides, "On the Commons: A Public Interview," *e-flux* 17 (June
    2010), online.

    [69](#c3-note-0069a){#c3-note-0069}  Elinor Ostrom, *Governing the
    Commons: The Evolution of Institutions for Collective Action*
    (Cambridge: Cambridge University Press, 1990).

    [70](#c3-note-0070a){#c3-note-0070}  Michael McGinnis and Elinor Ostrom,
    "Design Principles for Local and Global Commons," *International
    Political Economy and International Institutions* 2 (1996): 465--93.

    [71](#c3-note-0071a){#c3-note-0071}  I say "allegedly" because the
    argument about their inevitable tragedy, which has been made without any
    empirical evidence, falsely conceives of the commons as a limited but
    fully unregulated resource. Because people are only interested in
    maximizing their own short-term benefits -- or so the conclusion goes --
    the resource will either have to be privatized or administered by the
    government in order to protect it from being over-used and to ensure the
    well-being of everyone involved. It was never taken into consideration
    that users could speak with one another and organize themselves. See
    Garrett Hardin, "The Tragedy of the Commons," *Science* 162 (1968):
    1243--8.

    [72](#c3-note-0072a){#c3-note-0072}  Jonathan Rowe, "The Real Tragedy:
    Ecological Ruin Stems from What Happens to -- Not What Is Caused by --
    the Commons," *On the Commons* (April 30, 2013), online.

    [73](#c3-note-0073a){#c3-note-0073}  James Boyle, "A Politics of
    Intellectual Property: Environmentalism for the Net?" *Duke Law Journal*
    47 (1997): 87--116.

    [74](#c3-note-0074a){#c3-note-0074}  Quoted from:
    \<[debian.org/intro/about.html](http://debian.org/intro/about.html)\>.

    [75](#c3-note-0075a){#c3-note-0075}  The Debian Social Contract can be
    read at: \<\>.

    [76](#c3-note-0076a){#c3-note-0076}  Gabriella E. Coleman and Benjamin
    Hill, "The Social Production of Ethics in Debian and Free Software
    Communities: Anthropological Lessons for Vocational Ethics," in Stefan
    Koch (ed.), *Free/Open Source Software Development* (Hershey, PA: Idea
    Group, 2005), pp. 273--95.

    [77](#c3-note-0077a){#c3-note-0077}  While it is relatively easy to
    identify the inner circle of such a project, it is impossible to
    determine the number of those who have contributed to it. This is
    because, among other reasons, the distinction between producers and
    consumers is so fluid that any firm line drawn between them for
    quantitative purposes would be entirely arbitrary. Should someone who
    writes the documentation be considered a producer of a software
    []{#Page_202 type="pagebreak" title="202"}project? To be counted as
    such, is it sufficient to report a single bug? Or to confirm the
    validity of a bug report that has already been sent? Should everyone be
    counted who has helped another person solve a problem in a forum?

    [78](#c3-note-0078a){#c3-note-0078}  Raphaël Hertzog, "The State of the
    Debian--Ubuntu Relationship" (December 6, 2010), online.

    [79](#c3-note-0079a){#c3-note-0079}  This, in any case, is the number of
    free software programs that appears in Wikipedia\'s entry titled "List
    of Text Editors." This list, however, is probably incomplete.

    [80](#c3-note-0080a){#c3-note-0080}  In this regard, the most
    significant legal changes were enacted through the Copyright Treaty of
    the World Intellectual Property Organization (1996), the US Digital
    Millennium Copyright Act (1998), and the EU guidelines for the
    harmonization of certain aspects of copyright (2001). Since 2006, a
    popular tactic in Germany and elsewhere has been to issue floods of
    cease-and-desist letters. This involves sending tens of thousands of
    semi-automatically generated threats of legal action with demands for
    payment in response to the presumably unauthorized use of
    copyright-protected material.

    [81](#c3-note-0081a){#c3-note-0081}  Examples include the Open Content
    License (1998) and the Free Art License (2000).

    [82](#c3-note-0082a){#c3-note-0082}  Benjamin Mako Hill, "Towards a
    Standard of Freedom: Creative Commons and the Free Software Movement,"
    *mako.cc* (June 29, 2005), online.

    [83](#c3-note-0083a){#c3-note-0083}  Since 2007, Wikipedia has
    continuously been one of the 10 most-used websites.

    [84](#c3-note-0084a){#c3-note-0084}  One of the best studies of
    Wikipedia remains Christian Stegbauer, *Wikipedia: Das Rätsel der
    Kooperation* (Wiesbaden: Verlag für Sozialwissenschaften, 2009).

    [85](#c3-note-0085a){#c3-note-0085}  Dan Wielsch, "Governance of Massive
Multiauthor Collaboration -- Linux, Wikipedia and Other Networks:
    Governed by Bilateral Contracts, Partnerships or Something in Between?"
    *JIPITEC* 1 (2010): 96--108.

    [86](#c3-note-0086a){#c3-note-0086}  See Wikipedia\'s 2013--14
    fundraising report at:
    \<[meta.wikimedia.org/wiki/Fundraising/2013-14\_Report](http://meta.wikimedia.org/wiki/Fundraising/2013-14_Report)\>.

    [87](#c3-note-0087a){#c3-note-0087}  Roland Ramthun, "Offene Geodaten
    durch OpenStreetMap," in Ulrich Herb (ed.), *Open Initiatives: Offenheit
    in der digitalen Welt und Wissenschaft* (Saarbrücken: Universaar, 2012),
    pp. 159--84.

    [88](#c3-note-0088a){#c3-note-0088}  "Automated Edits Code of Conduct,"
    [WikiOpenStreetMap.org](http://WikiOpenStreetMap.org) (March 15, 2015),
    online.

    [89](#c3-note-0089a){#c3-note-0089}  See the information provided at:
    \<[wiki.osmfoundation.org/wiki/Finances](http://wiki.osmfoundation.org/wiki/Finances)\>.

    [90](#c3-note-0090a){#c3-note-0090}  As part of its "Knight News
    Challenge," for instance, the American Knight Foundation gave \$570,000
    in 2012 to the []{#Page_203 type="pagebreak" title="203"}company Mapbox
    in order for the latter to make improvements to OSM\'s infrastructure.

    [91](#c3-note-0091a){#c3-note-0091}  This was accomplished, for
    instance, by introducing methods for data indexing and quality control.
See Ramthun, "Offene Geodaten durch OpenStreetMap" (cited above).

    [92](#c3-note-0092a){#c3-note-0092}  Trevor Paglen and Adam C. Thompson,
    *Torture Taxi: On the Trail of the CIA\'s Rendition Flights* (Hoboken,
    NJ: Melville House, 2006).

    [93](#c3-note-0093a){#c3-note-0093}  See the project\'s website:
    \<[airqualityegg.com](http://airqualityegg.com)\>.

    [94](#c3-note-0094a){#c3-note-0094}  See the project\'s homepage:
    \<[index.okfn.org](http://index.okfn.org)\>.

    [95](#c3-note-0095a){#c3-note-0095}  The homepage of the Digital
    Openness Index can be found at: \<[do-index.org](http://do-index.org)\>.

    [96](#c3-note-0096a){#c3-note-0096}  Tildy Bayar, "Community Wind
    Arrives Stateside," *Renewable Energy World* (July 5, 2012), online.

    [97](#c3-note-0097a){#c3-note-0097}  Jeremy Rifkin, *The Zero Marginal
    Cost Society: The Internet of Things, the Collaborative Commons and the
    Eclipse of Capitalism* (New York: Palgrave Macmillan, 2014), p. 217.

    [98](#c3-note-0098a){#c3-note-0098}  See, for instance, Ludger
    Eversmann, *Post-Kapitalismus: Blueprint für die nächste Gesellschaft*
    (Hanover: Heise Zeitschriften Verlag, 2014).

    [99](#c3-note-0099a){#c3-note-0099}  Ron Amadeo, "Google\'s Iron Grip on
    Android: Controlling Open Source by Any Means Necessary," *Ars Technica*
    (October 21, 2013), online.

    [100](#c3-note-0100a){#c3-note-0100}  Seb Olma, "To Share or Not to
    Share," [nettime.org](http://nettime.org) (October 20, 2014), online.

    [101](#c3-note-0101a){#c3-note-0101}  Susie Cagle, "The Case against
    Sharing," *The Nib* (May 27, 2014), online.[]{#Page_204 type="pagebreak"
    title="204"}
    :::
    :::

    [Copyright page]{.chapterTitle} {#ffirs03}
    =
    ::: {.section}
    First published in German as *Kultur der Digitalitaet* © Suhrkamp Verlag,
    Berlin, 2016

    This English edition © Polity Press, 2018

    Polity Press

    65 Bridge Street

    Cambridge CB2 1UR, UK

    Polity Press

    101 Station Landing

    Suite 300

    Medford, MA 02155, USA

    All rights reserved. Except for the quotation of short passages for the
    purpose of criticism and review, no part of this publication may be
    reproduced, stored in a retrieval system or transmitted, in any form or
    by any means, electronic, mechanical, photocopying, recording or
    otherwise, without the prior permission of the publisher.

    P. 51, Brautigan, Richard: From "All Watched Over by Machines of Loving
    Grace" by Richard Brautigan. Copyright © 1967 by Richard Brautigan,
    renewed 1995 by Ianthe Brautigan Swenson. Reprinted with the permission
    of the Estate of Richard Brautigan; all rights reserved.

    ISBN-13: 978-1-5095-1959-0

    ISBN-13: 978-1-5095-1960-6 (pb)

    A catalogue record for this book is available from the British Library.

    Library of Congress Cataloging-in-Publication Data

    Names: Stalder, Felix, author.

    Title: The digital condition / Felix Stalder.

    Other titles: Kultur der Digitalitaet. English

    Description: Cambridge, UK ; Medford, MA : Polity Press, \[2017\] \|
    Includes bibliographical references and index.

    Identifiers: LCCN 2017024678 (print) \| LCCN 2017037573 (ebook) \| ISBN
    9781509519620 (Mobi) \| ISBN 9781509519637 (Epub) \| ISBN 9781509519590
    (hardback) \| ISBN 9781509519606 (pbk.)

    Subjects: LCSH: Digital communications--Social aspects. \| Information
    society. \| Information society--Forecasting.

    Classification: LCC HM851 (ebook) \| LCC HM851 .S728813 2017 (print) \|
    DDC 302.23/1--dc23

    LC record available at

    Typeset in 10.5 on 12 pt Sabon

    by Toppan Best-set Premedia Limited

    Printed and bound in Great Britain by CPI Group (UK) Ltd, Croydon

    The publisher has used its best endeavours to ensure that the URLs for
    external websites referred to in this book are correct and active at the
    time of going to press. However, the publisher has no responsibility for
    the websites and can make no guarantee that a site will remain live or
    that the content is or will remain appropriate.

    Every effort has been made to trace all copyright holders, but if any
    have been inadvertently overlooked the publisher will be pleased to
    include any necessary credits in any subsequent reprint or edition.

    For further information on Polity, visit our website:
    politybooks.com[]{#Page_iv type="pagebreak" title="iv"}
    :::

    Stankievech
    Letter to the Superior Court of Quebec Regarding Arg.org
    2016


    Letter to the Superior Court of Quebec Regarding Arg.org

    Charles Stankievech
    19 January 2016

    To the Superior Court of Quebec:
    I am writing in support of the online community and library platform called “Arg.org” (also known under additional aliases and
URLs including "aaaaarg.org," "grr.aaaaarg.org," and most recently
    “grr.aaaaarg.fail”). It is my understanding that a copyright infringement lawsuit has been leveled against two individuals who
    support this community logistically. This letter will address what
    I believe to be the value of Arg.org to a variety of communities
    and individuals; it is written to encompass my perspective on the
    issue from three distinct positions: (1) As Director of the Visual
    Studies Program, Faculty of Architecture, Landscape, and Design,
    University of Toronto, where I am a professor and oversee three
    degree streams for both graduate and undergraduate students;
    (2) As the co-director of an independent publishing house based
    in Berlin, Germany, and Toronto, Canada, which works with international institutions around the world; (3) As a scholar and writer
    who has published in a variety of well-regarded international
    journals and presses. While I outline my perspective in relation to
    these professional positions below, please note that I would also
    be willing to testify via video-conference to further articulate
    my assessment of Arg.org’s contribution to a diverse international
    community of artists, scholars, and independent researchers.
    98

    Essay continuing from page 49

    “Warburgian tradition.”47 If we consider the Warburg Library
    in its simultaneous role as a contained space and the reflection
    of an idiosyncratic mental energy, General Stumm’s aforementioned feeling of “entering an enormous brain” seems an
    especially concise description. Indeed, for Saxl the librarian,
    “the books remain a body of living thought as Warburg had
    planned,”48 showing “the limits and contents of his scholarly
    worlds.”49 Developed as a research tool to solve a particular
    intellectual problem—and comparable on a number of levels
    to exhibition-led inquiry—Aby Warburg’s organically structured, themed library is a three-dimensional instance of a library that performatively articulates and potentiates itself,
    which is not yet to say exhibits, as both spatial occupation and
    conceptual arrangement, where the order of things emerges
    experimentally, and in changing versions, from the collection
    and its unusual cataloging.50

47 Saxl speaks of "many tentative and personal excrescences" ("The History of
Warburg's Library," 331). When Warburg fell ill in 1920 with a subsequent four-year
absence, the library was continued by Saxl and Gertrud Bing, the new and
later closest assistant. Despite the many helpers, according to Saxl, Warburg always
remained the boss: "everything had the character of a private book collection, where
the master of the house had to see it in person that the bills were paid in time,
that the bookbinder chose the right material, or that neither he nor the carpenter
delivering a new shelf over-charged" (Ibid., 329).

48 Ibid., 331.

49 Ibid., 329.

50 A noteworthy aside: Gertrud Bing was in charge of keeping a meticulous index of
names and keywords; evoking the library catalog of Borges's fiction, Warburg even
kept an "index of un-indexed books." See Diers, "Porträt aus Büchern," 21.

    99

    1. Arg.org supports a collective & semiprivate community of
    academics & intellectuals.
    As the director of a graduate-level research program at the University of Toronto, I have witnessed first-hand the evolution
    of academic research. Arg.org has fostered a vibrant community
    of thinkers, students, and writers, who can share their research
    and create new opportunities for collaboration and learning
    because of the knowledge infrastructure provided by the platform.
    The accusation of copyright infringement leveled against the
    community misses the point of the research platform altogether.
    While there are texts made available for download at no expense
    through the Arg.org website, it is essential to note that these texts
    are not advertised, nor are they accessible to the general public.
    Arg.org is a private community whose sharing platform can only
    be accessed by invitation. Such modes of sharing have always
    existed in academic communities; for example, when a group of
    professors would share Xerox copies of articles they want to read
    together as part of a collaborative research project. Likewise,
    it would be hard to imagine a community of readers at any time
    in history without the frequent lending and sharing of books.
    From this perspective, Arg.org should be understood within a
    twenty-first century digital ethos, where the sharing of intellectual
    property and the generation of derivative IP occurs through collaborative platforms. On this point, I want to draw further attention
    to two fundamental aspects of Arg.org.
    a. One essential feature of the Arg.org platform is that it gives
    invited users the ability to create reading lists from available texts—
    what are called on the website “collections.” These collections
    are made up of curated folders containing text files (usually in
    Portable Document Format); such collections allow for new and
    novel associations of texts, and the development of working
    bibliographies that assist in research. Users can discover previously unfamiliar materials—including entire books and excerpted
    chapters, essays, and articles—through these shared collections.
    Based on the popularity of previous collections I have personally
    assembled on the Arg.org platform, I have been invited to give
    100

    In the Memory Hall of Reproductions
    Several photographs document how the Warburg Library was
    also a backdrop for Warburg’s picture panels, the wood boards
    lined with black fabric, which, not unlike contemporary mood
    boards, held the visual compositions he would assemble and
    re-assemble from around 2,000 photographs, postcards, and
    printed reproductions cut out of books and newspapers.
Sometimes accompanied by written labels or short descriptions, the panels served as both public displays and research-in-process, and were themselves photographed with the aim
    to eventually be disseminated as book pages in publications.
    In the end, not every publishing venture was realized, and
    most panels themselves were even lost along the way; in fact,
    today, the panel photographs are the only visual remainder of
    this type of research from the Warburg Institute. Probably the
    most acclaimed of the panels are those which Warburg developed in close collaboration with his staff during the last years
    of his life and from which he intended to create a sequential
    picture atlas of human memory referred to as the Mnemosyne
    Atlas. Again defying the classical boundaries of the disciplines, Warburg had appropriated visual material from the
    archives of art history, natural philosophy, and science to
    vividly evoke and articulate his thesis through the creation of
    unprecedented associations. Drawing an interesting analogy,
    the following statement from Warburg scholar Kurt Forster
    underlines the importance of the panels for the creation of
    meaning:
    Warburg’s panels belong into the realm of the montage à la Schwitters or Lissitzky. Evidently, such a

    101

    guest lectures at various international venues; such invitations
    demonstrate that this cognitive work is considered original
    research and a valuable intellectual exercise worthy of further
    discussion.
    b. The texts uploaded to the Arg.org platform are typically documents scanned from the personal libraries of users who have
already purchased the material. As a result, many of the documents
combine the original published text with annotations or notes from
the reader. Commentary is a practice that has been occurring for
centuries; in medieval times, the technique of adding commentary
directly onto a published page for future readers to read alongside
the original writing was called "glossing." Much philosophy, theology,
and even scientific theory was originally produced in the margins of
other texts. For example, in her translation and publication of Charles
Babbage's lecture on the theory of the first computer, Ada Lovelace
supplied more notes than the original lecture contained. Even though
the text was subsequently published as Babbage's work, modern
scholarship today acknowledges Lovelace as an important voice in the
theorization of the modern computer due to these vital marginal notes.
    2. Arg.org supports small presses.
    Since 2011, I have been the co-founder and co-director of
    K. Verlag, an independent press based in Berlin, Germany, and
    Toronto, Canada. The press publishes academic books on art
and culture, as well as specialty books on art exhibitions. While
I am aware of the difficulties faced by small presses in terms of
profitability, especially given fears that the sharing of books online
could further hurt book sales, my experience has been the opposite.
At K. Verlag, we actually upload our new
    publications directly to Arg.org because we know the platform
reaches an important community of readers and thinkers. While we
remain fully conscious of the uniqueness and importance of printed
books, the digital circulation of ebooks and scanned physical books
presents a range of different possibilities for reaching our
audiences. Some members of Arg.org may be too
    102

    comparison does not need to claim artistic qualities
    for Warburg’s panels, nor does it deny them regarding
    Schwitters’s or Lissitzky’s collages. It simply lifts the
    role of graphic montage from the realm of the formal
    into the realm of the construction of meaning.51
    Interestingly, even if Forster makes a point not to categorize
    Warburg’s practice as art, in twentieth-century art theory and
    visual culture scholarship, his idiosyncratic technique has
    evidently been mostly associated with art practice. In fact,
    insofar as Warburg is acknowledged (together with Marcel
    Duchamp and, perhaps, the less well-known André Malraux),
    it is as one of the most important predecessors for artists
    working with the archive.52 Forster articulates the traditional
    assumption that only artists were “allowed” to establish idiosyncratic approaches and think with objects outside of the
    box. However, within the relatively new discourse of the
    “curatorial,” contra the role of the “curator,” the curatorial
    delineates its territory as that which is no longer defined exclusively by what the curator does (i.e. responsibilities of classification and care) but rather as a particular agency in terms of
    epistemologically and spatially working with existing materials and collections. Consequently, figures such as Warburg
51 Kurt Forster, quoted in Benjamin H.D. Buchloh, "Gerhard Richter's Atlas: Das anomische Archiv," in Paradigma Fotografie: Fotokritik am Ende des fotografischen Zeitalters,
ed. Herta Wolf (Frankfurt/M.: Suhrkamp Verlag, 2002), 407, with further references.

52 One such example is the Atlas begun by Gerhard Richter in 1962; another is
Thomas Hirschhorn's large-format, mixed-media collage series MAPS. Entitled
Foucault-Map (2008), The Map of Friendship Between Art and Philosophy (2007),
and Hannah-Arendt-Map (2003), these works are partly made in collaboration
with the philosopher Marcus Steinweg. They bring a diverse array of archival and
personal documents or small objects into associative proximities and reflect the
complex impact philosophy has had on Hirschhorn's art and thinking.

    103

poor to afford to buy our books (e.g., students with increasing debt,
    precarious artists, or scholars in countries lacking accessible
    infrastructures for high-level academic research). We also realize
    that Arg.org is a library-community built over years; the site
    connects us to communities and individuals making original work
    and we are excited if our books are shared by the writers, readers,
    and artists who actively support the platform. Meanwhile, we
    have also seen that readers frequently discover books from our
    press through a collection of books on Arg.org, download the
    book for free to browse it, and nevertheless go on to order a print
    copy from our shop. Even when this is not the case, we believe
    in the environmental benefit of Arg.org; printing a book uses
    valuable resources and then requires additional shipping around
    the world—these practices contradict our desire for the broadest
dissemination of knowledge through the most environmentally conscious of means.
    3. Arg.org supports both official institutional academics
    & independent researchers.
    As a professor at the University of Toronto, I have access to one
    of the best library infrastructures in the world. In addition to
    core services, this includes a large number of specialty libraries,
    archives, and massive online resources for research. Such
    an investment by the administration of the university is essential
    to support the advanced research conducted in the numerous
    graduate programs and by research chairs. However, there are
    at least four ways in which the official, sanctioned access to these
    library resources can at times fall short.
    a. Physical limitations. While the library might have several copies
    of a single book to accommodate demand, it is often the case
    that these copies are simultaneously checked out and therefore
    not available when needed for teaching or writing. Furthermore,
    the contemporary academic is required to constantly travel for
    conferences, lectures, and other research obligations, but travelling with a library is not possible. Frequently while I am working
    abroad, I access Arg.org to find a book which I have previously
    104

    and Malraux, who thought apropos objects in space (even
    when those objects are dematerialized as reproductions),
    become productive forerunners across a range of fields: from
    art, through cultural studies and art history, to the curatorial.
    Essential to Warburg’s library and Mnemosyne Atlas, but
    not yet articulated explicitly, is that the practice of constructing two-dimensional, heterogeneous image clusters shifts the
    value between an original work of art and its mechanical
    reproduction, anticipating Walter Benjamin’s essay written a
    decade later.53 While a museum would normally exhibit an
    original of Albrecht Dürer’s Melencolia I (1514) so it could be
    contemplated aesthetically (admitting that even as an etching
    it is ultimately a form of reproduction), when inserted as a
    quotidian reprint into a Warburgian constellation and exhibited within a library, its “auratic singularity”54 is purposefully
    challenged. Favored instead is the iconography of the image,
    which is highlighted by way of its embeddedness within a
    larger (visual-emotional-intellectual) economy of human consciousness.55 As it receives its impetus from the interstices
53 One of the points Benjamin makes in "The Artwork in the Age of Mechanical
Reproduction" is that reproducibility increases the "exhibition value" of a work of art,
meaning its relationship to being viewed is suddenly valued higher than its
relationship to tradition and ritual ("cult value"); a process which, as Benjamin writes,
nevertheless engenders a new "cult" of remembrance and melancholy (224–26).

54 Benjamin defines "aura" as the "here and now" of an object, that is, as its spatial,
temporal, and physical presence, and above all, its uniqueness—which in his
opinion is lost through reproduction. Ibid., 222.

55 It is worth noting that Warburg wrote his professorial dissertation on Albrecht
Dürer. Another central field of his study was astrology, which Warburg examined
from historical and philosophical perspectives. It is thus not surprising to find
out that Dürer's Melencolia I (1514), addressing the relationship between the
human and the cosmos, was of the highest significance to Warburg as a recurring
theme. The etching is shown, for instance, as image 8 of Plate 58, "Kosmologie bei
Dürer" (Cosmology in Dürer); reproduced in Warnke, ed., Aby Moritz Warburg:
Der Bilderatlas Mnemosyne, Gesammelte Schriften, Vol. 1, 106–7. The connections

    105

    purchased, and which is on my bookshelf at home, but which
    is not in my suitcase. Thus, the Arg.org platform acts as a patch
    for times when access to physical books is limited—although
    these books have been purchased (either by the library or the
    reader herself) and the publisher is not being cheated of profit.
    b. Lack of institutional affiliation. The course of one’s academic
    career is rarely smooth and is increasingly precarious in today’s
    shift to a greater base of contract sessional instructors. When
I have been in between institutions, I have lost access to the library
resources upon which my research and scholarship depend.
    So, although academic publishing functions in accord with library
    acquisitions, there are countless intellectuals—some of whom
are temporary hires or in-between job appointments, others who
    are looking for work, and thus do not have access to libraries.
    In this position, I would resort to asking colleagues and friends
    to share their access or help me by downloading articles through
    their respective institutional portals. Arg.org helps to relieve
    this precarity through a shared library which allows scholarship
    to continue; Arg.org is thus best described as a community of
    readers who share their research and legally-acquired resources
so that when someone is researching a specific topic, the appropriate book or essay can be found to support the academic argument.
    c. Special circumstances of non-traditional education. Several
    years ago, I co-founded the Yukon School of Visual Arts in
    Dawson City as a joint venture between an Indigenous government and the State college. Because we were a tiny school,
    we did not fit into the typical academic brackets regarding student
    population, nor could we access the sliding scale economics
    of academic publishers. As a result, even the tiniest package for
    a “small” academic institution would be thousands of times larger
than our population and budget. Consequently, neither I
nor my students could access the essential academic resources
    required for a post-secondary education. I attempted to solve this
    problem by forging partnerships, pulling in favors, and accessing
    resources through platforms like Arg.org. It is important to realize
    106

    among text and image, visual display and publishing, the
    expansive space of the library and the dense volume of the
    book, Aby Warburg’s wide-ranging work appears to be best
    summarized by the title of one of the Mnemosyne plates:
    “Book Browsing as a Reading of the Universe.”56

    To the Paper Museum
    Warburg had already died before Benjamin theorized the
    impact of mechanical reproduction on art in 1935. But it is
Malraux who claims to have embarked on a lengthy, multi-part project about similitudes in the artistic heritage of the
    world in exactly the same year, and for whom, in opposition
    to the architectonic space of the museum, photographic
    reproduction, montage, and the book are the decisive filters
    through which one sees the world. At the outset of his book
    Le Musée imaginaire (first published in 1947),57 Malraux argues
    that the secular modern museum has been crucial in reframing and transforming objects into art, both by displacing
    them from their original sacred or ritual context and purpose,
    and by bringing them into proximity and adjacency
    with one another, thereby opening new possible readings

    56
    57

    and analogies between Warburg’s image-based research and his theoretical ideas,
    and von Trier’s Melancholia, are striking; see Anna-Sophie Springer’s visual essay
    “Reading Rooms Reading Machines” on p. 91 of this book.
    “Buchblättern als Lesen des Universums,” Plate 23a, reproduced in Warnke, Aby
    Moritz Warburg: Der Bilderatlas Mnemosyne, Gesammelte Schriften, Vol. 1, 38–9.
    The title of the English translation, The Museum Without Walls, by Stuart Gilbert
    and Francis Price (London: Secker & Warburg, 1967), must be read in reference
    to Erasmus’s envisioning of a “library without walls,” made possible through the
    invention of the printing press, as Anthony Grafton mentions in his lecture, “The
    Crisis of Reading,” The CUNY Graduate Center, New York, 10 November 2014.

    107

    that Arg.org was founded to meet these grassroots needs; the
    platform supports a vast number of educational efforts, including
    co-research projects, self-organized reading groups, and numerous other non-traditional workshops and initiatives.
    d. My own writing on Arg.org. While using the platform, I have frequently come across my own essays and publications on the
    site; although I often upload copies of my work to Arg.org myself,
    these copies had been uploaded by other users. I was delighted
    to see that other users found my publications to be of value and
    were sharing my work through their curated “collections.” In some
    cases, I held outright exclusive copyright on the text and I was
    pleased it was being distributed. In other rare cases, I shared the
    copyright or was forced to surrender my IP prior to publication;
    I was still happy to see this type of document uploaded. I realize
    it is not within my authority to grant copyright that is shared,
    however, the power structure of contemporary publishing is often
    abusive towards the writer. Massive, for-profit corporations have
    dominated the publishing of academic texts and, as a result of
    their power, have bullied young academics into signing away their
    IP in exchange for publication. Even the librarians at Harvard
    University—who spend over $3.75 million USD annually on journal subscriptions alone—believe that the economy of academic
    publishing and bullying by a few giants has crossed a line, to the
    point where they are boycotting certain publishers and encouraging faculty to publish instead in open access journals.
    I want to conclude my letter of support by affirming that
    Arg.org is at the cutting edge of academic research and knowledge
    production. Sean Dockray, one of the developers of Arg.org,
    is internationally recognized as a leading thinker regarding the
    changing nature of research through digital platforms; he is regularly invited to academic conferences to discuss how the community on the Arg.org platform is experimenting with digital research.
    Reading, publishing, researching, and writing are all changing
    rapidly as networked digital culture influences professional and
    academic life more and more frequently. Yet, our legal frameworks and business models are always slower than the practices

    (“metamorphoses”) of individual objects—and, even more
    critically, producing the general category of art itself. As
    exceptions to this process, Malraux names those creations that
    are so embedded in their original architecture that they defy
    relocation in the museum (such as church windows, frescoes,
    or monuments); this restriction of scale and transportation, in
    fact, resulted in a consistent privileging of painting and sculpture within the museological apparatus.58
    Long before networked societies, with instant Google
    Image searches and prolific photo blogs, Malraux dedicated
    himself to the difficulty of accessing works and oeuvres
    distributed throughout an international topography of institutions. He located a revolutionary solution in the dematerialization and multiplication of visual art through photography
    and print, and, above all, proclaimed that an imaginary museum
    based on reproductions would enable the completion of a
    meaningful collection of artworks initiated by the traditional
    museum.59 Echoing Benjamin’s theory regarding the power of
    the reproduction to change how art is perceived, Malraux
    writes, “Reproduction is not the origin but a decisive means
    for the process of intellectualization to which we subject art.
    58

    59

    I thank the visual culture scholar Antonia von Schöning for pointing me to
    Malraux after reading my previous considerations of the book-as-exhibition. Von
    Schöning herself is author of the essay “Die universelle Verwandtschaft zwischen
    den Bildern: André Malraux’Musée Imaginaire als Familienalbum der Kunst,”
    kunsttexte.de, April 2012, edoc.hu-berlin.de/kunsttexte/2012-1/von-schoening
    -antonia-5/PDF/von-schoening.pdf.
    André Malraux, Psychologie der Kunst: Das imaginäre Museum (Baden-Baden:
    Woldemar Klein Verlag, 1949), 9; see also Rosalind Krauss, “The Ministry of
    Fate,” in A New History of French Literature, ed. Denis Hollier (Cambridge, MA
    and London: Harvard University Press, 1989), 1000–6: “The photographic archive
    itself, insofar as it is the locale of a potentially complete assemblage of world
    artifacts, is a repository of knowledge in a way that no individual museum could
    ever be” (1001).

    109

    of artists and technologists. Arg.org is a non-profit intellectual
    venture and should therefore be considered as an artistic experiment, a pedagogical project, and an online community of coresearchers; it should not be subject to the same legal judgments
    designed to thwart greedy profiteers and abusive practices.
    There are certainly some documents to be found on Arg.org that
    have been obtained by questionable or illegal means—every
    Web 2.0 platform is bound to find such examples, from Youtube
    to Facebook; however, such examples occur as a result of a small
    number of participant users, not because of two dedicated individuals who logistically support the platform. A strength of Arg.org
    and a source of its experimental vibrancy is its lack of policing,
    which fosters a sense of freedom and anonymity which are both
    vital elements for research within a democratic society and
    the foundations of any library system. As a result of this freedom,
    there are sometimes violations of copyright. However, since
    Arg.org is a committed, non-profit community-library, such transgressions occur within a spirit of sharing and fair use that characterize this intellectual community. This sharing is quite different
    from the popular platform Academia.edu, which is searchable
    by non-users and acquires value by monetizing its articles through
    the sale of digital advertising space and a nontransparent investment exit strategy. Arg.org is the antithesis of such a model
    and instead fosters a community of learning through its platform.
    Please do not hesitate to contact me for further information,
    or to testify as a witness.
    Regards,
    Charles Stankievech,
    Director of Visual Studies Program, University of Toronto
    Co-Director of K. Verlag, Berlin & Toronto

    … Medieval works, as diverse as the tapestry, the glass window,
    the miniature, the fresco, and the sculpture become united as
    one family if reproduced together on one page.”60 In his search
    for a common visual rhetoric, Malraux went further than
    merely arranging creations from one epoch and cultural sphere
    by attempting to collect and directly juxtapose artworks and
    artifacts from very diverse and distant cultural, historical, and
    geographic contexts.
    His richly illustrated series of books thus functions as a
    utopian archive of new temporalities of art liberated from
    history and scale by de-contextualizing and re-situating the
    works, or rather their reproduced images, in unorthodox combinations. Le Musée imaginaire was thus an experimental virtual
    museum intended to both form a repository of knowledge and
    provide a space of association and connection that could not
    be sustained by any other existing place or institution. From an
    art historical point of view—Malraux was not a trained scholar
    and was readily criticized by academics—his theoretical
    assumptions of “universal kinship” (von Schöning) and the
    “anti-destiny” of art have been rejected. His material selection
    process and visual appropriation and manipulation through
    framing, lighting, and scale, have also been criticized for their
    problematic and often controversial—one could say, colonizing—implications.61 Among the most recent critics is the art
    historian Walter Grasskamp, who argues that Malraux moreover might well have plagiarized the image-based work of the
    60
    61

    André Malraux, Das imaginäre Museum, 16.
    See the two volumes of Georges Duthuit, Le Musée Inimaginable (Paris: J. Corti,
    1956); Ernst Gombrich, “André Malraux and the Crisis of Expressionism,” The
    Burlington Magazine 96 (1954): 374–78; Michel Merlot, “L’art selon André Malraux,
    du Musée imaginaire à l’Inventaire general,” In Situ 1 (2001), www.insitu.revues
    .org/1053; and von Schöning, “Die universelle Verwandtschaft zwischen den Bildern.”

    111


    Tenen
    Preliminary Thoughts on the Way to the Free Library Congress
    2016


    # Preliminary Thoughts on the Way to the Free Library Congress

    by Dennis Yi Tenen — Mar 24, 2016

    ![](https://schloss-post.com/content/uploads/star-600x440.jpg)

    Figure 1: Article titles obscuring citation network topography. Image by
    Dennis Yi Tenen.

    **In the framework of the [Authorship](http://www.akademie-
    solitude.de/en/events/~no3764/) project, Akademie Schloss Solitude together
    with former and current fellows initiated a debate on the status of the author
    in the 21st century, as well as on closely related questions about the copyright
    system. The event »[Custodians.online – The Struggle over the Future of
    ›Pirate‹ Libraries and Universal Access to Knowledge](http://www.akademie-
    solitude.de/en/events/custodiansonline-the-struggle-over-the-future-of-pirate-
    libraries-and-universal-access-to-knowledge~no3779/)« was part of this debate,
    through which the Akademie offers its fellows the opportunity to articulate
    diverse and long-standing positions on the topic. In this article, published in
    the special online issue on _[Authorship](http://schloss-
    post.com/category/issues/authorship/)_, Dennis Yi Tenen, PiracyLab/Columbia
    University, New York, reports his personal experiences from the
    »[Custodians.online](http://custodians.online/)« discussion. Edited by
    Rosemary Grennan, MayDay Rooms, London/UK.**

    I am on my way to the Free Library Congress at Akademie Schloss Solitude, in
    Stuttgart. The event is not really called the »Free Library Congress,« but
    that is what I imagine it to be. It will be a meeting about the growing
    conflict between those who assert their intellectual property rights and those
    who assert their right to access information freely.

    Working at a North American university, it is easy to forget that most people
    in the United States and abroad lack affordable access to published
    information – books, medical research, science, and law. Outside of a
    university subscription, reading a single academic article may cost upwards of
    several hundred dollars. The pricing structure precludes any meaningful idea
    of independent research.

    Imagine yourself a physician or a young scientist somewhere in the global
    south, or in Eastern Europe, or anywhere really without a good library and
    without the means to pay exorbitant subscription prices demanded by the
    distributors. How will you keep current in your field? How are you to do right
    by your patients by following the latest treatment protocols? What about
    citizen science, or simply due diligence on the part of patients, litigants, or
    primary school students in search of reputable sources? Wherever library
    budgets do not soar into the millions, research involves building archives
    that exist outside of the intellectual property regime. It involves the
    organizational effort required to collect, sort, and share information widely.

    A number of prominent sites and communities emerged in the past decade in an
    attempt to address the global imbalance of access to information. Among them,
    Sci-Hub. [1] Founded by Alexandra Elbakyan, a young neuroscientist from
    Kazakhstan, the site makes close to 50 million scientific articles available
    for download. Elbakyan describes the mission of her library as »removing all
    barriers that impede the widest possible distribution of knowledge in human
    society.« Compare this with Google’s mission »to organize the world’s
    information and make it universally accessible and useful.« [2] The two
    visions are not so different. Sci-Hub violates intellectual property law in
    many jurisdictions, including the United States. Elsevier, one of the world’s
    largest scientific publishers, has filed a complaint against Sci-Hub in New
    York Southern District Court. [3] Of course, Google also continually finds
    itself at odds with intellectual property holders. The very logic of
    collecting and organizing human knowledge is, fundamentally, a public works
    project at odds with the idea of private intellectual property.

    Addressing the judge directly in her defense, Elbakyan appeals to universal
    ethical principles, like those enshrined in Article 27 of the United Nations’
    Universal Declaration of Human Rights, which holds that: »Everyone has the right to
    freely participate in the cultural life of the community, to enjoy the arts
    and to share in scientific advancement and its benefits.« [4] [5] Her
    language – our language – evokes also the »unquiet« history of the public
    library. [6] I call this small, scrappy group of artists, academics,
    librarians, and technologists »free« to evoke the history of »free and public«
    libraries and to appeal also to the intellectual legacy of the free software
    movement: as Richard Stallman famously put it »free as in free speech not as
    in free beer.« [7]

    The word »piracy« is also often used to describe the online free library
    world. For some it carries an unwelcome connotation. In most cases, the
    maintenance of large online archives is a drain on resources, not
    profiteering. It resembles much more the work of a librarian than that of a
    corsair. Nevertheless, many in the community actually embrace a few of the
    political implications that come with the idea of piracy. Piracy, in that
    sense, appeals to ideas and strategies similar to those of the Occupy
    Movement. When public resources are unjustly appropriated and when such
    systematic appropriation is subsequently defended through the use of law and
    force, the only available response is counter occupation.

    The agenda notes introducing the event call for a »solidarity platform« in
    support of free online public libraries like Sci-Hub and Library Genesis,
    which increasingly find themselves in legal peril. I do not yet know what the
    organizers have in mind, but my own thoughts in preparation for the day’s
    activities revolve around the following few premises:

    1\. The case for universal and free access to knowledge is stronger when it is
    made on ethical, technological, and **tactical** grounds, not just legal.

    The costs of sharing and reproduction in the digital world are too low to
    sustain practices and institutions built on the assumptions of print. Attempts
    to re-introduce »stickiness« to electronic documents artificially
    through digital rights management technology and associated legislation like
    the Digital Millennium Copyright Act are doomed to fail. Information does not
    (and cannot) »want« to be free, [8] but it has definitely lost some of its
    purchase on the medium as words moved from vellum to magnetic charge and
    subsequently to solid-state storage media that – I kid you not – work through
    mechanisms like quantum tunneling and electron avalanche injection.

    2\. Any proposed action will require the close **alignment of interests**
    between authors, publishers, readers, and librarians.

    For our institutions to catch up to the changing material conditions *and* our
    (hopefully not so rapidly changing) sense of what’s right and wrong in the
    world, writers, readers, publishers, and archivists need to coordinate their
    action. We are a community. And I think we want more or less the same thing:
    to reach an audience, to find and share information, and to remain a vital
    intellectual force. The real battle for the hearts and minds of an informed
    public lies elsewhere. Massive forces of capital and centralization threaten
    the very existence of a public commons. To survive, we need to nurture a
    conversation across organizational boundaries.

    By my calculations, Library Genesis, one of the most influential free online
    book libraries, sustains itself on a budget of several thousand dollars per
    year. [9] The maintenance of Sci-Hub requires a bit more to reach millions of
    readers. [10] How do pirate libraries achieve so much with so little? The
    fact that these libraries do not pay exorbitant license fees can only comprise
    a small part of the answer. The larger part includes their ability to rely on
    the support of the community, in what I have called elsewhere »peer
    preservation.« Why can’t readers and writers contribute to the development of
    infrastructures within their own institutions? Why are libraries so reliant on
    outside vendors, who take most of the profits out of our ecosystem?

    I am conflicted about leaving booksellers out of the equation. In response
    to my question about booksellers – do they help or hinder the project of
    universal access? – [Marcell Mars](https://www.memoryoftheworld.org/nenad-
    romic-aka-marcell-mars/) spoke about »a nostalgia for capitalism we used to
    know.« [Tomislav Medak](https://www.memoryoftheworld.org/tomislav-medak/)
    spoke in defense of small book publishers that produce beautiful objects. But
    the largest of booksellers are no longer strictly in the business of selling
    books. They build cloud infrastructures, they sell online services to the
    military, build autonomous drones, and much, much more. The project of
    corporate growth just may be incompatible with the project to provide free and
    universal access to information.

    3\. Libraries and publishing conclude a **long chain of literary production**.
    Whatever ails the free library must be also addressed at the source of
    authorship.

    Much of the world’s knowledge is locked behind paywalls. Such closed systems
    at the point of distribution reflect labor practices that also rely on closed
    and proprietary tools. Inequities of access mirror inequities of production.
    Techniques of writing are furthermore impoverished when writers are not free
    to modify their instruments. This means that as we support free libraries we
    must also convince our peers to write using software that can be freely
    modified, hacked, personalized, and extended. Documents written in that way
    have a better chance of ending up in open archives.

    4\. We need **more empirical evidence** about the impact of media piracy.

    The political and economic response to piracy is often guided by fear and
    speculation. The work of researchers like [Bodo
    Balazs](http://www.warsystems.hu/) is beginning to connect the business of
    selling books with the practices of reading them. [11] Balazs makes a
    powerful argument, holding that the flourishing of shadow media markets
    indicates a failure in legitimate markets. Research suggests that piracy does
    not decrease, it increases sales, particularly in places which are not well-
    served by traditional publishers and distributors. A more complete, »thick
    description« of global media practice requires more research, both qualitative
    and quantitative.

    5\. **Multiplicity is key**.

    As everyone arrives and the conversation begins in earnest, several
    participants remark on the notable absences around the table. North America,
    Eastern and Western Europe are overrepresented. I remind the group that we
    travel widely and in good company of artists, scholars, activists, and
    philosophers who would stand in support of what [Antonia
    Majaca](http://izk.tugraz.at/people/faculty-staff/visiting-professor-antonia-
    majaca/) has called (after Walter Mignolo) »epistemic disobedience« and who
    need to be invited to this table. [12] I speak up to say, along with [Femke
    Snelting](http://snelting.domainepublic.net/) and [Ted
    Byfield](http://nettime.org/), that whatever is meant by »universal« access to
    knowledge must include a multiplicity of voices – not **the** universal but a
    tangled network of universalisms – international, planetary, intergalactic.

    1.
    2. https://www.google.com/about/company/
    3.
    4.
    5.
    6. In reference to Battles, Matthew. _Library: An Unquiet History._ New York: Norton, 2003.
    7.
    8. Doctorow, Cory, Neil Gaiman, and Amanda Palmer. _Information Doesn’t Want to Be Free: Laws for the Internet Age_. San Francisco: McSweeney’s, 2014.
    9.
    10.
    11. See for example Bodó, B. 2015. “Eastern Europeans in the Pirate Library.” _Visegrad Insight_ 7 (1).
    12.


    [Dennis Yi Tenen](https://schloss-post.com/person/dennis-yi-tenen/), New
    York/USA

    [Dennis Yi Tenen](http://denten.plaintext.in/) is an assistant professor of
    English and Comparative Literature at Columbia University. He is the author of
    the forthcoming »Plain Text: The Poetics of Human-Computer Interaction«.


    Tenen & Foxman
    Book Piracy as Peer Preservation
    2014


    Book Piracy as Peer Preservation

    **Abstract**

    In describing the people, books, and technologies behind one of the
    largest "shadow libraries" in the world, we find a tension between the
    dynamics of sharing and preservation. The paper proceeds to
    contextualize contemporary book piracy historically, challenging
    accepted theories of peer production. Through a close analysis of one
    digital library's system architecture, software and community, we assert
    that the activities cultivated by its members are closer to those of
    conservationists of the public library movement, with the goal of
    preserving rather than mass-distributing the collected material.
    Unlike in common peer-production models, emphasis is placed on the
    expertise of members as digital preservationists, as well as on the
    absorption of existing digital repositories. Additionally, we highlight
    issues that arise from this particular form of distributed architecture
    and community.

    >  
    >
    > *Literature is the secretion of civilization, poetry of the ideal.
    > That is why literature is one of the wants of societies. That is why
    > poetry is a hunger of the soul. That is why poets are the first
    > instructors of the people. That is why Shakespeare must be translated
    > in France. That is why Molière must be translated in England. That is
    > why comments must be made on them. That is why there must be a vast
    > public literary domain. That is why all poets, all philosophers, all
    > thinkers, all the producers of the greatness of the mind must be
    > translated, commented on, published, printed, reprinted, stereotyped,
    > distributed, explained, recited, spread abroad, given to all, given
    > cheaply, given at cost price, given for nothing.*
    > ^[1](#fn-2025-1){#fnref-2025-1}^

    **Introduction**

    The big money (and the bandwidth) in online media is in film, music, and
    software. Text is less profitable for copyright holders; it is cheaper
    to duplicate and easier to share. Consequently, issues surrounding the
    unsanctioned sharing of print material receive less press and scant
    academic attention. The very words, "book piracy," fail to capture the
    spirit of what is essentially an Enlightenment-era project, openly
    embodied in many contemporary "shadow libraries":^[2](#fn-2025-2){#fnref-2025-2}^
    in the words of Victor Hugo, to establish a "vast public
    literary domain." Writers, librarians, and political activists from Hugo
    to Leo Tolstoy and Andrew Carnegie have long argued for unrestricted
    access to information as a form of a public good essential to civic
    engagement. In that sense, people participating in online book exchanges
    enact a role closer to that of a librarian than that of a bootlegger or
    a plagiarist. Whatever the reader's stance on the ethics of copyright
    and copyleft, book piracy should not be dismissed as mere search for
    free entertainment. Under the conditions of "digital
    disruption,"^[3](#fn-2025-3){#fnref-2025-3}^ when the traditional
    institutions of knowledge dissemination---the library, the university,
    the newspaper, and the publishing house---feel themselves challenged and
    transformed by the internet, we can look to online book sharing
    communities for lessons in participatory governance, technological
    innovation, and economic sustainability.

    The primary aims of this paper are ethnographic and descriptive: to
    study and to learn from a library that constitutes one of the world's
    largest digital archives, rivaling *Google Books*, *Hathi Trust*, and
    *Europeana*. In approaching a "thick description" of this archive we
    begin to broach questions of scope and impact. We would like to ask:
    Who? Where? and Why? What kind of people distribute books online? What
    motivates their activity? What technologies enable the sharing of print
    media? And what lessons can we draw from them? Our secondary aim is to
    continue the work of exploring the phenomenon of book sharing more
    widely, placing it in the context of other commons-based peer production
    communities like Project Gutenberg and Wikipedia. The archetypal model
    of peer production is one motivated by altruistic participation. But the
    very history of public libraries is one that combines the impulse to
    share and to protect. To paraphrase Jacques Derrida
    ^[4](#fn-2025-4){#fnref-2025-4}^ writing in "Archive Fever," the archive
    shelters memory just as it shelters itself from memory. We encompass
    this dual dynamic under the term "peer preservation," where the
    logistics of "peers" and of "preservation" can sometimes work at odds to
    one another.

    Academic literature tends to view piracy on the continuum between free
    culture and intellectual property rights. On the one side, an argument
    is made for unrestricted access to information as a prerequisite to
    properly deliberative democracy.^[5](#fn-2025-5){#fnref-2025-5}^ On this
    view, access to knowledge is a form of political power, which must be
    equitably distributed, redressing regional and social imbalances of
    access.^[6](#fn-2025-6){#fnref-2025-6}^ The other side offers pragmatic
    reasoning related to the long-term sustainability of the cultural
    sphere, which, in order to prosper, must provide proper economic
    incentives to content creators.^[7](#fn-2025-7){#fnref-2025-7}^

    It is our contention that grassroots file sharing practices cannot be
    understood solely in terms of access or intellectual property. Our field
    work shows that while some members of the book sharing community
    participate for activist or ideological reasons, others do so as
    collectors, preservationists, curators, or simply readers. Despite
    romantic notions to the contrary, reading is a social and mediated
    activity. The reader encounters texts in conversation, through a variety
    of physical interfaces and within an ecosystem of overlapping
    communities, each projecting their own material contexts, social norms,
    and ideologies. A technician who works in a biology laboratory, for
    example, might publish closed-access peer-review articles by day, as
    part of his work collective, and release terabytes of published material
    by night, in the role of a moderator for an online digital library. Our
    approach, then, is to capture some of the complexity of such an
    ecosystem, particularly in the liminal areas where people, texts, and
    technology converge.

    **Ethics disclaimer**

    Research for this paper was conducted under the aegis of piracyLab, an
    academic collective exploring the impact of technology on the spread of
    knowledge globally.^[8](#fn-2025-8){#fnref-2025-8}^ One of the lab's
    first tasks was to discuss the ethical challenges of collaborative
    research in this space. The conversation involved students, faculty,
    librarians, and informal legal counsel. Neutrality, to the extent that
    it is possible, emerged as one of our foundational principles. To keep
    all channels of communication open, we wanted to avoid bias and to give
    voice to a diversity of stakeholders: from authors, to publishers, to
    distributors, whether sanctioned or not. Following a frank discussion
    and after several iterations, we drafted an ethics charter that
    continues to inform our work today. The charter contains the following
    provisions:

    -- We neither condone nor condemn any forms of information exchange.
    -- We strive to protect our sources and do not retain any identifying
    personal information.
    -- We seek transparency in sharing our methods, data, and findings with
    the widest possible audience.
    -- Credit where credit is due. We believe in documenting attribution
    thoroughly.
    -- We limit our usage of licensed material to the analysis of metadata,
    with results used for non-commercial, nonprofit, educational purposes.
    -- Lab participants commit to abiding by these principles as long as
    they remain active members of the research group.

    In accordance with these principles and following the practice of
    scholars like Balazs Bodo ^[9](#fn-2025-9){#fnref-2025-9}^, Eric Priest
    ^[10](#fn-2025-10){#fnref-2025-10}^, and Ramon Lobato and Leah Tang
    ^[11](#fn-2025-11){#fnref-2025-11}^, we redact the names of file sharing
    services and user names, where such names are not made explicitly public
    elsewhere.

    **Centralization**

    We begin with the intuition that all infrastructure is social to an
    extent. Even private library collections cannot be said to reflect the
    work of a single individual. Collective forces shape furniture, books,
    and the very cognitive scaffolding that enables reading and
    interpretation. Yet, there are significant qualitative differences in
    the systems underpinning private collections, public libraries, and
    unsanctioned peer-to-peer information exchanges like *The Pirate Bay*,
    for example. Given these differences, the recent history of online book
    sharing can be divided roughly into two periods. The first is
    characterized by local, ad-hoc peer-to-peer document exchanges and the
    subsequent growth of centralized content aggregators. Following trends
    in the development of the web as a whole, shadow libraries of the second
    period are characterized by communal governance and distributed
    infrastructure.

    Shadow libraries of the first period resemble a private library in that
    they often emanate from a single authoritative source--a site of
    collection and distribution associated with an individual collector,
    sometimes explicitly. The library of Maxim Moshkov, for example,
    established in 1994 and still thriving at *lib.ru*, is one of the most
    visible collections of this kind. Despite their success, such libraries
    are limited in scale by the means and efforts of a few individuals. Due
    to their centralized architecture they are also susceptible to legal
    challenges from copyright owners and to state intervention.
    Shadow libraries responded to these problems by distributing labor,
    responsibility, and infrastructure, resulting in a system that is more
    robust, more redundant, and more resistant to any single point of
    failure or control.

    The case of *Gigapedia* (later *library.nu*) and its related file
    hosting service *ifile.it* demonstrates the successes and the
    deficiencies of the centralized digital library model. Arguably among
    the largest and most popular virtual libraries online in the period of
    2009-2011, the sites were operated by Irish
    nationals^[12](#fn-2025-12){#fnref-2025-12}^ on domains registered in
    Italy and in the island state of Niue, with servers in Germany and
    Ukraine. At its peak, *library.nu* (LNU) hosted more than
    400,000 books and was purported to make an "estimated turnover of EUR 8
    million (USD 10,602,400) from advertising revenues, donations and sales
    of premium-level accounts," at least according to a press release made
    by the International Publishers Association
    (IPA).^[13](#fn-2025-13){#fnref-2025-13}^

    Its apparent popularity notwithstanding, *LNU/Gigapedia* was supported
    by relatively simple architecture, likely maintained by a lone
    developer-administrator. The site itself consisted of a catalog of
    digital books and related metadata, including title, author, year of
    publication, number of pages, description, category classification, and
    a number of boolean parameters recording whether the file is bookmarked,
    paginated, vectorized, searchable, and has a cover (a sketch of such a
    record follows this paragraph). Although the
    books could be hosted anywhere, many in the catalog resided on the
    servers of a "cyberlocker" service *ifile.it*, affiliated with the main
    site. Not strictly a single-source archive, *LNU/Gigapedia* was
    nevertheless a federated entity, tied to a single site and to a single
    individual. On February 15, 2012, in a Munich court, the IPA, in
    conjunction with a consortium of international publishing houses and the
    help of the German law firm Lausen
    Rechtsanwalte,^[14](#fn-2025-14){#fnref-2025-14}^ served judicial
    cease-and-desist orders naming both sites (*Gigapedia* and *ifile.it*).
    Seventeen injunctions were sought in Ireland, with the consequent
    voluntary shut-down of both domains, which for a brief time redirected
    visitors first to *Google Books* and then to *Blue Latitudes*, a *New
    York Times* bestseller about pirates, for sale on *Amazon*.
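
    To make the catalog structure described above concrete, here is a minimal
    sketch of what one record could look like, written as a Python dictionary.
    The field names follow the paper's list; the exact keys and sample values
    are illustrative assumptions, not LNU's actual schema.

    ```python
    # Hypothetical LNU/Gigapedia-style catalog record; keys and values
    # are illustrative only -- the site's real schema was never published.
    record = {
        "title": "An Example Title",
        "author": "A. Author",
        "year": 2005,
        "pages": 312,
        "description": "Short blurb shown on the catalog page.",
        "category": "Mathematics",
        # Boolean quality flags attached to the scanned file:
        "bookmarked": True,   # has a PDF outline / table of contents
        "paginated": True,    # page numbers match the print edition
        "vectorized": False,  # raster scan, not vector text
        "searchable": True,   # OCR text layer present
        "cover": True,        # includes a cover image
    }
    ```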

    ![Figure 1: Archived version of library.nu, circa 12/10/2010](http://computationalculture.net/wp-content/uploads/2014/11/figure-13.jpg)

    The relatively brief, by library standards, existence of *LNU/Gigapedia*
    underscores a weakness in the federated library model. The site
    flourished as long as it did not attract the ire of the publishing
    industry. A lack of redundancy in the site's administrative structure
    paralleled its lack on the server level. Once the authorities were able
    to establish the identity of the site's operators (via *Paypal*
    receipts, according to a partner at Lausen Rechtsanwalte), the project
    was forced to shut down irrevocably.^[15](#fn-2025-15){#fnref-2025-15}^
    The system's single point of origin proved also to be its single point
    of failure.

    Jens Bammel, Secretary General of the IPA, called the action "an
    important step towards a more transparent, honest and fair trade of
    digital content on the Internet."^[16](#fn-2025-16){#fnref-2025-16}^ The
    rest of the internet mourned the passage of "the greatest, largest and
    the best website for downloading
    eBooks,"^[17](#fn-2025-17){#fnref-2025-17}^ comparing the demise of
    *LNU/Gigapedia* to the burning of the ancient Library of
    Alexandria.^[18](#fn-2025-18){#fnref-2025-18}^ Readers from around the
    world flocked to sites like *Reddit* and *TorrentFreak* to express their
    support and anger. For example, one reader wrote on *TorrentFreak*:

    > I live in Macedonia (the Balkans), a country where the average salary
    > is somewhere around 200eu, and I'm a student, attending a MA degree in
    > communication sci. \[...\] where I come from the public library is not
    > an option. \[...\] Our libraries are so poor, mostly containing 30year
    > or older editions of books that almost never refer to the field of
    > communication or any other contemporary science. My professors never
    > hide that they use sites like library.nu \[...\] Original textbooks
    > \[...\] are copy-printed handouts of some god knows how obtained
    > original \[...\] For a country like Macedonia and the Balkans region
    > generally THIS IS A APOCALYPTIC SCALE DISASTER! I really feel like the
    > dark age is just around the corner these
    > days.^[19](#fn-2025-19){#fnref-2025-19}^

    A similar comment on *Reddit* reads:

    > This is the saddest news of the year...heart-breaking...shocking...I
    > was so attached to this site...I am from a third world country where
    > buying original books is way too expensive if we see currency exchange
    > rates...library.nu was a sea of knowledge for me and I learnt a lot
    > from it \[...\] RIP library.nu...you have ignited several minds with
    > free knowledge.^[20](#fn-2025-20){#fnref-2025-20}^

    Another redditor wrote:

    > This was an invaluable resource for international academics. The
    > catalog of libraries overseas often cannot meet the needs of
    > researchers in fields not specific to the country in which they are
    > located. My doctoral research has taken a significant blow due to this
    > recent shutdown \[...\] Please publishers, if you take away such a
    > valuable resource, realize that you have created a gap that will be
    > filled. This gap can either be filled by you or by
    > us.^[21](#fn-2025-21){#fnref-2025-21}^

    Another concludes:

    > This just makes me want to start archiving everything I can get my
    > hands on.^[22](#fn-2025-22){#fnref-2025-22}^

    These anecdotal reports confirm our own experiences of studying and
    teaching at universities with a diverse audience of international
    students, who often recount a similar personal narrative. *Gigapedia*
    and analogous sites fulfilled an unmet need in the international market,
    redressing global inequities of access to
    information.^[23](#fn-2025-23){#fnref-2025-23}^

    But, being a cyberlocker-based service, *Gigapedia* did not succeed in
    cultivating a meaningful sense of community (even though it supported
    a forum for brief periods of its existence). As Lobato and Tang
    ^[24](#fn-2025-24){#fnref-2025-24}^ write in their paper on
    cyberlocker-based media distribution systems, cyberlockers in general
    "do not foster collaboration and co-creation," taking an "instrumental
    view of content hosted on their
    sites."^[25](#fn-2025-25){#fnref-2025-25}^ Although not strictly a
    cyberlocker, *LNU/Gigapedia* fit the profile of a passive,
    non-transformative site by these criteria. For Lobato and Tang, the
    rapid disappearance of many prominent cyberlocker sites underscores the
    "structural instability" of "fragile file-hosting
    ecology."^[26](#fn-2025-26){#fnref-2025-26}^ In our case, it would be
    more precise to say that cyberlocker architecture highlights rather the
    structural instability of centralized media archives, and not of file
    sharing communities in general. Although bereaved readers were concerned
    about the irrevocable loss of a valuable resource, digital libraries
    that followed built a model of file sharing that is more resilient, more
    transparent, and more participatory than their *LNU/Gigapedia*
    predecessors.

    **Distribution**

    In parallel with the development of *LNU/Gigapedia*, a group of Russian
    enthusiasts were working on a meta-library of sorts, under the name of
    *Aleph*. Records of *Aleph's* activity go back at least as far as 2009.
    Colloquially known as "prospectors," the volunteer members of *Aleph*
    compiled library collections widely available on the gray market, with
    an emphasis on academic and technical literature in Russian and
    English.

    At its inception, *Aleph* aggregated several "home-grown" archives,
    already in wide circulation in universities and on the gray market.
    These included:

    -- *KoLXo3*, a collection of scientific texts that was at one time
    distributed on 20 DVDs, overlapping with early Gigapedia efforts;
    -- *mexmat*, a library collected by the members of Moscow State
    University's Department of Mechanics and Mathematics for internal use,
    originally distributed through private FTP servers;
    -- *Homelab*, *Ihtik*, and *Ingsat* libraries;
    -- the Foreign Fiction archive collected from IRC #***
    2003.09-2011.07.09 and the Internet Library;
    -- the *Great Science Textbooks* collection and, later, over 20 smaller
    miscellaneous archives.^[27](#fn-2025-27){#fnref-2025-27}^

    In retrospect, we can categorize the founding efforts along three
    parallel tracks: 1) as the development of "front-end" server software
    for searching and downloading books, 2) as the organization of an online
    forum for enthusiasts willing to contribute to the project, and 3) the
    collection effort required to expand and maintain the "back-end" archive
    of documents, primarily in .pdf and .djvu
    formats.^[28](#fn-2025-28){#fnref-2025-28}^ "What do we do?" writes one
    of the early volunteers (in 2009) on the topic of "Outcomes, Goals, and
    Scope of the Project." He answers: "we loot sites with ready-made
    collections," "sort the indices in arbitrary normalized formats," "for
    uncatalogued books we build a 'technical index': name of file, size,
    hashcode," "write scripts for database sorting after the initial catalog
    process," "search the database," "use the database for the construction
    of an accessible catalog," "build torrents for the distribution of files
    in the collection."^[29](#fn-2025-29){#fnref-2025-29}^ But, "everything
    begins with the forum," in the words of another founding
    member.^[30](#fn-2025-30){#fnref-2025-30}^ *Aleph*, the very name of the
    group, reflects the aspiration to develop a "platform for the inception
    of subsequent and more user-friendly" libraries--a platform "useful for
    the developer, the reader, and the
    librarian."^[31](#fn-2025-31){#fnref-2025-31}^\
    Aleph's *anatomy*

    ![Figure 2: DVD case cover of "Traum's library" advertising "more than 167,000 books" in fb2 format. Similar DVDs sell for around 1,000 RUB ($25-30 US) on the streets of Moscow.](http://computationalculture.net/wp-content/uploads/2014/11/figure-21.jpg)

    What is *Aleph*? Is it a collection of books? A community? A piece of
    software? What makes a library? When attempting to visualize Aleph's
    constituents (Figure 3), it seems insufficient to point to books alone,
    or to social structure, or to technology in the absence of people and
    content. Taking a systems approach to description, we understand a
    library to comprise an assemblage of books, people, and infrastructure,
    along with their corresponding words and texts, rules and institutions,
    and shelves and servers.^[32](#fn-2025-32){#fnref-2025-32}^ In this
    light, *Aleph*'s advance over *LNU/Gigapedia* lies not in technological
    improvement alone, but in system architecture, at all levels of
    analysis.

    Where the latter relied on proprietary server applications, *Aleph*
    built software that enabled others to mirror and to serve the site in
    its entirety. The server was written by d\* from www.l\*.com (Bet),
    utilizing a codebase common to several similar large book-sharing
    communities. The initial organizational efforts happened on a sub-forum
    of a popular torrent tracker (*RR*). Fifteen founding members reached
    early consensus to start hashing document filenames (using the MD5
    message-digest algorithm), rather than to store files as is, with their
    appropriate .pdf or .mobi extensions.^[33](#fn-2025-33){#fnref-2025-33}^
    Bit-wise hashing was likely chosen as a (computationally) cheap way to
    de-duplicate documents, since two identical files would hash into an
    identical string. Hashing the filenames was also expected to have the
    side effect of discouraging direct (file system-level) browsing of the
    archive.^[34](#fn-2025-34){#fnref-2025-34}^ Instead, the books were
    meant to be accessed through the front-end "librarian" interface, which
    added a layer of meta-data and search tools. In other words, the group
    went out of its way to distribute *Aleph* as a library and not merely as
    a large aggregation of raw files.
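
    The de-duplication logic described above can be sketched in a few lines of
    Python: hash a document's bytes and store the file under the resulting
    digest, so that byte-identical copies collapse into a single entry. The
    function names here are illustrative, not *Aleph*'s actual code.

    ```python
    import hashlib
    import shutil
    from pathlib import Path

    def md5_of(path: Path) -> str:
        """Hex MD5 digest of a file's contents, read in 1 MB chunks."""
        h = hashlib.md5()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def ingest(src: Path, archive_dir: Path) -> Path:
        """Store a document under its content hash instead of its filename.

        Two byte-identical uploads hash to the same name, so the archive
        keeps only one copy; the opaque names also discourage casual
        file-system browsing, steering readers toward the catalog."""
        dest = archive_dir / md5_of(src)
        if not dest.exists():
            shutil.copyfile(src, dest)
        return dest
    ```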

    ![Figure 3: Aleph's anatomy](http://computationalculture.net/wp-content/uploads/2014/10/figure-3.jpg)

    Site volunteers coordinate their efforts asynchronously, by means of a
    simple online forum (using *phpBB* software), open to all interested
    participants. Important issues related to the governance of the
    project--decisions about new hardware upgrades, software design, and
    book acquisition--receive public airing. For example, at one point, the
    site experienced increased traffic from *Google* searches. Some senior
    members welcomed the attention, hoping to attract new volunteers. Others
    worried increased visibility would bring unwanted scrutiny. To resolve
    the issue, a member suggested delisting the website by altering the
    robots.txt configuration file and thereby blocking *Google* crawlers
    (see the sketch after this paragraph).^[35](#fn-2025-35){#fnref-2025-35}^
    Consequently, the site would become invisible to *Google*, while
    remaining freely accessible via a direct link. Early conversations
    on *RR* reflect a consistent concern about the archive's longevity and
    its vulnerability to official sanctions. Rather than following the
    cyberlocker model of distribution, the prospectors decided to release
    canonical versions of the library in chunks, via *BitTorrent*--a
    distributed protocol for file sharing. Another decision was made to
    "store" the library on open trackers (like *The Pirate Bay*), rather
    than tying it to a closed, by-invitation-only community. Although
    *LNU/Gigapedia* was already decentralized to an extent, the archeology
    of the community discussion reveals a multitude of conscious choices
    that work to further atomize *Aleph* and to decentralize it along the
    axes of collection, governance, and engineering.
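
    The delisting change debated above amounts to a two-line directive. A
    plausible robots.txt along the lines the forum discussed -- hedged, since
    the paper does not quote the actual file -- would ask Google's crawler to
    skip the entire site while leaving every page reachable by direct link:

    ```
    # Hypothetical robots.txt: ask Google's crawler to index nothing.
    # Compliant crawlers stop listing the site; direct links keep working.
    User-agent: Googlebot
    Disallow: /
    ```

    Note that robots.txt is purely advisory: it hides a site from compliant
    crawlers but provides no access control whatsoever.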

    By March of 2009 these efforts resulted in approximately 79k volumes or
    around 180gb of data.^[36](#fn-2025-36){#fnref-2025-36}^ By December of
    the same year, the moderators began talking about a terabyte, 2tb in
    2010, and around 7tb by 2011.^[37](#fn-2025-37){#fnref-2025-37}^ By
    2012, the core group of "prospectors" grew to 1,000 registered users.
    *Aleph*'s main mirror received over a million page views per month and
    about 40,000 unique visits per day.^[38](#fn-2025-38){#fnref-2025-38}^
    An online eBook piracy report estimates a combined total of a million
    unique visitors per day for *Aleph* and its
    mirrors.^[39](#fn-2025-39){#fnref-2025-39}^

    As of January 2014, the *Aleph* catalog contains over a million books
    (1,021,000) and over 15 million academic articles, "weighing in" at just
    under 10tb. Most remarkably, one of the world's largest digital
    libraries operates on an annual budget of \$1,900
    US.^[40](#fn-2025-40){#fnref-2025-40}^

    **Vulnerability**

    Distributed architecture gives *Aleph* significant advantages over its
    federated predecessors. Were *Aleph*'s servers to go offline, the archive
    would survive "in the cloud" of the *BitTorrent* network. Should the
    forum (*Bet*) close, another online forum could easily take its place.
    And were the *Aleph* library portal itself to go dark, other mirrors
    would (and usually do) quickly take its place.

    But the decentralized model of content distribution is not without its
    challenges. To understand them, we need to review some of the
    fundamentals behind the *BitTorrent* protocol. At its bare minimum (as
    it was described in the original specification by Bram Cohen) the
    protocol involves a "seeder," someone willing to share something in its
    entirety; a "leecher," someone downloading shared data; and a torrent
    "tracker" that coordinates activity between seeders and
    leechers.^[41](#fn-2025-41){#fnref-2025-41}^

    Imagine a music album sharing agreement between three friends, where,
    initially, only one holds a copy of some album: for example, Nirvana's
    *Nevermind*. Under the centralized model of file sharing, the friend
    holding the album would transmit two copies, one to each friend. The
    power of *BitTorrent* comes from shifting the burden of sharing from a
    single seeder (friend one) to a "swarm" of leechers (friends two and
    three). On this model, the first leecher joining the network (friend
    two, in our case) would begin to get his data from the seeder directly,
    as before. But the second leecher would receive some bits from the
    seeder and some from the first leecher, in a non-linear, asynchronous
    fashion. In our example, we can imagine the remaining friend getting
    some songs from the first friend and some from the second. The friend
    who held the album originally now transmitted something less than two
    full copies of the album, since the other two friends exchanged some
    bits of information between themselves, lessening the load on the
    original album holder.

    When downloading from the *BitTorrent* network, a peer may receive some
    bits from the beginning of the document, some from the middle, and some
    from the end, in parts distributed among the members of the swarm. A
    local application called the "client" is responsible for checking the
    integrity of the pieces and for reassembling them into a coherent
    whole. A torrent "tracker" coordinates the activity between peers,
    keeping track of who has what where. Having received the whole document,
    a leecher can, in turn, become a seeder by sharing all of his downloaded
    bits with the remaining swarm (who only have partial copies). The
    leecher can also take the file offline, choosing not to share at
    all.^[42](#fn-2025-42){#fnref-2025-42}^
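
    The client's verify-and-reassemble step can be sketched in Python. In the
    real protocol the expected per-piece SHA-1 digests ship inside the
    torrent's metainfo file; here they are simply passed in as a list -- an
    illustrative simplification, not a full client.

    ```python
    import hashlib

    def verify_piece(piece: bytes, expected_sha1: bytes) -> bool:
        """Accept a piece from the swarm only if its SHA-1 digest
        matches the one published in the torrent metainfo."""
        return hashlib.sha1(piece).digest() == expected_sha1

    def reassemble(pieces: dict[int, bytes], digests: list[bytes]) -> bytes:
        """Pieces arrive out of order from different peers; check each
        one, then concatenate them by index into the whole document."""
        for i, digest in enumerate(digests):
            if not verify_piece(pieces[i], digest):
                raise ValueError(f"piece {i} failed its integrity check")
        return b"".join(pieces[i] for i in range(len(digests)))
    ```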

    The original protocol left torrent trackers vulnerable to charges of
    aiding and abetting copyright
    infringement.^[43](#fn-2025-43){#fnref-2025-43}^ Early in 2008, Cohen
    extended *BitTorrent* to make use of  "distributed sloppy hash tables"
    (DHT) for storing peer locations without resorting to a central tracker.
    Under these new guidelines, each peer would maintain a small routing
    table pointing to a handful of nearby peer locations. In effect, DHT
    placed additional responsibility on the swarm to become a tracker of
    sorts, however "sloppy" and imperfect. By November of 2009, *The Pirate
    Bay* announced its transition away from tracking entirely, in favor of
    DHT and the related PEX and Magnet Links protocols. At the time, they
    called it "the world's most resilient
    tracking."^[44](#fn-2025-44){#fnref-2025-44}^

    Despite these advancements, the decentralized model of file sharing
    remains susceptible to several chronic ailments. The first follows from
    the fact that ad-hoc distribution networks privilege popular material. A
    file needs to be actively traded to ensure its availability. If nobody
    is actively sharing and downloading Nirvana's *Nevermind*, the album is
    in danger of fading out of the cloud. As one member wrote succinctly on
    *Gimel* forums, "unpopular files are in danger of becoming
    inaccessible."^[45](#fn-2025-45){#fnref-2025-45}^ This dynamic is less
    of a concern for Hollywood blockbusters, but more so for "long tail"
    specialized materials of the sort found in *Aleph*, and indeed, for
    *Aleph* itself as a piece of software distributed through the network.
    *Aleph* combats the problem of fading torrents by renting
    "seedboxes"--servers dedicated to keeping the *Aleph* seeds containing
    the archive alive, preserving the availability of the collection. The
    server in production as of 2014 can serve up to 12tb of data at speeds
    of 100-800 megabits per second. Other file sharing communities address
    the issue by enforcing a minimum upload-to-download ratio on members of
    their network.
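
    A ratio rule of the kind just mentioned is simple to state in code. A
    minimal sketch follows; the 0.5 threshold is an arbitrary example, and
    real communities tune it and add grace periods for new members.

    ```python
    def may_keep_downloading(uploaded_bytes: int, downloaded_bytes: int,
                             min_ratio: float = 0.5) -> bool:
        """Gate further downloading on a member's upload/download ratio,
        obliging members to seed back a share of what they take and so
        keeping unpopular torrents alive."""
        if downloaded_bytes == 0:
            return True  # new members have no history yet
        return uploaded_bytes / downloaded_bytes >= min_ratio
    ```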

    The lack of true anonymity is the second problem intrinsic to the
    *BitTorrent* protocol. Peers sharing bits directly cannot avoid
    exposing their IP addresses (unless these are masked behind virtual
    private networks or TOR relays). A "Sybil" attack becomes possible when
    a malicious peer shares bits in bad faith, with the intent to log IP
    addresses.^[46](#fn-2025-46){#fnref-2025-46}^ Researchers exploring this
    vector of attack were able to harvest more than 91,000 IP addresses in
    less than 24 hours of sharing a popular television
    show.^[47](#fn-2025-47){#fnref-2025-47}^ They report that more than 9%
    of requests made to their servers indicated "modified clients", which
    are likely also to be running experiments in the DHT. Legitimate
    copyright holders and copyright "trolls" alike have used this
    vulnerability to bring lawsuits against individual sharers in
    court.^[48](#fn-2025-48){#fnref-2025-48}^

    These two challenges are further exacerbated in the case of *Aleph*,
    which uses *BitTorrent* to distribute large parts of its own
    architecture. These parts are relatively large--around 40-50GB each.
    Long-term sustainability of *Aleph* as a distributed system therefore
    requires a rare participant: one interested in downloading the archive
    as a whole (as opposed to downloading individual books), one who owns
    the hardware to store and transmit terabytes of data, and one possessing
    the technical expertise to do so safely.

    **Peer preservation**

    In light of the challenges and the effort involved in maintaining the
    archive, one would be remiss to describe *Aleph* merely in terms of book
    piracy, conventionally understood as financial gain, theft, or
    profiteering. The day-to-day labor of the core group is much more
    comprehensible as a mode of commons-based peer production, which is, in
    the canonical definition, work made possible by a "networked
    environment," "radically decentralized, collaborative, and
    non-proprietary; based on sharing resources and outputs among widely
    distributed, loosely connected individuals who cooperate with each other
    without relying on either market signals or managerial
    commands."^[49](#fn-2025-49){#fnref-2025-49}^ *Aleph* answers the
    definition of peer production, resembling in many respects projects like
    *Linux*, *Wikipedia*, and *Project Gutenberg*.

    Yet, *Aleph* is also patently a library. Its work can and should be
    viewed in the broader context of Enlightenment ideals: access to
    literacy, universal education, and the democratization of knowledge. The
    very same ideals gave birth to the public library movement as a whole at
    the turn of the 20th century, in the United States, Europe, and
    Russia.^[50](#fn-2025-50){#fnref-2025-50}^ Parallels between free
    library movements of the early 20th and the early 21st centuries point
    to a social dynamic that runs contrary to the populist spirit of
    commons-based peer production projects, in a mechanism that we describe
    as peer preservation. The idea encompasses conflicting drives both to
    share and to hoard information.

    The roots of many public libraries lie in extensive private collections.
    Bodleian Library at Oxford, for example, traces its origins back to the
    collections of Thomas Cobham, Bishop of Worcester, Humphrey, Duke of
    Gloucester, and to Thomas Bodley, himself an avid book collector.
    Similarly, Poland's Zaluski Library, one of Europe's oldest, owes its
    existence to the collecting efforts of the Zaluski brothers, both
    bishops and bibliophiles.^[51](#fn-2025-51){#fnref-2025-51}^ As we
    mentioned earlier, *Aleph* too began its life as an aggregator of
    collections, including the personal libraries of Moshkov and Traum. When
    books are scarce, private libraries are a sign of material wealth and
    prestige. In the digital realm, where the cost of media acquisition is
    low, collectors amass social capital. *Aleph* extends its collecting
    efforts to *RR*, a much larger, moderated torrent exchange forum and
    tracker. *RR* hosts a number of sub-forums dedicated to the exchange of
    software, film, music, and books (where members of *Aleph* often make an
    appearance). In the exchange economy of symbolic goods, top collectors
    are known by their standing in the community, as measured by their
    seniority, upload and download ratios, and the number of "releases." A
    release is more than just a file: it must not duplicate items in the
    archive and must follow strict community guidelines related to packaging,
    quality, and meta-data accompanying the document. Less experienced
    members of the community treat high status numbers with reverence and
    respect.

    According to a question and answer session with an official *RR*
    representative, *RR* is not particularly friendly to new
    users.^[52](#fn-2025-52){#fnref-2025-52}^ In fact, high barriers to
    entry are exactly what differentiates *RR* from sites like *The Pirate
    Bay* and other unmoderated, open trackers. *RR* prides itself on the
    "quality of its moderation." Unlike *Pirate Bay*, *RR* sees itself as a
    "media library", where content is "organized and properly shelved." To
    produce an acceptable book "release" one needs to create a package of
    files, including well-formatted meta-data (following strict stylistic
    rules) in the header, the name of the book, an image of its cover, the
    year of release, author, genre, publisher, format, language, a required
    description, and screenshots of a sample page. The files must be named
    according to a convention, be "of the same kind" (that is, belong to the
    same collection), and be of the right size. Home-made scans are
    discouraged and governed by a 1,000-word instruction manual. Scanned
    books must have clear attribution to the releaser responsible for
    scanning and processing.
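
    For illustration only, a compliant release might bundle metadata along
    the following lines; the schema is our reconstruction of the fields
    described above, not *RR*'s actual format, and every value is invented.

    ```python
    release = {
        "title": "A Tale of Two Cities",   # illustrative values throughout
        "author": "Charles Dickens",
        "year": 1859,
        "genre": "fiction",
        "publisher": "Chapman & Hall",
        "format": "djvu",                  # must match the files in the package
        "language": "en",
        "description": "Required free-text description of the edition.",
        "cover_image": "cover.jpg",
        "sample_pages": ["page_042.png"],  # screenshots of a sample page
        "scanned_by": "releaser-handle",   # attribution for home-made scans
    }
    ```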

    More than that, guidelines indicate that smaller releases should be
    expected to be "absorbed" into larger ones. In this way, a single novel
    by Charles Dickens can and will be absorbed into his collected works,
    which might further be absorbed into "Novels of 19th Century," and then
    into "Foreign Fiction" (as a hypothetical, but realistic example).
    According to the rules, the collection doing the absorbing must be "at
    least 50% larger than the collection it is absorbing." Releases are
    further governed by a subset of rules particular to the forum
    subsections (e.g. journals, fiction, documentation, service manuals,
    etc.).^[53](#fn-2025-53){#fnref-2025-53}^
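
    The absorption rule reduces to simple arithmetic, sketched below with a
    hypothetical helper; whether size is counted in volumes or bytes is
    left to the moderators.

    ```python
    def may_absorb(absorber_size: int, absorbed_size: int) -> bool:
        """A collection may absorb a smaller one only if it is at least 50%
        larger, i.e. at least 1.5 times the size of what it absorbs."""
        return absorber_size >= 1.5 * absorbed_size

    # A 90-volume collection may absorb a 50-volume Dickens set (90 >= 75),
    # but a 60-volume collection may not (60 < 75).
    print(may_absorb(90, 50), may_absorb(60, 50))  # True False
    ```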

    All this is to say that although barriers to acquisition are low, the
    barriers to active participation are high and continually *increase with
    time*. The absorption of smaller collections by larger ones favors the
    veterans. Rules and regulations grow in complexity with the maturation
    veterans. Rules and regulations grow in complexity with the maturation
    of the community, further widening the rift between senior and junior
    peers. We are then witnessing something like the institutionalization of
    a professional "librarian" class, whose task it is to protect the
    collection from the encroachment of low-quality contributors. Rather
    than serving the public, a librarian's primary commitment is to the
    preservation of the archive as a whole. Thus what starts as a true peer
    production project, may, in the end, grow to erect solid walls to
    peering. This dynamic is already embodied in the history of public
    libraries, where amateur librarians of the late 19th century eventually
    gave way to their modern degree-holding counterparts. The conflicting
    logics of access and preservation may lead digital library
    development along a similar path.

    The expression of this dual push and pull dynamic in the observed
    practices of peer preservation communities conforms to Derrida's insight
    into the nature of the archive. Just as the walls of a library serve to
    shelter the documents within, they also isolate the collection from the
    public at large. Access and preservation, in that sense, subsist at
    opposite and sometimes mutually exclusive ends of the sharing spectrum.
    And it may be that this dynamic is common to all peer production
    communities, like *Wikipedia*, which, according to recent studies, saw a
    decline in new contributors due to increasingly strict rule
    enforcement.^[54](#fn-2025-54){#fnref-2025-54}^ However, our results are
    merely speculative at the moment. The analysis of a large dataset we
    have collected as a corollary to our fieldwork online may offer further
    evidence for these initial intuitions. In the meantime, it is not enough
    to conclude that brick-and-mortar libraries should learn from these
    emergent, distributed architectures of peer preservation. If the future
    of *Aleph* is leading to increased institutionalization, the community
    may soon face the fate embodied by its own procedures: the absorption of
    smaller, wonderfully messy, ascending collections into larger, more
    established, and more rigid social structures.

    **Biographies**

    Dennis Tenen teaches in the fields of new media and digital humanities
    at Columbia University, Department of English and Comparative
    Literature. His research often happens at the intersection of people,
    texts, and technology. He is currently writing a book on minimal
    computing, called *Plain Text*.

    Maxwell Foxman is an adjunct professor at Marymount Manhattan College
    and a PhD candidate in Communications at Columbia University, where he
    studies the use and adoption of digital media into everyday life. He has
    written on failed social media and on gamification in electoral
    politics, newsrooms, and mobile media.

    **References**

    Allen, Elizabeth Akers, and James Phinney Baxter. *Dedicatory Exercises
    of the Baxter Building*. Auburn, Me: Lakeside Press, 1889.

    Anonymous author. "Library.nu: Modern era's 'Destruction of the Library
    of Alexandria.'" *Breaking Culture*. Last edited on February 16, 2012
    and archived on January 14, 2014.
    [http://breakingculture.tumblr.com/post/17697325088/gigapedia-rip](https://web.archive.org/web/20140113135846/http://breakingculture.tumblr.com/post/17697325088/gigapedia-rip).

    Benkler, Yochai. *The Wealth of Networks: How Social Production
    Transforms Markets and Freedom*. New Haven: Yale University Press, 2006.

    Bittorrent.org. "The BitTorrent Protocol Specification." Last modified
    October 20, 2012 and archived on June 13, 2014.
    [http://www.bittorrent.org/beps/bep\_0003.html](http://web.archive.org/web/20140613190300/http://www.bittorrent.org/beps/bep_0003.html).

    Bodo, Balazs. "Set the Fox to Watch the Geese: Voluntary IP Regimes in
    Piratical File-Sharing Communities." In *Piracy: Leakages from
    Modernity*. Litwin Books, LLC, 2012.

    Bowker, Geoffrey C., and Susan Leigh Star. *Sorting Things Out:
    Classification and Its Consequences*. The MIT Press, 1999.

    Calandrillo, Steve P. "An Economic Analysis of Property Rights in
    Information: Justifications and Problems of Exclusive Rights, Incentives
    to Generate Information, and the Alternative of a Government-Run Reward
    System." *Fordham Intellectual Property, Media & Entertainment Law
    Journal* 9 (1998): 301.

    Calhoun, Craig. "Information Technology and the International Public
    Sphere." *In Shaping the Network Society: the New Role of Civil Society
    in Cyberspace*, edited by Douglas Schuler and Peter Day, 229--52. MIT
    Press, 2004.

    Castells, Manuel. "Communication, Power and Counter-Power in the Network
    Society." *International Journal of Communication* 1 (2007): 238--66.

    Cholez, Thibault, Isabelle Chrisment, and Olivier Festor. "Evaluation of
    Sybil Attacks Protection Schemes in KAD." In *Scalability of Networks
    and Services*, edited by Ramin Sadre and Aiko Pras, 70--82. Lecture
    Notes in Computer Science 5637. Springer Berlin Heidelberg, 2009.

    Cohen, Bram. *Incentives Build Robustness in BitTorrent*, May 22, 2003.
    [http://www.bittorrent.org/bittorrentecon.pdf](http://www.bittorrent.org/bittorrentecon.pdf).

    Cohen, Julie. "Creativity and Culture in Copyright Theory." *U.C. Davis
    Law Review* 40 (2006): 1151.

    Day, Brian R. *In Defense of Copyright: Creativity, Record Labels, and
    the Future of Music*. SSRN Scholarly Paper. Rochester, NY: Social
    Science Research Network, May 2010.

    Derrida, Jacques. "Archive Fever: a Freudian Impression." *Diacritics*
    25, no. 2 (July 1995): 9--63.

    DiMaggio, Paul, Eszter Hargittai, W. Russell Neuman, and John P.
    Robinson. "Social Implications of the Internet." *Annual Review of
    Sociology* 27 (January 2001): 307--36.

    Edwards, Paul N. "Infrastructure and Modernity: Force, Time, and Social
    Organization in the History of Sociotechnical Systems." In *Modernity
    and Technology*, 185--225, 2003.

    ---------. "Y2K: Millennial Reflections on Computers as Infrastructure."
    *History and Technology* 15, no. 1-2 (1998): 7--29.

    Edwards, Paul N., Geoffrey C. Bowker, Steven J. Jackson, and Robin
    Williams. "Introduction: an Agenda for Infrastructure Studies." *Journal
    of the Association for Information Systems* 10, no. 5 (2009): 364--74.

    Ernesto. "US P2P Lawsuit Shows Signs of a 'Pirate Honeypot'."
    Technology. *TorrentFreak*. Last edited in June 2011 and archived on
    January 14, 2014.
    [http://torrentfreak.com/u-s-p2p-lawsuit-shows-signs-of-a-pirate-honeypot-110601/](https://web.archive.org/web/20140114200326/http://torrentfreak.com/u-s-p2p-lawsuit-shows-signs-of-a-pirate-honeypot-110601/).

    Gauravaram, Praveen, and Lars R. Knudsen. "Cryptographic Hash
    Functions." In *Handbook of Information and Communication Security*,
    edited by Peter Stavroulakis and Mark Stamp, 59--79. Springer Berlin
    Heidelberg, 2010.

    Greenwood, Thomas. *Public Libraries: a History of the Movement and a
    Manual for the Organization and Management of Rate Supported Libraries*.
    Simpkin, Marshall, Hamilton, Kent, 1890.

    Halfaker, Aaron, R. Stuart Geiger, Jonathan T. Morgan, and John Riedl.
    "The Rise and Decline of an Open Collaboration System: How Wikipedia's
    Reaction to Popularity Is Causing Its Decline." *American Behavioral
    Scientist*, December 2012, 0002764212469365.

    Harris, Michael H. *History of Libraries of the Western World*. Fourth
    Edition. Lanham, Md.; London: Scarecrow Press, 1999.

    Hughes, Justin. "Philosophy of Intellectual Property, the." *Georgetown
    Law Journal* 77 (1988): 287.
    http://heinonline.org/HOL/Page?handle=hein.journals/glj77&id=309&div=&collection=journals.

    Hugo, Victor. *Works of Victor Hugo*. New York: Nottingham Society,
    1907.

    International Publishers Association. "Publishers Strike Major Blow
    against Internet Piracy." Last modified February 15, 2012.
    [http://www.internationalpublishers.org/ipa-press-releases/286-publishers-strike-major-blow-against-internet-piracy](http://www.internationalpublishers.org/ipa-press-releases/286-publishers-strike-major-blow-against-internet-piracy).

    Johnson, Simon for Reuters.com. "Pirate Bay Copyright Test Case Begins
    in Sweden." Last edited on February 16, 2009 and archived on August 4,
    2014.
    [http://uk.reuters.com/article/2009/02/16/tech-us-sweden-piratebay-idUKTRE51F3K120090216](http://web.archive.org/web/20140804000829/http://uk.reuters.com/article/2009/02/16/tech-us-sweden-piratebay-idUKTRE51F3K120090216).

    Karaganis, Joe, ed. *Media Piracy in Emerging Economies*. Social Science
    Research Network, March 2011.
    [http://piracy.americanassembly.org/the-report/](http://piracy.americanassembly.org/the-report/).

    Landes, William M., and Richard A. Posner. *The Economic Structure of
    Intellectual Property Law*. Harvard University Press, 2003.

    Larkin, Brian. "Degraded Images, Distorted Sounds: Nigerian Video and
    the Infrastructure of Piracy." *Public Culture* 16, no. 2 (2004):
    289--314.

    ---------. "Pirate Infrastructures." In *Structures of Participation in
    Digital Culture*, edited by Joe Karaganis, 74--87. New York: SSRC, 2008.

    Lessig, Lawrence. *Free Culture: How Big Media Uses Technology and the
    Law to Lock Down Culture and Control Creativity*. The Penguin Press,
    2004.

    Liang, Lawrence. "Shadow Libraries E-Flux," last edited 2012 and
    archived on October 14, 2014.
    http://www.e-flux.com/journal/shadow-libraries/.

    Lobato, Ramon, and Leah Tang. "The Cyberlocker Gold Rush: Tracking the
    Rise of File-Hosting Sites as Media Distribution Platforms."
    *International Journal of Cultural Studies*, November 2013.

    Losowsky, Andrew. "Book Downloading Site Targeted in Injunctions
    Requested by 17 Publishers." *Huffington Post*, last edited on February
    2012 and archived on October 14, 2014.
    [http://www.huffingtonpost.com/2012/02/15/librarynu-book-downloading-injunction\_n\_1280383.html](“http://www.huffingtonpost.com/2012/02/15/librarynu-book-downloading-injunction_n_1280383.html”).

    Papacharissi, Zizi. "The Virtual Sphere: The Internet as a Public
    Sphere." *New Media & Society* 4, no. 1 (February 2002): 9--27.

    Priest, Eric. "The Future of Music and Film Piracy in China." *Berkeley
    Technology Law Journal* 21 (2006): 795.

    Salmon, Ricardo, Jimmy Tran, and Abdolreza Abhari. "Simulating a File
    Sharing System Based on BitTorrent." In *Proceedings of the 2008 Spring
    Simulation Multiconference*, 21:1--5. SpringSim '08. San Diego, CA,
    USA: Society for Computer Simulation International, 2008.

    Shirky, Clay. *Here Comes Everybody: the Power of Organizing Without
    Organizations*. New York: Penguin Press, 2008.

    Star, Susan Leigh, and Geoffrey C. Bowker. "How to Infrastructure." In
    *Handbook of New Media: Social Shaping and Social Consequences of ICTs*,
    Updated Student Edition., 230--46. SAGE Publications Ltd, 2010.

    Stuart, Mary. "Creating a National Library for the Workers' State: the
    Public Library in Petrograd and the Rumiantsev Library Under Bolshevik
    Rule." *The Slavonic and East European Review* 72, no. 2 (April 1994):
    233--58.

    ---------. "'The Ennobling Illusion': the Public Library Movement in
    Late Imperial Russia." *The Slavonic and East European Review* 76, no. 3
    (July 1998): 401--40.

    ---------. "The Evolution of Librarianship in Russia: the Librarians of
    the Imperial Public Library, 1808-1868." *The Library Quarterly* 64, no.
    1 (January 1994): 1--29.

    Timpanaro, J.P., T. Cholez, I Chrisment, and O. Festor. "BitTorrent's
    Mainline DHT Security Assessment." In *2011 4th IFIP International
    Conference on New Technologies, Mobility and Security (NTMS)*, 1--5,
    2011.

    TPB. "Worlds most resiliant tracking." Last edited November 17, 2009 and
    archived on August 4, 2014.
    [thepiratebay.se/blog/175](http://web.archive.org/web/20140804015645/http://thepiratebay.se/blog/175).

    Vik. "Gigapedia: The greatest, largest and the best website for
    downloading eBooks." Emotionallyspeaking.com. Last edited on August 10,
    2009 and archived on July 15, 2012.
    [http://archive.is/g205"\>http://vikas-gupta.in/2009/08/10/gigapedia-the-greatest-largest-and-the-best-website-for-downloading-free-e-books/](“http://archive.is/g205”).

     

     

     

     

     

     

     

     


    1. [Victor Hugo, *Works of Victor Hugo* (New York: Nottingham Society,
    1907), 230. [[↩](#fnref-2025-1)]{.footnotereverse}]{#fn-2025-1}
    2. [Lawrence Liang, "Shadow Libraries E-Flux," 2012.
    [[↩](#fnref-2025-2)]{.footnotereverse}]{#fn-2025-2}
    3. [McKendrick, Joseph. *Libraries: At the Epicenter of the Digital
    Disruption, The Library Resource Guide Benchmark Study on 2013/14
    Library Spending Plans* (Unisphere Media, 2013).
    [[↩](#fnref-2025-3)]{.footnotereverse}]{#fn-2025-3}
    4. ["Archive Fever: a Freudian Impression," *Diacritics* 25, no. 2
    (July 1995): 9--63.
    [[↩](#fnref-2025-4)]{.footnotereverse}]{#fn-2025-4}
    5. [Yochai Benkler, *The Wealth of Networks: How Social Production
    Transforms Markets and Freedom* (New Haven: Yale University Press,
    2006), 92; Paul DiMaggio et al., "Social Implications of the
    Internet," *Annual Review of Sociology* 27 (January 2001): 320; Zizi
    Papacharissi "The Virtual Sphere the Internet as a Public Sphere,"
    *New Media & Society* 4.1 (2002): 9--27; Craig Calhoun "Information
    Technology and the International Public Sphere," in *Shaping the
    Network Society: the New Role of Civil Society in Cyberspace*, ed.
    Douglas Schuler and Peter Day (MIT Press, 2004), 229--52.
    [[↩](#fnref-2025-5)]{.footnotereverse}]{#fn-2025-5}
    6. [Benkler, *The Wealth of Networks*, 442; Manuel Castells,
    "Communication, Power and Counter-Power in the Network Society,"
    *International Journal of Communication* (2007): 251; Lawrence
    Lessig, *Free Culture: How Big Media Uses Technology and the Law to
    Lock Down Culture and Control Creativity* (The Penguin Press, 2004);
    Clay Shirky, *Here Comes Everybody: the Power of Organizing Without
    Organizations* (New York: Penguin Press, 2008), 153.
    [[↩](#fnref-2025-6)]{.footnotereverse}]{#fn-2025-6}
    7. [Brian R. Day "In Defense of Copyright: Creativity, Record Labels,
    and the Future of Music," *Seton Hall Journal of Sports and
    Entertainment Law*, 21.1 (2011); William M. Landes and Richard A.
    Posner, *The Economic Structure of Intellectual Property Law*
    (Harvard University Press, 2003). For further discussion see
    Steve P. Calandrillo, "Economic Analysis of Property Rights in
    Information: Justifications and Problems of Exclusive Rights,
    Incentives to Generate Information, and the Alternative of a
    Government-Run Reward System" *Fordham Intellectual Property, Media
    & Entertainment Law Journal* 9 (1998): 306; Julie Cohen, "Creativity
    and Culture in Copyright Theory," *U.C. Davis Law Review* 40 (2006):
    1151; Justin Hughes "Philosophy of Intellectual Property,"
    *Georgetown Law Journal* 77 (1988): 303.
    [[↩](#fnref-2025-7)]{.footnotereverse}]{#fn-2025-7}
    8. [[piracylab.org](“http://piracylab.org”).
    [[↩](#fnref-2025-8)]{.footnotereverse}]{#fn-2025-8}
    9. ["Set the Fox to Watch the Geese: Voluntary IP Regimes in Piratical
    File-Sharing Communities, in *Piracy: Leakages from Modernity*
    (Litwin Books, LLC, 2012).
    [[↩](#fnref-2025-9)]{.footnotereverse}]{#fn-2025-9}
    10. ["The Future of Music and Film Piracy in China," *Berkeley
    Technology Law Journal* 21 (2006): 795.
    [[↩](#fnref-2025-10)]{.footnotereverse}]{#fn-2025-10}
    11. ["The Cyberlocker Gold Rush: Tracking the Rise of File-Hosting Sites
    as Media Distribution Platforms," *International Journal of Cultural
    Studies*, (2013).
    [[↩](#fnref-2025-11)]{.footnotereverse}]{#fn-2025-11}
    12. [The injunctions name I\* and F\* N\* (also known as Smiley).
    [[↩](#fnref-2025-12)]{.footnotereverse}]{#fn-2025-12}
    13. ["Publishers Strike Major Blow against Internet Piracy" last
    modified February 15, 2012 and archived on January 10, 2014,
    [http://www.internationalpublishers.org/ipa-press-releases/286-publishers-strike-major-blow-against-internet-piracy](“http://web.archive.org/web/20140110160254/http://www.internationalpublishers.org/ipa-press-releases/286-publishers-strike-major-blow-against-internet-piracy”).
    [[↩](#fnref-2025-13)]{.footnotereverse}]{#fn-2025-13}
    14. [Including the German Publishers and Booksellers Association,
    Cambridge University Press, Georg Thieme, Harper Collins, Hogrefe,
    Macmillan Publishers Ltd., Cengage Learning, Elsevier, John Wiley &
    Sons, The McGraw-Hill Companies, Pearson Education Ltd., Pearson
    Education Inc., Oxford University Press, Springer, Taylor & Francis,
    C.H. Beck as well as Walter De Gruyter. The legal proceedings are
    also supported by the Association of American Publishers (AAP), the
    Dutch Publishers Association (NUV), the Italian Publishers
    Association (AIE) and the International Association of Scientific
    Technical and Medical Publishers (STM).
    [[↩](#fnref-2025-14)]{.footnotereverse}]{#fn-2025-14}
    15. [Andrew Losowsky, "Book Downloading Site Targeted in Injunctions
    Requested by 17 Publishers," *Huffington Post*, accessed on
    September 1, 2014,
    [http://www.huffingtonpost.com/2012/02/15/librarynu-book-downloading-injunction\_n\_1280383.html](“http://www.huffingtonpost.com/2012/02/15/librarynu-book-downloading-injunction_n_1280383.html”).
    [[↩](#fnref-2025-15)]{.footnotereverse}]{#fn-2025-15}
    16. [International Publishers Association.
    [[↩](#fnref-2025-16)]{.footnotereverse}]{#fn-2025-16}
    17. [Vik, "Gigapedia: The greatest, largest and the best website for
    downloading eBooks," Emotionallyspeaking.com, last edited on August
    10, 2009 and archived on July 15, 2012,
    [http://archive.is/g205"\>http://vikas-gupta.in/2009/08/10/gigapedia-the-greatest-largest-and-the-best-website-for-downloading-free-e-books/](“http://archive.is/g205”).
    [[↩](#fnref-2025-17)]{.footnotereverse}]{#fn-2025-17}
    18. [Anonymous author, "Library.nu: Modern era's 'Destruction of the
    Library of Alexandria,'" *Breaking Culture* (on tublr.com), last
    edited on February 16, 2012 and archived on January 14, 2014,
    [http://breakingculture.tumblr.com/post/17697325088/gigapedia-rip](“https://web.archive.org/web/20140113135846/http://breakingculture.tumblr.com/post/17697325088/gigapedia-rip”).
    [[↩](#fnref-2025-18)]{.footnotereverse}]{#fn-2025-18}
    19. [[http://torrentfreak.com/book-publishers-shut-down-library-nu-and-ifile-it-120215](“https://web.archive.org/web/20140110050710/http://torrentfreak.com/book-publishers-shut-down-library-nu-and-ifile-it-120215”)
    archived on January 10, 2014.
    [[↩](#fnref-2025-19)]{.footnotereverse}]{#fn-2025-19}
    20. [[http://www.reddit.com/r/trackers/comments/ppfwc/librarynu\_admin\_the\_website\_is\_shutting\_down\_due](“https://web.archive.org/web/20140110050450/http://www.reddit.com/r/trackers/comments/ppfwc/librarynu_admin_the_website_is_shutting_down_due”)
    archived on January 10, 2014.
    [[↩](#fnref-2025-20)]{.footnotereverse}]{#fn-2025-20}
    21. [[http://www.reddit.com/r/trackers/comments/ppfwc/librarynu\_admin\_the\_website\_is\_shutting\_down\_due](“https://web.archive.org/web/20140110050450/http://www.reddit.com/r/trackers/comments/ppfwc/librarynu_admin_the_website_is_shutting_down_due”)
    archived on January 10, 2014.
    [[↩](#fnref-2025-21)]{.footnotereverse}]{#fn-2025-21}
    22. [[www.reddit.com/r/trackers/comments/ppfwc/librarynu\_admin\_the\_website\_is\_shutting\_down\_due](“https://web.archive.org/web/20140110050450/http://www.reddit.com/r/trackers/comments/ppfwc/librarynu_admin_the_website_is_shutting_down_due”)
    archived on January 10, 2014.
    [[↩](#fnref-2025-22)]{.footnotereverse}]{#fn-2025-22}
    23. [This point is made at length in the report on media piracy in
    emerging economies, released by the American Assembly in 2011. See
    Joe Karaganis, ed. *Media Piracy in Emerging Economies* (Social
    Science Research Network, March 2011),
    [http://piracy.americanassembly.org/the-report/](“http://piracy.americanassembly.org/the-report/”), I.
    [[↩](#fnref-2025-23)]{.footnotereverse}]{#fn-2025-23}
    24. [Lobato and Tang, "The Cyberlocker Gold Rush."
    [[↩](#fnref-2025-24)]{.footnotereverse}]{#fn-2025-24}
    25. [Lobato and Tang, "The Cyberlocker Gold Rush," 9.
    [[↩](#fnref-2025-25)]{.footnotereverse}]{#fn-2025-25}
    26. [Lobato and Tang, "The Cyberlocker Gold Rush," 7.
    [[↩](#fnref-2025-26)]{.footnotereverse}]{#fn-2025-26}
    27. [GIMEL/viewtopic.php?f=8&t=169; GIMEL/viewtopic.php?f=17&t=299.
    [[↩](#fnref-2025-27)]{.footnotereverse}]{#fn-2025-27}
    28. [GIMEL/viewtopic.php?f=17&t=299.
    [[↩](#fnref-2025-28)]{.footnotereverse}]{#fn-2025-28}
    29. [GIMEL/viewtopic.php?f=8&t=169. All quotes translated from Russian
    by the authors, unless otherwise noted.
    [[↩](#fnref-2025-29)]{.footnotereverse}]{#fn-2025-29}
    30. [GIMEL/viewtopic.php?f=8&t=6999&p=41911.
    [[↩](#fnref-2025-30)]{.footnotereverse}]{#fn-2025-30}
    31. [GIMEL/viewtopic.php?f=8&t=757.
    [[↩](#fnref-2025-31)]{.footnotereverse}]{#fn-2025-31}
    32. [In this sense, we see our work as complementary to but not
    exhausted by infrastructure studies. See Geoffrey C. Bowker and
    Susan Leigh Star, *Sorting Things Out: Classification and Its
    Consequences* (The MIT Press, 1999); Paul N. Edwards, "Y2K:
    Millennial Reflections on Computers as Infrastructure," *History and
    Technology* 15.1-2 (1998): 7--29; Paul N. Edwards, "Infrastructure
    and Modernity: Force, Time, and Social Organization in the History
    of Sociotechnical Systems," in *Modernity and Technology*, 2003,
    185--225; Paul N. Edwards et al., "Introduction: an Agenda for
    Infrastructure Studies," *Journal of the Association for Information
    Systems* 10.5 (2009): 364--74; Brian Larkin "Degraded Images,
    Distorted Sounds: Nigerian Video and the Infrastructure of Piracy,"
    *Public Culture* 16.2 (2004): 289--314; Brian Larkin "Pirate
    Infrastructures," in *Structures of Participation in Digital
    Culture*, ed. Joe Karaganis (New York: SSRC, 2008), 74--87; Susan
    Leigh Star and Geoffrey C. Bowker, "How to Infrastructure," in
    *Handbook of New Media: Social Shaping and Social Consequences of
    ICTs*, (SAGE Publications Ltd, 2010), 230--46.
    [[↩](#fnref-2025-32)]{.footnotereverse}]{#fn-2025-32}
    33. [For information on cryptographic hashing see Praveen Gauravaram and
    Lars R. Knudsen, "Cryptographic Hash Functions," in *Handbook of
    Information and Communication Security*, ed. Peter Stavroulakis and
    Mark Stamp (Springer Berlin Heidelberg, 2010), 59--79.
    [[↩](#fnref-2025-33)]{.footnotereverse}]{#fn-2025-33}
    34. [See GIMEL/viewtopic.php?f=8&t=55kj and
    GIMEL/viewtopic.php?f=8&t=18&sid=936.
    [[↩](#fnref-2025-34)]{.footnotereverse}]{#fn-2025-34}
    35. [GIMEL/viewtopic.php?f=8&t=714.
    [[↩](#fnref-2025-35)]{.footnotereverse}]{#fn-2025-35}
    36. [GIMEL/viewtopic.php?f=8&t=47.
    [[↩](#fnref-2025-36)]{.footnotereverse}]{#fn-2025-36}
    37. [GIMEL/viewtopic.php?f=17&t=175&hilit=RR&start=25.
    [[↩](#fnref-2025-37)]{.footnotereverse}]{#fn-2025-37}
    38. [GIMEL/viewtopic.php?f=17&t=104&start=450.
    [[↩](#fnref-2025-38)]{.footnotereverse}]{#fn-2025-38}
    39. [URL redacted; These numbers should be taken as a very rough
    estimate because 1) we do not consider Alexa to be a reliable source
    for web traffic and 2) some of the other figures cited in the report
    are suspicious. For example, *Aleph* has a relatively small archive
    of foreign fiction, at odds with the reported figure of 800,000
    volumes. [[↩](#fnref-2025-39)]{.footnotereverse}]{#fn-2025-39}
    40. [GIMEL/viewtopic.php?f=17&t=7061.
    [[↩](#fnref-2025-40)]{.footnotereverse}]{#fn-2025-40}
    41. ["The BitTorrent Protocol Specification," last modified October 20,
    2012 and archived on June 13, 2014,
    [http://www.bittorrent.org/beps/bep\_0003.html](“http://web.archive.org/web/20140613190300/http://www.bittorrent.org/beps/bep_0003.html”).
    [[↩](#fnref-2025-41)]{.footnotereverse}]{#fn-2025-41}
    42. [For more information on BitTorrent, see Bram Cohen, *Incentives
    Build Robustness in BitTorrent*, last modified on May 22, 2003,
    [http://www.bittorrent.org/bittorrentecon.pdf](“http://www.bittorrent.org/bittorrentecon.pdf”);
    Ricardo Salmon, Jimmy Tran, and Abdolreza Abhari, "Simulating a File
    Sharing System Based on BitTorrent," in *Proceedings of the 2008
    Spring Simulation Multiconference*, SpringSim '08 (San Diego, CA,
    USA: Society for Computer Simulation International, 2008), 21:1--5.
    [[↩](#fnref-2025-42)]{.footnotereverse}]{#fn-2025-42}
    43. [In 2008 *The Pirate Bay* co-founders Peter Sunde, Gottfrid
    Svartholm Warg, Fredrik Neij, and Carl Lundström were charged
    with "conspiracy to break copyright related offenses" in Sweden. See
    Simon Johnson for Reuters.com, "Pirate Bay Copyright Test Case
    Begins in Sweden," last edited on February 16, 2009 and archived on
    August 4, 2014,
    [http://uk.reuters.com/article/2009/02/16/tech-us-sweden-piratebay-idUKTRE51F3K120090216](http://web.archive.org/web/20140804000829/http://uk.reuters.com/article/2009/02/16/tech-us-sweden-piratebay-idUKTRE51F3K120090216).
    [[↩](#fnref-2025-43)]{.footnotereverse}]{#fn-2025-43}
    44. [TPB, "Worlds most resiliant tracking," last edited November 17,
    2009 and archived on August 4, 2014,
    [thepiratebay.se/blog/175](“http://web.archive.org/web/20140804015645/http://thepiratebay.se/blog/175”).
    [[↩](#fnref-2025-44)]{.footnotereverse}]{#fn-2025-44}
    45. [GIMEL/viewtopic.php?f=8&t=6999.
    [[↩](#fnref-2025-45)]{.footnotereverse}]{#fn-2025-45}
    46. [Thibault Cholez, Isabelle Chrisment, and Olivier Festor, "Evaluation
    of Sybil Attacks Protection Schemes in KAD," in *Scalability of
    Networks and Services*, ed. Ramin Sadre and Aiko Pras, Lecture Notes
    in Computer Science 5637 (Springer Berlin Heidelberg, 2009), 70--82.
    [[↩](#fnref-2025-46)]{.footnotereverse}]{#fn-2025-46}
    47. [J.P. Timpanaro et al., "BitTorrent's Mainline DHT Security
    Assessment," in *2011 4th IFIP International Conference on New
    Technologies, Mobility and Security (NTMS)*, 2011, 1--5.
    [[↩](#fnref-2025-47)]{.footnotereverse}]{#fn-2025-47}
    48. [Ernesto, "US P2P Lawsuit Shows Signs of a 'Pirate Honeypot',"
    Technology, *TorrentFreak*, last edited in June 2011 and archived on
    January 14, 2014,
    [http://torrentfreak.com/u-s-p2p-lawsuit-shows-signs-of-a-pirate-honeypot-110601/](“https://web.archive.org/web/20140114200326/http://torrentfreak.com/u-s-p2p-lawsuit-shows-signs-of-a-pirate-honeypot-110601/”).
    [[↩](#fnref-2025-48)]{.footnotereverse}]{#fn-2025-48}
    49. [Benkler, *The Wealth of Networks*, 60.
    [[↩](#fnref-2025-49)]{.footnotereverse}]{#fn-2025-49}
    50. [On the free and public library movement in England and the United
    States see Thomas Greenwood, *Public Libraries: a History of the
    Movement and a Manual for the Organization and Management of Rate
    Supported Libraries* (Simpkin, Marshall, Hamilton, Kent, 1890);
    Elizabeth Akers Allen and James Phinney Baxter, *Dedicatory
    Exercises of the Baxter Building* (Auburn, Me: Lakeside Press,
    1889). To read more about the history of free and public library
    movements in Russia see Mary Stuart, "The Evolution of Librarianship
    in Russia: the Librarians of the Imperial Public Library,
    1808-1868," *The Library Quarterly* 64.1 (January 1994): 1--29; Mary
    Stuart, "Creating a National Library for the Workers' State: the
    Public Library in Petrograd and the Rumiantsev Library Under
    Bolshevik Rule," *The Slavonic and East European Review* 72.2 (April
    1994): 233--58; Mary Stuart "The Ennobling Illusion: the Public
    Library Movement in Late Imperial Russia," *The Slavonic and East
    European Review* 76.3 (July 1998): 401--40.
    [[↩](#fnref-2025-50)]{.footnotereverse}]{#fn-2025-50}
    51. [Michael H. Harris, *History of Libraries of the Western World*,
    (London: Scarecrow Press, 1999), 136.
    [[↩](#fnref-2025-51)]{.footnotereverse}]{#fn-2025-51}
    52. [http://s\*.d\*.ru/comments/508985/.
    [[↩](#fnref-2025-52)]{.footnotereverse}]{#fn-2025-52}
    53. [RR/forum/viewtopic.php?t=1590026.
    [[↩](#fnref-2025-53)]{.footnotereverse}]{#fn-2025-53}
    54. [Aaron Halfaker et al."The Rise and Decline of an Open Collaboration
    System: How Wikipedia's Reaction to Popularity Is Causing Its
    Decline," *American Behavioral Scientist*, December 2012.
    [[↩](#fnref-2025-54)]{.footnotereverse}]{#fn-2025-54}



    Article printed from Computational Culture:
    **http://computationalculture.net**

    URL to article:
    **http://computationalculture.net/book-piracy-as-peer-preservation/**


    Copyright © 2012 Computational Culture. All rights reserved.

    Thylstrup
    The Politics of Mass Digitization
    2019


    The Politics of Mass Digitization

    Nanna Bonde Thylstrup

    The MIT Press

    Cambridge, Massachusetts

    London, England

    # Table of Contents

    1. Acknowledgments
    2. I Framing Mass Digitization
        1. 1 Understanding Mass Digitization
    3. II Mapping Mass Digitization
        1. 2 The Trials, Tribulations, and Transformations of Google Books
        2. 3 Sovereign Soul Searching: The Politics of Europeana
        3. 4 The Licit and Illicit Nature of Mass Digitization
    4. III Diagnosing Mass Digitization
        1. 5 Lost in Mass Digitization
        2. 6 Concluding Remarks
    5. References
    6. Index

    ## List of figures

    1. Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.
    2. Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R. Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S. Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

    # Acknowledgments

    I am very grateful to all those who have contributed to this book in various
    ways. I owe special thanks to Bjarki Valtysson, Frederik Tygstrup, and Peter
    Duelund, for their supervision and help thinking through this project, its
    questions, and its forms. I also wish to thank Andrew Prescott, Tobias Olsson,
    and Rune Gade for making my dissertation defense a memorable and thoroughly
    enjoyable day of constructive critique and lively discussions. Important parts
    of the research for this book further took place during three visiting stays
    at Cornell University, Duke University, and Columbia University. I am very
    grateful to N. Katherine Hayles, Andreas Huyssen, Timothy Brennan, Lydia
    Goehr, Rodney Benson, and Fredric Jameson, who generously welcomed me across
    the Atlantic and provided me with invaluable new perspectives, as well as
    theoretical insights and challenges. Beyond the aforementioned, three people
    in particular have been instrumental in terms of reading through drafts and in
    providing constructive challenges, intellectual critique, moral support, and
    fun times in equal proportions—thank you so much Kristin Veel, Henriette
    Steiner, and Daniela Agostinho. Marianne Ping-Huang has further offered
    invaluable support to this project and her theoretical and practical
    engagement with digital archives and academic infrastructures continues to be
    a source of inspiration. I am also immensely grateful to all the people
    working on or with mass digitization who generously volunteered their time to
    share with me their visions for, and perspectives on, mass digitization.

    This book has further benefited greatly from dialogues taking place within the
    framework of two larger research projects, which I have been fortunate enough
    to be involved in: Uncertain Archives and The Past’s Future. I am very
    grateful to all my colleagues in both these research projects: Kristin Veel,
    Daniela Agostinho, Annie Ring, Katrine Dirkinck-Holmfeldt, Pepita Hesselberth,
    Kristoffer Ørum, Ekaterina Kalinina, Anders Søgaard, as well as Helle Porsdam,
    Jeppe Eimose, Stina Teilmann, John Naughton, Jeffrey Schnapp, Matthew Battles,
    and Fiona McMillan. I am further indebted to La Vaughn Belle, George Tyson,
    Temi Odumosu, Mathias Danbolt, Mette Kia, Lene Asp, Marie Blønd, Mace Ojala,
    Renee Ridgway, and many others for our conversations on the ethical issues of
    the mass digitization of colonial material. I have also benefitted from the
    support and insights offered by other colleagues at the Department of Arts and
    Cultural Studies, University of Copenhagen.

    A big part of writing a book is also about keeping sane, and for this you need
    great colleagues that can pull you out of your own circuit and launch you into
    other realms of inquiry through collaboration, conversation, or just good
    times. Thank you Mikkel Flyverbom, Rasmus Helles, Stine Lomborg, Helene
    Ratner, Anders Koed Madsen, Ulrik Ekman, Solveig Gade, Anna Leander, Mareile
    Kaufmann, Holger Schulze, Jakob Kreutzfeld, Jens Hauser, Nan Gerdes, Kerry
    Greaves, Mikkel Thelle, Mads Rosendahl Thomsen, Knut Ove Eliassen, Jens-Erik
    Mai, Rikke Frank Jørgensen, Klaus Bruhn Jensen, Marisa Cohn, Rachel Douglas-
    Jones, Taina Bucher, and Baki Cakici. To this end you also need good
    friends—thank you Thomas Lindquist Winther-Schmidt, Mira Jargil, Christian
    Sønderby Jepsen, Agnete Sylvest, Louise Michaëlis, Jakob Westh, Gyrith Ravn,
    Søren Porse, Jesper Værn, Jacob Thorsen, Maia Kahlke, Josephine Michau, Lærke
    Vindahl, Chris Pedersen, Marianne Kiertzner, Rebecca Adler-Nissen, Stig
    Helveg, Ida Vammen, Alejandro Savio, Lasse Folke Henriksen, Siine Jannsen,
    Rens van Munster, Stephan Alsman, Sayuri Alsman, Henrik Moltke, Sean Treadway,
    and many others. I also have to thank Christer and all the people at
    Alimentari and CUB Coffee who kept my caffeine levels replenished when I tired
    of the ivory tower.

    I am furthermore very grateful for the wonderful guidance and support from MIT
    Press, including Noah Springer, Marcy Ross, and Susan Clark—and of course for
    the many inspiring conversations with and feedback from Doug Sery. I also want
    to thank the anonymous peer reviewers whose insightful and constructive
    comments helped improve this book immensely. Research for this book was
    supported by grants from the Danish Research Council and the Velux Foundation.

    Last, but not least, I wish to thank my loving partner Thomas Gammeltoft-
    Hansen for his invaluable and critical input, optimistic outlook, and perfect
    morning cappuccinos; my son Georg and daughter Liv for their general
    awesomeness; and my extended family—Susanne, Bodil, and Hans—for their support
    and encouragement.

    I dedicate this book to my parents, Karen Lise Bonde Thylstrup and Asger
    Thylstrup, without whom neither this book nor I would have materialized.

    # I Framing Mass Digitization

    # 1 Understanding Mass Digitization

    ## Introduction

    Mass digitization is first and foremost a professional concept. While it has
    become a disciplinary buzzword used to describe large-scale digitization
    projects of varying scope, it enjoys little circulation beyond the confines of
    information science and such projects themselves. Yet, as this book argues, it
    has also become a defining concept of our time. Indeed, it has even attained
    the status of a cultural and moral imperative and obligation.1 Today, anyone
    with an Internet connection can access hundreds of millions of digitized
    cultural artifacts from the comfort of their desk—or many other locations—and
    cultural institutions and private bodies add thousands of new cultural works
    to the digital sphere every day. The practice of mass digitization is forming
    new nexuses of knowledge, and new ways of engaging with that knowledge. What
    at first glance appears to be a simple act of digitization (the transformation
    of singular books from boundary objects to open sets of data), reveals, on
    closer examination, a complex process teeming with diverse political, legal,
    and cultural investments and controversies.

    This volume asks why mass digitization has become such a “matter of concern,”2
    and explores its implications for the politics of cultural memory. In
    practical terms, mass digitization is digitization on an industrial scale. But
    in cultural terms, mass digitization is much more than this. It is the promise
    of heightened access to—and better preservation of—the past, and of more
    original scholarship and better funding opportunities. It also promises
    entirely new ways of reading, viewing, and structuring archives, new forms of
    value and their extraction, and new infrastructures of control. This volume
    argues that the shape-shifting quality of mass digitization, and its social
    dynamics, alters the politics of cultural memory institutions. Two movements
    simultaneously drive mass digitization programs: the relatively new phenomenon
    of big data gold rushes, and the historically more familiar archival
    accumulative imperative. Yet despite these prospects, mass digitization
    projects are also uphill battles. They are costly and speculative processes,
    with no guaranteed rate of return, and they are constantly faced by numerous
    limitations and contestations on legal, social, and cultural levels.
    Nevertheless, both public and private institutions adamantly emphasize the
    need to digitize on a massive scale, motivating initiatives around the
    globe—from China to Russia, Africa to Europe, South America to North America.
    Some of these initiatives are bottom-up projects driven by highly motivated
    individuals, while others are top-down and governed by complex bureaucratic
    apparatuses. Some are backed by private money, others publicly funded. Some
    exist as actual archives, while others figure only as projections in policy
    papers. As the ideal of mass digitization filters into different global
    empirical situations, the concept of mass digitization attains nuanced
    political hues. While all projects formally seek to serve the public interest,
    they are in fact infused with much more diverse, and often conflicting,
    political and commercial motives and dynamics. The same mass digitization
    project can even be imbued with different and/or contradictory investments,
    and can change purpose and function over time, sometimes rapidly.

    Mass digitization projects are, then, highly political. But they are not
    political in the sense that they transfer the politics of analog cultural
    memory institutions into the digital sphere 1:1, or even liberate cultural
    memory artifacts from the cultural politics of analog cultural memory
    institutions. Rather, mass digitization presents a new political cultural
    memory paradigm, one in which we see strands of technical and ideological
    continuities combine with new ideals and opportunities; a political cultural
    memory paradigm that is arguably even more complex—or at least appears more
    messy to us now—than that of analog institutions, whose politics we have had
    time to get used to. In order to grasp the political stakes of mass
    digitization, therefore, we need to approach mass digitization projects not as
    a continuation of the existing politics of cultural memory, or as purely
    technical endeavors, but rather as emerging sociopolitical and sociotechnical
    phenomena that introduce new forms of cultural memory politics.

    ## Framing, Mapping, and Diagnosing Mass Digitization

    Interrogating the phenomenon of mass digitization, this book asks the question
    of how mass digitization affects the politics of cultural memory institutions.
    As a matter of practice, something is clearly changing in the conversion of
    bounded—and scarce—historical material into ubiquitous ephemeral data. In
    addition to the technical aspects of digitization, mass digitization is also
    changing the political territory of cultural memory objects. Global commercial
    platforms are increasingly administering and operating their scanning
    activities in favor of the digital content they reap from the national “data
    tombs” of museums and libraries and the feedback loops these generate. This
    integration of commercial platforms into the otherwise primarily public
    institutional set-up of cultural memory has produced a reconfiguration of the
    political landscape of cultural memory from the traditional symbolic politics
    of scarcity, sovereignty, and cultural capital to the late-sovereign
    infrapolitics of standardization and subversion.

    The empirical outlook of the present book is predominantly Western. Yet, the
    overarching dynamics that have been pursued are far from limited to any one
    region or continent, nor limited solely to the field of cultural memory.
    Digitization is a global phenomenon and its reliance on late-sovereign
    politics and subpolitical governance forms are shared across the globe.

    The central argument of this book is that mass digitization heralds a new kind
    of politics in the regime of cultural memory. Mass digitization of cultural
    memory is neither a neutral technical process nor a transposition of the
    politics of analog cultural heritage to the digital realm on a 1:1 scale. The
    limitations of using conventional cultural-political frameworks for
    understanding mass digitization projects become clear when working through the
    concepts and regimes of mass digitization. Mass digitization brings together
    so many disparate interests and elements that any mono-theoretical lens would
    fail to account for the numerous political issues arising within the framework
    of mass digitization. Rather, mass digitization should be approached as an
    _infrapolitical_ process that brings together a multiplicity of interests
    hitherto foreign to the realm of cultural memory.

    The first part of the book, “framing,” outlines the theoretical arguments in
    the book—that the political dynamics of mass digitization organize themselves
    around the development of the technical infrastructures of mass digitization
    in late-sovereign frameworks. Fusing infrastructure theory and theories on the
    political dynamics of late sovereignty allows us to understand mass
    digitization projects as cultural phenomena that are highly dependent on
    standardization and globalization processes, while also recognizing that their
    resultant infrapolitics can operate as forms of both control and subversion.

    The second part of the book, “mapping,” offers an analysis of three different
    mass digitization phenomena and how they relate to the late-sovereign politics
    that gave rise to them. This part thus examines the historical foundation,
    technical infrastructures, and (il)licit status and ideological underpinnings
    of three variations of mass digitization projects: primarily corporate,
    primarily public, and primarily private. While these variations may come
    across as reproductions of more conventional societal structures, the chapters
    in part two nevertheless also present us with a paradox: while the different
    mass digitization projects that appear in this book—from Google’s privatized
    endeavor to Europeana’s supranational politics to the unofficial initiatives
    of shadow libraries—have different historical and cultural-political
    trajectories and conventional regimes of governance, they also undermine these
    conventional categories as they morph and merge into new infrastructures and
    produce a new form of infrapolitics. The case studies featured in this book
    are not to be taken as exhaustive examples, but rather as distinct, yet
    nevertheless entangled, examples of how analog cultural memory is taken online
    on a digital scale. They have been chosen with the aim of showing the
    diversity of mass digitization, but also how it, as a phenomenon, ultimately
    places the user in the dilemma of digital capitalism with its ethos of access,
    speed, and participation (in varying degrees). The choices also have their
    limitations, however. In their Western bias, which is partly rooted in this
    author’s lack of language skills (specifically in Russian and Chinese), for
    instance, they fail to capture the breadth and particularities of the
    infrapolitics of mass digitization in other parts of the world. Much more
    research is needed in this area.

    The final part of the book, “diagnosing,” zooms in on the pathologies of mass
    digitization in relation to affective questions of desire and uncertainty.
    This part argues that instead of approaching mass digitization projects as
    rationalized and instrumental projects, we should rather acknowledge them as
    ambivalent spatio-temporal projects of desire and uncertainty. Indeed, as the
    third part concludes, it is exactly uncertainty and desire that organize
    new spatio-temporal infrastructures of cultural memory institutions, where
    notions such as serendipity and the infrapolitics of platforms have taken
    precedence over accuracy and sovereign institutional politics. The third part
    thus calls into question arguments that imagine mass digitization as
    instrumentalized projects that either undermine or produce values of
    serendipity, as well as overarching narratives of how mass digitization
    produces uncomplicated forms of individualized empowerment and freedom.
    Instead, the chapter draws attention to the new cultural logics of platforms
    that affect the cultural politics of mass digitization projects.

    Crucially, then, this book seeks neither to condemn nor celebrate mass
    digitization, but rather to unpack the phenomenon and anchor it in its
    contemporary political reality. It offers a story of the ways in which mass
    digitization produces new cultural memory institutions online that may be
    entwined in the cultural politics of their analog origins, but also raises new
    political questions to the collections.

    ## Setting the Stage: Assembling the Motley Crew of Mass Digitization

    The dream and practice of mass digitizing cultural works has been around for
    decades and, as this section attests, the projects vary significantly in
    shape, size, and form. While rudimentary and nonexhaustive, this section
    gathers a motley collection of mass digitization initiatives, from some of the
    earliest digitization programs to later initiatives. The goal of this section
    is thus not so much to meticulously map mass digitization programs, but rather
    to provide examples of projects that might illuminate the purpose of this book
    and its efforts to highlight the infrastructural politics of mass
    digitization. As the section attests, mass digitization is anything but a
    streamlined process. Rather, it is a painstakingly complex process mired in
    legal, technical, personal, and political challenges and problems, and it is a
    vision whose grand rhetoric often works to conceal its messy reality.

    It is pertinent to note that mass digitization suffers from the combined
    gendered and racialized reality of cultural institutions, tech corporations,
    and infrastructural projects: save a few exceptions, there is precious little
    diversity in the official map of mass digitization, even in those projects
    that emerge bottom-up. This does not mean that women and minorities have not
    formed a crucial part of mass digitization, selecting cultural objects,
    prepping them (for instance ironing newspapers to ensure that they are flat),
    scanning them, and constructing their digital infrastructures. However, more
    often than not, their contributions fade into the background as tenders of the
    infrastructures of mass digitization rather than as the (predominantly white,
    male) “face” of mass digitization. As such, an important dimension of the
    politics of these infrastructural projects is their reproduction of
    established gendered and racialized infrastructures already present in both
    cultural institutions and the tech industry.3 This book hints at these crucial
    dimensions of mass digitization, but much more work is needed to change the
    familiar cast of cultural memory institutions, both in the analog and digital
    realms.

    With these introductory remarks in place, let us now turn to the long and
    winding road to mass digitization as we know it today. Locating the exact
    origins of this road is a subjective task that often ends up trapping the
    explorer in the mirror halls of technology. But it is worth noting that of
    course there existed, before the Internet, numerous attempts at capturing and
    remediating books in scalable forms, for the purposes both of preservation and
    of extending the reach of library collections. One of the most revolutionary
    of such technologies before the digital computer or the Internet was
    microfilm, which was first held forth as a promising technology of
    preservation and remediation in the middle of the 1800s.4 At the beginning of
    the twentieth century, the Belgian author, entrepreneur, visionary, lawyer,
    peace activist, and one of the founders of information science, Paul Otlet,
    brought the possibilities of microfilm to bear directly on the world of
    libraries. Otlet authored two influential think pieces that outlined the
    benefits of microfilm as a stable and long-term remediation format that could,
    ultimately, also be used to extend the reach of literature, just as he and his
    collaborator, inventor and engineer Robert Goldschmidt, co-authored a work on
    the new form of the book through microphotography, _Sur une forme nouvelle du
    livre: le livre microphotographique_.5 In his analyses, Otlet suggested that
    the most important transformations would not take place in the book itself,
    but in substitutes for it. Some years later, beginning in 1927 with the
    Library of Congress microfilming more than three million pages of books and
    manuscripts in the British Library, the remediation of cultural works in
    microformat became a widespread practice across the world, and microfilm is
    still in use to this day.6 Otlet did not confine himself to thinking only
    about microphotography, however, but also pursued a more speculative vein,
    inspired by contemporary experiments with electromagnetic waves, arguing that
    the most radical change to the book would come from wireless technology.
    Moreover, he
    also envisioned and partly realized a physical space, the _Mundaneum_, for
    his dreams of a universal archive. Paul Otlet and Nobel Peace Prize winner
    Henri La Fontaine conceived of the Mundaneum in 1895 as part of their work on
    documentation science. Otlet called the Mundaneum “… an Idea, an Institution,
    a Method, a Body of work materials and collections, a Building, a Network.”
    In more concrete, but no less ambitious, terms, the Mundaneum was to gather
    together all the world’s knowledge and classify it according to a universal
    system they developed, called the “Universal Decimal Classification.” In
    1910, Otlet and La Fontaine found a place for their work in the Palais du
    Cinquantenaire, a government building in Brussels. Later, Otlet commissioned
    Le Corbusier to design a building for the Mundaneum in Geneva. The
    cooperation ended unsuccessfully, however, and the Mundaneum subsequently led
    a nomadic life, moving from The Hague to Brussels and then, in 1993, to the
    city of Mons in Belgium, where it now exists as a museum called the Mundaneum
    Archive Center. Fatefully, Mons, a former mining district, also houses
    Google’s largest data center in Europe, and it did not take Google long to
    recognize the cultural value of entering a partnership with the Mundaneum;
    the two parties signed a contract in 2013.
    The contract entailed, among other things, that Google would sponsor a
    traveling exhibit on the Mundaneum, as well as a series of talks on Internet
    issues at the museum and the university, and that the Mundaneum would use
    Google’s social networking service, Google Plus, as a promotional tool. An
    article in the _New York Times_ described the partnership as “part of a
    broader campaign by Google to demonstrate that it is a friend of European
    culture, at a time when its services are being investigated by regulators on
    a variety of fronts.”7 The collaboration not only spurred international
    interest, but also inspired a group of influential tech activists and artists
    closely associated with the creative work of shadow libraries to create the
    critical archival project Mondotheque.be, a platform for “discussing and
    exploring the way knowledge is managed and distributed today in a way that
    allows us to invent other futures and different narrations of the past,”8 and
    a resulting digital publication project, _The Radiated Book_, authored by an
    assembly of activists, artists, and scholars such as Femke Snelting, Tomislav
    Medak, Dusan Barok, Geraldine Juarez, Shin Joung Yeo, and Matthew Fuller.9

    Another early precursor of mass digitization emerged with Project Gutenberg,
    often referred to as the world’s oldest digital library. Project Gutenberg was
    the brainchild of author Michael S. Hart, who in 1971, using technologies such
    as ARPANET, Bulletin Board Systems (BBS), and Gopher protocols, experimented
    with publishing and distributing books in digital form. As Hart reminisced in
    his later text, “The History and Philosophy of Project Gutenberg,”10 Project
    Gutenberg emerged out of a donation he received as an undergraduate in 1971,
    which consisted of $100 million worth of computing time on the Xerox Sigma V
    mainframe at the University of Illinois at Urbana-Champaign. Wanting to make
    good use of the donation, Hart, in his own words, “announced that the greatest
    value created by computers would not be computing, but would be the storage,
    retrieval, and searching of what was stored in our libraries.”11 He therefore
    committed himself to converting analog cultural works into digital text in a
    format not only available to, but also accessible/readable to, almost all
    computer systems: “Plain Vanilla ASCII” (ASCII for “American Standard Code for
    Information Interchange”). While Project Gutenberg converted only about 50
    works into digital text in the 1970s and the 1980s (the first was the
    Declaration of Independence), it today hosts some 56,000 texts in its
    distinctly lo-fi manner.12 Interestingly, Michael S. Hart noted very early on
    that the intention of the project was never to reproduce authoritative
    editions of works for readers—“who cares whether a certain phrase in
    Shakespeare has a ‘:’ or a ‘;’ between its clauses”—but rather to “release
    etexts that are 99.9% accurate in the eyes of the general reader.”13 As the
    present book attests, this early statement captures one of the central points
    of contestation in mass digitization: the trade-off between accuracy and
    accessibility, raising questions both of the limits of commercialized
    accelerated digitization processes (see chapter 2 on Google Books) and of
    class-based and postcolonial implications (see chapter 4 on shadow libraries).
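
    Hart’s format choice is easy to illustrate. The following Python sketch is a
    hypothetical illustration, not Project Gutenberg’s actual tooling: it reduces
    a text to “Plain Vanilla ASCII” by decomposing accented letters, stripping
    their diacritics, and dropping whatever still falls outside the 7-bit ASCII
    range, trading typographic fidelity for near-universal readability.

    ```python
    import unicodedata

    def to_plain_vanilla_ascii(text: str) -> str:
        """Reduce a string to 7-bit ASCII, discarding what cannot be kept.

        NFKD normalization decomposes accented letters into a base letter
        plus combining marks (e.g., 'e' + U+0300 for 'è'); encoding to ASCII
        with errors="ignore" then drops the marks and any remaining
        non-ASCII characters (em dashes, curly quotes, and so on).
        """
        decomposed = unicodedata.normalize("NFKD", text)
        return decomposed.encode("ascii", errors="ignore").decode("ascii")

    print(to_plain_vanilla_ascii("Bibliothèque nationale — “livre”"))
    # Prints: Bibliotheque nationale  livre
    # (the dash and quotes are simply dropped, but the text stays readable)
    ```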

    If Project Gutenberg spearheaded the efforts of bringing cultural works into
    the digital sphere through manual conversion of analog text into lo-fi digital
    text, a French mass digitization project affiliated with the construction of
    the Bibliothèque nationale de France (BnF) initiated in 1989 could be
    considered one of the earliest examples of actually digitizing cultural works
    on an industrial scale.14 The French were thus working on blueprints of mass
    digitization programs before mass digitization became a widespread practice,
    as part of the construction of a new national library, under the guidance of
    Alain Giffard and initiated by François Mitterrand. In a letter sent in 1990
    to Prime Minister Michel Rocard, President Mitterrand outlined his vision of
    a digital library, noting that “the novelty will be in the possibility of
    using the most modern computer techniques for access to catalogs and
    documents of the Bibliothèque nationale de France.”15 The project managed to
    digitize a body of 70,000–80,000 titles, a sizeable number of works for its
    time. As
    Alain Giffard noted in hindsight, “the main difficulty for a digitization
    program is to choose the books, and to choose the people to choose the
    books.”16 Explaining in a conversation with me how he went about this task,
    Giffard emphasized that he chose “not librarians but critics, researchers,
    etc.” This choice, he underlined, could be made only because the digitization
    program was “the last project of the president and a special mission” and thus
    not formally a civil service program.17 The work process was thus as follows:

    > I asked them to prepare a list. I told them, “Don’t think about what exists.
    I ask of you a list of books that would be logical in this concept of a
    library of France.” I had the first list and we showed it to the national
    library, which was always fighting internally. So I told them, “I want this
    book to be digitized.” But they would never give it to us because of
    territory. Their ship was not my ship. So I said to them, “If you don’t give
    me the books I shall buy the books.” They said I could never buy them, but
    then I started buying the books from antiques suppliers because I earned a lot
    of money at that time. So in the end I had a lot of books. And I said to them,
    “If you want the books digitized you must give me the books.” But of the
    80,000 books that were digitized, half were not in the collection. I used the
    staff’s garages for the books, 80,000 books. It is an incredible story.18

    Incredible indeed. And a wonderful anecdote that makes clear that mass
    digitization, rather than being just a technical challenge, is also a
    politically contingent process that raises fundamental questions of territory
    (institutional as well as national), materiality, and culture. The
    integration of the digital _très grande bibliothèque_ into the French
    national mass digitization project Gallica in 1997 also foregrounds the
    infrastructural trajectory of early national digitization programs into later
    glocal initiatives.19

    The question of pan-national digitization programs was precisely at the
    forefront of another early prominent mass digitization project, namely the
    Universal Digital Library (UDL), which was launched in 1995 by Carnegie Mellon
    computer scientist Raj Reddy and developed by linguist Jaime Carbonell,
    physicist Michael Shamos, and Carnegie Mellon Foundation dean of libraries
    Gloriana St. Clair. In 1998, the project launched the Thousand Book Project.
    Later, the UDL scaled its initial efforts up to the Million Book Project,
    which it successfully completed in 2007.20 Organizationally, the UDL stood
    out from many of the other digitization projects by including initial
    participation from three non-Western entities in addition to the Carnegie
    Mellon Foundation—the governments of India, China, and Egypt.21 Indeed, India
    and China invested about $10 million in the initial phase, employing several
    hundred people to find books, bring them in, and take them back. While the
    project ambitiously aimed to provide access “to all human knowledge, anytime,
    anywhere,” it ended its scanning activities in 2008. As such, the Universal
    Digital Library points to another central infrastructural dimension of mass
    digitization: its highly contingent spatio-temporal configurations, which are
    often posed in direct contradistinction to the universalizing discourse of
    mass digitization. Across the board, mass digitization projects, while in
    practice confining themselves to a limited target number of books to
    digitize, employ a discourse of universality, alluding only vaguely, and in
    highly uncertain terms, to how long such an endeavor will take (see chapters
    3 and 5 in particular).

    No exception from the universalizing discourse, another highly significant
    mass digitization project, the Internet Archive, emerged around the same time
    as the Universal Digital Library. The Internet Archive was founded by open
    access activist and computer engineer Brewster Kahle in 1996, and although it
    was primarily oriented toward preserving born-digital material, in particular
    the Internet ( _Wired_ calls Brewster Kahle “the Internet’s de facto
    librarian”22), the Archive also began digitizing books in 2005, supported by
    a grant from the Alfred P. Sloan Foundation. Later that year, the Internet
    Archive created the infrastructural initiative Open Content Alliance (OCA),
    embedding itself in an infrastructure that included over 30 major US
    libraries, as well as major search engine providers (Yahoo! and Microsoft),
    technology companies (Adobe and Xerox), a commercial publisher (O’Reilly
    Media, Inc.), and a not-for-profit membership organization of more than 150
    institutions, including universities, research libraries, archives, museums,
    and historical societies.23 The Internet Archive’s mass digitization
    infrastructure was thus from the beginning a mesh of public and private
    cooperation, where libraries made their collections available to the Alliance
    for scanning, and corporate sponsors or the Internet Archive conversely funded
    the digitization processes. As such, the infrastructures of the Internet
    Archive and Google Books were rather similar in their set-ups.24 Nevertheless,
    the initiative of the Internet Archive’s mass digitization project and its
    attendant infrastructural alliance, OCA, should be read as both a technical
    infrastructure responding to the question of _how_ to mass digitize in
    technical terms, and as an infrapolitical reaction in response to the forces
    of the commercial world that were beginning to gather around mass
    digitization, such as Amazon25 and Google. The Internet Archive thus
    positioned itself as a transparent open source alternative to the closed doors
    of corporate and commercial initiatives. Yet, as Kalev Leetaru notes, the case
    was more complex than that. Indeed, while the OCA was often foregrounded as
    more transparent than Google, its technical infrastructural components and
    practices were in fact often just as shrouded in secrecy.26 As such, the
    Internet Archive and the OCA draw attention to the important infrapolitical
    question in mass digitization, namely how, why, and when to manage
    visibilities in mass digitization projects.

    Although the media sometimes picked up stories on the mass digitization
    projects already outlined, it wasn’t until Google entered the scene that mass
    digitization became a headline-grabbing enterprise. In 2004, Google founders
    Larry Page and Sergey Brin traveled to Frankfurt to make a rare appearance at
    the Frankfurt Book Fair. Google was at that time still considered a “scrappy”
    Internet company in some quarters, as compared with tech giants such as
    Microsoft.27 Yet Page and Brin went to Frankfurt to deliver a monumental
    announcement: Google would launch a ten-year plan to make available
    approximately 15 million digitized books, both in- and out-of-copyright
    works.28 They baptized the program “Google Print,” a project that consisted of
    a series of partnerships between Google and five English-language libraries:
    the University of Michigan at Ann Arbor, Stanford, Harvard, Oxford (Bodleian
    Library), and the New York Public Library. While Page and Brin’s
    announcement was surprising to some, many had anticipated it; as already
    noted, advances toward mass digitization proper had already been made, and
    some of the partnership institutions had been negotiating with Google since
    2002.29 As with many of the previous mass digitization projects, Google found
    inspiration for their digitization project in the long-lived utopian ideal of
    the universal library, and in particular the mythic library of Alexandria.30
    As with other Google endeavors, it seemed that Page was intent on realizing a
    utopian ideal that scholars (and others) had long dreamed of: a library
    containing everything ever written. It would be realized, however, not with
    traditional human-centered means drawn from the world of libraries, but rather
    with an AI approach. Google Books would exceed human constraints, taking the
    seemingly impossible vision of digitizing all the books in the world as a
    starting point for constructing an omniscient Artificial Intelligence that
    would know the entire human symbol system and allow flexible and intuitive
    recollection. The constraints in question were physical (how to digitize and
    organize all this knowledge in physical form); legal (how to do so in a way
    that suspends existing regulation); and political (how to transgress
    territorial systems). The invocation of the notion of the universal library
    was not a
    neutral action. Rather, the image of Google Books as a library worked as a
    symbolic form in a cultural scheme that situated Google as a utopian, and even
    ethical, idealist project. Google Books seemingly existed by virtue of
    Goethe’s famous maxim that “To live in the ideal world is to treat the
    impossible as if it were possible.”31 At the time, the industry magazine
    _Bookseller_ wrote in response to Google’s digitization plans: “The prospect
    is both thrilling and frightening for the book industry, raising a host of
    technical and theoretical issues.”32 And indeed, while some reacted with
    enthusiasm and relief to the prospect of an organization being willing to
    suffer the cost of mass digitization, others expressed economic and ethical
    concerns. The Authors Guild, a New York–based association, promptly filed a
    copyright infringement suit against Google. And librarians were forced to
    revisit core ethical principles such as privacy and public access.

    The controversies of Google Books initially played out only in US territory.
    However, another set of concerns of a more territorial and political nature
    soon came to light. The French President at the time, Jacques Chirac, called
    France to cultural-political arms, urging his culture minister, Renaud
    Donnedieu de Vabres, and Jean-Noël Jeanneney, then-head of France’s
    Bibliothèque nationale, to do the same with French texts as Google planned
    to do with its partner libraries, but by means of a French search engine.33
    Jeanneney initially framed this French cultural-political endeavor as a
    European “contre-attaque” against Google Books, which, according to Jeanneney,
    could pose “une domination écrasante de l'Amérique dans la définition de
    l'idée que les prochaines générations se feront du monde” (“a crushing
    American domination of the formation of future generations’ ideas about the
    world”).34 Other French officials insisted that the French digitization
    project
    should be seen not primarily as a cultural-political reaction _against_
    Google, but rather as a cultural-political incentive within France and Europe
    to make European information available online. “I really stress that it's not
    anti-American,” an official at France’s Ministry of Culture and Communication,
    speaking on the condition of anonymity, noted in an interview. “It is not a
    reaction. The objective is to make more material relevant to European heritage
    available. … Everybody is working on digitization projects.” Furthermore, the
    official did not rule out potential cooperation between Google and the
    European project.35 There was no doubt, however, that the move to mass
    digitization “was a political drive by the French,” as Stephen Bury, head of
    European and American collections at the British Library, emphasized.36

    Despite its mixed messages, the French reaction nevertheless underscored the
    controversial nature of mass digitization as a symbolic, as well as technical,
    aspiration: mass digitization was a process that did not merely scan and
    represent books neutrally, but could also produce a new mode of world-making,
    actively structuring archives as well as their users.37 Now questions began
    to
    surface about where, or with whom, to place governance over this new archive:
    who would be the custodian of the keys to this new library? And who would be
    the librarians? A series of related questions could also be asked: who would
    determine the archival limits, the relations between the secret and the non-
    secret or the private and the public, and whether these might involve property
    or access rights, publication or reproduction rights, classification, and
    putting into order? France soon managed to rally other EU countries (Spain,
    Poland, Hungary, Italy, and Germany) to back, in writing, its recommendation
    to the European Commission (EC) to construct a European alternative to
    Google’s search engine and archive. Occasioned by the
    French recommendation, the EC promptly adopted the idea of Europeana—the name
    of the proposed alternative—as a “flagship project” for the budding EU
    cultural policy.38 Soon after, in 2008, the EC launched Europeana, giving
    access to some 4.5 million digital objects from more than 1,000 institutions.

    Europeana’s Europeanizing discourse presents a territorializing approach to
    mass digitization that stands in contrast to the more universalizing tone of
    the Mundaneum, Project Gutenberg, Google Books, and the Universal Digital
    Library. As
    such, it ties in with our final examples, namely the sovereign mass
    digitization projects that have in fact always been one of the primary drivers
    in mass digitization efforts. To this day, the map of mass digitization is
    populated with sovereign mass digitization efforts from the Netherlands and
    Norway to
    France and the United States. One of the most impressive projects is the
    Norwegian mass digitization project at the National Library of Norway, which
    since 2004 has worked systematically to develop a digital National Library
    that encompasses text, audio, video, image, and websites. Impressively, the
    National Library of Norway offers digital library services that provide online
    access (to all with a Norwegian IP address) to full-text versions of all books
    published in Norway up until the year 2001, access to digital newspaper
    collections from the major national and regional newspapers in all libraries
    in the country, and opportunities for everyone with Internet access to search
    and listen to more than 40,000 radio programs recorded between 1933 and the
    present day.39 Another ambitious national mass digitization project is the
    Dutch National Library’s effort to digitize all printed publications since
    1470 and to create a National Platform for Digital Publications, which is to
    act both as a content delivery platform for its mass digitization output and
    as a national aggregator for publications. To this end, the Dutch National
    Library made deals with Google Books and ProQuest to digitize 42 million pages
    just as it entered into partnerships with cross-domain aggregators such as
    Europeana.40 Finally, it is imperative to mention the Digital Public Library
    of America (DPLA), a national digital library conceived of in 2010 and
    launched in 2013, which aggregates digital collections of metadata from around
    the United States, pulling in content from large institutions like the
    National Archives and Records Administration and HathiTrust, as well as from
    smaller archives. The DPLA is in great part the fruit of the intellectual work
    of Harvard University’s Berkman Center for Internet and Society and the work
    of its Steering Committee, which consisted of influential names from the
    digital, legal, and library worlds, such as Robert Darnton, Maura Marx, and
    John Palfrey from Harvard University; Paul Courant of the University of
    Michigan; Carla Hayden, then of Baltimore’s Enoch Pratt Free Library and
    subsequently the Librarian of Congress; Brewster Kahle; Jerome McGann; Amy
    Ryan of the Boston Public Library; and Doron Weber of the Sloan Foundation.
    Key figures in the DPLA have often, to great rhetorical effect, positioned
    the DPLA vis-à-vis Google Books, partly as a question of public versus
    private infrastructures.41 Yet, as then-DPLA chairman John Palfrey conceded,
    the question of what constitutes “public” in a mass digitization context
    remains a critical issue: “The Digital Public Library of America has its
    critics. One counterargument is that investments in digital infrastructures at
    scale will undermine support for the traditional and the local. As the
    chairman of the DPLA, I hear this critique in the question-and-answer period
    of nearly every presentation I give. … The concern is that support for the
    DPLA will undercut already eroding support for small, local public
    libraries.”42 While Palfrey offers good arguments for why the DPLA could
    easily work in unison with, rather than jeopardize, smaller public libraries,
    and while the DPLA is building infrastructures to support this claim,43 the
    discussion nevertheless highlights the difficulty of determining when
    something is “public,” or even national.

    While the highly publicized and institutionalized projects I have just
    recounted have taken center stage in the early and later years of mass
    digitization, they neither constitute the full cast, nor the whole machinery,
    of mass digitization assemblages. Indeed, as chapter 4 in this book charts, at
    the margins of mass digitization another set of actors have been at work
    building new digital cultural memory assemblages, including projects such as
    Monoskop and Lib.ru. These actors, referred to in this book as shadow library
    projects (see chapter 4), at once both challenge and confirm the broader
    infrapolitical dimensions of mass digitization, including its logics of
    digital capitalism, network power, and territorial reconfigurations of
    cultural memory between universalizing and glocalizing discourses. Within this
    new “ecosystem of access,” unauthorized archives such as Libgen, Gigapedia,
    and
    Sci-Hub have successfully built “shadow libraries” with global reach,
    containing massive aggregations of downloadable text material of both
    scholarly and fictional character.44 As chapter 4 shows, these initiatives
    further challenge our notions of public good, licit and illicit mass
    digitization, and the territorial borders of mass digitization, just as they
    add another layer of complexity to the question of the politics of mass
    digitization.

    Today, then, the landscape of mass digitization has evolved considerably, and
    we can now begin to make out the political contours that have shaped, and
    continue to shape, the emergent contemporary knowledge infrastructures of mass
    digitization, rife as they are with contestation, cooperation, and
    competition. From this perspective, mass digitization appears as a preeminent
    example of how knowledge politics are configured in today’s world of
    “assemblages” as “multisited, transboundary networks” that connect
    subnational, national, supranational, and global infrastructures and actors,
    without, however, necessarily doing so through formal interstate systems.45 We
    can also see that mass digitization projects did not arise as a result of a
    sovereign decision, but rather emerged through a series of contingencies
    shaped by late-capitalist and late-sovereign forces. Furthermore, mass
    digitization presents us with an entirely new cultural memory paradigm—a
    paradigm that requires a shift in thinking about cultural works, collections,
    and contexts, from cultural records to be preserved and read by humans, to
    ephemeral machine-readable entities. This change requires a shift in thinking
    about the economy of cultural works, collections, and contexts, from scarce
    institutional objects to ubiquitous flexible information. Finally, it
    requires a shift from thinking about these same issues as belonging to
    national-global domains to conceiving of them in terms of a set of political
    processes that may well be placed in national settings, but are oriented
    toward global agendas and systems.

    ## Interrogating Mass Digitization

    Mass digitization is often elastic in definition and elusive in practice.
    Concrete attempts have been made to delimit what mass digitization is, but
    these rarely go into specifics. The two characteristics most commonly
    associated with mass digitization are the relative lack of selectivity of
    materials, as compared to smaller-scale digitization projects, and the high
    speed and high volume of the process in terms of both digital conversion and
    metadata creation, which are made possible through a high level of
    automation.46 Mass digitization is thus concerned not only with preservation,
    but also with what kind of knowledge practices and values technology allows
    for and encourages, for example, in relation to de- and recontextualization,
    automation, and scale.47

    Studies of mass digitization are commonly oriented toward technology or
    information policy issues close to libraries, such as copyright, the quality
    of digital imagery, long-term preservation responsibility, standards and
    interoperability, and economic models for libraries, publishers, and
    booksellers, rather than, as here, the exploration of theory.48 This is not to
    say that existing work on mass digitization is not informed by theoretical
    considerations, but rather that the majority of research emphasizes policy and
    technical implementation at the expense of a more fundamental understanding of
    the cultural implications of mass digitization. In part, the reason for this
    is the relative novelty of mass digitization as an identifiable field of
    practice and policy, and its significant ramifications in the fields of law
    and information science.49 In addition to scholarly elucidations, mass
    digitization has also given rise to more ideologically fuelled critical books
    and articles on the topic.50

    Despite its disciplinary branching, work on mass digitization has mainly taken
    place in the fields of information science, law, and computer science, and has
    primarily problematized the “hows” of mass digitization and not the “whys.”51
    As with technical work on mass digitization, most nontechnical studies of mass
    digitization are “problem-solving” rather than “critical,” and this applies in
    particular to work originating from within the policy analysis community.
    This body of work seeks to solve problems within the existing social
    order—for example,
    copyright or metadata—rather than to interrogate the assumptions that underlie
    mass digitization programs, which would include asking what kinds of knowledge
    production mass digitization gives rise to. How does mass digitization change
    the ideological infrastructures of cultural heritage institutions? And from
    what political context does the urge to digitize on an industrial scale
    emerge? While the technical and problem-solving corpus on mass digitization is
    highly valuable in terms of outlining the most important stakeholders and
    technical issues of the field, it does not provide insight into the deeper
    structures, social mechanisms, and political implications of mass
    digitization. Moreover, it often fails to account for digitization as a force
    that is deeply entwined with other dynamics that shape its development and
    uses. It is this lack that the present volume seeks to mitigate.

    ## Assembling Mass Digitization

    Mass digitization is a composite and fluctuating infrastructure of
    disciplines, interests, and forces rooted in public-private assemblages,
    driven by ideas of value extraction and distribution, and supported by new
    forms of social organization. Google Books, for instance, is both a commercial
    project covered by nondisclosure agreements _and_ an academic scholarly
    project open for all to see. Similarly, Europeana is both a public
    digitization project directed at “citizens” _and_ a public-private partnership
    enterprise rife with profit motives. Nevertheless, while it is tempting to
    speak about specific mass digitization projects such as Google Books and
    Europeana in monolithic and contrastive terms, mass digitization projects are
    anything but tightly organized, institutionally delineated, coherent wholes
    that produce one dominant reading. We do not find one “essence” in mass
    digitized archives. They are not “enlightenment projects,” “library services,”
    “software applications,” “interfaces,” or “corporations.” Nor are they rooted
    in one central location or single ideology. Rather, mass digitization is a
    complex material and social infrastructure performed by a diverse
    constellation of cultural memory professionals, computer scientists,
    information specialists, policy personnel, politicians, scanners, and
    scholars. Hence, this volume approaches mass digitization projects as
    “assemblages,” that is, as contingent arrangements consisting of humans,
    machines, objects, subjects, spaces and places, habits, norms, laws, politics,
    and so on. These arrangements cross national-global and public-private lines,
    producing what this volume calls “late-sovereign,” “posthuman,” and “late-
    capitalist” assemblages.

    To give an example, we can look at how the national and global aspects of
    cultural memory institutions change with mass digitization. The national
    museums and libraries we frequent today were largely erected during eras of
    high nationalism, as supreme acts of cultural and national territoriality.
    “The early establishment of a national collection,” as Belinda Tiffen notes,
    “was an important step in the birth of the new nation,” since it signified
    “the legitimacy of the nation as a political and cultural entity with its own
    heritage and culture worthy of being recorded and preserved.”52 Today, as the
    initial French incentive to build Europeana shows, we find similar
    nationalization processes in mass digitization projects. However,
    nationalizing a digital collection often remains more a performative gesture
    than a practical feat, partly because the information environment in the
    digital
    sphere differs significantly from that of the analog world in terms of
    territory and materiality, and partly because the dichotomy between national
    and global, an agreed-upon construction for centuries, is becoming more and
    more difficult to uphold in theory and practice.53 Thus, both Google Books and
    Europeana link to sovereign frameworks such as citizens and national
    representation, while also undermining them with late-capitalist transnational
    economic agreements.

    A related example is the posthuman aspect of cultural memory politics.
    Cultural memory artifacts have always been thought of as profoundly human
    collections, in the sense that they were created by and for human minds and
    human meaning-making. Previously, humans also organized collections. But with
    the invention of computers, most cultural memory institutions also introduced
    a machine element to the management of accelerating amounts of information,
    such as computerized catalog systems and recollection systems. With the advent
    of mass digitization, machines have gained a whole new role in the cultural
    memory ecosystem, not only as managers, but also as interpreters. Thus,
    collections are increasingly digitized to be read by machines instead of
    humans, just as metadata is now becoming a question of machine analysis rather
    than of human contextualization. Machines are taking on more and more tasks in
    the realm of cultural memory that require a substantial amount of cognitive
    insight (just as mass digitization has created the need for new robot-like,
    and often poorly paid, human tasks, such as the monotonous work of book
    scanning). Mass digitization has thereby given rise to an entirely new
    cultural-legal category, “non-consumptive research,” a term used to
    describe the large-scale analysis of texts, and which has been formalized by
    the Google Books Settlement, for instance, in the following way: “research in
    which computational analysis is performed on one or more books, but not
    research in which a researcher reads or displays.”54
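
    The flavor of such analysis is easy to illustrate. The short Python sketch
    below is a hypothetical toy example rather than any project’s actual
    pipeline: it computes aggregate word frequencies across a directory of
    digitized plain-text files, so that only derived statistics, never the
    readable texts themselves, reach the researcher.

    ```python
    from collections import Counter
    from pathlib import Path
    import re

    def word_frequencies(corpus_dir: str) -> Counter:
        """Aggregate word counts across every .txt file in corpus_dir.

        Only derived statistics leave this function; no passage of any book
        is read or displayed by the researcher, which is what makes the
        analysis "non-consumptive" in the settlement's sense.
        """
        counts: Counter = Counter()
        for path in Path(corpus_dir).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            counts.update(re.findall(r"[a-z']+", text))
        return counts

    # Hypothetical usage: the ten most frequent words in a digitized corpus.
    # print(word_frequencies("corpus/").most_common(10))
    ```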

    Lastly, mass digitization connects the politics of cultural memory to
    transnational late capitalism, and to one of its expressions in particular:
    digital capitalism.55 Of course, cultural memory collections have a long
    history with capitalism. The nineteenth century saw very fuzzy boundaries
    between the cultural functions of libraries and the commercial interests that
    surrounded them, and, as historian of libraries Francis Miksa notes, Melvil
    Dewey, inventor of the Dewey Decimal System, was a great admirer of the
    corporate ideal, and was eager to apply it to the library system.56 Indeed,
    library development in the United States was greatly advanced by the
    philanthropy of capitalism, most notably by Andrew Carnegie.57 The question,
    then, is not so much whether mass digitization has brought cultural memory
    institutions, and their collections and users, into a capitalist system, but
    _what kind_ of capitalist system mass digitization has introduced cultural
    memory to: digital capitalism.

    Today, elements of the politics of cultural memory are being reassembled into
    novel knowledge configurations. As a consequence, their connections and
    conjugations are being transformed, as are their institutional embeddings.
    Indeed, mass digitization assemblages are a product of our time. They are new
    forms of knowledge institutions arising from a sociopolitical environment
    where vertical territorial hierarchies and horizontal networks entwine in a
    new political mesh: where solid things melt into air, and clouds materialize
    as material infrastructures, where boundaries between experts and laypeople
    disintegrate, and where machine cognition operates on a par with human
    cognition on an increasingly large scale. These assemblages enable new types
    of political actors—networked assemblages—which hold particular forms of power
    despite their informality vis-à-vis the formal political system; and in turn,
    through their practices, these actors partly build and shape those
    assemblages.

    Since concepts always respond to “a specific social and historical situation
    of which an intellectual occasion is part,”58 it is instructive to revisit the
    1980s, when the theoretical notion of assemblage emerged and slowly gained
    cross-disciplinary purchase.59 Around this time, the stable structures of
    modernist institutions began to give ground to postmodern forces: sovereign
    systems entered into supra-, trans-, and international structures,
    “globalization” became a buzzword, and privatizing initiatives drove wedges
    into the foundations of state structures. The centralized power exercised by
    disciplinary institutions was increasingly distributed along more and more
    lines, weakening the walls of circumscribed centralized authority.60 This
    disciplinary decomposition took place on all levels and across all fields of
    society, including institutional cultural memory containers such as libraries
    and museums. The forces of privatization, globalization, and digitization put
    pressures not only on the authority of these institutions but also on a host
    of related authoritative cultural memory elements, such as “librarians,”
    “cultural works,” and “taxonomies,” and cultural memory practices such as
    “curating,” “reading,” and “ownership.” Librarians were “disintermediated”
    by technology, cultural works were fragmented into flexible data, and
    curatorial principles were revised and restructured, just as reading was now
    beginning to
    take place in front of screens, meaning-making to be performed by machines,
    and ownership of works to be substituted by contractual renewals.

    Thinking about mass digitization as an “assemblage” allows us to abandon the
    image of a circumscribed entity in favor of approaching it as an aggregate of
    many highly varied components and their contingent connections: scanners,
    servers, reading devices, cables, algorithms; national, EU, and US
    policymakers; corporate CEOs and employees; cultural heritage professionals
    and laypeople; software developers, engineers, lobby organizations, and
    unsalaried labor; legal settlements, academic conferences, position papers,
    and so on. It gives us pause—every time we say “Google” or “Europeana,” we
    might reflect on what we actually mean. Does the researcher employed by a
    university library and working with Google Books also belong to Google Books?
    Do the underpaid scanners? Do the users of Google? Or, when we refer to
    Google Books, do we rather mean to include only the founders and CEOs of
    Google? Or
    has Google in fact become a metaphor that expresses certain characteristics of
    our time? The present volume suggests that all these components enter into the
    new phenomenon of mass digitization and produce a new field of potentiality,
    while at the same time they retain their original qualities and value systems,
    at least to some extent. No assemblage is whole and imperturbable, nor
    entirely reducible to its parts, but is simultaneously an accumulation of
    smaller assemblages and a member of larger ones.61 Thus Google Books, for
    example, is both an aggregation of smaller assemblages such as university
    libraries, scanners (both humans and machines), and books, _and_ a member of
    larger assemblages such as Google, Silicon Valley, neoliberal lobbies, and the
    Internet, to name but a few.

    While representations of assemblages such as the analyses performed in this
    volume are always doomed to misrepresent empirical reality on some level, this
    approach nevertheless provides a tool for grasping at least some of mass
    digitization’s internal heterogeneity, and the mechanisms and processes that
    enable each project’s continued assembled existence. The concept of the
    assemblage allows us to grasp mass digitization as composed of ephemeral
    projects that are uncertain by nature, and sometimes even made up of
    contradictory components.62 It also allows us to recognize that they are more
    than mere networks: while ephemeral and networked, something enables them to
    cohere. Bruno Latour writes, “Groups are not silent things, but rather the
    provisional product of a constant uproar made by the millions of contradictory
    voices about what is a group and who pertains to what.”63 It is the “taming
    and constraining of this multivocality,” in particular by communities of
    knowledge and everyday practices, that enables something like mass
    digitization to cohere as an assemblage.64 This book is, among other things,
    about those communities and practices, and the politics they produce and are
    produced by. In particular, it addresses the politics of mass digitization as
    an infrapolitical activity that retreats into, and emanates from, digital
    infrastructures and the network effects they produce.

    ## Politics in Mass Digitization: Infrastructure and Infrapolitics

    If the concept of “assemblage” allows us to see the relational set-up of mass
    digitization, it also allows us to inquire into its political infrastructures.
    In political terms, assemblage thinking is partly driven by dissatisfaction
    with state-centric dominant ontologies, including reified units such as state,
    society, or capitalism, and the unilinear focus on state-centric politics over
    other forms of politics.65 The assemblage perspective is therefore especially
    useful for understanding the politics of late-sovereign and late-capitalist
    data projects such as mass digitization. As we will see in part 2, the
    epistemic frame of sovereignty continues to offer an organizing frame for the
    constitution and regulation of mass digitization and the virtues associated
    with it (such as national representation and citizen engagement). However, at
    the same time, mass digitization projects are in direct correspondence with
    neoliberal values such as privatization, consumerism, globalization, and
    acceleration, and their technological features allow for a complete
    restructuring of the disciplinary spaces of libraries to form vaster and even
    global scales of integration and economic organization on a multinational
    stage.

    Mass digitization is a concrete example of what cultural memory projects look
    like in a “late-sovereign” age, where globalization tests the political and
    symbolic authority of sovereign cultural memory politics to its limits, while
    sovereignty as an epistemic organizing principle for the politics of cultural
    memory nonetheless persists.66 The politics of cultural memory, in particular
    those practiced by cultural heritage institutions, often still cling to fixed
    sovereign taxonomies and epistemic frameworks. This focus is partly determined
    by their institutional anchoring in the framework of national cultural
    policies. In mass digitization, however, the formal political apparatus of
    cultural heritage institutions is adjoined by a politics that plays out in the
    margins: in lobbies, software industries, universities, social media, etc.
    Those evaluating mass digitization assemblages in macropolitical terms, that
    is, those who are concerned with political categories, will glean little of
    the real politics of mass digitization, since such politics at the margins
    would escape this analytic matrix.67 Assemblage thinking, by contrast, allows
    us to acknowledge the political mechanisms of mass digitization beyond
    disciplinary regulatory models, in societies where “forces … not categories,
    clash.”68

    As Ian Hacking and many others have noted, the capacious usage of the notion
    of “politics” threatens to strip the word of meaning.69 But talk of a politics
    of mass digitization is no conceptual gimmick, since what is taking place in
    the construction and practice of mass digitization assemblages plainly is
    political. The question, then, is how best to describe the politics at work in
    mass digitization assemblages. The answer advanced by the present volume is to
    think of the politics of mass digitization as “infrapolitics.”

    The notion of infrapolitics has until now primarily and profoundly been
    advanced as a concept of hidden dissent or contestation (Scott, 1990).70 This
    volume suggests shifting the lens to focus on a different kind of
    infrapolitics, however, one that not only takes the shape of resistance but
    also of maintenance and conformity, since the story of mass digitization is
    both the story of contestation _and_ the politics of mundane and standard-
    seeking practices.71 The infrapolitics of mass digitization is, then, a kind
    of politics “premised not on a subject, but on the infra,” that is, the
    “underlying rules of the world,” organized around glocal infrastructures.72
    The infrapolitics of mass digitization is the building and living of
    infrastructures, both as spaces of contestation and processes of
    naturalization.

    Geoffrey Bowker and Susan Leigh Star have argued that the establishment of
    standards, categories, and infrastructures “should be recognized as the
    significant site of political and ethical work that they are.”73 This applies
    not least in the construction and development of knowledge infrastructures
    such as mass digitization assemblages, structures that are upheld by
    increasingly complex sets of protocols and standards. Attaching “politics” to
    “infrastructure” endows the term—and hence mass digitization under this
    rubric—with a distinct organizational form that connects various stages and
    levels of politics, as well as a distinct temporality that relates mass
    digitization to the forces and ideas of industrialization and globalization.

    The notion of infrastructure has a surprisingly brief etymology. It first
    entered the French language in 1875 in relation to the excavation of
    railways.74 Over the following decades, it primarily designated fixed
    installations designed to facilitate and foster mobility. It did not enter
    English vocabulary until 1927, and as late as 1951, the word was still
    described by English sources as “new” (OED).75 When NATO adopted the term in
    the 1950s, it gained a military tinge. Since then, “infrastructure” has
    proliferated into ever more contexts and disciplines, becoming a “plastic
    word”76 often used to signify any vital and widely shared human-constructed
    resource.77

    What makes infrastructures central for understanding the politics of mass
    digitization? Primarily, they are crucial to understanding how industrialism
    has affected the ways in which we organize and engage with knowledge, but the
    politics of infrastructures are also becoming increasingly significant in the
    late-sovereign, late-capitalist landscape.

    The infrastructures of mass digitization mediate, combine, connect, and
    converge upon different institutions, social networks, and devices, augmenting
    the actors that take part in them with new agential possibilities by expanding
    the radius of their action, strengthening and prolonging the reach of their
    performance, and setting them free for other activities through their
    accelerating effects, the time saved often being reinvested in other
    infrastructures such as social media activities. The infrastructures of mass
    digitization also increase the demand for globalization and mobility, since
    they expand the radius of using/reading/working.

    The infrastructures of mass digitization are thus media of polities and
    politics, at times visible and at others barely legible or felt, and home both
    to dissent as well as to standardizing measures. These include legal
    infrastructures such as copyright, privacy, and trade law; material
    infrastructures such as books, wires, scanners, screens, server parks, and
    shelving systems; disciplinary infrastructures such as metadata, knowledge
    organization, and standards; cultural infrastructures such as algorithms,
    searching, reading, and downloading; societal infrastructures such as the
    realms of the public and private, national and global. These infrastructures
    are, depending on the case, both the prerequisites for and the results of
    interactions
    between the spatial, temporal, and social classes that take part in the
    construction of mass digitization. The infrapolitics of mass digitization is
    thus geared toward interoperability and standardization, as well as toward
    variation.78

    Often when thinking of infrastructures, we conceive of them in terms of
    durability and stability. Yet, while some infrastructures, such as railways
    and Internet cables, are fairly solid and rigid constructions, others—such as
    semantic links, time-limited contracts, and research projects—are more
    contingent entities which operate not as “fully coherent, deliberately
    engineered, end-to-end processes,” but rather as amorphous, contingent
    assemblages, as “ecologies or complex adaptive systems” consisting of
    “numerous systems, each with unique origins and goals, which are made to
    interoperate by means of standards, socket layers, social practices, norms,
    and individual behaviors that smooth out the connections among them.”79 This
    contingency has direct implications for infrapolitics, which becomes equally
    flexible and adaptive. These characteristics endow mass digitization
    infrastructures with vulnerabilities but also with tremendous cultural power,
    allowing them to distribute agency, and to create and facilitate new forms of
    sociality and culture.

    Building mass digitization infrastructures is a costly endeavor, and hence
    mass digitization infrastructures are often backed by public-private
    partnerships. Indeed, infrastructures—and mass digitization infrastructures
    are no exception—are often so costly that a certain mixture of political or
    individual megalomania, state reach, and private capital is present in their
    construction.80 This mixed foundation means that a lot of the political
    decisions regarding mass digitization literally take place _beneath_ the radar
    of “the representative institutions of the political system of nation-states,”
    while also more or less aggressively filling out “gaps” in nation-state
    systems, and even creating transnational zones with their own policies.81
    Hence the notion of “infra”: the infrapolitics of mass digitization hover at a
    frequency that lies _below_ and beyond the formal sovereign state apparatus,
    organized, as they are, around glocal—and often private or privatized—material
    and social infrastructures.

    While distinct from the formalized sovereign political system, infrapolitical
    assemblages nevertheless often perform as late-sovereign actors by engaging in
    various forms of “sovereignty games.”82 Take Google, for instance, a private
    corporation that often defines itself as at odds with state practice, yet also
    often more or less informally meets with state leaders, engages in diplomatic
    discussions, and enters into agreements with state agencies and local
    political councils. The infrapolitical forces of Google in these sovereignty
    games can on the one hand exert political pressure on states—for instance in
    the name of civic freedom—but in Google’s embrace of politics, its
    infrapolitical forces can on the other hand also squeeze the life out of
    existing parliamentary ways, promoting instead various forms of apolitical or
    libertarian modes of life. The infrapolitical apparatus thus stands apart
    from more formalized politics, not only in terms of its political arena, but
    also in terms of the constraints placed upon it, for instance in the form of
    public accountability.83 What is described here can in general terms be
    called the infrapolitics of neoliberalism, whose scenery consists of lobby
    rooms, policy-making headquarters, financial zones, and public-private
    spheres, and which is populated by lobbyists, bureaucrats, lawyers, and CEOs.

    But the infrapolitical dynamics of mass digitization also operate in more
    mundane and less obvious settings, such as software design offices and
    standardization agencies, and are enacted by engineers, statisticians,
    designers, and even users. Infrastructures are—increasingly—essential parts of
    our everyday lives, not only in mass digitization contexts, but in all walks
    of life, from file formats and software programs to converging transportation
    systems, payment systems, and knowledge infrastructures. Yet, what is most
    significant about the majority of infrapolitical institutions is that they are
    so mundane; if we notice them at all, they appear to us as boring “lists of
    numbers and technical specifications.”84 And their maintenance and
    construction often occur “behind the scenes.”85 There is a politics to these
    naturalizing processes, since they influence and frame our moral, scientific,
    and aesthetic choices. This is to say that these kinds of infrapolitical
    activities often retire or withdraw into a kind of self-evidence in which the
    values, choices, and influences of infrastructures are taken for granted and
    accorded a universally accepted obviousness. It is therefore
    all the more “politically and ethically crucial”86 to recognize the
    infrapolitics of mass digitization, not only as contestation and privatized
    power games, but also as a mode of existence that values professionalized
    standardization measures and mundane routines, not least because these
    infrapolitical modes of existence often outlast their material circumstances
    (“software outlasts hardware” as John Durham Peters notes).87 In sum,
    infrastructures and the infrapolitics they produce yield subtle but
    significant world-making powers.

    ## Power in Mass Digitization

    If mass digitization is a product of a particular social configuration and
    political infrastructure, it is also, ultimately, a site and an instrument of
    power. In a sense, mass digitization is an event that stages a fundamental
    confrontation between state and corporate power, while pointing to the
    reconfigurations of both as they become increasingly embedded in digital
    infrastructures. For instance, such confrontation takes place at the
    negotiating table, where cultural heritage directors face the seductive and
    awe-inspiring riches of Silicon Valley, as well as its overwhelmingly
    intricate contractual layouts and its intimidating entourage of lawyers.
    Confrontation also takes place at the level of infrastructural ideology, in
    the meeting between twentieth-century standardization ideals and the playful
    and flexible network dynamics of the twenty-first century, as seen for
    instance in the conjunction of institutionally fixed taxonomies and
    algorithmic retrieval systems that include feedback mechanisms. And it takes
    place at the level of users, as they experience a gain in some powers and the
    loss of others in their identity transition from national patrons of cultural
    memory institutions to globalized users of mass digitization assemblages.

    These transformations are partly the results of society’s increasing reliance
    on network power and its effects. Political theorists Michael Hardt and
    Antonio Negri suggested almost two decades ago that among other things, global
    digital systems enabled a shift in power infrastructures from robust national
    economies and core industrial sectors to interactive networks and flexible
    accumulation, creating a “form of network power, which requires the wide
    collaboration of dominant nation-states, major corporations, supra-national
    economic and political institutions, various NGOs, media conglomerates and a
    series of other powers.”88 From this landscape, according to their argument,
    emerged a new system of power in which morphing networks took precedence over
    reliable blocs. Hardt and Negri’s diagnosis was one of several similar
    arguments across the political spectrum that were formed within such a short
    interval that “the network” arguably became the “defining concept of our
    epoch.”89 Within this new epoch, the old centralized blocs of power crumbled
    to make room for new forms of decentralized “bastard” power phenomena, such as
    the extensive corporate/state mass surveillance systems revealed by Edward
    Snowden and others, and new forms of human rights such as “the right to be
    forgotten,” a right for which a more appropriate name would be “the right to
    not be found by Google.”90 Network power and network effects are therefore
    central to understanding how mass digitization assemblages operate, and why
    some mass digitization assemblages are more powerful than others.

    The power dynamics we find in Google Books, for instance, are directly related
    to the ways in which digital technologies harness network effects: the power
    of Google Books grows exponentially as its network expands.91 Indeed, as Siva
    Vaidhyanathan noted in his critical work on Google’s role in society, what he
    referred to as the “Googlization of books” was ultimately deeply intertwined
with the “Googlization of everything.”92 Google’s networks were thus not
external to its successes and challenges, but integral to them, from portals
and ranking systems to anchoring (elite) institutions, and
    so on. The better Google Books becomes at harnessing network effects, the more
    fundamental its influence is in the digital sphere. And Google Books is very
    good at harnessing digital network power. Indeed, Google Books reached its
    “tipping point” almost before it launched: it had by then already attracted so
    many stakeholders that its mere existence decreased the power of any competing
    entities—and the fact that its heavy user traffic is embedded in Google only
    strengthened its network effects. Google Books’s tipping point tells us little
    about its quality in an abstract sense: “tipping points” are more often
    attained by proprietary measures, lobbying, expansion, and most typically by a
    mixture of all of the above, than by sheer quality.93 This explains not only
    the success of Google Books, but also its traction with even its critics:
although Google Books was initially criticized heavily for its poor imagery
and faulty metadata,94 for its possibly harmful impact on the public sphere,95
and later over privacy concerns,96 it had already created a power hub toward
which masses of people, though they could have navigated around it, were
nevertheless increasingly drawn.
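The threshold dynamics behind such a “tipping point” can be made concrete with a toy simulation in the spirit of the Schelling- and Granovetter-style models alluded to in note 91. This is not the book’s own model, merely a minimal illustrative sketch under assumed parameters: each hypothetical user joins a platform once the share of people already using it reaches that user’s personal threshold.

```python
import numpy as np

# Toy threshold model of a "tipping point" (cf. note 91). All parameters
# here are hypothetical and chosen only to make the cascade visible.
rng = np.random.default_rng(42)
thresholds = rng.normal(loc=0.3, scale=0.1, size=100_000).clip(0, 1)

def equilibrium(seed_share: float, steps: int = 200) -> float:
    """Iterate adoption until it settles: everyone whose threshold is met joins."""
    share = seed_share
    for _ in range(steps):
        share = float(np.mean(thresholds <= share))
    return share

for seed in (0.10, 0.20, 0.25, 0.30):
    print(f"initial share {seed:.2f} -> equilibrium {equilibrium(seed):.2f}")
# Below a critical initial share, adoption withers toward zero; just above it,
# adoption cascades toward saturation: a small head start yields outsized power.
```

The point is structural rather than qualitative: whether an assemblage “tips” depends less on its intrinsic quality than on whether its initial constellation of stakeholders places it above the critical mass.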

    Network power is endemic not only to concrete digital networks, but also to
    globalization at large as a process that simultaneously gives rise to feelings
    of freedom of choice and loss of choice.97 Mass digitization assemblages, and
their globalization of knowledge infrastructures, thus crystallize the more
    general tendencies of globalization as a process in which people participate
    by choice, but not necessarily voluntarily; one in which we are increasingly
    pushed into a game of social coordination, where common standards allow more
    effective coordination yet also entrap us in their pull for convergence.
    Standardization is therefore a key technique of network power: on the one
    hand, standardization is linked with globalization (and various neoliberal
    regimes) and the attendant widespread contraction of the state, while on the
    other hand, standardization implies a reconfiguration of everyday life.98
    Standards allow for both minute data analytics and overarching political
    systems that “govern at a distance.”99 Standardization understood in this way
    is thus a mode of capturing, conceptualizing, and configuring reality, rather
    than simply an economic instrument or lubricant. In a sense, standardization
    could even be said to be habit forming: through standardization, “inventions
    become commonplace, novelties become mundane, and the local becomes
    universal.”100

    To be sure, standardization has long been a crucial tool of world-making
    power, spanning both the early and late-capitalist eras.101 “Standard time,”
    as John Durham Peters notes, “is a sine qua non for international
    capitalism.”102 Without the standardized infrastructure of time there would be
    no global transportation networks, no global trade channels, and no global
    communication networks. Indeed, globalization is premised on standardization
    processes.

    What kind of standardization processes do we find, then, in mass digitization
    assemblages? Internet use alone involves direct engagement with hundreds of
global standards, from Bluetooth and Wi-Fi standards to protocol standards
such as HTTP and file standards such as Word and MP4.103 Moreover, mass
    digitization assemblages confront users with a series of additional standards,
    from cultural standards of tagging to technical standards of interoperability,
such as the Europeana Data Model (EDM) and Google’s schema.org, or legal
    standards such as copyright and privacy regulations. Yet, while these
    standards share affinities with the standardization processes of
    industrialization, in many respects they also deviate from them. Instead, we
    experience in mass digitization “a new form of standardization,”104 in which
    differentiation and flexibility gain increasing influence without, however,
    dispensing with standardization processes.

    Today’s standardization is increasingly coupled with demands for flexibility
    and interoperability. Flexibility, as Joyce Kolko has shown, is a term that
    gained traction in the 1970s, when it was employed to describe putative
    solutions to the problems of Fordism.105 It was seen as an antidote to Fordist
    “rigidity”—a serious offense in the neoliberal regime. Thus, while the digital
    networks underlying mass digitization are geared toward standardization and
    expansion, since “information technology rewards scale, but only to the extent
    that practices are standardized,”106 they are also becoming increasingly
    flexible, since too-rigid standards hinder network effects, that is, the
    growth of additional networks. This is one reason why mass digitization
    assemblages increasingly and intentionally break down the so-called “silo”
    thinking of cultural memory institutions, and implement standard flexibility
    and interoperability to increase their range.107 One area of such
    reconfiguration in mass digitization is the taxonomic field, where stable
    institutional taxonomic structures are converted to new flexible modes of
    knowledge organization like linked data.108 Linked data can connect cultural
    memory artifacts as well as metadata in new ways, and the move from a cultural
    memory web of interlinked documents to a cultural memory web of interlinked
    data can potentially “amplify the impact of the work of libraries and
    archives.”109 However, in order to work effectively, linked data demands
    standards and shared protocols.
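To make the idea of linked data less abstract, the following is a minimal sketch, not drawn from any particular institution’s data, using the rdflib Python library and the schema.org vocabulary mentioned above; the identifiers are hypothetical. It shows how a digitized work becomes machine-readable triples that other collections can link to.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical identifiers, for illustration only.
SCHEMA = Namespace("https://schema.org/")
book = URIRef("https://example.org/collection/book/42")
author = URIRef("https://example.org/agent/goethe")

g = Graph()
g.bind("schema", SCHEMA)

# Describe a digitized work as triples...
g.add((book, RDF.type, SCHEMA.Book))
g.add((book, SCHEMA.name, Literal("Sprüche in Prosa")))
g.add((book, SCHEMA.author, author))
g.add((author, RDF.type, SCHEMA.Person))
g.add((author, SCHEMA.name, Literal("Johann Wolfgang Goethe")))

# ...which any other institution can interlink with its own identifiers.
print(g.serialize(format="turtle"))
```

Precisely because every institution must agree on vocabularies such as schema.org or the EDM for such links to resolve, the flexibility of linked data presupposes, rather than replaces, shared standards.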

    Flexibility allows the user a freer range of actions, and thus potentially
    also the possibility of innovation. These affordances often translate into
    user freedom or empowerment. Yet flexibility does not necessarily equal
    fundamental user autonomy or control. On the contrary, flexibility is often
    achieved through decomposition, modularization, and black-boxing, allowing
    some components to remain stable while others are changed without implications
    for the rest of the system.110 These components are made “fluid” in the sense
that they are divested of clear boundaries and allowed multiple identities,
enabling both continuity and dissolution.

    While these new flexible standard-setting mechanisms are often localized in
    national and subnational settings, they are also globalized systems “oriented
    towards global agendas and systems.”111 Indeed, they are “glocal”
    configurations with digital networks at their cores. The increasing
    significance of these glocal configurations has not only cultural but also
    democratic consequences, since they often leave users powerless when it comes
    to influencing their cores.112 This more fundamental problematic also pertains
    to mass digitization, a phenomenon that operates in an environment that
    constructs and encourages less Habermasian public spheres than “relations of
    sociability,” from which “aggregate outcomes emerge not from an act of
    collective decision-making, but through the accumulation of decentralized,
    individual decisions that, taken together, nonetheless conduce to a
    circumstance that affects the entire group.”113 For example, despite the
    flexibility Google Books allows us in terms of search and correlation, we have
    very little sway over its construction, even though we arguably influence its
    dynamics. The limitations of our influence on the cores of mass digitization
    assemblages have implications not only for how we conceive of institutional
    power, but also for our own power within these matrixes.

    ## Notes

1. Borghi 2012, 420.
2. Latour 2008.
3. For more on this, see Hicks 2018; Abbate 2012; Ensmenger 2012. In the case of libraries, (white) women still make up the majority of the workforce, but men are disproportionately represented in senior positions relative to their overall share; see, for example, Schonfeld and Sweeney 2017.
4. Meckler 1982.
5. Otlet and Rayward 1990, chaps. 6 and 15.
6. For a historical and contemporary overview of some milestones in the use of microfilms in a library context, see Canepi et al. 2013, specifically “Historic Overview.” See also chap. 10 in Baker 2002.
7. Pfanner 2012.
8.
9. Medak et al. 2016.
10. Michael S. Hart, “The History and Philosophy of Project Gutenberg,” Project Gutenberg, August 1992.
11. Ibid.
12.
13. Ibid.
14. Bruno Delorme, “Digitization at the Bibliothèque Nationale de France, Including an Interview with Bruno Delorme,” _Serials_ 24 (3) (2011): 261–265.
15. Alain Giffard, “Dilemmas of Digitization in Oxford,” _Alain Giffard’s Weblog_, posted May 29, 2008.
16. Ibid.
17. Author’s interview with Alain Giffard, Paris, 2010.
18. Ibid.
19. Later, in 1997, François Mitterrand demanded that the digitized books be brought online, accessible as text from everywhere. This became known as Gallica, the digital library of the BnF, launched in 1997. Gallica contains documents, primarily out of copyright, from the Middle Ages to the 1930s, with priority given to French-speaking culture, and hosts about 4 million documents.
20. Imerito 2009.
21. Ambati et al. 2006; Chen 2005.
22. Ryan Singel, “Stop the Google Library, Net’s Librarian Says,” _Wired_, May 19, 2009.
23. Alfred P. Sloan Foundation, Annual Report, 2006.
24. Leetaru 2008.
25. Amazon was also a major player in the early years of mass digitization. In 2003 it gave access to a digital archive of more than 120,000 books, with the professed goal of adding Amazon’s multimillion-title catalog in the following years. As with all other mass digitization initiatives, Jeff Bezos faced a series of copyright and technological challenges. He met these with legal and rhetorical ingenuity and the technical skills of Udi Manber, who later became a lead engineer at Google; see, for example, Wolf 2003.
26. Leetaru 2008.
27. John Markoff, “The Coming Search Wars,” _New York Times_, February 1, 2004.
28. Google press release, “Google Checks out Library Books,” December 14, 2004.
29. Vise and Malseed 2005, chap. 21.
30. Auletta 2009, 96.
31. Johann Wolfgang Goethe, _Sprüche in Prosa_, “Werke” (Weimar edition), vol. 42, pt. 2, 141; cited in Cassirer 1944.
32. Philip Jones, “Writ to the Future,” _The Bookseller_, October 22, 2015.
33. “Jacques Chirac donne l’impulsion à la création d’une bibliothèque numérique,” _Le Monde_, March 16, 2005.
34. “An overwhelming American dominance in defining future generations’ conception of the world” (author’s own translation). Ibid.
35. Labi 2005; “The worst scenario we could achieve would be that we had two big digital libraries that don’t communicate. The idea is not to do the same thing, so maybe we could cooperate, I don’t know. Frankly, I’m not sure they would be interested in digitizing our patrimony. The idea is to bring something that is complementary, to bring diversity. But this doesn’t mean that Google is an enemy of diversity.”
36. Chrisafis 2008.
37. Béquet 2009. For more on the political potential of archives, see Foucault 2002; Derrida 1996; and Tygstrup 2014.
38. “Comme vous soulignez, nos bibliothèques et nos archives contiennent la mémoire de nos culture européenne et de société. La numérisation de leur collection—manuscrits, livres, images et sons—constitue un défi culturel et économique auquel il serait bon que l’Europe réponde de manière concertée.” (As you point out, our libraries and archives contain the memory of our European culture and society. Digitization of their collections—manuscripts, books, images, and sounds—constitutes a cultural and economic challenge that it would be good for Europe to meet in a concerted manner.) Manuel Barroso, open letter to Jacques Chirac, July 7, 2007, [http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1](http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1).
39. Jøsevold 2016.
40. Janssen 2011.
41. Robert Darnton, “Google’s Loss: The Public’s Gain,” _New York Review of Books_, April 28, 2011.
42. Palfrey 2015, 104.
43. See, for example, DPLA’s Public Library Partnerships Project.
44. Karaganis 2018.
45. Sassen 2008, 3.
46. Coyle 2006; Borghi and Karapapa, _Copyright and Mass Digitization_; Patra, Kumar, and Pani, _Progressive Trends in Electronic Resource Management in Libraries_.
47. Borghi 2012.
48. Beagle et al. 2003; Lavoie and Dempsey 2004; Courant 2006; Earnshaw and Vince 2007; Rieger 2008; Leetaru 2008; Deegan and Sutherland 2009; Conway 2010; Samuelson 2014.
49. The earliest textual reference to the mass digitization of books dates to the early 1990s. Richard de Gennaro, Librarian of Harvard College, in a panel on funding strategies, argued that an existing preservation program called “brittle books” should take precedence over other preservation strategies such as mass deacidification; see Sparks, _A Roundtable on Mass Deacidification_, 46. Later the word began to attain the sense we recognize today, referring to digitization on a large scale. In 2010 a new word popped up, “ultramass digitization,” a concept used to describe the efforts of Google vis-à-vis more modest large-scale digitization projects; see Greene 2010.
50. Kevin Kelly, “Scan This Book!,” _New York Times_, May 14, 2006; Hall 2008; Darnton 2009; Palfrey 2015.
51. As Alain Giffard notes, “I am not very confident with the programs of digitization full of technical and economical considerations, but curiously silent on the intellectual aspects” (Alain Giffard, “Dilemmas of Digitization in Oxford,” _Alain Giffard’s Weblog_, posted May 29, 2008).
52. Tiffen 2007, 344. See also Peatling 2004.
53. Sassen 2008.
54. See _The Authors Guild et al. v. Google, Inc._, Amended Settlement Agreement 05 CV 8136, United States District Court, Southern District of New York (2009), sec. 7(2)(d) (research corpus), sec. 1.91, 14.
55. Informational capitalism is a variant of late capitalism, based on cognitive, communicative, and cooperative labor. See Christian Fuchs, _Digital Labour and Karl Marx_ (New York: Routledge, 2014), 135–152.
56. Miksa 1983, 93.
57. Midbon 1980.
58. Said 1983, 237.
59. For example, the diverse body of scholarship that employed the notion of “assemblage” as a heuristic and/or ontological device for grasping and formulating these changing relations of power and control; in sociology: Haggerty and Ericson 2000; Rabinow 2003; Ong and Collier 2005; Callon et al. 2016; in geography: Anderson and McFarlane 2011, 124–127; in philosophy: Deleuze and Guattari 1987; DeLanda 2006; in cultural studies: Puar 2007; in political science: Sassen 2008. The theoretical scope of these works ranged from close readings of and ontological alignments with Deleuze and Guattari’s work (e.g., DeLanda) to more straightforward descriptive employments of the term as outlined in the OED (e.g., Sassen). What the various approaches held in common was the effort to steer readers away from thinking in terms of essences and stability, toward thinking about more complex and unstable structures. Indeed, the “assemblage” seems to have become a prescriptive as much as a diagnostic tool (Galloway 2013b; Weizman 2006).
60. Deleuze 1997; Foucault 2009; Hardt and Negri 2007.
61. DeLanda 2006; Paul Rabinow, “Collaborations, Concepts, Assemblages,” in Rabinow and Foucault 2011, 113–126, at 123.
62. Latour 2005, 28.
63. Ibid., 35.
64. Tim Stevens, _Cyber Security and the Politics of Time_ (Cambridge: Cambridge University Press, 2015), 33.
65. Abrahamsen and Williams 2011.
66. Walker 2003.
67. Deleuze and Guattari 1987, 116.
68. Parisi 2004, 37.
69. Hacking 1995, 210.
70. Scott 2009. In James C. Scott’s formulation, infrapolitics is a form of micropolitics; that is, the term refers to political acts that evade the formal political apparatus. This understanding was later taken up by Robin D. G. Kelley and Alberto Moreiras, and more recently by Stevphen Shukaitis and Angela Mitropoulos. See Kelley 1994; Shukaitis 2009; Mitropoulos 2012; Alberto Moreiras, _Infrapolitics: The Project and Its Politics. Allegory and Denarrativization. A Note on Posthegemony_ (eScholarship, University of California, 2015).
71. James C. Scott also concedes as much when he briefly links his notion of infrapolitics to infrastructure, as the “cultural and structural underpinning of the more visible political action on which our attention has generally been focused”; Scott 2009, 184.
72. Mitropoulos 2012, 115.
73. Bowker and Star 1999, 319.
74. Centre National de Ressources Textuelles et Lexicales.
75. For an English etymological examination, see also Batt 1984, 1–6.
76. This is on account of their malleability and the uncanny way they are used to fit every circumstance. For more on the potentials and problems of plastic words, see Pörksen 1995.
77. Edwards 2003, 186–187.
78. Mitropoulos 2012, 117.
79. Edwards et al. 2012.
80. Peters 2015, 31.
81. Beck 1996, 1–32, at 18; Easterling 2014.
82. Adler-Nissen and Gammeltoft-Hansen 2008.
83. Holzer and Sørensen 2003.
84. Star 1999, 377.
85. Ibid.
86. Bowker and Star 1999, 326.
87. Peters 2015, 35.
88. Hardt and Negri 2009, 205.
89. Chun 2017.
90. As argued by John Naughton at the _Negotiating Cultural Rights_ conference, National Museum, Copenhagen, Denmark, November 13–14, 2015.
91. The “tipping point” is a metaphor for sudden change first introduced by Morton Grodzins in 1960, later used by sociologists such as Thomas Schelling (to explain demographic changes in mixed-race neighborhoods), before becoming more generally familiar in urbanist studies (used by Saskia Sassen, for instance, in her analysis of global cities), and finally popularized by mass psychologists and trend analysts such as Malcolm Gladwell in his bestseller of that name; see Gladwell 2000.
92. “Those of us who take liberalism and Enlightenment values seriously often quote Sir Francis Bacon’s aphorism that ‘knowledge is power.’ But, as the historian Stephen Gaukroger argues, this is not a claim about knowledge: it is a claim about power. ‘Knowledge plays a hitherto unrecognized role in power,’ Gaukroger writes. ‘The model is not Plato but Machiavelli.’ Knowledge, in other words, is an instrument of the powerful. Access to knowledge gives access to that instrument of power, but merely having knowledge or using it does not automatically confer power. The powerful always have the ways and means to use knowledge toward their own ends. … How can we connect the most people with the best knowledge? Google, of course, offers answers to those questions. It’s up to us to decide whether Google’s answers are good enough.” See Vaidhyanathan 2011, 149–150.
93. Easley and Kleinberg 2010, 528.
94. Duguid 2007; Geoffrey Nunberg, “Google’s Book Search: A Disaster for Scholars,” _Chronicle of Higher Education_, August 31, 2009; _The Idea of Order: Transforming Research Collections for 21st Century Scholarship_ (Washington, DC: Council on Library and Information Resources, 2010), 106–115.
95. Robert Darnton, “Google’s Loss: The Public’s Gain,” _New York Review of Books_, April 28, 2011.
96. Jones and Janes 2010.
97. David S. Grewal, _Network Power: The Social Dynamics of Globalization_ (New Haven: Yale University Press, 2008).
98. Higgins and Larner, _Calculating the Social: Standards and the Reconfiguration of Governing_ (Basingstoke: Palgrave Macmillan, 2010).
99. Ponte, Gibbon, and Vestergaard 2011; Gibbon and Henriksen 2012.
100. Russell 2014. See also Wendy Chun on the correlation between habit and standardization: Chun 2017.
101. Busch 2011.
102. Peters 2015, 224.
103. DeNardis 2011.
104. Hall and Jameson 1990.
105. Kolko 1988.
106. Agre 2000.
107. For more on the importance of standard flexibility in digital networks, see Paulheim 2015.
108. Linked data captures the intellectual information users add to information resources when they describe, annotate, organize, select, and use these resources, as well as social information about their patterns of usage. On the one hand, linked data allows users and institutions to create taxonomic categories for works on a par with cultural memory experts—and often in conflict with such experts—for instance by linking classical nudes with porn; on the other hand, it allows users and institutions to harness social information about patterns of use. Linked data has ideological and economic underpinnings as much as technical ones.
109. _The National Digital Platform: for Libraries, Archives and Museums_, 2015.
110. Petter Nielsen and Ole Hanseth, “Fluid Standards: A Case Study of a Norwegian Standard for Mobile Content Services,” under review.
111. Sassen 2008, 3.
112. Grewal 2008.
113. Ibid., 9.

    # II
    Mapping Mass Digitization

    # 2
    The Trials, Tribulations, and Transformations of Google Books

    ## Introduction

    In a 2004 article in the cultural theory journal _Critical Inquiry_ , book
    historian Roger Chartier argued that the electronic world had created a triple
    rupture in the world of text: by providing new techniques for inscribing and
    disseminating the written word, by inspiring new relationships with texts, and
    by imposing new forms of organization onto them. Indeed, Chartier foresaw that
    “the originality and the importance of the digital revolution must therefore
    not be underestimated insofar as it forces the contemporary reader to
    abandon—consciously or not—the various legacies that formed it.”1 Chartier’s
    premonition was inspired by the ripples that digitization was already
    spreading across the sea of texts. People were increasingly writing and
    distributing electronically, interacting with texts in new ways, and operating
and implementing new textual economies.2 These textual transformations gave
rise to a range of emotional reactions in readers and publishers, from
catastrophizing attitudes and pessimism about “the end of the book” to the
    triumphalist mythologizing of liquid virtual books that were shedding their
    analog ties like butterflies shedding their cocoons.

    The most widely publicized mass digitization project to date, Google Books,
    precipitated the entire emotional spectrum that could arise from these textual
    transversals: from fears that control over culture was slipping from authors
    and publishers into the hands of large tech companies, to hopeful ideas about
the democratizing potential of bringing to everyone knowledge that was once
locked up in dusty tomes at places like Harvard and Stanford, and to a utopian
    mythologizing of the transcendent potential of mass digitization. Moreover,
    Google Books also affected legal and professional transformations of the
    infrastructural set-up of the book, creating new precedents and a new
    professional ethos. The cultural, legal, and political significance of Google
    Books, whether positive or negative, not only emphasizes its fundamental role
    in shaping current knowledge landscapes, it also allows us to see Google Books
    as a prism that reflects more general political tendencies toward
    globalization, privatization, and digitization, such as modulations in
    institutional infrastructures, legal landscapes, and aesthetic and political
    conventions. But how did the unlikely marriage between a tech company and
    cultural memory institutions even come about? Who drove it forward, and around
    and within which infrastructures? And what kind of cultural memory politics
    did it produce? The following sections of this chapter will address some of
    these problematics.

    ## The New Librarians

    It was in the midst of a turbulent restructuring of the world of text, in
    October 2004 at the Frankfurt International Book Fair, that Larry Page and
    Sergey Brin of Google announced the launch of Google Print, a cooperation
    between Google and leading Anglophone publishers. Google Print, which later
    became Google Partner Program, would significantly alter the landscape and
    experience of cultural memory, as well as its regulatory infrastructures. A
    decade later, the traditional practices of reading, and the guardianship of
    text and cultural works, had acquired entirely new meanings. In October 2004,
    however, the publishing world was still unaware of Google’s pending influence
    on the institutional world of cultural memory. Indeed, at that time, Amazon’s
    mounting dominance in the field of books, which began a decade earlier in
1995, appeared to carry much more significant implications. The majority of
    publishers therefore greeted Google’s plans in Frankfurt as a welcome
    alternative to Jeff Bezos’s growing online behemoth.

    Larry Page and Sergey Brin withheld a few details from their announcement at
    Frankfurt, however; Google’s digitization plans would involve not only
    cooperation with publishers, but also with libraries. As such, what would
    later become Google Books would in fact consist of two separate, yet
    interrelated, programs: Google Print (which would later become Google Partner
    Program) and Google Library Project. In all secrecy, Google had for many
    months prior to the Frankfurt Book Fair worked with select libraries in the US
    and the UK to digitize their holdings. And in December 2004 the true scope of
Google’s mass digitization plans was revealed: what Page and Brin were
    building was the foundation of a groundbreaking cultural memory archive,
    inspired by the myth of Alexandria.3 The invocation of Alexandria situated the
    nascent Google Books project in a cultural schema that historicized the
    project as a utopian, even moral and idealist, project that could finally,
    thanks to technology, exceed existing human constraints—legal, political, and
    physical.4

    Google’s utopian discourse was not foreign to mass digitization enthusiasts.
    Indeed, it was the _langue du jour_ underpinning most large-scale digitization
    projects, a discourse nurtured and influenced by the seemingly borderless
    infrastructure of the web itself (which was often referred to in
universalizing terms).5 Yet, while the universalizing discourse of mass
    digitization was familiar, it had until then seemed like aspirational talk at
    best, and strategic policy talk in the face of limited public funding, complex
    copyright landscapes, and lumbering infrastructures, at worst. Google,
    however, faced the task with a fresh attitude of determination and a will to
    disrupt, as well as a very different form of leverage in terms of
    infrastructural set-up. Google was already the world’s preferred search
    engine, having mastered the tactical skill of navigating its users through
    increasingly complex information landscapes on the web, and harvesting their
    metadata in the process to continuously improve Google’s feedback systems.
Essentially, ever-larger amounts of information (understood here as “users”)
    were passing through Google’s crawling engines, and as the masses of
    information in Google’s server parks grew, so did their computational power.
    Google Books, then, as opposed to most existing digitization projects, which
    were conceived mainly in terms of “access,” was embedded in the larger system
    of Google that understood the power and value of “feedback,” collecting
    information and entering it into feedback loops between users, machines, and
    engineers. Google also understood that information power didn’t necessarily
    lie in owning all the information they gave access to, but rather in
    controlling the informational processes themselves.

Yet, despite Google’s advances in the field of information seeking, the idea of
    Google Books appeared as an odd marriage. Why was a private company in Silicon
    Valley, working in the futuristic and accelerating world of software and fluid
    information streams, intent on partnering up with the slow-paced world of
    cultural memory institutions, traditionally more concerned with the past?
    Despite the apparent clash of temporal and cultural regimes, however, Google
    was in fact returning home to its point of inception. Google was born of a
    research project titled the Stanford Integrated Digital Library Project, which
    was part of the NSF’s Digital Libraries Initiative (1994–1999). Larry Page and
    Sergey Brin were students then, working on the Stanford component of this
    project, intending to develop the base technologies required to overcome the
    most critical barriers to effective digital libraries, of which there were
    many.6 Page’s and Brin’s specific project, titled Google, was presented as a
    technical solution to the increasing amount of information on the World Wide
Web.7 At Stanford, Larry Page also tried to facilitate a serious discussion of
mass digitization, and of whether or not it was feasible. But his
    ideas received little support, and he was forced to leave the idea on the
    drawing board in favor of developing search technologies.8

    In September 1998, Sergey Brin and Larry Page left the library project to
    found Google as a company and became immersed in search engine technologies.
    However, a few years later, Page resuscitated the idea of mass digitization as
    a part of their larger self-professed goal to change the world of information
    by increasing access, scaling the amount of information available, and
    improving computational power. They convinced Eric Schmidt, the new CEO of
    Google, that the mass digitization of cultural works made sense not only from
an information perspective, but also from a business perspective, since the
    vast amounts of information Google could extract from books would improve
    Google’s ability to deliver information that was hitherto lacking, and this
    new content would eventually also result in an increase in traffic and clicks
    on ads.9

    ## The Scaling Techniques of Mass Digitization

    A series of experiments followed on how to best approach the daunting task.
    The emergence and decay of these experiments highlight the ways in which mass
    digitization assemblages consist not only of thoughts, ideals, and materials,
    but also a series of cultural techniques that entwine temporality,
    materiality, and even corporeality. This perspective on mass digitization
    emphasizes the mixed nature of mass digitization assemblages: what at first
    glance appears as a relatively straightforward story about new technical
    inventions, at a closer look emerges as complex entanglements of human and
    nonhuman actors, with implications not only for how we approach it as a legal-
technical entity but also as an infrapolitical phenomenon. As the following
    section shows, attending to the complex cultural techniques of mass
    digitization (its “how”) enables us to see that its “minor” techniques are not
    excluded from or irrelevant to, but rather are endemic to, larger questions of
    the infrapolitics of digital capitalism. Thus, Google’s simple technique of
    scaling scanning to make the digitization processes go faster becomes
    entangled in the creation of new habits and techniques of acceleration and
    rationalization that tie in with the politics of digital culture and digital
    devices. The industrial scaling of mass digitization becomes a crucial part of
the industrial apparatus of big data, which provides new modes of inscription
    for both individuals and digital industries that in turn can be capitalized on
    via data-mining, just as it raises questions of digital labor and copyright.

    Yet, what kinds of scaling techniques—and what kinds of investments—Google
would have to leverage to achieve its initial goals was still unclear in those
early years. Larry Page and co-worker Marissa Mayer therefore
    began to experiment with the best ways to proceed. First, they created a
    makeshift scanning device, whereby Marissa Mayer would turn the page and Larry
    Page would click the shutter of the camera, guided by the pace of a
    metronome.10 These initial mass digitization experiments signaled the
    industrial nature of the mass digitization process, providing a metronomic
rhythm governed by the implacable regularity of the machine, layered on top of
the temporal horizon of eternity (or at least of material decay) that governs
cultural memory institutions.11 After some experimentation with scale and time, Google
    bought a consignment of books from a second-hand book store in Arizona. They
    scanned them and subsequently experimented with how to best index these works
    not only by using information from the book, but also by pulling data about
    the books from various other sources on the web. These extractions allowed
    them to calculate a work’s relevance and importance, for instance by looking
    at the number of times it had been referred to.12
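The relevance calculation described here resembles, in its simplest form, citation counting. The following is a minimal hypothetical sketch, not Google’s actual pipeline; the function name and reference data are invented for illustration. It ranks scanned works by the number of distinct sources on the web that refer to them.

```python
from collections import Counter

# Hypothetical reference data harvested from the web: each pair states that
# `source` refers to `work` (e.g., a citation or a link to the title).
references = [
    ("review-site/a", "Book A"),
    ("syllabus/x", "Book A"),
    ("blog/y", "Book B"),
    ("syllabus/x", "Book B"),
    ("review-site/a", "Book B"),
    ("paper/z", "Book C"),
]

def rank_by_inbound_references(refs: list[tuple[str, str]]) -> list[tuple[str, int]]:
    """Score each work by how many distinct sources refer to it."""
    counts = Counter(work for _source, work in set(refs))
    return counts.most_common()

for work, score in rank_by_inbound_references(references):
    print(f"{work}: {score} inbound reference(s)")
# Book B: 3, Book A: 2, Book C: 1 -- the web's reference structure becomes
# a crude proxy for a work's relevance and importance.
```

Google’s approach presumably combined many more signals; the sketch only illustrates the basic logic of importing the web’s link-counting rationale into the archive.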

    In 2004 Google was also granted patent rights to a scanner that would be able
    to scan the pages of works without destroying them, and which would make them
    searchable thanks to sophisticated 3D scanning and complex algorithms.13
    Google’s new scanner used infrared camera technology that detected the three-
    dimensional shape and angle of book pages when the book was placed in the
scanner. This information was then passed to the optical character recognition
(OCR) stage, where it was used to adjust the image so that the OCR software
could read curved pages more accurately.
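The underlying correction can be illustrated with a much-simplified sketch; this is not Google’s patented method, merely a toy version of the idea, assuming that a per-column depth profile of the curved page has already been estimated by the 3D detection stage. Image columns are then resampled so that distances in the output match arc length along the page surface.

```python
import numpy as np

def flatten_page(page: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Toy de-warp: resample image columns by arc length along the page surface.

    `page` is a grayscale image (rows x cols); `depth` gives an assumed
    surface height per column, as a 3D scan stage might estimate it.
    """
    h, w = page.shape
    dz = np.gradient(depth)                              # surface slope per column
    arc = np.cumsum(np.sqrt(1.0 + dz ** 2))              # cumulative arc length
    arc = (arc - arc[0]) / (arc[-1] - arc[0]) * (w - 1)  # normalize to image width
    out = np.empty_like(page, dtype=float)
    xs = np.arange(w)
    for y in range(h):
        # Sample each row at positions whose arc length matches the output grid,
        # stretching the regions that the curvature had compressed.
        out[y] = np.interp(xs, arc, page[y].astype(float))
    return out

# Example: a synthetic striped page bulging like a half-cylinder near the spine.
cols = np.linspace(0, np.pi, 400)
depth_profile = 40 * np.sin(cols)                        # hypothetical detected depth
image = np.tile(np.arange(400) % 20 < 10, (100, 1)).astype(float) * 255
flat = flatten_page(image, depth_profile)
```

In the patented system the curvature came from infrared 3D detection rather than a synthetic profile, but the principle of using detected geometry to straighten the text before OCR is the same.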

    ![11404_002_fig_001.jpg](images/11404_002_fig_001.jpg)

    Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in
    scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.

    These new scanning technologies allowed Google to unsettle the fixed content
    of cultural works on an industrial scale and enter them into new distribution
    systems. The untethering and circulation of text already existed, of course,
    but now text would mutate on an industrial scale, bringing into coexistence a
    multiplicity of archiving modes and textual accumulation. Indeed, Google’s
systematic and accelerated scaling-up of already existing technologies
constituted a new paradigm in mass digitization, to a much larger
extent than, for instance, the invention of new technologies.14 Thus, while
    Google’s new book scanners did expand the possibilities of capturing
    information, Google couldn’t solve the problem of automating the process of
    turning the pages of the books. For that they had to hire human scanners who
    were asked to manually turn pages. The work of these human scanners was
    largely invisible to the public, who could only see the books magically
    appearing online as the digital archive accumulated. The scanners nevertheless
    left ghostly traces, in the form of scanning errors such as pink fingers and
missing and crumpled pages—visual traces that underlined the historically
    crucial role of human labor in industrializing and automating processes.15
    Indeed, the question of how to solve human errors in the book scanning process
    led to a series of inventive systems, such as the patent granted to Google in
    2009 (filed in 2003), which describes a system that would minimize scanning
    errors with the help of music.16 Later, Google open sourced plans for a book
    scanner named “Linear Book Scanner” that would turn the pages automatically
    with the help of a vacuum cleaner and a cleverly designed sheet metal
    structure, after passing them over two image sensors taken from a desktop
    scanner.17

    Eventually, after much experimentation, Google consolidated its mass
    digitization efforts in collaboration with select libraries.18 While some
    institutions immediately and enthusiastically welcomed Google’s aspirations as
    aligning with their own mission to improve access to information, others were
    more hesitant, an institutional vacillation that hinted ominously at
    controversy to come. Some libraries, such as the University of Michigan,
    greeted the initiative with enthusiasm, whereas others, such as the Library of
    Congress, saw a red flag pop up: copyright, one of the most fundamental
    elements in the rights of texts and authors.19 The Library of Congress
    questioned whether it was legal to scan and index books without a rights
    holder’s permission. Google, in response, argued that it was within the fair
    use provisions of the law, but the argument was speculative in so far as there
was no precedent for what Google was going to do. Some universities agreed
with Google’s views on copyright, shared its desire to disrupt existing
copyright practices, and allowed Google to make digital copies of their
holdings (a precondition for creating an index of them). Hence, some
    libraries gave full access, others allowed only the scanning of books in the
    public domain (published before 1923), and still others denied access
    altogether. While the reticence of libraries was scattered, it was also a
    precursor of a much more zealous resistance to Google Books, an opposition
    that was mounted by powerful voices in the cultural world, namely publishers
    and authors, and other commercial infrastructures of cultural memory.

    ![11404_002_fig_002.jpg](images/11404_002_fig_002.jpg)

Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R.
    Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S.
    Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

    While Google’s announcement of its cooperation with publishers at the
    Frankfurt Book Fair was received without drama—even welcomed by many—the
    announcement of its cooperation with libraries a few months later caused a
    commercial uproar. The most publicized point of contestation was the fact that
    Google was now not only displaying books in cooperation with publishers, but
    also building a library of its own, without remunerating publishers and
    authors. Why would readers buy books if they could read them free online?
    Moreover, the Authors Guild worried that Google’s digital library would
    increase the risk of piracy. At a deeper level, the case also emphasized
    authors’ and publishers’ desire to retain control over their copyrighted works
    in the face of the threat that the Library Project (unlike the Partner
    Program) was posing: Google was digitizing without the copyright holder’s
    permission. Thus, to them, the Library Project fundamentally threatened their
    copyrights and, on a more fundamental level, existing copyright systems. Both
    factors, they argued, would make book buying a superfluous activity.20 The
    harsher criticisms framed Google Books as a book thief rather than as a global
philanthropist.21 Google, for its part, launched a defense of its actions
    based on the notion of “fair use,” which as the following section shows,
    eventually became the fundamental legal question.

    ## Infrastructural Transformations

    Google Books became the symbol of the painful confusion and territorial
    battles that marred the publishing world as it underwent a transformation from
    analog to digital. The mounting and diverse opposition to Google Books was
    thus not an isolated affair, but rather a persistent symptom—increasingly loud
stress signals emanating from the infrastructural joints of the analog realm of
    books as it buckled under the strain of digital logic. As media theorist John
    Durham Peters (drawing on media theorist Harold Innis) notes, the history of
    media is also an “occupational history” that tells the tales of craftspeople
    mastering medium-specific skills tactically battling for monopolies of
    knowledge and guarding their access.22 And in the occupational history of
    Google Books, the craftspeople of the printed book were being challenged by a
    new breed of artificers who were excelling not so much in how to print, which
    book sellers to negotiate with, or how to sell books to people, but rather in
    the medium-specific tactical skills of the digital, such as building software
    and devising search technologies, skills they were leveraging to their own
    gain to create new “monopolies of knowledge” in the process.

    As previously mentioned, the concerns expressed by publishers and authors in
regard to remuneration were accompanied by a more abstract sense of a loss of
control over their works, and of how this loss of control would affect their
copyrights. These concerns did not arise out of thin air, but were part of a
    more general discourse on digital information as something that _cannot_ be
    secured and controlled in the same way as analog commodities can. Indeed, it
    seemed that authors and publishers were part of a world entirely different
    from Google Books: while publishers and authors were still living in and
defending a “regime of scarcity,”23 Google Books, by contrast, was busy
    building a “realm of plenitude and infinite replenishment.” As such, the clash
    between the traditional infrastructures of the analog book and the new
    infrastructures of Google Books was symptomatic of the underlying radical
    reorganization of information from a state of trade and exchange to a state of
    constant transmission and contagion.24

Foregrounding the fair use defense,25 Google argued that the public benefits
    of scanning outweighed the negative consequences for authors.26 Influential
    legal scholars such as Lawrence Lessig, among others, supported this argument,
    suggesting that inclusion in a search engine in a way that does not erode the
    value of the book was of such societal importance that it should be deemed
    legal.27 The copyright owners, however, insisted that the burden should be on
    Google to request permission to scan each work.28

    Google and copyright owners reached a proposed settlement on October 28, 2008.
    The proposal would allow Google not only to continue its scanning activities
    and to show free snippets online, but would also give Google exclusive rights
    to sell digital copies of out-of-print books. In return, Google would provide
    all libraries in the United States with one free subscription to the digital
    database, but Google could also sell additional subscriptions. Moreover,
    Google was to pay $125 million, part of which would go to the construction of
    a Book Rights Registry that identified rights holders and handled payments to
lawyers.29 Yet before the settlement could even be formally reviewed, mounting
opposition to it was launched in public.

    The proposed settlement was received with harsh words, for instance by
    Internet archivist Brewster Kahle and legal scholar Lawrence Lessig, who
    opposed the settlement with words ranging from “insanity” to “cultural
    asphyxiation” and “information monopoly.”30 Privacy proponents also spoke out
    against Google Books, bringing attention to the implications of Google being
    able to follow and track reading habits, among other things.31 The
    organization Privacy Authors, including writers such as Jonathan Lethem, Bruce
    Schneier, and Michael Chabon, and publishers, argued that although Google
    Books was an “extremely exciting” project, it failed in its current form to
    protect the privacy of readers, thus creating a “real risk of disclosure” of
    sensitive information to “prying governmental entities and private litigants,”
    potentially giving rise to a “chilling effect,” hurting not only readers but
    also authors and publishers, not least those writing about sensitive or
    controversial topics.32 The Association of Libraries also raised a set of
    concerns, such as the cost of library subscriptions and privacy.33 And most
    predictably, companies such as Amazon and Microsoft, who also had a stake in
    mass digitization, opposed the settlement; Microsoft even funded some nuanced
    research efforts into its implications.34 Finally, and most damningly, the
    Department of Justice decided to get involved with an antitrust argument.

    By this point, opposition to the Google Books project, as it was outlined in
    the proposed settlement, wasn’t only motivated by commercial concerns; it was
    now also motivated by a public that framed Google’s mass digitization project
    as a parasitical threat to the public sphere itself. The framing of Google as
    a potential menace was a jarring image that stood in stark contrast to Larry
    Page’s and Sergey Brin’s philanthropic attitudes and to Google’s famous “Don’t
    be evil” slogan. The public reaction thus signaled a change in Google’s
    reputation as the company metamorphosed in the public eye from a small
    underdog company to a multinational corporation with a near-monopoly in the
search industry. Google’s initially inspiring approach to information as a
realm of plenitude now appeared in public view more like the actions of
megalomaniac land-grabbers.

    Google, however, while maintaining its universalizing mission regarding
    information, also countered the accusations of monopoly building, arguing that
    potential competitors could just step up, since nothing in the agreements
    entered into by the libraries and Google “precludes any other company or
organization from pursuing their own similar effort.”35 Nevertheless, Judge
Denny Chin rejected the settlement in March 2011 with the following statement:
    “The question presented is whether the ASA is fair, adequate, and reasonable.
I conclude that it is not.”36 Google left the proposed settlement behind and
returned to litigating its initial case, supported by new amicus briefs,
focusing on the argument that book scanning was fair use. They argued that they were not
    demanding exclusivity on the information they scanned, that they didn’t
    prohibit other actors from digitizing the works they were digitizing, and that
    their main goal was to enrich the public sphere with more information, not to
build an information monopoly. In November 2013 Judge Denny Chin issued a new
opinion confirming that Google Books was indeed fair use.37 Chin’s opinion was
    later consolidated in a major victory for Google in 2015 when Judge Pierre
    Leval in the Second Circuit Court legalized Google Books with the words
    “Google’s unauthorized digitizing of copyright-protected works, creation of a
    search functionality, and display of snippets from those works are non-
infringing fair uses.”38 Leval’s decision marked a new direction, not only for
    Google Books, but also for mass digitization in general, as it signaled a
    shift in cultural expectations about what it means to experience and
    disseminate cultural artifacts.

    Once again, the story of Google Books took a new turn. What was first
    presented as a gift to cultural memory institutions and the public, and later
    as theft from and threat to these same entities, on closer inspection revealed
    itself as a much more complex circulatory system of expectations, promises,
    risks, and blame. Google Books thus instigated a dynamic and forceful
    connection between Google and cultural memory institutions, where the roles of
    giver and receiver, and the first giver and second giver/returner, were
    difficult to decode. Indeed, the binding nature of the relationship between
    Google Books and cultural memory institutions proved to be much more complex
    than the simple physical exchange of books and digital files. As the next
    section outlines, this complex system of cultural production was held together
    by contractual arrangement—central joints, as it were, connecting data and
    works, public and private, local and global, in increasingly complex ways. For
    Google Books, these contractual relations appear as the connective tissues
    that make these assemblages possible, and which are therefore fundamental to
    their affective dimensions.

    ## The Infrapolitics of Contract

    In common parlance a contract is a legal tool that formalizes a “mutual
    agreement between two or more parties that something shall be done or forborne
    by one or both,” often enforceable by law.39 Contractual systems emerged with
the medieval merchant regime and later, with classical liberalism, evolved
into an ideological revolt against paternalist systems: contract appeared as
nothing less than freedom, a legal construct that could destroy the sentimental bonds of
    personal dependence.40 As the classic liberal social scientist William Graham
    Sumner argued, “[c]ontract … is rational … realistic, cold, and matter-of-
    fact.” The rational nature of contracts also affected their temporality, since
    a contract endures only “so long as the reason for it endures,” and their
    spatiality, relegating any form of sentiment from the public sphere to “the
    sphere of private and personal relations.”41

    Sentiments prevailed, however, as the contracts tying together Google and
    cultural memory institutions emerged. Indeed, public and professional
    evaluations of the agreements often took an affective, even sexualized, form.
    The economist Paul Courant situated libraries “in bed with Google”42; library
consultants and media experts Jeff Ubois and Peter B. Kaufman recounted _how_
    they got in bed with Google—“[w]e were approached singly, charmed in
confidence, the stranger was beguiling, and we embraced”43; communication
    scholar Evelyn Bottando announced that “libraries not only got in bed with
Google. They got married”44; and librarian Jessamyn West finally pondered the
ruins of the relationship: “[s]till not sure, after all that, how we got this all
    so wrong. Didn’t we both want the same thing? Maybe it really wasn’t us, it
    was them. Most days it’s hard to remember what we saw in Google. Why did we
    think we’d make good partners?”45

    The evaluative discourse around Google Books dispels the idea of contracts as
    dispassionate transactions for services and labor, showing rather that
    contracts are infrapolitical apparatuses that give rise to emotions and
    affect; and that, moreover, they are systems of doctrines, relations, and
    social artifacts that organize around specific ideologies, temporalities,
    materialities, and techniques.46 First and foremost, contracts give rise to
    new kinds of infrastructures in the field of cultural memory: they mediate,
    connect, and converge cultural memory institutions globally, giving rise to
    new institutional networks, in some cases increasing globalization and
    mobility for both users and objects, and in other cases restricting the same.
    The Google Books contracts display both technical and symbolic aspects: as
    technical artifacts they establish intricate frameworks of procedures,
    commitments, rights, and incentives for governing the transactions of cultural
    memory artifacts and their digitized copies. As symbolic artifacts they evoke
    normative principles, expressing different measures of good will toward
    libraries, but also—as all contracts do—introduce the possibility of distrust,
conflict, and betrayal.47

    Despite their centrality to mass digitization assemblages, and although some
    of them have been made available to the public,48 the content of these
particular contracts still suffers from the epistemic gap incurred, in
practical and symbolic form, by Google’s agreements and Non-Disclosure
Agreements (NDAs), the latter of which most libraries are required to sign
when entering a partnership. Like all contracts, the individual contracts signed by the
    partnership libraries vary in nature and have different implications. While
    many of Google’s agreements may be publically available, they have often only
    been made public through requests and transparency mechanisms such as the
    Freedom of Information Act. As the Open Rights Alliance notes in their
    publication of the agreement entered between the British Library and Google,
    “We asked the British Library for a copy of the agreement with Google, which
    was not uploaded to their transparency website with other similar contracts,
    as it didn’t involve monetary exchange. This may be a loophole transparency
    activists want to look at. After some toing and froing with the Freedom of
    Information Act we got a copy.”49

    While the culture of contractual secrecy is native to the business world, with
    its safeguarding of business processes, and is easily navigated by business
    partners, it is often opposed to the ethos of state-subsidized cultural
institutions, which “draw their financial and moral support from a public that
    expects transparency in their activities, ranging from their materials
    acquisitions to their business deals.”50 For these reasons, library
    organizations have recommended that nondisclosure agreements should be avoided
    if possible, and minimized if they are necessary.51 Google, in response, noted
on its website that “[t]hough not all of the library contracts have been made
    public, we can say that all of them are non-exclusive, meaning that all of our
    library partners are free to continue their own scanning projects or work with
    others while they work with Google to digitize their books.”52

    Regardless of their contractual content and later publication, the contracts
    are a vital instrument in Google’s broader management of visibility. As Mikkel
    Flyverbom, Clare Birchall, and others have argued, this practice of visibility
    management—which they define as “the many ways in which organizations seek to
    curate and control their presence, relations, and comprehension vis-à-vis
    their surroundings” through practices of transparency, secrecy, opacity,
    surveillance, and disclosure—is in the digital age a complex issue closely
    tied to the question of governance and power. While each publication act may
    serve to create an uncomplicated picture of transparency, it nevertheless
    happens in a paradoxical global regulatory environment that on the one hand
    encourages “sunshine” laws that demand that governments, corporations, and
    civil-sector organizations provide access to information, yet on the other
    hand also harbors regulatory agencies that seek mechanisms and rules by which
    to keep information hidden. Thus, as Flyverbom et al. conclude, the “everyday
    practices of organizing invariably implicate visibility management,” whose
    valences are “attached to transparency and opacity” that are not simple and
    straightforward, but rather remain “dependent upon the actor, the context, and
    the purpose of organizations and individuals.”53

    Steven Levy recounts how Google began its scanning operations in “near-total
    stealth,” a “cloak-and-dagger” approach that stood in contrast to Google’s
    public promotion of transparency as a new mode of existence. As Levy argues,
    “[t]he secrecy was yet another expression of the paradox of a company that
    sometimes embraced transparency and other times seemed to model itself on the
NSA.”54 Yet, while secrecy practices may have suited some of Google’s
operations, they sit much more uneasily with its book scanning programs: “If
    Google had a more efficient way to scan books, sharing the improved techniques
    could benefit the company in the long run—inevitably, much of the output would
    find its way onto the web, bolstering Google’s indexes. But in this case,
    paranoia and a focus on short-term gain kept the machines under wraps.”55 The
    nondisclosure agreements show that while boundaries may be blurred between
    Google Books and libraries, we may still identify different regulatory models
    and modes of existence within their networks, including the explicit _library
    ethos_ (in the Weberian sense of the term) of public access, not only to the
    front end but also to some areas of the back end, and the business world’s
secrecy practices.56

    Entering into a mass digitization public-private partnership (PPP) with a
    corporation such as Google is thus not only a logical and pragmatic next step
for cultural memory institutions; it is also a political step. As already
    noted, Google Books, through its embedding in Google, injects cultural memory
    objects into new economic and cultural infrastructures. These infrastructures
    are governed less by the hierarchical world of curators, historians, and
    politicians, and more by feedback networks of tech companies, users, and
    algorithms. Moreover, they forge ever closer connections to data-driven market
    logics, where computational rather than representational power counts. Mass
    digitization PPPs such as Google Books are thus also symptoms of a much more
    pervasive infrapolitical situation, in which cultural memory institutions are
    increasingly forced to alter their identities from public caretakers of
    cultural heritage to economic actors in the EU internal market, controlled by
    the framework of competition law, time-limited contracts, and rules on state
    aid.57 Moreover, mastering the rules of these new infrastructures is not
necessarily an easy feat for public institutions.58 Thus, while Google claims
a core commitment to free digital access to information, and while its
financial apparatus could be construed as making Google an eligible partner in
accordance with the EU’s policy objectives of furthering public-private
partnerships in Europe,59 it is nevertheless relevant, as legal scholar
Maurizio Borghi notes, to take Google’s history of monopoly building into
account.60

    ## The Politics of Google Books

A final aspect of Google Books relates to the universal aspiration of its
collection, its infrapolitics, and what it empirically produces in
    territorial terms. As this chapter’s previous sections have outlined, it was
    an aspiration of Google Books to transcend the cultural and political
    limitations of physical cultural memory collections by gathering the written
    material of cultural memory institutions into one massive digitized
    collection. Yet, while the collection spans millions of works in hundreds of
    languages from hundreds of countries,61 it is also clear that even large-scale
    mass digitization processes still entail procedures of selection on multiple
    levels from libraries to works. These decisions produce a political reality
    that in some respects reproduces and accentuates the existing politics of
    cultural memory institutions in terms of territorial and class-based
representations, and in other respects gives rise to new forms of cultural
    memory politics that part ways with the political regimes of traditional
    curatorial apparatuses.

    One obvious area in which to examine the politics produced by the Google Books
    assemblage is in the selection of libraries that Google chooses to partner
    with.62 While the full list of Google Books partners is not disclosed on
    Google’s own webpage, it is clear from the available list that, up to now,
    Google Books has mainly partnered with “great libraries,” such as elite
university libraries and national libraries. The rationale for choosing these
libraries has no doubt been to partner with cultural memory institutions
that preside over as much material as possible, and which are therefore able
to provide more pieces of the puzzle than, say, a small-town public library
that presides over only a fraction of such material. Yet, while these
libraries provide Google Books with an impressive and extensive collection of
rare and valuable artifacts that gives the impression of a near-universal
collection, they nevertheless also contain epistemological and historical
gaps. Historian and digital humanist Andrew Prescott notes, for example, elite
libraries’ limited holdings of literature written by workers and other
lower-class people in the early eighteenth century. This institutional
    lack creates a pre-filtered collection in Google Books, favoring “[t]hose
    writers of working class origins who had a success story to report, who had
    become distinguished statesmen, successful businessmen, religious leaders and
    so on,” that is, the people who were “able to find commercial publishers who
    were interested in their story.”63 Google’s decision to partner with elite
    libraries thus inadvertently reproduces the class-based biases of analog
    cultural memory institutions.

    In addition to the reproduction of analog class-based bias in its digital
    collection, the Google Books corpus also displays a genre bias, veering
heavily toward scientific publications. As mathematicians Eitan Pechenick et
al. show, the contents of the Google Books corpus over the course of the 1900s
are “increasingly dominated by scientific publications rather than popular
works,” and “even the first data set specifically labeled as fiction appears
to be saturated with medical literature.”64 The fact that Google Books is
    constellated in such a manner thus challenges a “vast majority of existing
    claims drawn from the Google Books corpus,” just as it points to the need “to
    fully characterize the dynamics of the corpus before using these data sets to
    draw broad conclusions about cultural and linguistic evolution.”65

Last but not least, Google Books’s collection still bespeaks its beginnings:
it primarily covers Anglophone ground. There is hardly any literature
that reviews the geographic scope of Google Books, but existing work does
suggest that Google is still heavily oriented toward US-based libraries.66
This orientation does not necessarily give rise to an Anglophone linguistic
hegemony, as some have feared, since many of the Anglophone libraries hold
considerable collections of foreign-language books. But it does invariably
limit the collection to those foreign-language works that the elite
libraries deemed worthy of preserving. The gaps and biases of Google Books
reveal it to be less of a universal and monolithic collection, and more of an
impressive, but also specific and contingent, assemblage of works, texts, and
relations, shaped by the partnerships Google Books has entered into and
delimited in terms of class, discipline, and geographical scope.

Google Books is the result of selection processes not only on the level of
partnering institutions, but also on the level of organizational
infrastructure. While the infrastructures of Google Books in fact depart from
those of its parent company in many regards to avoid copyright infringement
charges, there is little doubt that people working actively on
    Google’s digitization activities (included here are both users and Google
    employees) are also globally distributed in networked constellations. The
    central organization for cultural digitization, the Google Cultural Institute,
    is located in Paris, France. Yet the people affiliated with this hub are
    working across several countries. Moreover, people working on various aspects
    of Google Books, from marketing to language technology, to software
    developments and manual scanning processes, are dispersed across the globe.
And it is perhaps in this way that we tend to think of Google in general—as a
networked global company—and for good reasons. Google has been operating
internationally for almost as long as it has been around. It has offices in
countries all over the globe, and works in numerous languages. Today it is one
of the most important global information institutions, and as more and more
people turn to Google for its services, Google also increasingly reflects
them—indeed, the two enter into a complex cognitive feedback system.
Google depends on the growing diversity of its “inhabitants” and on its
financial and cultural leverage on a global scale, and to this effect it is
continuously fine-tuning its glocalization strategies, blending the universal
and the particular. This glocal strategy does not necessarily create a
universal company, however; it would be more correct to say that Google’s
glocality brings the globe to Google, redefining it as an “American”
company.67 Hence, while there is little doubt that Google, and in effect
Google Books, increasingly tailors its services to specific consumers,68 and
that this tailoring allows for a more complex global representation generated
by feedback systems, Google’s core nevertheless remains lodged on American
soil. This is underlined by the fact that Google Books still effectively
belongs to US jurisdiction.69 Google Books is thus on the one hand a
globalized endeavor
    in terms of both content and institutional framework; yet it also remains an
    _American_ multinational corporation, constrained by US regulation and social
    standards, and ultimately reinforcing the capacities of the American state.
    While Google Books operates as a networked glocal project with universal
    aspirations, then, it also remains fenced in by its legal and cultural
    apparatuses.

    In sum, just as a country’s regulatory and political apparatus affects the
    politics of its cultural memory institutions in the analog world, so is the
    politics of Google Books co-determined by the operations of Google. Thus,
    curatorial choices are made not only on the basis of content, but also of the
    location of server parks, existing company units, lobbying efforts, public
    policy concerns, and so on. And the institutional identity of Google Books is
    profoundly late-sovereign in this regard: on one hand it thrives on and
    operates with horizontal network formations; on the other, it still takes into
    account and has to operate with, and around, sovereign epistemologies and
    political apparatuses. These vertical and horizontal lines ultimately rewire
    the politics of cultural memory, shifting the stakes from sovereign
    territorial possessions to more functional, complex, and effective means of
    control.

    ## Notes

1. Chartier 2004.
2. As philosopher Jacques Derrida noted anecdotally on his colleagues’ way of reading, “some of my American colleagues come along to seminars or to lecture theaters with their little laptops. They don’t print out; they read out directly, in public, from the screen. I saw it being done as well at the Pompidou Center [in Paris] a few days ago. A friend was giving a talk there on American photography. He had this little Macintosh laptop there where he could see it, like a prompter: he pressed a button to scroll down his text. This assumed a high degree of confidence in this strange whisperer. I’m not yet at that point, but it does happen.” (Derrida 2005, 27).
3. As Ken Auletta recounts, Eric Schmidt remembers when Page surprised him in the early 2000s by showing off a book scanner he had built, inspired by the great library of Alexandria, claiming that “We’re going to scan all the books in the world,” and explaining that for search to be truly comprehensive “it must include every book ever published.” Page literally wanted Google to be a “super librarian” (Auletta 2009, 96).
4. Constraints of a physical character (how to digitize and organize all this knowledge in physical form); legal character (how to do it in a way that suspends existing regulation); and political character (how to transgress territorial systems).
5. Take, for instance, project Bibliotheca Universalis, comprising American, Japanese, German, and British libraries among others, whose professed aim was “to exploit existing digitization programs in order to … make the major works of the world’s scientific and cultural heritage accessible to a vast public via multimedia technologies, thus fostering … exchange of knowledge and dialogue over national and international borders.” It was a joint project of the French Ministry of Culture, the National Library of France, the Japanese National Diet Library, the Library of Congress, the National Library of Canada, Discoteca di Stato, Deutsche Bibliothek, and the British Library. The project took its name from the groundbreaking sixteenth-century publication _Bibliotheca Universalis_ (1545–1549), a four-volume alphabetical bibliography that listed all the known books printed in Latin, Greek, or Hebrew. Obviously, the dream of the total archive is not limited to the realm of cultural memory institutions, but has a much longer and more generalized lineage; for a contemporary exploration of these dreams see, for instance, issue six of _Limn Magazine_, March 2016.
6. As the project noted in its research summary, “One of these barriers is the heterogeneity of information and services. Another impediment is the lack of powerful filtering mechanisms that let users find truly valuable information. The continuous access to information is restricted by the unavailability of library interfaces and tools that effectively operate on portable devices. A fourth barrier is the lack of a solid economic infrastructure that encourages providers to make information available, and give users privacy guarantees”; Summary of the Stanford Digital Library Technologies Project.
7. Brin and Page 1998.
8. Levy 2011, 347.
9. Levy 2011, 349.
10. Levy 2011, 349.
11. Young 1988.
12. They had a hard time, however, creating a new PageRank-like algorithm for books; see Levy 2011, 349.
13. Google Inc., “Detection of Grooves in Scanned Images,” March 24, 2009, [https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA](https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA).
14. See, for example, Jeffrey Toobin, “Google’s Moon Shot,” _New Yorker_, February 4, 2007.
15. The ghostly traces that scanners still leave in digitized books today are evidenced by a curious little blog collecting the artful mistakes of scanners, _The Art of Google Books_. For a more thorough and general introduction to the historical relationship between humans and machines in labor processes, see Kang 2011.
16. The abstract from the patent reads as follows: “Systems and methods for pacing and error monitoring of a manual page turning operator of a system for capturing images of a bound document are disclosed. The system includes a speaker for playing music having a tempo and a controller for controlling the tempo based on an imaging rate and/or an error rate. The operator is influenced by the music tempo to capture images at a given rate. Alternative or in addition to audio, error detection may be implemented using OCR to determine page numbers to track page sequence and/or a sensor to detect errors such as object intrusion in the image frame and insufficient light. The operator may be alerted of an error with audio signals and signaled to turn back a certain number of pages to be recaptured. When music is played, the tempo can be adjusted in response to the error rate to reduce operator errors and increase overall throughput of the image capturing system. The tempo may be limited to a maximum tempo based on the maximum image capture rate.” See Google Inc., “Pacing and Error Monitoring of Manual Page Turning Operator,” November 17, 2009.
17. Google, “linear-book-scanner,” _Google Code Archive_, August 22, 2012.
18. The libraries of Harvard, the University of Michigan, Oxford, Stanford, and the New York Public Library.
19. Levy 2011, 351.
20. _The Authors Guild et al. vs. Google, Inc._, Class Action Complaint 05 CV 8136, United States District Court, Southern District of New York, September 20, 2005.
21. As the Authors Guild notes, “The problem is that before Google created Book Search, it digitized and made many digital copies of millions of copyrighted books, which the company never paid for. It never even bought a single book. That, in itself, was an act of theft. If you did it with a single book, you’d be infringing.” Authors Guild v. Google: Questions and Answers.
22. Peters 2015, 21.
23. Hayles 2005.
24. Purdon 2016, 4.
25. Fair use constitutes an exception to the exclusive right of the copyright holder under the United States Copyright Act; if the use of a copyrighted work is a “fair use,” no permission is required. For a court to determine whether a use of a copyrighted work is fair use, four factors must be considered: (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work.
26. “Do you really want … the whole world not to have access to human knowledge as contained in books, because you really want opt out rather than opt in?,” as quoted in Levy 2011, 360.
27. “It is an astonishing opportunity to revive our cultural past, and make it accessible. Sure, Google will profit from it. Good for them. But if the law requires Google (or anyone else) to ask permission before they make knowledge available like this, then Google Print can’t exist” (Farhad Manjoo, “Indexing the Planet: Throwing Google at the Book,” _Spiegel Online International_, November 9, 2005). Technology lawyer Jonathan Band also expressed his support: Jonathan Band, “The Google Print Library Project: A Copyright Analysis,” _Journal of Internet Banking and Commerce_, December 2005.
28. According to Patricia Schroeder, the Association of American Publishers (AAP) president, Google’s opt-out procedure “shifts the responsibility for preventing infringement to the copyright owner rather than the user, turning every principle of copyright law on its ear.” BBC News, “Google Pauses Online Books Plan,” _BBC News_, August 12, 2005.
29. Professor of law Pamela Samuelson has conducted numerous progressive and detailed academic and popular analyses of the legal implications of the copyright discussions; see, for instance, Pamela Samuelson, “Why Is the Antitrust Division Investigating the Google Book Search Settlement?,” _Huffington Post_, September 19, 2009; Samuelson 2010; Samuelson 2011; Samuelson 2014.
30. Levy 2011, 362; Lessig 2010; Brewster Kahle, “How Google Threatens Books,” _Washington Post_, May 19, 2009.
31. EFF, “Google Book Search Settlement and Reader Privacy,” Electronic Frontier Foundation, n.d.
32. _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, United States Southern District of New York, March 22, 2011, [http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
33. Brief of Amicus Curiae, American Library Association et al., in relation to _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, filed on August 1, 2012.
34. Steven Levy, “Who’s Messing with the Google Books Settlement? Hint: They’re in Redmond, Washington,” _Wired_, March 3, 2009.
35. Sergey Brin, “A Library to Last Forever,” _New York Times_, October 8, 2009.
36. _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, United States Southern District of New York, March 22, 2011, [http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
37. “Google does, of course, benefit commercially in the sense that users are drawn to the Google websites by the ability to search Google Books. While this is a consideration to be acknowledged in weighing all the factors, even assuming Google’s principal motivation is profit, the fact is that Google Books serves several important educational purposes. Accordingly, I conclude that the first factor strongly favors a finding of fair use.” _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, United States Southern District of New York, November 14, 2013, [http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355).
38. _Authors Guild v. Google, Inc._, 13–4829-cv, December 16, 2015. In the aftermath of Pierre Leval’s decision, the Authors Guild filed yet another petition for the Supreme Court to reverse the appeals court decision, and publicly reiterated the framing of Google as a parasite rather than a benefactor. A brief supporting the Guild’s petition, signed by a diverse group of authors such as Malcolm Gladwell, Margaret Atwood, J. M. Coetzee, Ursula Le Guin, and Yann Martel, noted that the legal framework used to assess Google knew nothing about “the digital reproduction of copyrighted works and their communication on the Internet or the phenomenon of ‘mass digitization’ of vast collections of copyrighted works”; nor, they argued, was the fair-use doctrine ever intended “to permit a wealthy for-profit entity to digitize millions of works and to cut off authors’ licensing of their reproduction, distribution, and public display rights.” Amicus Curiae brief filed on behalf of the Authors Guild petition, No. 15–849, February 1, 2016.
39. Oxford English Dictionary, [http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140](http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140).
40. The contract as we know it today developed within the paradigm of Lex Mercatoria; see Teubner 1997. The contract is therefore a device of global reach that has developed “mainly outside the political structures of nation-states and international organisations for exchanges primarily in a market economy” (Snyder 2002, 8). In the contract theory of John Locke, the signification of contracts developed from a mere trade tool to a distinction between the free man and the slave. Here, the societal benefits of contracts were presented as a matter of time, where the bounded delineation of work was characterized as contractual freedom; see Locke 2003 and Stanley 1998.
41. Sumner 1952, 23.
42. Paul Courant, “On Being in Bed with Google,” _Au Courant_, November 4, 2007.
43. Kaufman and Ubois 2007.
44. Bottando 2012.
45. Jessamyn West, “Google’s Slow Fade with Librarians: Maybe They’re Just Not That Into Us,” _Medium_, February 2, 2015.
46. Suchman 2003. The lack of research into contracts and emotions is noted by Hillary M. Berk in her fascinating research on contracts in the field of surrogacy: “Despite a rich literature in law and society embracing contracts as exchange relations, empirical work has yet to address their emotional dimensions” (Berk 2015).
47. Suchman 2003, 100.
48. See a selection on the Public Index and on the Internet Archive. Contracts have also been published by the University of Michigan, the University of California, the Committee on Institutional Cooperation, and the British Library, to name but a few.
49. Javier Ruiz, “Is the Deal between Google and the British Library Good for the Public?,” Open Rights Group, August 24, 2011.
50. Kaufman and Ubois 2007.
51. Association of Research Libraries, “ARL Encourages Members to Refrain from Signing Nondisclosure or Confidentiality Clauses,” _ARL News_, June 5, 2009.
52. Google, “About the Library Project,” _Google Books Help_, n.d., [https://support.google.com/books/partner/faq/3396243?hl=en&rd=1](https://support.google.com/books/partner/faq/3396243?hl=en&rd=1).
53. Flyverbom, Leonardi, Stohl, and Stohl 2016.
54. Levy 2011, 354.
55. Levy 2011, 352.
56. To be sure, the practice of secrecy is no stranger to libraries. Consider only the closed stacks that the public is never given access to; the bureaucratic routines that are kept from the public eye; and the historic relation between libraries and secrecy so beautifully explored by Umberto Eco in numerous of his works. Yet the motivations for nondisclosure agreements on the one hand and public-sector secrets on the other differ significantly, the former lodged in a commercial logic and the latter in an idea, however abstract, about “the public good.”
57. Belder 2015. For insight into the societal impact of contractual regimes on civil rights regimes, see Somers 2008. For insight into relations between neoliberalism and contracts, see Mitropoulos 2012.
58. As engineer and historian Henry Petroski notes, for a PPP contract to be successful it must be written “properly,” but “the public partners are not often very well versed in these kinds of contracts and they don’t know how to protect themselves.” See Buckholtz 2016.
59. As argued by Lucky Belder in “Cultural Heritage Institutions as Entrepreneurs,” 2015.
60. Borghi 2013, 92–115.
61. Stephan Heyman, “Google Books: A Complex and Controversial Experiment,” _New York Times_, October 28, 2015.
62. Google, “Library Partners,” _Google Books_.
63. Andrew Prescott, “How the Web Can Make Books Vanish,” _Digital Riffs_, August 2013.
64. Pechenick, Danforth, Dodds, and Barrat 2015.
65. What Pechenick et al. refer to here is of course the claims of Erez Aiden and Jean-Baptiste Michel among others, who promote “culturomics,” that is, the use of huge amounts of digital information—in this case the corpus of Google Books—to track changes in language, culture, and history. See Aiden and Michel 2013; and Michel et al. 2011.
66. Neubert 2008; and Weiss and James 2012, 1–3.
67. I am indebted to Gayatri Spivak here, who makes this argument about New York in the context of globalization; see Spivak 2000.
68. In this respect Google mirrors the glocalization strategies of media companies in general; see Thussu 2007, 19.
69. Although the decisions of foreign legislation of course also affect the workings of Google, as is clear from the growing body of European regulatory casework on Google concerning the right to be forgotten, competition law, tax, etc.

    # 3
    Sovereign Soul Searching: The Politics of Europeana

    ## Introduction

    In 2008, the European Commission launched the European mass digitization
    project, Europeana, to great fanfare. Although the EC’s official
    communications framed the project as a logical outcome of years of work on
    converging European digital library infrastructures, the project was received
    in the press as a European counterresponse to Google Books.1 The popular media
    framings of Europeana were focused in particular on two narratives: that
    Europeana was a public response to Google’s privatization of cultural memory,
    and that Europeana was a territorial response to American colonization of
    European information and culture. This chapter suggests that while both of
    these sentiments were present in Europeana’s early years, the politics of what
    Europeana was—and is—paints a more complicated picture. A closer glance at
    Europeana’s social, economic, and legal infrastructures thus shows that the
    European mass digitization project is neither an attempt to replicate Google’s
    glocal model, nor is it a continuation of traditional European cultural
policies. Rather, Europeana produces a new form of cultural memory politics
that converges national and supranational imaginaries with global information
    infrastructures.

    If global information infrastructures and national politics today seemingly go
    hand in hand in Europeana, it wasn’t always so. In fact, in the 1990s,
    networked technologies and national imaginaries appeared to be mutually
    exclusive modes of existence. The fall of the Berlin Wall in 1989 nourished a
new antisovereign sentiment, which gave rise to recurring claims in the 1990s
    that the age of sovereignty had passed into an age of post-sovereignty. These
claims were fueled by a globalized set of economic, political, and
technological forces, not least of which was the seemingly ungovernable nature
of the Internet, which appeared to unbuckle the nation-state’s control and
voice in the process of globalization and gave rise to a sense of plausible
anarchy. This, in turn, made John Perry Barlow’s (in)famous “Declaration of
the Independence of Cyberspace” appear not as pure utopian fabulation, but
rather as a prescient diagnosis.2 Yet, while it seemed in the early 2000s that the
    Internet and the cultural and economic forces of globalization had made the
    notion and practice of the nation-state redundant on both practical and
    cultural levels, the specter of the nation nevertheless seemed to linger.
    Indeed, the nation-state continued to remain a fixed point in political and
    cultural discourses. In fact, it not only lingered as a specter, but borders
    were also beginning to reappear as regulatory forces. The borderless world
    was, as Tim Wu and Jack Goldsmith noted in 2006, an illusion;3 geography had
    revenged itself, not least in the digital environment.4

    Today, no one doubts the cultural-political import of the national imaginary.
    The national imaginary has fueled antirefugee movements, the surge of
    nationalist parties, the EU’s intensified crisis, and the election of Donald
    Trump, to name just a few critical political events in the 2010s. Yet, while
    the nationalist imaginary is becoming ever stronger, paradoxically its
    communicative infrastructures are simultaneously becoming ever more
    globalized. Thus, globally networked digital infrastructures are quickly
supplementing, and in many cases even supplanting, those national
    communicative infrastructures that were instrumental in establishing a
    national imagined community in the first place—infrastructures such as novels
    and newspapers.5 The convergence of territorially bounded imaginaries and
    global networks creates new cultural-political constellations of cultural
    memory where the centripetal forces of nationalism operate alongside,
    sometimes with and sometimes against, the centrifugal forces of digital
    infrastructures. Europeana is a preeminent example of these complex
    infrastructural and imaginary dynamics.

    ## A European Response

When Google announced its digitization program at the Frankfurt Book Fair in
    2004, it instantly created ripples in the European cultural-political
    landscape, in France in particular. Upon hearing the news about Google’s
    plans, Jacques Chirac, president of France at the time, promptly urged the
    then-culture minister, Renaud Donnedieu de Vabres, and Jean-Noël Jeanneney,
    head of France’s Bibliothèque nationale, to commence a similar digitization
    project and to persuade other European countries to join them.6 The seeds for
    Europeana were sown by France, “the deepest, most sedimented reservoir of
    anti-American arguments,”7 as an explicitly political reaction to Google
    Books.

Europeana was thus from its inception laced with the ambiguous political
relationship between two historically competing universalist-exceptionalist
nations: the United States and France.8 It is a relationship that France
sometimes pictures as a question of Americanization, and at other times
extends to an image of a more diffuse Anglo-Saxon constellation. Highlighting the effects
    Google Books would have on French culture, Jeanneney argued that Google’s mass
digitization efforts would pose several possible dangers to French cultural
memory, such as bias in the collecting and organizing practices of Google Books
    and an Anglicization of the cultural memory regulatory system. Explaining why
    Google Books should be seen not only as an American, but also as an Anglo-
    Saxon project, Jeanneney noted that while Google Books “was obviously an
    American project,” it was nevertheless also one “that reached out to the
    British.” The alliance between the Bodleian Library at Oxford and Google Books
    was thus not only a professional partnership in Jeanneney’s eyes, but also a
    symbolic bond where “the familiar Anglo-Saxon solidarity” manifested once
    again vis-à-vis France, only this time in the digital sphere. Jeanneney even
    paraphrased Churchill’s comment to Charles de Gaulle, noting that Oxford’s
    alliance with Google Books yet again evidenced how British institutions,
    “without consulting anyone on the other side of the English Channel,” favored
    US-UK alliances over UK-Continental alliances “in search of European
    patriotism for the adventure under way.”9

    How can we understand Jeanneney’s framing of Google Books as an Anglo-Saxon
    project and the function of this framing in his plea for a nation-based
    digitization program? As historian Emile Chabal suggests, the concept of the
    Anglo-Saxon mentality is a preeminently French construct that has a clear and
    rich rhetorical function to strengthen the French self-understanding vis-à-vis
    a stereotypical “other.”10 While fuzzy in its conceptual infrastructure, the
    French rhetoric of the Anglo-Saxon is nevertheless “instinctively understood
    by the vast majority of the French population” to denote “not simply a
    socioeconomic vision loosely inspired by market liberalism and
    multiculturalism” but also (and sometimes primarily) “an image of
    individualism, enterprise, and atomization.”11 All these dimensions were at
    play in Jeanneney’s anti-Google Books rhetoric. Indeed, Jeanneney suggested,
    Google’s mass digitization project was not only Anglo-Saxon in its collecting
    practices and organizational principles, but also in its regulatory framework:
    “We know how Anglo-Saxon law competes with Latin law in international
    jurisdictions and in those of new nations. I don’t want to see Anglo-Saxon law
    unduly favored by Google as a result of the hierarchy that will be
    spontaneously established on its lists.”12

    What did Jeanneney suggest as infrastructural protection against the network
    power of the Anglo-Saxon mass digitization project? According to Jeanneney,
    the answer lay in territorial digitization programs: rather than simply
    accepting the colonizing forces of the Anglo-Saxon matrix, Jeanneney argued, a
    national digitization effort was needed. Such a national digitization project
    would be a “ _contre-attaque_ ” against Google Books that should protect three
    dimensions of French cultural sovereignty: its language, the role of the state
    in cultural policy, and the cultural/intellectual order of knowledge in the
cultural collections.13 Thus Jeanneney suggested that any Anglo-Saxon mass
digitization project should be both countered and complemented by mass
digitization projects from other nations and cultures, to ensure that cultural
works are embedded in meaningful cultural contexts and languages. While the
nation was the central base of mass digitization programs, Jeanneney noted,
such programs necessarily needed to be embedded in a European, or
Continental, infrastructure. Thus, while Jeanneney’s rallying cry to protect
French cultural memory was voiced from France, he gave it a European
signature, frequently addressing and including the rest of Europe as a natural
ally in his _contre-attaque_ against Google Books.14 Jeanneney’s extension of
French concerns to a European level was characteristic of France, which had
historically displayed a leadership role in formulating and shaping the EU.15
    The EU, Jeanneney argued, could provide a resilient supranational
    infrastructure that would enable French diversity to exist within the EU while
    also providing a protective shield against unhampered Anglo-Saxon
    globalization.

Other French officials took a less combative tone, insisting that the
    French digitization project should be seen not merely as a reaction to Google
    but rather in the context of existing French and European efforts to make
    information available online. “I really stress that it’s not anti-American,”
    stated one official at the Ministry of Culture and Communication. Rather than
    framing the French national initiatives as a reaction to Google Books, the
    official instead noted that the prime objective was to “make more material
    relevant to European patrimony available,” noting also that the national
    digitization efforts were neither unique nor exclusionary—not even to
    Google.16 The disjunction between Jeanneney’s discursive claims to mass
    digitization sovereignty and the anonymous bureaucrat’s pragmatic and
    networked approach to mass digitization indicates the late-sovereign landscape
    of mass digitization as it unfolded between identity politics and pragmatic
    politics, between discursive claims to sovereignty and economic global
    cooperation. And as the next section shows, the intertwinement of these
    discursive, ideological, and economic infrastructures produced a memory
    politics in Europeana that was neither sovereign nor post-sovereign, but
    rather late-sovereign.

    ## The Infrastructural Reality of Late-Sovereignty

    Politically speaking, Europeana was always more than just an empty
    countergesture or emulating response to Google. Rather, as soon as the EU
    adopted Europeana as a prestige project, Europeana became embedded in the
    political project of Europeanization and began to produce a political logic of
    its own. Latching on to (rather than countering) a sovereign logic, Europeana
    strategically deployed the European imaginary as a symbolic demarcation of its
    territory. But the means by which Europeana was constructed and distributed
    its territorial imaginaries nevertheless took place by means of globalized
    networked infrastructures. The circumscribed cultural imaginary of Europeana
    was thus made interoperable with the networked logic of globalization. This
    combination of a European imaginary and neoliberal infrastructure in Europeana
    produced an uneasy balance between national and supranational infrastructural
    imaginaries on the one hand and globalized infrastructures on the other.

    If France saw Europeana primarily through the prism of sovereign competition,
    the European Commission emphasized a different dispositive: economic
competition. In his 2005 response to Jacques Chirac, José Manuel Barroso
acknowledged that the digitization of European cultural heritage was an
important task not only for nation-states but also for the EU as a whole.
Instead of the defiant tone of Jeanneney and De Vabres, Barroso and the EU
institutions opted for a more neutral, pragmatic, and diplomatic mass
digitization discourse. Rather than treating Europeana as a lever to prop up
the cultural sovereignty of France, and by extension Europe, in the face of
Americanization, Barroso framed Europeana as an important economic element in
    the construction of a knowledge economy.17

    Europeana was thus still a competitive project, but it was now reframed as one
    that would be much more easily aligned with, and integrated into, a global
    market economy.18 One might see the difference in the French and the EU
    responses as a question of infrastructural form and affordance. If French mass
    digitization discourses were concerned with circumscribing the French cultural
    heritage within the territory of the nation, the EC was in practice more
    attuned to the networked aspects of the global economy and an accompanying
    discourse of competition and potentiality. The infrastructural shift from
    delineated sphere to globalized network changed the infrapolitics of cultural
    memory from traditional nation-based issues such as identity politics
    (including the formation of canons) to more globally aligned trade-related
    themes such as copyright and public-private governance.

    The shift from canon to copyright did not mean, however, that national
concerns dissipated. On the contrary, in 2008 ministers from the European
Union’s member countries called for an investigation into the way Google Books
handled copyright.19 In reality, Google Books had very little to do with
    Europe at that time, in the sense that Google Books was governed by US
    copyright law. Yet the global reach of Google Books made it a European concern
    nevertheless. Both German and French representatives emphasized the rift
    between copyright legislation in the US and in EU member states. The German
    government proposed that the EC examine whether Google Books conformed to
    Europe’s copyright laws. In France, President Nicolas Sarkozy stated in more
    flamboyant terms that he would not permit France to be “stripped of our
    heritage to the benefit of a big company, no matter how friendly, big, or
American it is.”20 Both countries moreover submitted _amicus curiae_ briefs21
to Judge Denny Chin (who was in charge of the ongoing Google Books settlement
    lawsuit in the US22), in which they argued against the inclusion of foreign
    authors in the lawsuit.23 They further brought separate suits against Google
    Books for their scanning activities and sought to exercise diplomatic pressure
    against the advancement of Google Books.24

    On an EU level, however, the territorial concerns were sidestepped in favor of
    another matrix of concern: the question of public-private governance. Thus,
    despite pressure from some member states, the EC decided not to write a
    similar “amicus brief” on behalf of the EU.25 Instead, EC Commissioners
    McCreevy and Reding emphasized the need for more infrastructures connecting
    the public and private sectors in the field of mass digitization.26 Such PPPs
    could range from relatively conservative forms of cooperation (e.g., private
    sponsoring, or payments from the private sector for links provided by
    Europeana) to more far-reaching involvement, such as turning the management of
Europeana over to the private sector.27 In a similar vein, a report authored
by a high-level reflection group (Comité des Sages) set up by the European
Commission opened the door for public-private partnerships and also set a time
frame for commercial exploitation.28 It was even suggested that Google could
play a role in the construction of Europeana. These considerations thus stood
in contrast both to the French resistance against Google and to previous
statements made by the EC, which had been concerned with preserving the public
sector in the administration of Europeana.

    Did the European Commission’s networked politics signal a post-sovereign
    future for Europeana? This chapter suggests no: despite the EC’s strategies,
    it would be wrong to label the infrapolitics of Europeana as post-sovereign.
    Rather, Europeana draws up a _late-sovereign_ 29 mass digitization landscape,
    where claims to national sovereignty exist alongside networked
    infrastructures.30 Why not post-sovereign? Because, as legal scholar Neil
    Walker noted in 2003,31 the logic of sovereignty never waned even in the face
    of globalized capitalism and legal pluralism. Instead, it fused with these
    more globalized infrastructures to produce a form of politics that displayed
    considerable continuity with the old sovereign order, yet also had distinctive
    features such as globalized trade networks and constitutional pluralisms. In
    this new system, seemingly traditional claims to sovereignty are carried out
    irrespective of political practices, showing that globally networked
    infrastructures and sovereign imaginaries are not necessarily mutually
    exclusive; rather, territory and nation continue to remain powerful emotive
    forces. Since Neil Walker’s theoretical corrective to theories on post-
    sovereignty, the notion of late sovereignty seems to have only gained in
    relevance as nationalist imaginaries increase in strength and power through
    increasingly globalized networks.

    As the following section shows, Europeana is a product of political processes
    that are concerned with both the construction of bounded spheres and canons
    _and_ networked infrastructures of connectivity, competition, and potentiality
    operating beyond, below, and between national societal structures. Europeana’s
    late-sovereign framework produces an infrapolitics in which the discursive
    political juxtaposition between Europeana and Google Books exists alongside
    increased cooperation between Google Books and Europeana, making it necessary
    to qualify the comparative distinctions in mass digitization projects on a
    much more detailed level than merely territorial delineations, without,
    however, disposing of the notion of sovereignty. The simultaneous
    contestations and connections between Europeana and Google Books thus make
    visible the complex economic, intellectual, and technological infrastructures
    at play in mass digitization.

    What form did these infrastructures take? In a sense, the complex
    infrastructural set-up of Europeana as it played out in the EU’s framework
    ended up extending along two different axes: a vertical axis of national and
    supranational sovereignty, where the tectonic territorial plates of nation-
    states and continents move relative to each other by converging, diverging,
    and transforming; and a horizontal axis of deterritorializing flows that
    stream within, between, and throughout sovereign territories consisting both
    of capital interests (in the form of transnational lobby organizations working
    to protect, promote, and advance the interests of multinational companies or
    nongovernmental organizations) and the affective relations of users.

    ## Harmonizing Europe: From Canon to Copyright

    Even if the EU is less concerned with upholding the regulatory boundaries of
    the nation-state in mass digitization, bordering effects are still found in
    mass digitized collections—this time in the form of copyright regulation. As
    in the case of Google Books, mass digitization also raised questions in Europe
    about the future role of copyright in the digital sphere. On the one hand,
    cultural industries were concerned about the implications of mass digitization
    for their production and copyrights32; on the other hand, educational
    institutions and digital industries were interested in “unlocking” the
    cognitive and cultural potentials that resided within the copyrighted
    collections in cultural heritage institutions. Indeed, copyright was such a
crucial concern that the EC repeatedly stated the necessity of reforming and
harmonizing European copyright regulation across borders.

Why is copyright a concern for Europeana? Alongside economic challenges,
current copyright legislation is _the_ greatest obstacle to mass
digitization. Copyright effectively prohibits mass digitization of any
material that is still under copyright, creating large gaps in digitized
collections that are often referred to as “the twentieth-century black hole.”
These black holes appear as a result of the way European “copyright interacts
with the digitization of cultural heritage collections” and manifest
themselves as a “marked lack of online availability of twentieth-century
collections.”33 The lack of a common copyright mechanism not only hinders
online availability, but also challenges European cross-border digitization
projects as well as the possibilities for data-mining collections à la Google,
because of the difficulties of ascertaining what belongs to the public domain
and hence of definitively flagging the public domain status of an object.34

    While Europeana’s twentieth-century black hole poses a problem, Europe would
    not, as one worker in the EC’s Directorate-General (DG) Copyright unit noted,
    follow Google’s opt-out mass digitization strategy because “the European
    solution is not the Google solution. We do a diligent search for the rights
    holder before digitizing the material. We follow the law.”35 By positioning
    herself as on the right side of the law, the DG employee implicitly also
    placed Google on the wrong side of the law. Yet, as another DG employee
    explained with frustration, the right side of the law was looking increasingly
untenable in an age of mass digitization. Indeed, as she noted, the demand
for diligent search was making her work nearly impossible, not least due to
the different legal regimes in the US and the EU:

    > Today if one wants to digitize a work, one has to go and ask the rights
    holder individually. The problem is often that you can’t find the rights
    holder. And sometimes it takes so much time. So there is a rights holder, you
    know that he would agree, but it takes so much time to go and find out. And
    not all countries have collective management … you have to go company by
    company. In Europe we have producing companies that disappear after the film
    has been made, because they are created only to make that film. So who are you
    going to ask? While in the States the situation is different. You have the
    majors, they have the rights, you know who to ask because they are very
    stable. But in Europe we have this situation, which makes it very difficult,
    the cultural access to cultural heritage. Of course we dream of changing
    this.36

    The dream is far from realized, however. Since the EU has no direct
    legislative competence in the area of copyright, Europeana is the center of a
    natural tension between three diverging, but sometimes overlapping instances:
    the exclusivity of national intellectual property laws, the economic interests
    toward a common market, and the cultural interests in the free movement of
    information and knowledge production—a tension that is further amplified by
    the coexistence of different legal traditions across member states.37 Seeking
    to resolve this tension, the European Parliament and certain units in the
    European Commission have strategically used Europeana as a rhetorical lever to
    increase harmonization of copyright legislation and thus make it easier for
    institutions to make their collections available online.38 “Harmonization” has
    thus become a key concept in the rights regime of mass digitization,
    essentially signaling interoperability rather than standardization of national
    copyright regimes. Yet stakeholders differ in their opinions concerning who
    should hold what rights over what content, over what period of time, at what
    price, and how things should be made available. Within the process of
    harmonization, then, lies a process that is less than harmonious: bringing
    stakeholders to the table and committing them to a common course. As the EC
    interviewee confirms,
    harmonization requires not only technical but also political cooperation.

    The question of harmonization illustrates the infrapolitical dimensions of
    Europeana’s copyright systems, showing that they are not just technical
    standards or “direct mirrors of reality” but also “co-produced responses to
    technoscientific and political uncertainty.”39 The European attempts to
    harmonize copyright standards across national borders therefore pit not only
    one technical standard against another, but also “alternative political
    cultures and their systems of public reasoning against one another.”40
    Harmonization thus compresses, rather than eliminates,
    national varieties within Europe.41 Hence, Barroso’s vision of Europeana as a
    collective _European_ cultural memory is faced with the fragmented patterns of
    national copyright regimes, producing if not overtly political borders in the
    collections, then certainly infrapolitical manifestations of the cultural
    barriers that still exist between European countries.

    ## The Infrapolitics of Interoperability

    Copyright is not the only infrastructural regime that upholds borders in
    Europeana’s collections; technical standards also pose great challenges for
    the dream of a European connective cultural memory.42 The notion of
    _interoperability_ 43 has therefore become a key concern for mass
    digitization, as interoperability is what allows digitized cultural memory
    institutions to exchange and share documents, queries, and services.44
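
    In practice, such exchange typically runs over shared metadata protocols
    rather than bespoke integrations. The following is a minimal sketch of the
    general mechanism, using the widely adopted OAI-PMH harvesting protocol
    against a hypothetical provider endpoint; it illustrates what technical
    interoperability involves, not Europeana's actual ingestion pipeline:

    ```python
    # Minimal OAI-PMH harvest: fetch one page of Dublin Core records
    # from a (hypothetical) provider endpoint and print the titles.
    import xml.etree.ElementTree as ET
    from urllib.request import urlopen

    BASE = "https://example.org/oai"  # placeholder endpoint, not a real provider
    url = BASE + "?verb=ListRecords&metadataPrefix=oai_dc"

    with urlopen(url) as response:
        tree = ET.parse(response)

    ns = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }
    for record in tree.iterfind(".//oai:record", ns):
        title = record.find(".//dc:title", ns)
        print(title.text if title is not None else "(untitled)")
    ```

    What the sketch makes visible is that interoperability is an agreement
    between institutions on shared verbs, schemas, and identifiers, rather than
    a single shared system.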

    The rise of interoperability as a key concept in mass digitization is a side-
    effect of the increasing complexity of economic, political, and technological
    networks. In the twentieth century, most European cultural memory institutions
    existed primarily as small “sovereign” institutions, closed spheres governed
    by internal logics and with little impetus to open up their internal machinery
    to other institutions and cooperate. The early 2000s signaled a shift in the
    institutional infrastructural layout of cultural memory institutions, however.
    One early significant articulation of this shift was a 324-page European
    Commission report entitled _Technological Landscapes for Tomorrow’s Cultural
    Economy: Unlocking the Value of Cultural Heritage_ (or the DigiCULT study), a
    “roadmap” that outlined the political, organizational, and technological
    challenges faced by European museums, libraries, and archives in the period
    2002–2006. A central passage noted that the “conditions for success of the
    cultural and memory institutions in the Information Society is (sic) the
    ‘network logic,’ a logic that is of course directly related to the necessity
    of being interoperable.” 45 The network logic and resulting demand for
    interoperability was not merely a question of digital connections, the report
    suggested, but a more pervasive logic of contemporary society. The report thus
    conceived interoperability as a question that ran deeper than technological
    logic.46 The more complex cultural memory infrastructures become, the more
    interoperability is needed if one wants the infrastructures to connect and
    communicate with each other.47 As information scholar Christine Borgman notes,
    interoperability has therefore long been “the holy grail of digital
    libraries”—a statement echoed by Commissioner Reding on Europeana in 2005 when
    she stated that “I am not suggesting that the Commission creates a single
    library. I envisage a network of many digital libraries—in different
    institutions, across Europe.”48 Reding’s statement shows that even at the
    height of the French exceptionalist discourse on European mass digitization,
    other political forces worked instead to reformat the sovereign sphere into a
    network. The unravelling of the bounded spheres of cultural memory
    institutions into networked infrastructures is therefore both an effect of,
    and the further mobilization of, increased interoperability.

    Interoperability is not only a concern for mass digitization projects,
    however; rather, the calls for interoperability take place on a much more
    fundamental level. A European Council Conclusion on Europeana identifies
    interoperability as a key challenge for the future construction of Europeana,
    but also embeds this concern within the overarching European interoperability
    strategy, _European Interoperability Framework for pan-European eGovernment
    services_. 49 Today, then, interoperability appears to be turning into a
    social theory. The extension of the concept of interoperability into the
    social sphere naturally follows the socialization of another technical term:
    infrastructure. In the past decades, Susan Leigh Star, Geoffrey Bowker, and
    others have successfully managed to frame infrastructure “not only in terms of
    human versus technological components but in terms of a set of interrelated
    social, organizational, and technical components or systems (whether the data
    will be shared, systems interoperable, standards proprietary, or maintenance
    and redesign factored in).”50 It follows, then, as Christine Borgman notes,
    that even if interoperability in technical terms is a “feature of products and
    services that allows the connection of people, data, and diverse systems,”51
    policy practice, standards and business models, and vested interests are often
    greater determinants of interoperability than is technology.52 In similar
    terms, information science scholar Jerome McDonough notes that “we need to
    cease viewing [interoperability] purely as a technical problem, and
    acknowledge that it is the result of the interplay of technical and social
    factors.”53 Pushing the concept of interoperability even further, legal
    scholars Urs Gasser and John Palfrey have even argued for viewing the world
    through a theory of interoperability, naming their project “interop theory,”54
    while Internet governance scholar Laura DeNardis proposes a political theory
    of interoperability.55

    More than denoting a technical fact, then, interoperability emerges today as
    an infrastructural logic, one that promotes openness, modularity, and
    connectivity. Within the field of mass digitization, the notion of
    interoperability is in particular promoted by the infrastructural workers of
    cultural memory (e.g., archivists, librarians, software developers, digital
    humanists, etc.) who dream of opening up the silos they work on to enrich them
    with new meanings.56 As noted in chapter 1, European cultural memory
    institutions had begun to address unconnected institutions as closed “silos.”
    Mass digitization offered a way of thinking of these institutions anew—not as
    frigid closed containers, but rather as vital connective infrastructures.
    Interoperability thus gives rise to a new infrastructural form of cultural
    memory: the traditional delineated sovereign spheres of expertise of analog
    cultural memory institutions are pried open and reformatted as networked
    ecosystems that consist not only of the traditional national public providers,
    but also of additional components that have hitherto been alien in the
    cultural memory industry, such as private individual users and commercial
    industries.57

    The logic of interoperability is also born of a specific kind of
    infrapolitics: the politics of modular openness. Interoperability is motivated
    by the “open” data movements that seek to break down proprietary and
    disciplinary boundaries and create new cultural memory infrastructures and
    ways of working with their collections. Such visions are often fueled by
    Lawrence Lessig’s conviction that “the most important thing that the Internet
    has given us is a platform upon which experience is interoperable.”58 And they
    have given rise to the plethora of cultural concepts we find on the Internet
    in the age of digital capitalism, such as “prosumers”, “produsers”, and so on.
    These concepts are becoming more and more pervasive in the digital environment
    where “any format of sound can be mixed with any format of video, and then
    supplemented with any format of text or images.”59 According to Lessig, the
    challenge to this “open” vision comes from those “who don’t play in this
    interoperability game,” and the contestation between the “open” and the
    “closed” takes place in “the network,” which produces “a world where
    anyone can clip and combine just about anything to make something new.”60

    Despite its centrality in the rhetoric of mass digitization, the concept of
    interoperability and the politics it produces are rarely discussed in critical
    terms. Yet, as Gasser and Palfrey readily conceded in 2007, interoperability
    is not necessarily in itself an “unalloyed good.” Indeed, in “certain
    instances,” they noted, interoperability brings with it possible
    drawbacks such as increased homogeneity, lack of security, and lack of
    reliability.61 Today, ten years on, their admissions of the drawbacks of
    interoperability appear too modest, and it becomes clear
    that while their theoretical apparatus was able to identify the centrality of
    interoperability in a digital world, their social theory missed its larger
    political implications.

    When scanning the literature and recommendations on interoperability, certain
    words emerge again and again: innovation, choice, diversity, efficiency,
    seamlessness, flexibility, and access. As Tara McPherson notes in her related
    analysis of the politics of modularity, it is not much of a stretch to “layer
    these traits over the core tenets of post-Fordism” and note their effect on
    society: “time-space compression, transformability, customization, a
    public/private blur, etc.”62 The result, she suggests, is a remaking of the
    Fordist standardization processes into a “neoliberal rule of modularity.”
    Extending McPherson’s critique into the temporal terrain, Franco Bifo Berardi
    emphasizes the semantic politics of speed that is also inherent in
    connectivity and interoperability: “Connection implies smooth surfaces with no
    margins of ambiguity … connections are optimized in terms of speed and have
    the potential to accelerate with technological developments.”63 The
    connectivity enabled by interoperability thus implies modularity with
    components necessarily “open to interfacing and interoperability.”
    Interoperability, then, is not only a question of openness, but also a way of
    harnessing network effects by means of speed and resilience.

    While interoperability may be an inherent infrastructural tenet of neoliberal
    systems, increased interoperability does not automatically make mass
    digitization projects neoliberal. Yet, interoperability does allow for
    increased connectivity between individual cultural memory objects and a
    neoliberal economy. And while the neoliberal economy may emulate critical
    discourses on freedom and creativity, its main concern is profit. The same
    systems that allow users to create and navigate collections more freely are
    made interoperable with neoliberal systems of control.64

    ## The “Work” in Networking

    What are the effects of interoperability for the user? The culture of
    connectivity and interoperability has not only allowed Europeana’s collections
    to become more visible to a wider public, it has also enabled these publics to
    become intentionally or unintentionally involved in the act of describing and
    ordering these same collections, for instance by inviting users to influence
    existing collections as well as to generate their own collections. The
    increased interaction with works also transforms them from stable to mobile
    objects.65 Mass digitization has thus transformed curatorial practice,
    expanding it beyond the closed spheres of cultural memory institutions into
    much broader ecosystems and extending the focus of curatorial attention from
    fixed objects to dynamic network systems. As a result, “curatorial work has
    become more widely distributed between multiple agents including technological
    networks and software.”66 Once central to curatorial practice, the curator is
    now only one part of this larger system, and increasingly not its center.
    Sharing the curator’s place are users, algorithms, software engineers, and a
    multitude of other actors.

    At the same time, the information deluge generated by digitization has
    enhanced the necessity of curation, both within and outside institutions. Once
    considered professional caretaking of collections, the curatorial concept
    has now been modulated to encompass a whole host of activities and agents,
    just as curatorial practices are now ever more engaged in epistemic meaning
    making, selecting and organizing materials in an interpretive framework
    through the aggregation of global connection.67 And as the already monumental
    and ever-accelerating digital collections exceed human curatorial capacity,
    the computing power of machines and the cognitive capabilities of ordinary
    citizens are increasingly needed to penetrate and make meaning of the data
    accumulations.

    What role is Europeana’s user given in this new environment? With the
    increased modulation of public-private boundaries, which allows different
    modules to take on different tasks at different levels, the strict
    separation between institution and environment is blurring in Europeana. So is
    the separation between user, curator, consumer, and producer. New characters
    have thus arisen in the wake of these transformations, among them the
    “amateur” and the “citizen scientist.”

    In contrast to much of the microlabor that takes place in the digital sphere,
    Europeana’s participatory structures often consist of cognitive tasks that are
    directly related to the field of cultural memory. This aligns with the
    aspirations of the Citizen Science Alliance, which requires that all their
    crowdsourcing projects answer “a real scientific research question” and “must
    never waste the ‘clicks,’ or time, of volunteers.”68 Citizen science is an
    emergent form of research practice in which citizens participate in research
    projects on different levels and in different constellations with established
    research communities. The participatory structures of citizen science range
    from highly complex processes to simpler tasks, such as identifying colors,
    themes, or patterns that challenge machinic analysis. There are different
    ways of classifying these participatory structures, but the most prevalent
    ones in Europeana include:

    1. Contribution, where visitors are solicited to provide limited and specified objects, actions, or ideas to an institutionally controlled process, for example, Europeana’s _1914–1918_ exhibition, which allowed (and still allows) users to contribute photos, letters, and other memorabilia from that period.
    2. Correction and transcription, where users correct faulty OCR scans of books, newspapers, etc.
    3. Contextualization, that is, the practice of placing or studying objects in a meaningful context.
    4. Augmenting collections, that is, enriching collections with additional dimensions. One example is the recently launched Europeana Sound Connections, which encourages and enables visitors to “actively enrich geo-pinned sounds from two data providers with supplementary media from various sources. This includes using freely reusable content from Europeana, Flickr, Wikimedia Commons, or even individuals’ own collections.”69
    5. And finally, participation through classification, that is, a social tagging system in which users contribute classifications (a minimal sketch of such a tag record follows this list).
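
    To make the classification structure in item 5 concrete, the following is a
    minimal sketch of what a single crowdsourced tag record might look like; the
    field names and values are illustrative assumptions, not Europeana's actual
    data model:

    ```python
    # Hypothetical shape of one crowdsourced tag; not Europeana's real schema.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Tag:
        object_id: str      # identifier of the digitized object being tagged
        label: str          # the user-supplied classification term
        contributor: str    # pseudonymous identifier of the contributing user
        created: datetime   # timestamp of the contribution

    tag = Tag(
        object_id="obj-2020601-xyz",   # invented identifier
        label="lace bobbin",
        contributor="user-4711",
        created=datetime.now(timezone.utc),
    )
    print(tag)
    ```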

    All these participatory structures fall within the general rubric of
    crowdsourcing, and they are often framed in social terms and held up as an
    altruistic alternative to the capitalist exploitation of other crowdsourcing
    projects, because, as new media theorist Mia Ridge argues, “unlike commercial
    crowdsourcing, participation in cultural memory crowdsourcing is driven by
    pleasure, not profit. Rather than monetary recompense, GLAM (galleries,
    libraries, archives, and museums) projects provide an opportunity for
    altruistic acts, activated by intrinsic motivations, applied to inherently
    engaging tasks, encouraged by a personal interest in the subject or task.”70
    In addition—and based on this notion of altruism—these forms of crowdsourcing
    are also subversive successors of, or correctives to, consumerism.

    The idea of pitting the activities of citizen science against more simple
    consumer logics has been at the heart of Europeana since its inception,
    particularly influenced by the French philosopher Bernard Stiegler, who has
    been instrumental not only in thinking about, but also building, Europeana’s
    software infrastructures around the character of the “amateur.” Stiegler’s
    thesis was that the amateur could subvert the industrial ethos of production
    because he/she is not driven by a desire to consume as much as a desire to
    love, and thus is able to imbue the archive with a logic different from pure
    production71 without withdrawing from participation (the word “amateur” comes
    from the French word _aimer_ ).72 Yet it appears to me that the convergence of
    cultural memory ecosystems leaves little room for the philosophical idea of
    mobilizing amateurism as a form of resistance against capitalist logics.73 The
    blurring of production boundaries in the new cultural memory ecosystems raises
    urgent questions for cultural memory institutions about how to protect the
    ethos of the amateur in citizen archives,74 while also aligning them with
    institutional strategies of harvesting the “cognitive surplus” of users75 in
    environments where play is increasingly taking on aspects of labor and vice
    versa. As cultural theorist Angela Mitropoulos has noted, “networking is also
    net-working.”76 Thus, while many of the participatory structures we find in
    Europeana are participatory projects proper and not just what we might call
    participation-lite—or minimal participation77—models, the new interoperable
    infrastructures of cultural memory ecosystems make it increasingly difficult
    to uphold clear-cut distinctions between civic practice and exploitation in
    crowdsourcing projects.

    ## Collecting Europe

    If Europeana is a late-sovereign mass digitization project that maintains
    discursive ties to the national imaginary at the same time that it undercuts
    this imaginary by means of networked infrastructures through increased
    interoperability, the final question is: what does this late-sovereign
    assemblage produce in cultural terms? As outlined above, it was an aspiration
    of Europeana to produce and distribute European cultural memory by means of
    mass digitization. Today, its collection gathers more than 50 million cultural
    works in differing formats—from sound bites to photographs, textiles, films,
    files, and books. As the previous sections show, however, the processes of
    gathering the cultural artifacts have generated considerable friction,
    producing a political reality that in some respects reproduces and accentuates
    the
    existing politics of cultural memory institutions in terms of representation
    and ownership, and in other respects gives rise to new forms of cultural
    memory politics that part ways with the political regimes of traditional
    curatorial apparatuses.

    The story of how Europeana’s initial collection was published and later
    revised offers a good opportunity to examine its late-sovereign political
    dynamics. Europeana launched in 2008, giving access to some 4.5 million
    digital objects from more than 1,000 institutions. Shortly after its launch,
    however, the site crashed for several hours. The reason given by EU officials
    was that Europeana was a victim of its own success: “On the first day of its
    launch, Europe’s digital library Europeana was overwhelmed by the interest
    shown by millions of users in this new project … thousands of users searching
    in the very same second for famous cultural works like the _Mona Lisa_ or
    books from Kafka, Cervantes, or James Joyce. … The site was down because of
    massive interest, which shows the enormous potential of Europeana for bringing
    cultural treasures from Europe’s cultural institutions to the wide public.” 78
    The truth, however, lay elsewhere. As a Europeana employee explained, the site
    didn’t buckle under the enormous interest shown in it; rather, it failed
    because “people were hitting the same things everywhere.” The problem wasn’t
    so much the way they were hitting on material, but _what_ they were hitting;
    the Europeana employee explained that people’s search terms took the
    Commission by surprise, “even hitting things the Commission didn’t want to
    show. Because people always search for wrong things. People tend to look at
    pornographic and forbidden material such as _Mein Kampf_, etc.”79 Europeana’s
    reaction was to shut down and redesign its search interface. The crash was
    thus caused not by user popularity, but by a decision made by the Commission
    and Europeana staff to rework the site’s technical features so that the most
    popular searches would not be public and to remove potentially politically
    contentious material such as _Mein Kampf_ and nude works by Peter Paul Rubens
    and Abraham Bloemaert, among others. Another
    Europeana employee explained that the launch of Europeana had been forced
    through before its time because of a meeting among the cultural ministers in
    Europe, making it possible to display only a prototype. This beta version was
    coded to reveal the most popular searches, producing a “carousel” of the same
    content because, as the previous quote explains, people would search for the
    same things, in particular “porn” and “ _Mein Kampf_ ,” allegedly leading the
    US press to call Europeana a collection of fascist and porn material.

    On a small scale, Europeana’s early glitch highlighted the challenge of how to
    police the incoming digital flows from national cultural heritage institutions
    for in-copyright works. With hundreds of different institutions feeding
    hundreds of thousands of texts, images, and sounds into the portal, scanning
    the content for illegal material was an impossible task for Europeana
    employees. Many in-copyright works began flooding the portal. One in-copyright
    work that appeared in the portal stood out in particular: Hitler’s _Mein
    Kampf_. A common conception has been that _Mein Kampf_ was banned after WWII.
    The truth was more complicated and involved a complex copyright case. When
    Hitler died, his belongings were given to the state of Bavaria, including his
    intellectual property rights to _Mein Kampf_. Since Hitler’s copyright was
    transferred as part of the Allies’ de-Nazification program, the Bavarian state
    allowed no one to republish the book.80 Therefore, reissues of _Mein Kampf_
    reemerged only after the copyright expired at the end of 2015. The premature
    digital distribution of _Mein Kampf_ in Europeana was thus, according to
    copyright legislation, illegal. While the _Mein Kampf_ case was extraordinary, it
    flagged a more fundamental problem of how to police and analyze all the
    incoming data from individual cultural heritage institutions.

    On a more fundamental level, however, _Mein Kampf_ indicated not only a legal,
    but also a political, issue for Europeana: how to deal with the expressions
    that Europeana’s feedback mechanisms facilitated. Mass digitization promoted a
    new kind of cultural memory logic, namely that of feedback. Feedback
    mechanisms are central to data-driven companies like Google because they offer
    us traces of the inner worlds of people that would otherwise never appear in
    empirical terms, but that can be catered to in commercial terms.81 Yet, while
    such traces might interest the corporation (or sociologist) on the hunt for
    people’s hidden thoughts, a prestige project such as Europeana found them
    untenable. What Europeana wanted was to present Europe’s cultural memory; what
    they ended up showing was Europeans’ intense fascination with fascism and
    porn. And this was problematic because Europeana was a political project of
    representation, not a commercial project of capture.82

    Since its glitchy launch, Europeana has refined its interface techniques,
    become more attuned to network analytics, and grown exponentially in both
    institutional and material scope. There are, at the time of this writing,
    more than 50 million items in Europeana, and while its numbers are smaller
    than those of Google Books, its scope is much larger, including images,
    texts, sounds, videos, and 3-D objects. The platform features carefully
    curated exhibitions highlighting European themes, from generalized exhibitions
    about World War I and European artworks to much more specialized exhibitions
    on, for instance, European cake culture.

    But how is Europe represented in statistical terms? Since Europeana’s
    inception, there has been huge variation in how much each nation-state
    contributes to Europeana.83 So while Europeana in principle represents
    Europe’s collective cultural memory, in reality it presents a highly
    fragmented image of Europe, with many European countries not even appearing
    in the databases. Moreover, even these numbers are potentially misleading, as
    one information scholar formerly working with Europeana notes: to pump up
    their statistical representation, many institutions strategically invented
    counting systems that would make their contributions seem larger than they
    really are, for example, by declaring each scanned page of a medieval
    manuscript an object instead of counting the entire work as one.84 These
    strategic acts of volume inflation are interesting mass digitization
    phenomena because they reveal the ultimately volume-based approach of mass
    digitization. According to the scholar, this volume-based approach finds
    political support in the EC system, for whom “the object will always be
    quantitative” since volume is “the only thing the commission can measure in
    terms of funding and result.”85 In a way then, the statistics tell more than
    one story: in
    political terms, they recount not only the classic tale of a fragmented Europe
    but also how Europe is increasingly perceived, represented, and managed by
    calculative technologies. In technical terms, they reveal the gray areas of
    how to delineate and calculate data: what makes a data object? And in cultural
    policy terms, they reflect the highly divergent prioritization of mass
    digitization in European countries.
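
    The question of what makes a data object is easy to illustrate in arithmetic
    terms. A minimal sketch, with invented figures, of how counting granularity
    alone changes reported holdings:

    ```python
    # How counting granularity inflates contribution statistics.
    # All figures are invented for the illustration.
    manuscripts = {"Codex A": 212, "Codex B": 187, "Book of Hours": 154}  # pages per work

    objects_counted_as_works = len(manuscripts)           # one record per work -> 3
    objects_counted_as_pages = sum(manuscripts.values())  # one record per page -> 553

    print(objects_counted_as_works, objects_counted_as_pages)
    ```

    The same three manuscripts can thus be reported as 3 objects or as 553,
    depending entirely on the counting convention an institution adopts.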

    The final question is, then: how is this fragmented European collection
    distributed? This is the point where Europeana’s territorial matrix reveals
    its ultimately networked infrastructure. Europeana may be entered through
    Google, Facebook, Twitter, and Pinterest, and vice versa. Thus a click on
    the aforementioned cake exhibition, for example, takes one straight to Google
    Arts and Culture. The transportation from the Europeana platform to Google
    happens smoothly, without any friction or notice, and if one didn’t look at
    the change in URL, one would hardly notice the change at all, since the
    interfaces appear almost identical. Yet, what are the implications of this
    networked nature? An obvious consequence is that Europeana is structurally
    dependent on the social media and search engine companies. According to one
    Europeana report, Google is the biggest source of traffic to the Europeana
    portal, accounting for more than 50 percent of visits. Any changes in Google’s
    algorithm and ranking index therefore significantly impact traffic patterns on
    the Europeana portal, which in turn affects the number of Europeana pages
    indexed by Google, which then directly impacts the number of overall visits
    to the Europeana portal.86 The same holds true for Facebook, Pinterest,
    Google+, etc.

    Taken together, the feedback mechanisms, the statistical variance, and the
    networked infrastructures of Europeana show just how difficult it is to
    collect Europe in the digital sphere. This is not to say that territorial
    sentiments don’t have power, however—far from it. Within the digital sphere we
    are already seeing territorial statements circulated in Europe on both
    national and supranational scales, with potentially far-reaching implications
    for both. Yet, there is little to suggest that the territorial sentiments will
    reproduce sovereign spheres in practice. To the extent that reterritorializing
    sentiments are circulated in globalizing networks, this chapter has sought to
    counter both ideas about post sovereignty and pure nationalization, viewing
    mass digitization instead through the lens of late-sovereignty. As this
    chapter shows, the notion of late-sovereignty allows us to conceptualize mass
    digitization programs, such as Europeana, as globalized phenomena couched
    within the language of (supra)national sovereignty. In an age when rampant
    nationalist movements sweep through globalized communication networks, this
    approach feels all the more urgent and applicable not only to mass
    digitization programs, but also to reterritorializing communication phenomena
    more broadly. Only if we take seriously the ways in which the nationalist
    imaginary works in the infrastructural reality of late capitalism can we
    begin to account for the infrapolitics of the highly mediated new territorial
    imaginaries.

    ## Notes

    1. Lefler 2007; Henry W., “Europe’s Digital Library versus Google,” Café
    Babel, September 22, 2008, /europes-digital-library-versus-google.html>; Chrisafis 2008. 2. While
    digitization did not stand apart from the political and economic developments
    in the rapidly globalizing world, digital theorists and activists soon gave
    rise to the Internet as an inherent metaphor for this integrative development,
    a sign of the inevitability of an ultimately borderless world, where as
    Negroponte notes, time zones would “probably play a bigger role in our digital
    future than trade zones” (Negroponte 1995, 228). 3. Goldsmith and Wu 2006. 4.
    Rogers 2012. 5. Anderson 1991. 6. “Jacques Chirac donne l’impulsion à la
    création d’une bibliothèque numérique,” _Le Monde_ , March 16, 2005,
    donne-l-impulsion-a-la-creation-d-une-bibliotheque-
    numerique_401857_3246.html>. 7. Meunier 2007. 8. As Sophie Meunier reminds us,
    the _Ursprung_ of the competing universalisms can be located in the two
    contemporary revolutions that lent legitimacy to the universalist claims of
    both the United States and France. In the wake of the revolutions, a perceived
    competition arose between these two universalisms, resulting in French
    intellectuals crafting anti-American arguments, not least when French
    imperialism “was on the wane and American imperialism on the rise.” See
    Meunier 2007, 141. Indeed, Meunier suggests, anti-Americanism is “as much a
    statement about France as it is about America—a resentful longing for a power
    that France no longer has” (ibid.). 9. Jeanneney 2007, 3. 10. Emile Chabal
    thus notes how the term is “employed by prominent politicians, serious
    academics, political commentators, and in everyday conversation” to “cover a
    wide range of stereotypes, pre-conceptions, and judgments about the Anglo-
    American world” (Chabal 2013, 24). 11. Chabal 2013, 24–25. 12. Jeanneney 2007.
    13. While Jeanneney framed this French cultural-political endeavor as a
    European “contre-attaque” against Google Books, he also emphasized that his
    polemic was not at all to be read as a form of aggression. In particular he
    pointed to the difficulties of translating the word _défie_ , which featured
    in the title of the piece: “Someone rightly pointed out that the English word
    ‘defy,’ with which American reporters immediately rendered _défie,_ connotes a
    kind of violence or aggressiveness that isn’t implied by the French word. The
    right word in English is ‘challenge,’ which has a different implication, more
    sporting, more positive, more rewarding for both sides” (Jeanneney 2007, 85).
    14. See pages 12, 22, and 24 for a few examples in Jeanneney 2007. 15. On the
    issue of the common currency, see, for instance, Martin and Ross 2004. The
    idea of France as an appropriate spokesperson for Europe was familiar already
    in the eighteenth century when Voltaire declared French “la Langue de
    l’Europe”; see Bivort 2013. 16. The official thus first noted that, “Everybody
    is working on digitization projects … cooperation between Google and the
    European project could therefore well occur.” and later added that ”The worst
    scenario we could achieve would be that we had two big digital libraries that
    don’t communicate. … The idea is not to do the same thing, so maybe we could
    cooperate, I don’t know. Frankly, I’m not sure they would be interested in
    digitizing our patrimony. The idea is to bring something that is
    complementary, to bring diversity. But this doesn’t mean that Google is an
    enemy of diversity.” See Labi 2005. 17. Letter from Manuel Barroso to Jacques
    Chirac, July 7, 2005,
    [http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1](http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1).
    18. As one EC communication noted, a digitization project on the scale of
    Europeana could sharpen Europe’s competitive edge in digitization processes
    compared to those in the US as well India and China; see European Commission,
    “i2010: Digital Libraries,” _COM(2005) 465 final_ , September 30, 2005, [eur-
    lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0465&from=EN](http
    ://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0465&from=EN).
    19. “Google Books raises concerns in some member states,” as an anonymous
    Czech diplomatic source put it; see Paul Meller, “EU to Investigate Google
    Books’ Copyright Policies,” _PCWorld_ , May 28, 2009,
    .
    20. Pfanner 2011; Doward 2009; Samuel 2009. 21. Amicus brief is a legal term
    that in Latin means “friend of the court.” Frequently, a person or group who
    is not a party to a lawsuit, but has a strong interest in the matter, will
    petition the court for permission to submit a brief in the action with the
    intent of influencing the court’s decision. 22. See chapter 4 in this volume.
    23. de la Durantaye 2011. 24. Kevin J. O’Brien and Eric Pfanner, “Europe
    Divided on Google Book Deal,” _New York Times_ , August 23, 2009,
    ; see
    also Courant 2009; Darnton 2009. 25. de la Durantaye 2011. 26. Viviane Reding
    and Charlie McCreevy, “It Is Time for Europe to Turn over a New E-Leaf on
    Digital Books and Copyright,” MEMO/09/376, September 7, 2009, [europa.eu/rapid
    /press-release_MEMO-09-376_en.htm?locale=en](http://europa.eu/rapid/press-
    release_MEMO-09-376_en.htm?locale=en). 27. European Commission,
    “Europeana—Next Steps,” COM(2009) 440 final, August 28, 2009, [eur-
    lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2009:0440:FIN:en:PDF](http
    ://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2009:0440:FIN:en:PDF).
    28. “It is logical that the private partner seeks a period of preferential use
    or commercial exploitation of the digitized assets in order to avoid free-
    rider behaviour of competitors. This period should allow the private partner
    to recoup its investment, but at the same time be limited in time in order to
    avoid creating a one-market player situation. For these reasons, the Comité
    set the maximum time of preferential use of material digitised in public-
    private partnerships at maximum 7 years” (Niggemann 2011). 29. Walker 2003.
    30. Within this complex environment it is not even possible to draw boundaries
    between the networked politics of the EU and the sovereign politics of member
    states. Instead, member states engage in double-talk. As political scientist
    Sophie Meunier reminds us, even member states such as France engage in double-
    talk on globalization, with France on the one hand becoming the “worldwide
    champion of anti-globalization,” and on the other hand “a country whose
    economy and society have quietly adapted to this much-criticized
    globalization” (Meunier 2003). On political two-level games, see also Putnam
    1988. 31. Walker 2003. 32. “Google Books Project to Remove European Titles,”
    _Telegraph_ , September 7, 2009,
    remove-European-titles.html>. 33. “Europeana Factsheet,” Europeana, September
    28, 2015,
    /copy-of-europeana-policy-illustrating-the-20th-century-black-hole-in-the-
    europeana-dataset.pdf> . 34. C. Handke, L. Guibault, and J. J. Vallbé, “Is
    Europe Falling Behind in Data Mining? Copyright’s Impact on Data Mining in
    Academic Research,” 2015, id-12015-15-handke-elpub2015-paper-23>. 35. Interview with employee, DG
    Copyright, EC Commission, 2010. 36. Interview with employee, DG Information
    and Society, EC Commission, 2010. 37. Montagnani and Borghi 2008. 38. Julia
    Fallon and Paul Keller, “European Parliament Demands Copyright Rules that
    Allow Cultural Heritage Institutions to Share Collections Online,” Europeana
    Pro, rules-better-fit-for-a-digital-age>. 39. Jasanoff 2013, 133. 40. Ibid. 41. Tate
    2001. 42. It would be tempting to suggest the discussion on harmonization
    above would apply to interoperability as well. But while the concepts of
    harmonization and interoperability—along with the neighboring term
    standardization—are used intermittently and appear similar at first glance,
    they nevertheless have precise cultural-legal meanings and implicate different
    infrastructural set-ups. As noted above, the notion of harmonization is
    increasingly used in the legal context of harmonizing regulatory
    apparatuses—in the case of mass digitization especially copyright laws. But
    the word has a richer semantic meaning, suggesting a search for commonalities,
    literally by means of fitting together or arranging units into a whole. As
    such the notion of harmony suggests something that is both pleasing and
    presupposes a cohesive unit(y), for example, a door hinged to a frame, an arm
    hinged to a body. While used in similar terms, the notion of interoperability
    expresses a very different infrastructural modality. If harmonization suggests
    unity, interoperability rather alludes to modularity. For more on the concepts
    of standardization and harmonization in regulatory contexts, see Tay and
    Parker 1990. 43. The notion of interoperability is often used to express a
    system’s ability to transfer, render and connect to useful information across
    systems, and calls for interoperability have increased as systems have become
    increasingly complex. 44. There are “myriad technical and engineering issues
    associated with connecting together networks, databases, and other computer-
    based systems”; digitized cultural memory institutions have the option of
    providing “a greater array of services” than traditional libraries and
    archives from sophisticated search engines to document reformatting as rights
    negotiations; digitized cultural memory materials are often more varied than
    the material held in traditional libraries; and finally and most importantly,
    mass digitization institutions are increasingly becoming platforms that
    connect “a large number of loosely connected components” because no “single
    corporation, professional organization, or government” would be able to
    provide all that is necessary for a project such as Europeana; not least on an
    international scale. EU-NSF Digital Library Working Group on Interoperability
    between Digital Libraries Position Paper, 1998,
    . 45.  _The
    Digicult Report: Technological Landscapes for Tomorrow’s Cultural Economy:
    Unlocking the Value of Cultural Heritage: Executive Summary_ (Luxembourg:
    Office for Official Publications of the European Communities, 2002), 80. 46.
    “… interoperability in organisational terms is not foremost dependent on
    technologies,” ibid. 47. As such they align with what Internet governance
    scholar Laura DeNardis calls the Internet’s “underlying principle” (see
    DeNardis 2014). 48. The results of the EC Working Group on Digital Library
    Interoperability are reported in the briefing paper by Stephan Gradman
    entitled “Interoperability: A Key Concept for Large Scale, Persistent Digital
    Libraries” (Gradmann 2009). 49. “Semantic operability ensures that programmes
    can exchange information, combine it with other information resources and
    subsequently process it in a meaningful manner: _European Interoperability
    Framework for pan-European eGovernment services_ , 2004,
    . In the case of
    Europeana, this could consist of the development of tools and technologies to
    improve the automatic ingestion and interpretation of the metadata provided by
    cultural institutions, for example, by mapping the names of artists so that an
    artist known under several names is recognised as the same person.” (Council
    Conclusions on the Role of Europeana for the Digital Access, Visibility and
    Use of European Cultural Heritage,” European Council Conclusion, June 1, 2016,
    .) 50.
    Bowker, Baker, Millerand, and Ribes 2010. 51. Tsilas 2011, 103. 52. Borgman
    2015, 46. 53. McDonough 2009. 54. Palfrey and Gasser 2012. 55. DeNardis 2011.
    56. Matthew Kirschenbaum, “The .txtual Condition: Digital Humanities,
    Born-Digital Archives, and the Future Literary”; Palfrey and Gasser 2012;
    Matthew Kirschenbaum, “Distant Mirrors and the Lamp,” talk at the 2013 MLA
    Presidential Forum Avenues of Access session on “Digital Humanities and the
    Future of Scholarly Communication.” 57. Ping-Huang 2016. 58. Lessig 2005.
    59. Ibid. 60. Ibid. 61.
    Palfrey and Gasser 2012. 62. McPherson 2012, 29. 63. Berardi, Genosko, and
    Thoburn 2011, 29–31. 64. For more on the nexus of freedom and control, see
    Chun 2006. 65. The mere act of digitization of course inflicts mobility on an
    object as digital objects are kept in a constant state of migration. 66. Krysa
    2006. 67. See only the wealth of literature currently generated on the
    “curatorial turn,” for example, O’Neill and Wilson 2010; and O’Neill and
    Andreasen 2011. 68. Romeo and Blaser 2011. 69. Europeana Sound Connections,
    collections-on-a-social-networking-platform.html>. 70. Ridge 2013. 71. Carolyn
    Dinshaw has argued for the amateur’s ability in similar terms, focusing on her
    potential to queer the archive (see Dinshaw 2012). 72. Stiegler 2003; Stiegler
    n.d. The idea of the amateur as a subversive character precedes digitization,
    of course. Think only of Roland Barthes’s idea of the amateur as a truly
    subversive character that could lead to a break with existing ideologies in
    disciplinary societies; see, for instance, Barthes’s celebration of the
    amateur as a truly anti-bourgeois character (Barthes 1977 and Barthes 1981).
    73. Not least in light of recent writings on the experience as even love
    itself as a form of labor (see Weigel 2016). The constellation of love as a
    form of labor has a long history (see Lewis 1987). 74. Raddick et al. 2009;
    Proctor 2013. 75. “Many companies and institutions, that are successful
    online, are good at supporting and harnessing people’s cognitive surplus. …
    Users get the opportunity to contribute something useful and valuable while
    having fun” (Sanderhoff, 33 and 36). 76. Mitropoulos 2012, 165. 77. Carpentier
    2011. 78. EC Commission, “Europeana Website Overwhelmed on Its First Day by
    Interest of Millions of Users,” MEMO/08/733, November 21, 2008,
    . See also Stephen
    Castle, “Europeana Goes Online and Is Then Overwhelmed,” _New York Times_ ,
    November 21, 2008,
    [nytimes.com/2008/11/22/technology/Internet/22digital.html](http://nytimes.com/2008/11/22/technology/Internet/22digital.html).
    79. Information scholar affiliated with Europeana, interviewed by Nanna Bonde
    Thylstrup, Brussels, Belgium, 2011. 80. See, for instance, Martina Powell,
    “Bayern will mit ‘Mein Kampf’ nichts mehr zu tun haben,” _Die Zeit_ , December
    13, 2013, soll-erscheinen>. Bavaria’s restrictive publishing policy of _Mein Kampf_
    should most likely be interpreted as a case of preventive precaution on behalf
    of the Bavarian State’s diplomatic reputation. Yet by transferring Hitler’s
    author’s rights to the Bavarian Ministry, they allocated _Mein Kampf_ to an
    existence in a gray area between private and public law. Since then, the book
    has been the center of attention in a rift between, on the one hand, the
    Ministry of Finance who has rigorously defended its position as the formal
    rights holder, and, on the other hand, historians and intellectuals who,
    supported by the Bavarian science minister Wolfgang Heubisch, have argued that
    an academically annotated version of _Mein Kampf_ should be made publicly
    accessible
    in the name of Enlightenment. 81. Latour 2007. 82. Europeana’s more
    traditional curatorial approach to mass digitization was criticized not only
    by the media, but also others involved in mass digitization projects, who
    claimed that Europeana had fundamentally misunderstood the point of mass
    digitization. One engineer working on mass digitization projects at the
    influential cultural software developer organization IRI argued that
    Europeana’s production pattern was comparable to “launching satellites”
    without thinking of the messages that are returned by the satellites. Google,
    he argued, was differently attuned to the importance of feedback, because
    “feedback is their business.” 83. In the most recently published report,
    Germany contributes about 15 percent and France around 16 percent of the
    total number of available works. At the same time, Belgium and Slovenia each
    account for only around 1 percent, and Denmark, along with Greece, Luxembourg,
    Portugal, and a slew of other countries, doesn’t even achieve representation
    in the pie
    chart; see “Europeana Content Report,” August 6, 2015,
    /europeana-dsi-ms7-content-report-august.pdf>. 84. Europeana information
    scholar interview, 2011. 85. Ibid. 86. Wiebe de Jager, “MS15: Annual traffic
    report and analysis,” Europeana, May 31 2014,
    .

    # 4
    The Licit and Illicit Nature of Mass Digitization

    ## Introduction: Lurking in the Shadows

    A friend has just recommended an academic book to you, and now you are dying
    to read it. But you know that it is both expensive and hard to get your hands
    on. You head down to your library to request the book, but you soon realize
    that the wait list is enormous and that you will not be able to get the book
    for a couple of weeks. Desperate, you turn to your friend for
    help. She asks, “Why don’t you just go to a pirate library?” and provides you
    with a link. A new world opens up. Twenty minutes later you have downloaded 30
    books that you felt were indispensable to your bookshelf. You didn’t pay a
    thing. You know what you did was illegal. Yet you also felt strangely
    justified in your actions, not least spurred on by the enthusiastic words on
    the shadow library’s front page, which sets forth a comforting moral compass.
    You begin thinking to yourself: “Why are pirate libraries deemed more illegal
    than Google’s controversial scanning project?” and “What are the moral
    implications of my actions vis-à-vis the colonial framework that currently
    dictates Europeana’s copyright policies?”

    The existence of what this book terms shadow libraries raises difficult
    questions, not only for your own moral compass but also for the field of mass
    digitization. Political and popular discourses often reduce the complexity of
    these questions to “right” and “wrong” and Hollywood narratives of pirates and
    avengers. Yet, this chapter wishes to explore the deeper infrapolitical
    implications of shadow libraries, setting out the argument that shadow
    libraries offer us a productive framework for examining the highly complex
    legal landscape of mass digitization. Rather than writing a chapter that
    either supports or counters shadow libraries, the chapter seeks to chart the
    complexity of the phenomenon and tease out its relevance for mass digitization
    by framing it within what we might call an infrapolitics of parasitism.

    In _The Parasite_ , a strange and fabulating book that brings together
    information theory and cybernetics, physics, philosophy, economy, biology,
    politics, and folk tales, French philosopher Michel Serres constructs an
    argument about the conceptual figure of the parasite to explore the parasitic
    nature of social relations. In a dizzying array of images and thought-
    constructs, Serres argues against the idea of a balanced exchange of energy,
    suggesting instead that our world is characterized by one parasite stealing
    energy by feeding on another organism. For this purpose he reminds us that in
    French the term parasite has three distinct, but related, meanings. The first
    relates to one organism feeding off another and giving nothing in return. The
    second refers to the social concept of the freeloader, who lives off society
    without giving anything in return. Both of these meanings are fairly familiar
    to most, and lay the groundwork for our annoyance with both bugs and
    spongers. The third
    meaning, however, is little known outside French: here the parasite is static
    noise or interference in a channel, interrupting the seemingly balanced flow
    of things, mediating and thus transforming relations.
    Indeed, for Serres, the parasite is itself a disruptive relation (rather than
    entity). The parasite can also change positions of sender, receiver, and
    noise, making it exceedingly difficult to discern parasite from nonparasite;
    indeed, to such an extent that Serres himself exclaims “I no longer really
    know how to say it: the parasite parasites the parasites.”1 Serres thus uses
    his parasitic model to make a claim about the nature of cybernetic
    technologies and the flow of information, arguing that “cybernetics gets more
    and more complicated, makes a chain, then a network. Yet it is founded on the
    theft of information, quite a simple thing.”2 The logic of the parasite,
    Serres argues, is the logic of the interrupter, the “excluded third” or
    “uninvited guest” who intercepts and confuses relations in a process of theft
    that has both a destructive and a constructive value. The parasite
    is thus a generative force, inventing, affecting, and transforming relations.
    Hence, parasitism refers not only to an act of interference but also to an
    interruption that “invents something new.”3

    Michel Serres’s then-radical philosophy of the parasite is today echoed by a
    broader recognition of the parasite as not only a dangerous entity, but also a
    necessary mediator. Indeed, as Jeanette Samyn notes, we are today witnessing a
    “pro-parasitic” movement in science in which “scientists have begun to
    consider parasites and other pathogens not simply as problems but as integral
    components of ecosystems.”4 In this new view, “… the parasite takes from its
    host without ever taking its place; it creates new room, feeding off excess,
    sometimes killing, but often strengthening its milieu.” In the following
    sections, the lens of the parasite will help us explore the murky waters of
    shadow libraries, not (only) as entities, but also as relational phenomena.
    The point is to show how shadow libraries belong to the same infrapolitical
    ecosystem as Google Books and Europeana, sometimes threatening them, but often
    also strengthening them. Moreover, it seeks to show how visitors’ interactions
    with shadow libraries are also marked by parasitical relations with Google,
    which often mediates literature searches, thus entangling Google and shadow
    libraries in a parasitical relationship where one feeds off the other and vice
    versa.

    Despite these entangled relations, the mass digitization strategies of shadow
    libraries, Europeana, and Google Books differ significantly. Basically, we
    might say that Google Books and Europeana each represent different strategies
    for making material available on an industrial scale while maintaining claims
    to legality. The sprawling and rapidly growing group of mass digitization
    projects interchangeably termed shadow libraries represents a third set of
    strategies. Shadow libraries5 share affinities with Europeana and Google Books
    in the sense that they offer many of the same services: instant access to a
    wealth of cultural works spanning journal articles, monographs, and textbooks
    among others. Yet, while Google Books and Europeana promote visibility to
    increase traffic, embed themselves in formal systems of communication, and
    operate within the legal frameworks of public funding and private contracting,
    shadow libraries in contrast operate in the shadows of formal visibility and
    regulatory systems. Hence, while formal mass digitization projects such as
    Google Books and Europeana publicly proclaim their desire to digitize the
    world’s cultural memory, another layer of people, scattered across the globe
    and belonging to very diverse environments, harbor the same aspirations, but
    in much more subtle terms. Most of these people express an interest in the
    written word, a moral conviction of free access, and a political view on
    existing copyright regulations as unjust and/or untimely. Some also express
    their fascination with the new wonders of technology and their new
    infrastructural possibilities. Others merely wish to practice forms of access
    that their finances, political regime, or geography would otherwise deny
    them. And all of them are important nodes in a new shadowy
    infrastructural system that provides free access worldwide to books and
    articles on a scale that collectively far surpasses both Google and Europeana.

    Because of their illicit nature, most analyses of shadow libraries have
    centered on their legal transgressions. Yet, their cultural trajectories
    contain nuances that far exceed legal binaries. Approaching shadow libraries
    through the lens of infrapolitics is helpful for bringing forth these much
    more complex cultural mass digitization systems. This chapter explores three
    examples of shadow libraries, focusing in particular on their stories of
    origin, their cultural economies, and their sociotechnical infrastructures.
    Not all shadow libraries fit perfectly into the category of mass digitization.
    Some of them are smaller in size, more selective, and less industrial.
    Nevertheless, I include them because their open access strategies allow for
    unlimited downloads. Thus, shadow libraries, while perhaps selective in size
    themselves, offer the opportunity to reproduce works at a massive and
    distributed scale. As such, they are the perfect example of a mass
    digitization assemblage.

The first case centers on lib.ru, an early Russia-based file-sharing platform
for exchanging books that has since grown into a massive and distributed
project. It is primarily run by individuals, but it has also received
public funding, which shows that what at first glance appears to be a simple case
    of piracy simultaneously serves as a much more complex infrapolitical
    structure. The second case, Monoskop, distinguishes itself by its boutique
    approach to digitization. Monoskop too is characterized by its territorial
    trajectory, rooted in Bratislava’s digital scene as an attempt to establish an
    intellectual platform for the study of avant-garde (digital) cultures that
    could connect its Bratislava-based creators to a global scene. Finally, the
    chapter looks at UbuWeb, a shadow library dedicated to avant-garde cultural
    works ranging from text and audio to images and film. Founded in 1996 as a US-
    based noncommercial file-sharing site by poet Kenneth Goldsmith in response to
    the marginal distribution of crucial avant-garde material, UbuWeb today offers
    a wealth of avant-garde sound art, video, and textual works.

    As the case studies show, shadow libraries have become significant mass
    digitization infrastructures that offer the user free access to academic
    articles and books, often by means of illegal file-sharing. They are informal
    and unstable networks that rely on active user participation across a wide
    spectrum, from deeply embedded people who have established file-sharing sites
    to the everyday user occasionally sending the odd book or article to a friend
    or colleague. As Lars Eckstein notes, most shadow libraries are characterized
    not only by their informal character, but also by the speed with which they
    operate, providing “a velocity of media content” which challenges legal
    attacks and other forms of countermeasures.6 Moreover, shadow libraries also
    often operate in a much more widely distributed fashion than both Europeana
    and Google, distributing and mirroring content across multiple servers, and
    distributing labor and responsibility in a system that is on the one hand more
    robust, more redundant, and more resistant to any single point of failure or
    control, and on the other hand more ephemeral, without a central point of
    back-up. Indeed, some forms of shadow libraries exist entirely without a
    center, instead operating infrastructurally along communication channels in
    social media; for example, the use of the Twitter hashtag #ICanHazPDF to help
    pirate scientific papers.

    Today, shadow libraries exist as timely reminders of the infrapolitical nature
    of mass digitization. They appear as hypertrophied versions of the access
    provided by Google Books and Europeana. More fundamentally, they also exist as
    political symptoms of the ideologies of the digital, characterized by ideals
    of velocity and connectivity. As such, we might say that although shadow
    libraries often position themselves as subversives, in many ways they also
    belong to the same storyline as other mass digitization projects such as
    Google Books and Europeana. Significantly, then, shadow libraries are
    infrapolitical in two senses: first, they have become central infrastructural
    elements in what James C. Scott calls the “infrapolitics of subordinate
    groups,” providing everyday resistance by creating entrance points to
    hitherto-excluded knowledge zones.7 Second, they represent and produce the
    infrapolitics of the digital _tout court_ with their ideals of real-time,
    globalized, and unhindered access.

    ## Lib.ru

    Lib.ru is one of the earliest known digital shadow libraries. It was
    established by the Russian computer science professor Maxim Moshkov, who
    complemented his academic practice of programming with a personal hobby of
    file-sharing on the so-called RuNet, the Russian-language segment of the
    Internet.8 Moshkov’s collection had begun as an e-book swapping practice in
    1990, but in 1994 he uploaded the material to his institute’s web server where
he then divided the site into several sections such as “my hobbies,” “my work,”
    and “my library.”9 If lib.ru began as a private project, however, the role of
    Moshkov’s library soon changed as it quickly became Russia’s preferred shadow
    library, with users playing an active role in its expansion by constantly
    adding new digitized books. Users would continually scan and submit new texts,
    while Moshkov, in his own words, worked as a “receptionist” receiving and
    handling the material.10

    Shadow libraries such as Moshkov’s were most likely born not only out of a
    love of books, but also out of frustration with Russia’s lack of access to up-
    to-date and affordable Western works.11 As they continued to grow and gain in
    popularity, shadow libraries thus became not only points of access, but also
    signs of infrastructural failure in the formal library system.12 After lib.ru
    outgrew its initial server storage at Moshkov’s institute, Moshkov divided it
    into smaller segments that were then distributed, leaving only the Russian
    literary classics on the original site.13 Neighboring sites hosted other
    genres, ranging from user-generated texts and fan fiction on a shadow site
    called [samizdat.lib.ru](http://samizdat.lib.ru) to academic books in a shadow
    library titled Kolkhoz, named after the commons-based agricultural cooperative
    of the early Soviet era and curated and managed by “amateur librarians.”14 The
    steadily accumulating numbers of added works, digital distributors, and online
    access points expanded not only the range of the shadow collections, but also
    their networked affordances. Lib.ru and its offshoots thus grew into an
    influential node in the global mass digitization landscape, attracting both
    political and legal attention.

    ### Lib.ru and the Law

    Until 2004, lib.ru deployed a practice of handling copyright complaints by
simply removing works at the authors’ first request.15 But in 2004 the
    library received its first significant copyright claim from the big Russian
    publisher Kirill i Mefody (KM). KM requested that Moshkov remove access to a
long list of books, claiming exclusive Internet rights to them, including
works that were considered public domain. Moshkov refused to honor the
request, and a lawsuit ensued. The Ostankino Court of Moscow initially
dismissed the lawsuit because the contracts for exclusive Internet rights were
    considered invalid. This did not deter KM, however, which then approached the
    case from a different perspective, filing applications on behalf of well-known
    Russian authors, including the crime author Alexandra Marinina and the science
fiction writer Eduard Gevorkyan. In the end, only Gevorkyan maintained his
claim, which amounted to the considerable sum of one million rubles.16

    During the trial, Moshkov’s library received widespread support from both
    technologists and users of lib.ru, expressed, for example, in a manifesto
    signed by the International Union of Internet Professionals, which among other
    things touched upon the importance of online access not only to cultural works
    but also to the Russian language and culture:

    > Online libraries are an exceptionally large intellectual fund. They lessen
    the effect of so-called “brain drain,” permitting people to stay in the orbit
    of Russian language and culture. Without online libraries, the useful effect
    of the Internet and computers in Russian education system is sharply lowered.
    A huge, openly available mass of Russian literary texts is a foundation
    permitting further development of Russian-language culture, worldwide.17

    Emphasizing that Moshkov often had an agreement with the authors he put
    online, the manifesto also called for a more stable model of online public
    libraries, noting that “A wide list of authors who explicitly permitted
    placing their works in the lib.ru library speaks volumes about the
    practicality of the scheme used by Maxim Moshkov. However, the litigation
    underway shows its incompleteness and weak spots.”18 Significantly, Moshkov’s
    shadow library also received both moral and financial support from the state,
    more specifically in the form of funding of one million rubles granted by the
    Federal Agency for the Press and Mass Media. The funding came with the
    following statement from the Agency’s chairman, Mikhail Seslavinsky:
    “Following the lively discussion on how copyright could be protected in
    electronic libraries, we have decided not to wait for a final decision and to
    support the central library of RuNet—Maxim Moshkov’s site.”19 Seslavinsky’s
    support not only reflected the public’s support of the digital library, but
    also his own deep-seated interests as a self-confessed bibliophile, council
    chair of the Russian organization National Union of Bibliophiles since 2011,
    and author of numerous books on bibliology and bibliophilia. Additionally, the
support also reflected the issues at stake for the Russian legislative
framework on copyright. A revised law “On Copyright and Related Rights” had
just passed its second reading in the Russian parliament on April 21, 2004,
extending copyright from 50 years after an author’s death to
    70 years, in accordance with international law and as a condition of Russia’s
    entry into the World Trade Organization.20

    The public funding, Moshkov stated, was spent on modernizing the technical
    equipment for the shadow library, including upgrading servers and performing
    OCR scanning on select texts.21 Yet, despite the widespread support, Moshkov
    lost the copyright case to KM on May 31, 2005. The defeat was limited,
however. Indeed, one might even read the verdict as a symbolic victory for
Moshkov, as the court fined him only 30,000 rubles, a fraction of what KM
had originally sued for. Still, the verdict had significant consequences for how
Moshkov manages lib.ru. After the trial, Moshkov began extending his
    classical literature section and stopped uploading books sent by readers into
    his collection, unless they were from authors who submitted them because they
    wished to publish in digital form.

    What can we glean from the story of lib.ru about the infrapolitics of mass
    digitization? First, the story of lib.ru illustrates the complex and
    contingent historical trajectory of shadow libraries. Second, as the next
    section shows, it offers us the possibility of approaching shadow libraries
    from an infrastructural perspective, and exploring the infrapolitical
    dimensions of shadow libraries in the area of tension between resistance and
    standardization.

    ### The Infrapolitics of Lib.ru: Infrastructures of Culture and Dissent

    While global in reach, lib.ru is first and foremost a profoundly
    territorialized project. It was born out of a set of political, economic, and
    aesthetic conditions specific to Russia and carries the characteristics of its
    cultural trajectory. First, the private governance of lib.ru, initially
    embodied by Moshkov, echoes the general development of the Internet in Russia
    from 1991 to 1998, which was constructed mainly by private economic and
    cultural initiatives at a time when the state was in a period of heavy
    transition. Lib.ru’s minimalist programming style also made it a cultural
    symbol of the early RuNet, acting as a marker of cultural identity for Russian
    Internet users at home and abroad.22

The infrapolitics of lib.ru also carries the traits of the media politics of
Russia, which has historically been split in two: a political and visible
    level of access to cultural works (through propaganda), and an infrapolitical
    invisible level of contestation and resistance, enabling Russian media
    consumers to act independently from official institutionalized media channels.
    Indeed, some scholars tie the practice of shadow libraries to the Soviet
Union’s analog shadow activities, often termed _samizdat_, that is, illegal
cultural distribution, including illegally listening to Western radio,
trafficking Western music, and watching Western films.23
    Despite often circulating Western pop culture, the late-Soviet era samizdat
    practices were often framed as noncapitalist practices of dissent without
    profit motives.24 The dissent, however, was not necessarily explicitly
    expressed. Lacking the defining fervor of a clear political ideology, and
    offering no initiatives to overthrow the Soviet regime, samizdat was rather a
    mode of dissent that evaded centralized ideological control. Indeed, as
    Aleksei Yurchak notes, samizdat practices could even be read as a mode of
    “suspending the political,” thus “avoiding the political concerns that had a
    binary logic determined by the sovereign state” to demonstrate “to themselves
    and to others that there were subjects, collectivities, forms of life, and
    physical and symbolic spaces in the Soviet context that, without being overtly
    oppositional or even political, exceeded that state’s abilities to define,
    control, and understand them.”25 Yurchak thus reminds us that even though
    samizdat was practiced as a form of nonpolitical practice, it nevertheless
    inherently had significant political implications.

    The infrapolitics of samizdat not only referred to a specific social practice
    but were also, as Ann Komaromi reminds us, a particular discourse network
    rooted in the technology of the typewriter: “Because so many people had their
    own typewriters, the production of samizdat was more individual and typically
    less linked to ideology and organized political structures. … The circulation
    of Samizdat was more rhizomatic and spontaneous than the underground
    press—samizdat was like mushroom ‘spores.’”26 The technopolitical
    infrastructure of samizdat changed, however, with the fall of the Berlin Wall
    in 1989, the further decentralization of the Russian media landscape, and the
    emergence of digitization. Now, new nodes emerged in the Russian information
    landscape, and there was no centralized authority to regulate them. Moreover,
the transition to a Western capitalist system gave rise to new types of
    shadow activity that produced items instead of just sharing items, adding a
    new consumerist dimension to shadow libraries. Indeed, as Kuznetsov notes, the
    late-Soviet samizdat created a dynamic textual space that aligned with more
    general tendencies in mass digitization where users were “both readers and
    librarians, in contrast to a traditional library with its order, selection,
    and strict catalogisation.”27

    If many of the new shadow libraries that emerged in the 1990s and 2000s were
    inspired by the infrapolitics of samizdat, then, they also became embedded in
    an infrastructural apparatus that was deeply nested within a market economy.
    Indeed, new digital libraries emerged under such names as Aldebaran,
    Fictionbook, Litportal, Bookz.ru, and Fanzin, which developed new platforms
    for the distribution of electronic books under the label “Liters,” offering
    texts to be read free of charge on a computer screen or downloaded at a
    cost.28 In both cases, the authors receive a fee, either from the price of the
    book or from the site’s advertising income. Accompanying these new commercial
    initiatives, a concomitant movement rallied together in the form of Librusek,
    a platform hosted on a server in Ecuador that offered its users the
    possibility of uploading works on a distributed basis.29 In contrast to
Moshkov’s centralized control, then, the library’s operator Ilya Larin aligned
himself with the international piracy movement, calling his site a pirate library and
    gracing Librusek’s website with a small animated pirate, complete with sabre
    and parrot.

    The integration and proliferation of samizdat practices into a complex
    capitalist framework produced new global readings of the infrapolitics of
    shadow libraries. Rather than reading shadow libraries as examples of late-
    socialist infrapolitics, scholars also framed them as capitalist symptoms of
    “market failure,” that is, the failure of the market to meet consumer
    demands.30 One prominent example of such a reading was the influential Social
Science Research Council report edited by Joe Karaganis in 2011, titled “Media
    Piracy in Emerging Economies,” which noted that cultural piracy appears most
    notably as “a failure to provide affordable access to media in legal markets”
    and concluded that within the context of developing countries “the pirate
    market cannot be said to compete with legal sales or generate losses for
    industry. At the low end of the socioeconomic ladder where such distribution
    gaps are common, piracy often simply is the market.”31

    In the Western world, Karaganis’s reading was a progressive response to the
    otherwise traditional approach to media piracy as a legal failure, which
    argued that tougher laws and increased enforcement are needed to stem
    infringing activity. Yet, this book argues that Karaganis’s report, and the
    approach it represents, also frames the infrapolitics of shadow libraries
    within a consumerist framework that excises the noncommercial infrapolitics of
    samizdat from the picture. The increasing integration of Russian media
    infrapolitics into Western apparatuses, and the reframing of shadow libraries
    from samizdat practices of political dissent to market failure, situates the
    infrapolitics of shadow libraries within a consumerist dispositive and the
    individual participants as consumers. As some critical voices suggest, this
    has an impact on the political potential of shadow libraries because they—in
    contrast to samizdat—actually correspond “perfectly to the industrial
    production proper to the legal cultural market production.”32 Yet, as the
    final section in this chapter shows, one also risks missing the rich nuances
    of infrapolitics by conflating consumerist infrastructures with consumerist
    practice.33

    The political stakes of shadow libraries such as lib.ru illustrate the
    difficulties in labeling shadow libraries in political terms, since they are
    driven neither by pure globalized dissent nor by pure globalized and
    commodified infrastructures. Rather, they straddle these binaries as
    infrapolitical entities, the political dynamics of which align both with
    standardization and dissent. Revisiting once more the theoretical debate, the
    case of lib.ru shows that shadow libraries may certainly be global phenomena,
yet one should be careful not to disregard the specific cultural-political
    trajectories that shape each individual shadow library. Lib.ru demonstrates
    how the infrapolitics of shadow libraries emerge as infrastructural
    expressions of the convergence between historical sovereign trajectories,
    global information infrastructures, and public-private governance structures.
    Shadow libraries are not just globalized projects that exist in parallel to
    sovereign state structures and global economic flows. Instead, they are
    entangled in territorial public-private governance practices that produce
    their own late-sovereign infrapolitics, which, paradoxically, are embedded in
    larger mass digitization problematics, both on their own territory and on the
    global scene.

    ## Monoskop

    In contrast to the broad and distributed infrastructure of lib.ru, other
    shadow libraries have emerged as specialized platforms that cater to a
    specific community and encourage a specific practice. Monoskop is one such
    shadow library. Like lib.ru, Monoskop started as a one-man project and in many
    respects still reflects its creator, Dušan Barok, who is an artist, writer,
    and cultural activist involved in critical practices in the fields of
    software, art, and theory. Prior to Monoskop, his activities were mainly
    focused on the Bratislava cultural media scene, and Monoskop was among other
    things set up as an infrastructural project, one that would not only offer
    content but also function as a form of connectivity that could expand the
    networked powers of the practices of which Barok was a part.34 In particular,
    Barok was interested in researching the history of media art so that he could
    frame the avant-garde media practices in which he engaged in Bratislava within
    a wider historical context and thus lend them legitimacy.

    ### The Shadow Library as a Legal Stratagem

    Monoskop was partly motivated by Barok’s own experiences of being barred from
    works he deemed of significance to the field in which he was interested. As he
    notes, the main impetus to start a blog “came from a friend who had access to
PDFs of books I wanted to read but could not afford to buy as they were not
    available in public libraries.”35 Barok thus began to work on Monoskop with a
    group of friends in Bratislava, initially hiding it from search engine bots to
    create a form of invisibility that obfuscated its existence without, however,
    preventing people from finding the Log and uploading new works. Information
about the Log was distributed through mailing lists on Internet culture, as
well as through posts on e-book torrent trackers, DC++ networks, extensive
    repositories such as LibGen and Aaaaarg, cloud directories, document-sharing
    platforms such as Issuu and Scribd, and digital libraries such as the Internet
    Archive and Project Gutenberg.36 The shadow library of Monoskop thus slowly
    began to emerge, partly through Barok’s own efforts at navigating email lists
    and downloading material, and partly through people approaching Monoskop
    directly, sending it links to online or scanned material and even offering it
    entire e-book libraries. Rather than posting these “donated” libraries in
    their entirety, however, Barok and his colleagues edited the received
    collection and materials so that they would fit Monoskop’s scope, and they
    also kept scanning material themselves.

    Today Monoskop hosts thematically curated collections of downloadable books on
    art, culture, media studies, and other topics, partly in order to stimulate
    “collaborative studies of the arts, media, and humanities.”37 Indeed, Monoskop
    operates with a _boutique_ approach, offering relatively small collections of
    personally selected publications to a steady following of loyal patrons who
    regularly return to the site to explore new works. Its focal points are
    summarized by its contents list, which is divided into three main categories:
    “Avant-garde, modernism and after,” “Media culture,” and “Media, theory and
    the humanities.” Within these three broad focal points, hundreds of links
    direct the user to avant-garde magazines, art exhibitions and events, art and
    design schools, artistic and cultural themes, and cultural theorists.
    Importantly, shadow libraries such as Monoskop do not just host works
    unbeknownst to the authors—authors also leak their own works. Thus, some
    authors publishing with brand name, for-profit, all-rights-reserving, print-
    on-paper-only publishing houses will also circulate a copy of their work on a
free text-sharing network such as Monoskop.38

    How might we understand Monoskop’s legal situation and maneuverings in
    infrapolitical terms? Shadow libraries such as Monoskop draw their
    infrapolitical strength not only from the content they offer but also from
    their mode of engagement with the gray zones of new information
    infrastructures. Indeed, the infrapolitics of shadow libraries such as
    Monoskop can perhaps best be characterized as a stratagematic form of
    infrapolitics. Monoskop neither inhabits the passive perspective of the
    digital spectator nor deploys a form of tactics that aims to be failure free.
    Rather, it exists as a body of informal practices and knowledges, as cunning
    and dexterous networks that actively embed themselves in today’s
    sociotechnical infrastructures. It operates with high sociotechnical
    sensibilities, living off of the social relations that bring it into being and
    stabilize it. Most significantly, Monoskop skillfully exploits the cracks in
    the infrastructures it inhabits, interchangeably operating, evading, and
    accompanying them. As Matthew Fuller and Andrew Goffey point out in their
    meditation on stratagems in digital media, they do “not cohere into a system”
    but rather operate as “extensive, open-ended listing[s]” that “display a
    certain undecidability because inevitably a stratagem does not describe or
    prescribe an action that is certain in its outcome.”39 Significantly, then,
    failures and errors not only represent negative occurrences in stratagematic
    approaches but also appeal to willful dissidents as potentially beneficial
    tools. Dušan Barok’s response to a question about the legal challenges against
    Monoskop evidences this stratagematic approach, as he replies that shadow
    libraries such as Monoskop operate in the “gray zone,” which to him is also
    the zone of fair use.40 Barok thus highlights the ways in which Monoskop
    engages with established media infrastructures, not only on the level of
    discursive conventions but also through their formal logics, technical
    protocols, and social proprieties.

    Thus, whereas Google lights up gray zones through spectacle and legal power
plays, and Europeana shuns gray zones in favor of the law, Monoskop embraces
its shadowy existence in the gray zones of the law. By working in the
    shadows, Monoskop and likeminded operations highlight the ways in which the
    objects they circulate (including the digital artifacts, their knowledge
    management, and their software) can be manipulated and experimented upon to
    produce new forms of power dynamics.41 Their ethics lie more in the ways in
    which they operate as shadowy infrastructures than in intellectual reflections
    upon the infrastructures they counter, without, however, creating an
    opposition between thinking and doing. Indeed, as its history shows, Monoskop
    grew out of a desire to create a space for critical reflection. The
    infrapolitics of Monoskop is thus an infrapolitics of grayness that marks the
    breakdown of clearly defined contrasts between legal and illegal, licit and
    illicit, desire and control, instead providing a space for activities that are
    ethically ambiguous and in which “everyone is sullied.”42

    ### Monoskop as a Territorializing Assemblage

    While Monoskop’s stratagems play on the infrapolitics of the gray zones of
    globalized digital networks, the shadow library also emerges as a late-
    sovereign infrastructure. As already noted, Monoskop was from the outset
    focused on surfacing and connecting art and media objects and theory from
    Central and Eastern Europe. Often, this territorial dimension recedes into the
    background, with discussions centering more on the site’s specialized catalog
    and legal maneuvers. Yet Monoskop was initially launched partly as a response
to criticisms of the new media scenes in the Slovak and Czech Republics as
“incomprehensible avant-garde.”43 It began as a simple invite-only wiki in
August 2004, urging participants to collaboratively research the
    history of media art. It was from the beginning conceived more as a
    collaborative social practice and less as a material collection, and it
    targeted noninstitutionalized researchers such as Barok himself.

    As the nodes in Monoskop grew, its initial aim to research media art history
    also expanded into looking at wider cultural practices. By 2010, it had grown
into a 100-gigabyte collection, organized through snowball research and
focusing in particular on “the white spots in history of art and
    culture in East-Central Europe,” spanning “dozens of CDs, DVDs, publications,
    as well as recordings of long interviews [Barok] did”44 with various people he
    considered forerunners in the field of media arts. Indeed, Barok at first had
    no plans to publish the collection of materials he had gathered over time. But
    during his research stay in Rotterdam at the influential Piet Zwart Institute,
    he met the digital scholars Aymeric Mansoux and Marcell Mars, who were both
    active in avant-garde media practices, and they convinced him to upload the
    collection.45 Due to the fragmentary character of his collection, Barok found
that it corresponded well with Monoskop’s pre-existing wiki, to which he began
    connecting and embedding videos, audio clips, image files, and works. An
    important motivating factor was the publication of material that was otherwise
    unavailable online. In 2009, Barok launched Monoskop Log, together with his
    colleague Tomáš Kovács. This site was envisioned as an affiliated online
    repository of publications for Monoskop, or, as Barok terms it, “a free access
    living archive of writings on art, culture, and media technologies.”46

    Seeking to create situated spaces of reflection and to shed light on the
    practices of media artists in Eastern and Central Europe, Monoskop thus
    launched several projects devoted to excavating media art from a situated
    perspective that takes its local history into account. Today, Monoskop remains
a rich source of information about artistic practices in Central and Eastern
Europe, particularly in Poland, Hungary, Slovakia, and the Czech Republic,
relating them not only to the art histories of the region, but also to its history of
    cybernetics and computing.

    Another early motivation for Monoskop was to provide a situated nodal point in
    the globalized information infrastructures that emphasized the geographical
    trajectories that had given rise to it. As Dušan Barok notes in an interview,
    “For a Central European it is mind-boggling to realize that when meeting a
    person from a neighboring country, what tends to connect us is not only
    talking in English, but also referring to things in the far West. Not that the
    West should feel foreign, but it is against intuition that an East-East
    geographical proximity does not translate into a cultural one.”47 From this
    perspective, Monoskop appears not only as an infrapolitical project of global
    knowledge, but also one of situated sovereignty. Yet, even this territorial
    focus holds a strategic dimension. As Barok notes, Monoskop’s ambition was not
    only to gain new knowledge about media art in the region, but also to cash in
    on the cultural capital into which this knowledge could potentially be
    converted. Thus, its territorial matrix first and foremost translates into
    Foucault’s famous dictum that “knowledge is power.” But it is nevertheless
    also testament to the importance of including more complex spatial dynamics in
    one’s analytical matrix of shadow libraries, if one wishes to understand them
    as more than globalized breakers of code and arbiters of what Manuel Castells
    once called the “space of flows.”48

    ## UbuWeb

    If Monoskop is one of the most comprehensive shadow libraries to emerge from
    critical-artistic practice, UbuWeb is one of the earliest ones and has served
    as an inspirational example for Monoskop. UbuWeb is a website that offers an
    encyclopedic scope of downloadable audio, video, and plain-text versions of
    avant-garde art recordings, films, and books. Most of the books fall in the
    category of small-edition artists’ books and are presented on the site with
    permission from the artists in question, who are not so concerned with
    potential loss of revenue since most of the works are officially out of print
    and never made any money even when they were commercially available. At first
    glance, UbuWeb’s aesthetics appear almost demonstratively spare. Still
    formatted in HTML, it upholds a certain 1990s net aesthetics that has resisted
    the revamps offered by the new century’s more dynamic infrastructures. Yet, a
    closer look reveals that UbuWeb offers a wealth of content, ranging from high
    art collections to much more rudimentary objects. Moreover, and more
    fundamentally, its critical archival practice raises broader infrapolitical
    questions of cultural hierarchies, infrastructures, and domination.

    ### Shadow Libraries between Gift Economies and Marginalized Forms of
    Distribution

    UbuWeb was founded by poet Kenneth Goldsmith in response to the marginal
    distribution of crucial avant-garde material. It provides open access both to
    out-of-print works that find a second life through digital art reprint and to
    the work of contemporary artists. Upon its opening in 2001, Kenneth Goldsmith
    termed UbuWeb’s economic infrastructure a “gift economy” and framed it as a
    political statement that highlighted certain problems in the distribution of
    and access to intellectual materials:

    > Essentially a gift economy, poetry is the perfect space to practice utopian
    politics. Freed from profit-making constraints or cumbersome fabrication
    considerations, information can literally “be free”: on UbuWeb, we give it
    away. … Totally independent from institutional support, UbuWeb is free from
    academic bureaucracy and its attendant infighting, which often results in
    compromised solutions; we have no one to please but ourselves. … UbuWeb posts
    much of its content without permission; we rip full-length CDs into sound
    files; we scan as many books as we can get our hands on; we post essays as
    fast as we can OCR them. And not once have we been issued a cease and desist
    order. Instead, we receive glowing emails from artists, publishers, and record
    labels finding their work on UbuWeb, thanking us for taking an interest in
    what they do; in fact, most times they offer UbuWeb additional materials. We
    happily acquiesce and tell them that UbuWeb is an unlimited resource with
    unlimited space for them to fill. It is in this way that the site has grown to
    encompass hundreds of artists, thousands of files, and several gigabytes of
    poetry.49

    At the time of its launch, UbuWeb garnered extraordinary attention and divided
    communities along lines of access and rights to historical and contemporary
    artists’ media. It was in this range of responses to UbuWeb that one could
    discern the formations of new infrastructural positions on digital archives,
    how they should be made available, and to whom. Yet again, these legal
    positions were accompanied by a territorial dynamic, including the impact of
    regional differences in cultural policy on UbuWeb. Thus, as artist Jason Simon
    notes, there were significant differences between the ways in which European
    and North American distributors related to UbuWeb. These differences, Simon
    points out, were rooted in “medium-specific questions about infrastructure,”
    which differ “from the more interpretive discussion that accompanied video's
    wholesale migration into fine art exhibition venues.”50 European pre-recession
    public money thus permitted nonprofit distributors to embrace infrastructures
    such as UbuWeb, while American distributors were much more hesitant toward
    UbuWeb’s free-access model. When recession hit Europe in the late 2000s,
    however, the European links to UbuWeb’s infrastructures crumbled while “the
    legacy American distributors … have been steadily adapting.”51 The territorial
    modulations in UbuWeb’s infrastructural set-up testify not only to how shadow
libraries such as UbuWeb are always linked to larger political events in
complex ways, but also to the latent ephemerality of the entire project.

    Goldsmith has more than once asserted that UbuWeb’s insistence on
    “independent” infrastructures also means a volatile existence: “… by the time
    you read this, UbuWeb may be gone. Cobbled together, operating on no money and
    an all-volunteer staff, UbuWeb has become the unlikely definitive source for
    all things avant-garde on the internet. Never meant to be a permanent archive,
    Ubu could vanish for any number of reasons: our ISP pulls the plug, our
    university support dries up, or we simply grow tired of it.” Goldsmith’s
    emphasis on the ephemerality of UbuWeb is a shared condition of most shadow
libraries, which, once the plug is pulled, exist only as ghostly reminders
with nonfunctional download links or simply as 404 pages. Rather than
    lamenting this volatile existence, however, Goldsmith embraces it as an
infrapolitical stance. As Cornelia Sollfrank points out, UbuWeb was—and still
    is—as much an “archival critical practice that highlights the legal and social
    ramifications of its self-created distribution and archiving system as it is
    about the content hosted on the site.”52 UbuWeb is thus not so much about
    authenticity as it is about archival defiance, appropriation, and self-
    reflection. Such broader and deeper understandings of archival theory and
    practice allow us to conceive of it as the kind of infrapolitics that,
    according to James C. Scott, “provides much of the cultural and structural
underpinning of the more visible political action on which our attention
    has generally been focused.”53 The infrapolitics of UbuWeb is devoted to
    hatching new forms of organization, creating new enclaves of freedom in the
    midst of orthodox ways of life, and inventing new structures of production and
    dissemination that reveal not only the content of their material but also
    their marginalized infrastructural conditions and the constellation of social
    forces that lead to their online circulation.54

    The infrapolitics of UbuWeb is testament not only to avant-garde cultures, but
also to what Hito Steyerl in her essay “In Defense of the Poor Image” refers to as the
    “neoliberal radicalization of the culture as commodity” and the “restructuring
of global media industries.”55 These materials “circulate partly in the void
left by state organizations” that find it too difficult to maintain digital
distribution infrastructures, and in the margins of the art world’s commercial
ecosystems, which offer the cultural materials hosted on UbuWeb only a liminal existence. Thus,
    while UbuWeb on the one hand “reveals the decline and marginalization of
certain cultural materials” whose production was often “considered a task of
    the state,”56 on the other hand it shows how intellectual content is
    increasingly privatized, not only in corporate terms but also through
individuals, which in UbuWeb’s case is embodied by Kenneth Goldsmith, who
    acts as the sole archival gatekeeper.57

    ## The Infrapolitics of Shadow Libraries

    If the complexity of shadow libraries cannot be reduced to the contrastive
    codes of “right” and “wrong” and global-local binaries, the question remains
    how to theorize the cultural politics of shadow libraries. This final section
    outlines three central infrapolitical aspects of shadow libraries: access,
    speed, and gift.

    Mass digitization poses two important questions to knowledge infrastructures:
    a logistical question of access and a strategic question of to whom to
    allocate that access. Copyright poses a significant logistical barrier between
    users and works as a point of control in the ideal free flow of information.
In mass digitization, increased access to information is the driving ambition,
whereas in publishing industries premised on monopoly rights, the drive is
    toward restriction and control. The uneasy fit between copyright regulations
    and mass digitization projects has, as already shown, given rise to several
    conflicts, either as legal battles or as copyright reform initiatives arguing
    that current copyright frameworks cast doubt upon the political ideal of total
    access. As with Europeana and Google Books, the question of _access_ often
    stands at the core of the infrapolitics of shadow libraries. Yet, the
    strategic responses to the problem of copyright vary significantly: if
    Europeana moves within the established realm of legality to reform copyright
    regulations and Google Books produces claims to new cultural-legal categories
    such as “nonconsumptive reading,” shadow libraries offer a third
    infrastructural maneuver—bypassing copyright infrastructures altogether
    through practices of illicit file distribution.

Shadow libraries elicit a range of responses and discourses that fall on a
spectrum between condemnation and celebration. The most
    straightforward response comes, unsurprisingly, from the publishing industry,
    highlighting the fundamentally violent breaches of the legal order that
    underpins the media industry. Such responses include legal action, policy
    initiatives, and public campaigns against piracy, often staging—in more or
    less explicit terms—the “pirate” as a common enemy of mankind, beyond legal
    protection and to be fought by whatever means necessary.

    The second response comes from the open source movement, represented among
    others by the pro-reform copyright movement Creative Commons (CC), whose
    flexible copyright framework has been adopted by both Europeana and Google
    Books.58 While the open source movement has become a voice on behalf of the
    telos of the Internet and its possibilities of offering free and unhindered
    access, its response to shadow libraries has revealed the complex
    infrapolitics of access as a postcolonial problematic. As Kavita Philip
    argues, CC’s founder Lawrence Lessig maintains the image of the “good” Western
    creative vis-à-vis the “bad” Asian pirate, citing for instance his statement
    in his influential book _Free Culture_ that “All across the world, but
    especially in Asia and Eastern Europe, there are businesses that do nothing
    but take other people’s copyrighted content, copy it, and sell it. … This is
piracy plain and simple, … This piracy is wrong.”59 Such statements, Philip
argues, frame the Asian pirate as external to order, whether it be the
    order of Western law or neoliberalism.60

    The postcolonial critique of CC’s Western normative discourse has instead
sought to conceptualize piracy, not as deviant behavior in information
    economies, but rather as an integral infrastructure endemic to globalized
    information economies.61 This theoretical development offers valuable insights
    for understanding the infrapolitics of shadow libraries. First of all, it
    allows us to go beyond moral discussions of shadow libraries, and to pay
    attention instead to the ways in which their infrastructures are built, how
    they operate, and how they connect to other infrastructures. As Lawrence Liang
    points out, if infrastructures traditionally belong to the domain of the
    state, often in cooperation with private business, pirate infrastructures
    operate in the gray zones of this set-up, in much the same way as slums exist
    as shadow cities and copies are regarded as shadows of the original.62
    Moreover, and relatedly, it reminds us of the inherently unstable form of
    shadow libraries as a cultural construct, and the ways in which what gets
    termed piracy differs across cultures. As Brian Larkin notes, piracy is best
    seen as emerging from specific domains: dynamic localities with particular
    legal, aesthetic, and social assemblages.63 In a final twist, research on
users of shadow libraries shows that their usage is distributed
    globally. Multiple sources attest to the fact that most Sci-Hub usage occurs
    outside the Anglosphere. According to Alexa Internet analytics, the top five
    country sources of traffic to Sci-Hub were China, Iran, India, Brazil, and
Japan, which accounted for 56.4 percent of recent traffic. As of early 2016,
    data released by Sci-Hub’s founder Alexandra Elbakyan also shows high usage in
    developed countries, with a large proportion of the downloads coming from the
    US and countries within the European Union.64 The same tendency is evident in
the #ICanHazPDF Twitter phenomenon, which, while framed as “civil disobedience”
to aid users in the Global South,65 nevertheless has higher numbers of posts
    from the US and Great Britain.66

    This brings us to the second cultural-political production, namely the
    question of distribution. In their article “Book Piracy as Peer Preservation,”
Dennis Tenen and Maxwell Henry Foxman note that rather than condemning book
    piracy _tout court_ , established libraries could in fact learn from the
    infrastructural set-ups of shadow libraries in relation to participatory
    governance, technological innovation, and economic sustainability.67 Shadow
    libraries are often premised upon an infrastructure that includes user
    participation without, however, operating in an enclosed sphere. Often, shadow
    libraries coordinate their actions by use of social media platforms and online
    forums, including Twitter, Reddit, and Facebook, and the primary websites used
    to host the shared files are AvaxHome, LibGen, and Sci-Hub. Commercial online
    cloud storage accounts (such as Dropbox and Google Drive) and email are also
    used to share content in informal ways. Users interested in obtaining an
    article or book chapter will disseminate their request over one or more of the
    platforms mentioned above. Other users of those platforms try to get the
    requested content via their library accounts or employer-provided access, and
    the actual files being exchanged are often hosted on other websites or emailed
    to the requesting users. Through these networks, shadow libraries offer
    convenient and speedy access to books and articles. Little empirical evidence
    is available, but one study does indicate that a large number of shadow
    library downloads are made because obtaining a PDF from a shadow library is
easier than using the legal access methods offered by a university’s
traditional channels, including formalized research libraries.68
    Other studies indicate, however, that many downloads occur because the users
have a (perceived) lack of full-text access to the desired texts.69

    Finally, as indicated in the introduction to this chapter, shadow libraries
    produce what we might call a cultural politics of parasitism. In the normative
    model of shadow libraries, discourse often centers upon piracy as a theft
    economy. Other discourses, drawing upon anthropological sources, have pointed
    out that peer-to-peer file-sharing sites in reality organize around a gift
    economy, that is, “a system of social solidarity based on a structured set of
    gift exchange and social relationships among consumers.”70 This chapter,
    however, ends with a third proposal: that shadow libraries produce a
parasitical form of infrapolitics. In _The Parasite_, philosopher Michel
Serres proposes a way of thinking about relations of transfer—in social,
    biological, and informational contexts—as fundamentally parasitic, that is, a
    subtractive form of “taking without giving.” Serres contrasts the parasitic
    model with established models of society based on notions such as exchange and
    gift giving.71 Shadow libraries produce an infrapolitics that denies the
    distinction between producers and subtractors of value, allowing us instead to
    focus on the social roles infrastructural agents perform. Restoring a sense of
    the wider context of parasitism to shadow libraries does not provide a clear-
    cut solution as to when and where shadow libraries should be condemned and
    when and where they should be tolerated. But it does help us ask questions in
a different way. And it certainly prevents us from regarding shadow libraries
    as the “other” in the landscape of mass digitization. Shadow libraries
    instigate new creative relations, the dynamics of which are infrastructurally
    premised upon the medium they use. Just as typewriters were an important
    component of samizdat practices in the Soviet Union, digital infrastructures
    are central components of shadow libraries, and in many respects shadow
    libraries bring to the fore the same cultural-political questions as other
    forms of mass digitization: questions of territorial imaginaries,
    infrastructures, regulation, speed, and ethics.

    ## Notes

1. Serres 1982, 55.
2. Serres 1982, 36.
3. Serres 1982, 36.
4. Samyn 2012.
5. I stick with “shadow library,” a term that I first found in Lawrence Liang’s (2012) writings on copyright and have since seen meaningfully unfolded in a variety of contexts. Part of its strength is its sidestepping of the question of the pirate and that term’s colonial connotations.
6. Eckstein and Schwarz 2014.
7. Scott 2009, 185–201.
8. See also Maxim Moshkov’s own website hosted on lib.ru.
9. Carey 2015.
10. Schmidt 2009.
11. Bodó 2016, “Libraries in the post-scarcity era.” As Balazs Bodó notes, the first Russian mass-digitized shadow archives were run by professors from the hard sciences, but the popularization of computers soon gave rise to a much more varied and widespread shadow library terrain, fueled by “enthusiastic readers, book fans, and often authors, who spared no effort to make their favorite books available on FIDOnet, a popular BBS system in Russia.”
12. Stelmakh 2008, 4.
13. Bodó 2016.
14. Bodó 2016.
15. Vul 2003.
16. “In Defense of Maxim Moshkov's Library,” n.d., The International Union of Internet Professionals.
17. Ibid.
18. Ibid.
19. Schmidt 2009, 7.
20. Ibid.
21. Carey 2015.
22. Mjør 2009, 84.
23. Bodó 2015.
24. Kiriya 2012.
25. Yurchak 2008, 732.
26. Komaromi, 74.
27. Mjør 2009, 85.
28. Litres.ru.
29. Library Genesis.
30. Kiriya 2012.
31. Karaganis 2011, 65, 426.
32. Kiriya 2012, 458.
33. For a great analysis of the late-Soviet youth’s relationship with consumerist products, read Yurchak’s careful study in _Everything Was Forever, Until It Was No More: The Last Soviet Generation_ (2006).
34. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
35. Ibid.
36. Ibid.
37. “Monoskop,” last modified March 28, 2018, Monoskop.
38. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
39. Fuller and Goffey 2012, 21.
40. “Dušan Barok: Interview,” _Neural_ 44 (2010), 11.
41. In an interview, Dušan Barok mentions his inspirations, including early examples such as textz.com, a shadow library created by the Berlin-based artist Sebastian Lütgert. Textz.com was one of the first websites to facilitate free access to books on culture, politics, and media theory in the form of text files. Often the format would itself toy with legal limits. Thus, during a legal debacle with Suhrkamp Verlag, Lütgert declared in a mischievous manner that the website would offer a text in various formats: “Today, we are proud to announce the release of walser.php, a 10,000-line php script that is able to generate the plain ascii version of ‘Death of a Critic.’ The script can be redistributed and modified (and, of course, linked to) under the terms of the GNU General Public License, but may not be run without written permission by Suhrkamp Verlag. Of course, reverse-engineering the writings of senile German revisionists is not the core business of textz.com, so walser.php includes makewalser.php, a utility that can produce an unlimited number of similar (both free as in speech and free as in copy) php scripts for any digital text”; see “Suhrkamp recalls walser.pdf, textz.com releases walser.php,” Rolux.org.
42. Fuller and Goffey 2012, 11.
43. “MONOSKOP Project Finished,” COL-ME Co-located Media Expedition, [www.col-me.info/node/841](http://www.col-me.info/node/841).
44. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
45. Aymeric Mansoux is a senior lecturer at the Piet Zwart Institute whose research deals with the defining, constraining, and confining of cultural freedom in the context of network-based practices. Marcell Mars is an advocate of free software and a researcher who is also active in a shadow library named _Public Library_ (also interchangeably known as Memory of the World).
46. “Dušan Barok,” Memory of the World.
47. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
48. Castells 1996.
49. Kenneth Goldsmith, “UbuWeb Wants to Be Free,” last modified July 18, 2007.
50. Jacob King and Jason Simon, “Before and After UbuWeb: A Conversation about Artists’ Film and Video Distribution,” _Rhizome_, February 20, 2014.
51. King and Simon 2014.
52. Sollfrank 2015.
53. Scott 1990, 184.
54. For this, I am indebted to Hito Steyerl’s essay “In Defense of the Poor Image,” in her book _The Wretched of the Screen_, 31–59.
55. Steyerl 2012, 36.
56. Steyerl 2012, 39.
57. Sollfrank 2015.
58. Other significant open source movements include the Free Software Foundation, the Wikimedia Foundation, and several open access initiatives in science.
59. Lessig 2005, 57.
60. Philip 2005, 212.
61. See, for instance, Larkin 2008; Castells and Cardoso 2012; Fredriksson and Arvanitakis 2014; Burkart 2014; and Eckstein and Schwarz 2014.
62. Liang 2009.
63. Larkin 2008.
64. John Bohannon, “Who’s Downloading Pirated Papers? Everyone,” _Science Magazine_, April 28, 2016.
65. “The Scientists Encouraging Online Piracy with a Secret Codeword,” _BBC Trending_, October 21, 2015.
66. Liu 2013.
67. Tenen and Foxman 2014.
68. See Kramer 2016.
69. Gardner and Gardner 2017.
70. Giesler 2006, 283.
71. Serres 2013, 8.

    # III
    Diagnosing Mass Digitization

    # 5
    Lost in Mass Digitization

    ## The Desire and Despair of Large-Scale Collections

In 1995, founding editor of _Wired_ magazine Kevin Kelly mused upon what a
digital library might look like:

    > Two decades ago nonlibrarians discovered Borges’s Library in silicon
    circuits of human manufacture. The poetic can imagine the countless rows of
    hexagons and hallways stacked up in the Library corresponding to the
    incomprehensible micro labyrinth of crystalline wires and gates stamped into a
    silicon computer chip. A computer chip, blessed by the proper incantation of
    software, creates Borges’s Library on command. … Pages from the books appear
    on the screen one after another without delay. To search Borges’s Library of
    all possible books, past, present, and future, one needs only to sit down (the
    modern solution) and click the mouse.1

    At the time of Kelly’s writing, book digitization on a massive scale had not
    yet taken place. Building his chimerical dream around Jorge Luis Borges’s own
    famous magic piece of speculation regarding the Library of Babel, Kelly not
    only dreamed up a fantasy of what a digital library might be in an imaginary
    dialogue with Borges; he also argued that Jorge Luis Borges’s vision had
already taken place, by grace of nonlibrarians, or—more
precisely—programmers. Specifically, Kelly mentions Karl Sims, a computer
    scientist working on a supercomputer called Connection Machine 5 (you may
    remember it from the set of _Jurassic Park_ ), who had created a simulated
    version of Borges’s library.2

    Twenty years after Kelly’s vision, a whole host of mass digitization projects
    have sought more or less explicitly to fulfill Kelly’s vision. Incidentally,
    Brewster Kahle, one of the lead engineers of the aforementioned Connection
    Machine, has become a key figure in the field. Kahle has long dreamed of
    creating a universal digital library, and has worked to fulfill it in
    practical terms through the nonprofit Internet Archive project, which he
    founded in 1996 with the stated mission of creating “universal access to all
    knowledge.” In an op-ed in 2017, Kahle lamented the recent lack of progress in
    mass digitization and argued for the need to create a new vision for mass
    digitization, stating, “The Internet Archive, working with library partners,
    proposes bringing millions of books online, through purchase or digitization,
    starting with the books most widely held and used in libraries and
    classrooms.”3 Reminding us that three major entities have “already digitized
    modern materials at scale: Google, Amazon, and the Internet Archive, probably
    in that order of magnitude,”4 Kahle nevertheless notes that “bringing
    universal access to books” has not yet been achieved because of a fractured
    field that diverges on questions of money, technology, and legal clarity. Yet,
    outlining his new vision for how a sustainable mass digitization project could
    be achieved, Kahle remains convinced that mass digitization is both a
    necessity and a possibility.

    While Brewster Kahle, Kevin Kelly, Google, Amazon, Europeana’s member
    institutions, and others disagree on how to achieve mass digitization, for
    whom, and in what form, they are all united in their quest for digitization on
a massive scale. Many shadow libraries trade in the same quantitative
rhetoric, proudly announcing the size of their massive holdings on their
front pages.

    Given the fractured field of mass digitization, and the lack of economic
    models for how to actually make mass digitization sustainable, why does the
    common dream of mass digitization persist? As this chapter shows, the desire
    for quantity, which drives mass digitization, is—much like the Borges stories
    to which Kelly also refers—laced with ambivalence. On the one hand, the
    quantitative aspirations are driven forth by the basic assumption that “more
    is more”: more data and more cultural memory equal better industrial and
intellectual progress. On the other hand, the sheer scale of ambition also
    causes frustration, anxiety, and failed plans.

    The sense that sheer size and big numbers hold the promise of progress and
    greatness is nothing new, of course. And mass digitization brings together
    three fields that have each historically grown out of scalar ambitions:
    collecting practices, statistics, and industrialization processes.
    Historically, as cultural theorist Couze Venn reminds us, most large
    collections bear the imprint of processes of (cultural) colonization, human
    desires, and dynamics of domination and superiority. We therefore find in
    large collections the “impulses and yearnings that have conditioned the
    assembling of most of the collections that today establish a monument to past
    efforts to gather together knowledge of the world and its treasury of objects
    and deeds.”5 The field of statistics, moreover, so vital to the evolution of
    modern governance models, is also premised upon the accumulation of ever-more
    information.6 And finally, we all recognize the signs of modern
    industrialization processes as they appear in the form of globalization,
    standardization, and acceleration. Indeed, as French sociologist Henri
    Lefebvre once argued (with a nod to Marx), the history of modern society could
    plainly and simply be seen as the history of accumulation: of space, of
    capital, of property.7

    In mass digitization, we hear the political echoes of these histories. From
    Jeanneney’s war cry to defend European patrimonies in the face of Google’s
    cultural colonization to Google’s megalomaniac numbers game and Europeana’s
    territorial maneuverings, scale is used as a point of reference not only to
    describe the space of cultural objects in themselves but also to outline a
    realm of cultural command.

    A central feature in the history of accumulation and scale is the development
    of digital technology and the accompanying new modes of information
    organization. But even before then, the invention of new technologies offered
    not only new modes of producing and gathering information and new
    possibilities of organizing information assemblages, but also new questions
    about the implications of these leaps in information production. As historians
    Ann Blair and Peter Stallybrass show, “infolust,” that is, the cultural
    attitude that values expansive collections for long-term storage, emerged in
    the early Renaissance period.8 In that period, new print technology gave rise
    to a new culture of accumulating and stockpiling notes and papers, even
    without having a specific compositional purpose in mind. Within this scholarly
    paradigm, new teleologies were formed that emphasized the latent value of any
    piece of information, expressed for instance by Joachim Jungius’s exclamation
    that “no field was too remote, no author too obscure that it would not yield
    some knowledge or other” and Gabriel Naudé’s observation that there is “no
    book, however bad or decried, which will not be sought after by someone over
    time.”9 The idea that any piece of information was latently valuable was later
remarked upon by Melvil Dewey, who noted at the beginning of the twentieth
    century that a “normal librarian’s instinct is to keep every book and
    pamphlet. He knows that possibly some day, somebody wants it.”10

    Today, mass digitization repeats similar concerns. It reworks the old dream of
    an all-encompassing and universal library and has foregrounded once again
    questions about what to save and what to let go. What, one might ask, would
    belong in such a library? One important field of interest is the question of
    whether, and how, to preserve metadata—today’s marginalia. Is it sufficient to
    digitize cultural works, or should all accompanying information about the
    provenance of the work also be included? And how can we agree upon what
    marginalia actually is across different disciplines? Mass digitization
    projects in natural history rarely digitize marginalia such as logs and
written accounts, focusing only on what that discipline takes to be the main object
    at hand, for example, a piece of rock, a fly specimen, a pressed plant. Yet,
    in the history of science, logs are an invaluable source of information about
    how the collected object ended up in the collection, the meaning it had to the
    collector, and the place it takes in the collection.11 In this way, new
    questions with old trajectories arise: What is important for understanding a
    collection and its life? What should be included and excluded? And how will we
    know what will turn out to be important in the future?

    In the era of big data, the imperative is often to digitize and “save all.”
    Prestige mass digitization projects such as Google Books and Europeana have
    thus often contextualized their importance in terms of scale. Indeed, as we
    saw in the previous chapters, the question of scale has been a central point
    of political contestation used to signal infrastructural power. Thus the hype
    around Google Books, as well as the political ire it drew, centered on the
    scale of the project just as quantitative goals are used in Europeana to
    signal progress and significance. Inherent in these quantitative claims are
    not only ideas about political power, but also the widespread belief in
    digital circles—and the political regimes that take inspiration from them—that
    the more information the user is able to access, the more empowered the user
    is to navigate and make meaning on their own. In recent years, the imaginaries
    of freedom of navigation have also been adjoined by fantasies of freedom of
    infrastructural construction through the image of the platform. Mass
digitization projects should therefore offer the user the potential not only
    to navigate collections freely, but also to build new products and services on
    top of them.12 Yet, as this chapter argues, the ethos of potentially unlimited
    expansion also prompts a new set of infrapolitical questions about agency and
    control. While these questions are inherently related to the larger questions
    of territory and power explored in the previous chapters, they occur on a
    different register, closer to the individual user and within the spatialized
    imaginaries of digital information.

    As many critics have noted, the logic of expansion and scale, and the
accompanying fantasies of the empowered user, often build on neoliberal
    subjectification processes. While highly seductive, they often fail to take
    into account the reality of social complexity. Therefore, as Lisa Nakamura
    notes, the discourse of complete freedom of navigation through technological
    liberation—expressed aptly in Microsoft’s famous slogan “Where do you want to
    go today?”—assumes, wrongly, that everyone is at liberty to move about
    unhindered.13 And the fantasy of empowerment through platforming is often also
    shot through with neoliberal ideals that not only fail to take into account
    the complex infrapolitical realities of social interaction, but also rely on
    an entrepreneurial epistemology that evokes “a flat, two-dimensional stage on
    which resources are laid out for users to do stuff with” and which we are not
    “inclined to look underneath or behind it, or to question its structure.”14

    This chapter unfolds these central infrapolitical problematics of the spatial
    imaginaries of knowledge in relation to a set of prevalent cultural spatial
    tropes that have gained new life in digital theory and that have informed the
    construction and development of mass digitization projects: the flaneur, the
    labyrinth, and the platform. Cultural reports, policy papers, and digital
    design strategies often use these three tropes to elicit images of pleasure
    and playfulness in mass digitization projects; yet, as the following sections
    show, they also raise significant questions of control and agency, not least
    against the backdrop of ever-increasing scales of information production.

    ## Too Much—Never Enough

    The question of scale in mass digitization is often posed as a rational quest
    for knowledge accumulation and interoperability. Yet this section argues that
    digitized collections are more than just rational projects; they strike deep
affective chords of desire, domination, and anxiety. As Couze Venn reminds us,
    collections harbor an intimate connection between cognition and affective
    economy. In this connection, the rationalized drive to collect is often
    accompanied by a slippage, from a rationalized urge to a pathological drive
    ultimately associated with desire, power, domination, anxiety, nostalgia,
    excess, and—sometimes even—compulsion and repetition.15 The practice of
    collecting objects thus not only signals a rational need but often also
    springs from desire, and as psychoanalysis has taught us, a sense of lack is
the reflection of desire. As Slavoj Žižek puts it, “desire's _raison d'être_
    is not to realize its goal, to find full satisfaction, but to reproduce itself
    as desire.” 16 Therefore, no matter how much we collect, the collector will
    rarely experience their collection as complete and will often be haunted by
    the desire to collect more.

    In addition to the frightening (yet titillating) aspect of never having our
    desires satisfied, large collections also give rise to a set of information
    pathologies that, while different in kind, share an understanding of
    information as intimidation. The experience is generally induced by two
    inherently linked factors. First, the size of the cultural collection has
    historically also often implied a powerful collector with the means to gather
    expensive materials from all over the world, and a large collection has thus
    had the basic function of impressing and, if need be, intimidating people.
    Second, large collections give rise to the sheer subjective experience of
    being overwhelmed by information and a mental incapacity to take it all in.
    Both factors point to questions of potency and importance. And both work to
    instill a fear in the visitor. As Voltaire once noted, “a great library has
    the quality of frightening those who look upon it.”17

    The intimidating nature of large collections has been a favored trope in
    cultural representations. The most famous example of a gargantuan, even
    insanity-inducing, library is of course Jorge Luis Borges’s tale of the
    Library of Babel, the universal totality of which becomes both a monstrosity
    in the characters’ lives and a source of hope, depending on their willingness
    to make peace and submit themselves to the library’s infinite scale and
    Kafkaesque organization.18 But Borges’s nonfiction piece from 1939, _The Total
    Library,_ also serves as an elegant tale of an informational nightmare. _The
    Total Library_ begins by noting that the dream of the utopia of the total
    library “has certain characteristics that are easily confused with virtues”
    and ends with a more somber caution: “One of the habits of the mind is the
    invention of horrible imaginings. … I have tried to rescue from oblivion a
    subaltern horror: the vast, contradictory Library, whose vertical wildernesses
    of books run the incessant risk of changing into others that affirm, deny, and
    confuse everything like a delirious god.” 19

    Few escape the intimidating nature of large collections. But while attention
    has often been given to the citizen subjected to the disciplining force of the
    sovereign state in the form of its institutions, less attention has been given
    to those that have had to structure and make sense of these intimidating
    collections. Until recently, cultural collections were usually oriented toward
    the figure of the patron or, in more abstract geographical terms, (God-given)
    patrimony. Renaissance cabinets of curiosities were meant to astonish and
    dazzle; the ostentatious wealth of the Baroque museums of the seventeenth and
eighteenth centuries staged demonstrations of Godly power; and bourgeois
    museums of the nineteenth century positioned themselves as national
    institutions of _Bildung_. But while cultural memory institutions have worked
    first and foremost to mirror to an external audience the power and the psyche
    of their owners in individual, religious, and/or geographical terms, they have
    also consistently had to grapple internally with the problem of how to best
    organize and display these collections.

    One of the key generators of anxiety in vast libraries has been the question
    of infrastructure. Each new information paradigm and each new technology has
    induced new anxieties about how best to organize information. The fear of
    disorder haunted both institutions and individuals. In his illustrious account
    of Ephraim Chamber’s _Cyclopaedia_ (the forerunner of Denis Diderot’s and Jean
    le Rond d’Alembert’s famous Enlightenment project, the _Encyclopédie_ ),
    Richard Yeo thus recounts how Gottfried Leibniz complained in 1680 about “that
    horrible mass of books which keeps on growing” so that eventually “the
    disorder will become nearly insurmountable.”20 Five years on, the French
    scholar and critic Adrien Baillet warned his readers, “We have reason to fear
    that the multitude of books which grows every day in a prodigious fashion will
    make the following centuries fall into a state as barbarous as that of the
    centuries that followed the fall of the Roman Empire.”21 And centuries later,
    in the wake of the typewriter, the annual report of the Secretary of the
    Smithsonian Institution in Washington, DC, drew attention to the
    infrastructural problem of organizing the information that was now made
    available through the typewriter, noting that “about twenty thousand volumes …
    purporting to be additions to the sum of human knowledge, are published
    annually; and unless this mass be properly arranged, and the means furnished
    by which its contents may be ascertained, literature and science will be
    overwhelmed by their own unwieldy bulk.”22 The experience of feeling
    overwhelmed by information and lacking the right tools to handle it is no
joke. Indeed, a number of German librarians are documented to have gone insane
    between 1803 and 1825 in the wake of the information glut that followed the
    secularization of ecclesiastical libraries.23 The desire for grand collections
    has thus always also been followed by an accompanying anxiety relating to
    questions of infrastructure.

    As the history of collecting pathologies shows, reducing mass digitization
    projects to rational and technical information projects would deprive them of
    their rich psychological dimensions. Instead of discounting these pathologies,
    we should acknowledge them, and examine not only their nature, but also their
    implications for the organization of mass digitization projects. As the
    following section shows, the pathologies not only exist as psychological
    forces, but also as infrastructural imaginaries that directly impact theories
    on how best to organize information in mass digitization. If the scale of mass
    digitization projects is potentially limitless, how should they be organized?
    And how will we feel when moving about in their gargantuan archives?

## The Ambivalent Flaneur

    In an article on cultures of archiving, sociologist Mike Featherstone asked
    whether “the expansion of culture available at our fingertips” could be
    “subjected to a meaningful ordering,” or whether the very “desire to remedy
    fragmentation” should be “seen as clinging to a form of humanism with its
    emphasis upon cultivation of the persona and unity which are now regarded as
    merely nostalgic.”24 Featherstone raised the question in response to the
    popularization of the Internet at the turn of the millennium. Yet, as the
    previous section has shown, his question is probably as old as the collecting
    practices themselves. Such questions have become no less significant with mass
    digitization. How are organizational practices conceived of as meaningful
    today? As we shall see, this question not only relates to technical
    characteristics but is also informed by a strong spatial imaginary that often
    takes the shape of labyrinthine infrastructures and often orients itself
    toward the figure of the user. Indeed, the role of the organizer of knowledge,
    and therefore the accompanying responsibility of making sense of collections,
has been transferred from knowledge professionals to individuals.

    Today, as seen in all the examples of mass digitization we have explored in
    the previous chapters, cultural memory institutions face a different paradigm
    than that of the eighteenth- and nineteenth-century disciplining cultural
    memory institution. In an age that encourages individualism, democratic
    ideals, and cultural participation, the orientations of the cultural memory
    institutions have shifted in discourse, practice, or both, toward an emphasis
    on the importance of the subjective experience and active participation of the
    individual visitor. As part of this shift, and as a result of the increasing
    integration of the digital imaginary and production apparatus into the field
    of cultural memory, the visitor has thus metamorphosed from a disciplinary
    subject to a prosumer, produser, participant, and/or user.

    The organizational shift in the cultural memory ecosystem means that
    visionaries and builders of mass digitization infrastructures now pay
    attention not only to how collections may reflect upon the institution that
    holds the collection, but also on how the user experiences the informational
    navigation of collections. This is not to say that making an impression, or
    even disciplining the user, is not a concern for many mass digitization
    projects. Mass digitizations’ constant public claims to literal greatness
    through numbers evidence this. Yet, today’s projects also have to contend with
    the opinion of the public and must make their projects palatable and
    consumable rather than elitist and intimidating. The concern of the builders
    of mass digitization infrastructure is therefore not only to create an
    internal logic to their collections, but also to maximize the user’s
    experience of being offered a wealth of information, while mitigating the
danger of giving the visitor a sense of losing themselves, or even drowning, in
    information. An important question for builders of mass digitization projects
    has therefore been how to build visual and semantic infrastructures that offer
    the user a sense of meaningful direction as well as a desire to keep browsing.

    While digital collections are in principle no longer tethered to their
    physical origins in spatial terms, we still encounter ideas about them in
    spatialized terms, often using notions such as trails, paths, and alleyways to
    visualize the spaces of digital collections.25 This form of spatialized logic
    did not emerge with the mass digitization of cultural heritage collections,
    however, but also resides at the heart of some of the most influential early
    digital theories on the digital realm.26 These theorized and conceptualized
    the web as a new form of architectural infrastructure, not only in material
    terms (such as cables and servers) but also as a new experiential space.27 And
    in this spatialized logic, the figure of the flaneur became a central
    character. Thus, we saw in the 1990s the rise of a digital interpretation of
    the flaneur, originally an emblematic figure of modern urban culture at the
    turn of the twentieth century, in the form of the virtual flaneur or the
    cyberflaneur. In 1994, German net artists Heiko Idensen and Matthias Krohn
    paid homage to the urban figure, noting in a text that “the screen winks at
    the flaneur” and locating the central tenets of computer culture with the
    “intoxication of the flânerie. Screens as streets and homes … of the crowd?”28
    Later, artist Steven Goldate provided a simple equation between online and
    offline spaces, noting among other things that “What the city and the street
    was to the flaneur, the Internet and the Superhighway have become to the
    Cyberflaneur.”29

    Scholars, too, explored the potentials and limits of thinking about the user
    of the Internet in flaneurian terms. Thus, Mike Featherstone drew parallels
    between the nineteenth-century flaneur and the virtual flaneur, exploring the
    similarities and differences between navigational strategies, affects, and
    agencies in the early urban metropolis and the emergent digital realm of the
    1990s.30

    Although the discourse on the digital flaneur was most prevalent in the 1990s,
    it still lingers on in contemporary writings about digitized cultural heritage
    collections and their design. A much-cited article by computer scientists
    Marian Dörk, Sheelagh Carpendale, and Carey Williamson, for instance, notes
    the striking similarity between the “growing cities of the 19th century and
    today’s information spaces” and the relationship between “the individual and
    the whole.”31 Dörk, Carpendale, and Williamson use the figure of the flaneur
    to emphasize the importance of supporting not only utilitarian information
    needs through grand systems but also leisurely information surfing behaviors
on an individual level. Dörk, Carpendale, and Williamson's reflections relate
    to the experience of moving about in a mass of information and ways of making
    sense of this information. What does it mean to make sense of mass
    digitization? How can we say or know that the past two hours we spent
    rummaging about in the archives of Google Books, digging deeper in Europeana,
    or following hyperlinks in Monoskop made sense, and by whose standards? And
    what are the cultural implications of using the flaneur as a cultural
    reference point for these ideals? We find few answers to these questions in
    Dörk, Carpendale, and Williamson’s article, or in related articles that invoke
    the flaneur as a figure of inspiration for new search strategies. Thus, the
    figure of the flaneur is predominantly used to express the pleasurable and
    productive aspect of archival navigation. But in its emphasis on pleasure and
    leisure, the figure neglects the much more ambivalent atmosphere that
    enshrouds the flaneur as he navigates the modern metropolis. Nor does it
    problematize the privileged viewpoint of the flaneur.

    The character of the flaneur, both in its original instantiations in French
    literature and in Walter Benjamin’s early twentieth-century writings, was
    certainly driven by pleasure; yet, on a more fundamental level, his existence
    was also, as Elizabeth Wilson points out in her feminist reading of the
    flaneur, “a sorrowful engagement with the melancholy of cities,” which arose
    “partly from the enormous, unfulfilled promise of the urban spectacle, the
    consumption, the lure of pleasure and joy which somehow seem destined to be
    disappointed.”32 Far from an optimistic and unproblematic engagement with
    information, then, the figure of the flaneur also evokes deeper anxieties
    arising from commodification processes and the accompanying melancholic
    realization that no matter how much one strolls and scrolls, nothing one
    encounters can ever fully satisfy one’s desires. Benjamin even strikingly
    spatializes (and sexualizes) this mental state in an infrastructural
    imaginary: the labyrinth. The labyrinth is thus, Benjamin suggests, “the home
    of the hesitant. The path of someone shy of arrival at a goal easily takes the
    form of a labyrinth. This is the way of the (sexual) drive in those episodes
    which precede its satisfaction.”33

    Benjamin’s hesitant flaneur caught in an unending maze of desire stands in
    contrast to the uncomplicated flaneur invoked in celebratory theories on the
    digital flaneur. Yet, recent literature on the design of digital realms
    suggests that the hesitant man caught in a drive for more information is a
    much more accurate image of the digital flaneur than the man-in-the-know.34
    Perhaps, then, the allegorical figure of the flaneur in digital design should
    be used less to address pleasurable wandering and more to invoke “the most
    characteristic response of all to the wholly new forms of life that seemed to
    be developing: ambivalence.”35 Caught up in the commodified labyrinth of the
    modern digitized archive, the digital flaneur of mass digitization might just
    as easily get stuck in a repetitive, monotonous routine of scrolling and
    downloading new things, forever suspended in a state of unfulfilled desire,
as move about in meaningful and pleasurable ways.36

    Moreover, and just as importantly, the figure of the flaneur is also entangled
    in a cultural matrix of assumptions about gender, capabilities, and colonial
    implications. In short: the flaneur is a white, able-bodied male. As feminist
    theory attests to, the concept of the flaneur is male by definition. Some
    feminists such as Griselda Pollock and Janet Wolff have denied the possibility
    of a female variant altogether, because of women’s status as (often absent)
    objects rather than subjects in the nineteenth-century urban environment.37
    Others, such as Elizabeth Wilson, Deborah Epstein Nord, and Mica Nava have
complicated the issue by alluding to the opportunities and limitations of
    thinking about a female variant of the flaneur, for instance a flâneuse.38
    These discussions have also reverberated in the digital sphere in new
    variations.39 Whatever position one assumes, it is clear that the concept of
    the flaneur, even in its female variant, is a complicated figure that has
    problematic allusions to a universal privileged figure.

    In similar terms, the flaneur also has problematic colonial and racial
connotations. As James Smalls points out in his essay “Race As Spectacle in
Late-Nineteenth-Century French Art and Popular Culture,” the racial dimension
    of the flaneur is “conspicuously absent” from most critical engagements with
    the concept.40 Yet, as Smalls notes, the question of race is crucial, since
    “the black man … is not privileged to lose himself in the Parisian crowd, for
    he is constantly reminded of his epidermalized existence, reflected back at
    him not only by what he sees, but by what we see as the assumed ‘normal’
    white, universal spectator.”41 This othering is, moreover, not limited to the
    historical scene of nineteenth-century Paris, but still remains relevant
    today. Thus, as Garnette Cadogan notes in his essay “Walking While Black,”
    non-white people are offered none of the freedoms of blending into the crowd
    that Baudelaire’s and Benjamin’s flaneurs enjoyed. “Walking while black
    restricts the experience of walking, renders inaccessible the classic Romantic
    experience of walking alone. It forces me to be in constant relationship with
    others, unable to join the New York flaneurs I had read about and hoped to
    join.”42

    Lastly, the classic figure of the flaneur also assumes a body with no
    disabilities. As Marian Ryan notes in an essay in the _New York Times_ , “The
    art of flânerie entails blending into the crowd. The disabled flaneur can’t
    achieve that kind of invisibility.”43 What might we take from these critical
    interventions into the uncomplicated discourse of the flaneur? Importantly,
    they counterbalance the dominant seductive image of the empowered user, and
    remind us of the colonial male gaze inherent in any invocation of the metaphor
    of the flaneur, which for the majority of users is a subject position that is
    simply not available (nor perhaps desirable).

    The limitations of the figure of the flaneur raise questions not only about
    the metaphor itself, but also about the topography of knowledge production it
    invokes. As already noted, Walter Benjamin placed the flaneur within a larger
    labyrinthine topology of knowledge production, where the flaneur could read
the spectacle in front of him without being read himself. Benjamin himself
put the flaneur to rest with an analysis of an Edgar Allan Poe story,
    where he analyzed the demise of the flaneur in an increasingly capitalist
    topography, noting in melancholy terms that, “The bazaar is the last hangout
    of the flaneur. If in the beginning the street had become an interieur for
    him, now this interieur turned into a street, and he roamed through the
    labyrinth of merchandise as he had once roamed through the labyrinth of the
    city. It is a magnificent touch in Poe’s story that it includes along with the
    earliest description of the flaneur the figuration of his end.”44 In 2012,
    Evgeny Morozov in similar terms declared the death of the cyberflaneur.
    Linking the commodification of urban spaces in nineteenth-century Paris to the
    commodification of the Internet, Morozov noted that “it’s no longer a place
    for strolling—it’s a place for getting things done” and that “Everything that
    makes cyberflânerie possible—solitude and individuality, anonymity and
    opacity, mystery and ambivalence, curiosity and risk-taking—is under
    assault.”45 These two death sentences, separated by a century, link the
    environment of the flaneur to significant questions about the commodification
    of space and its infrapolitical implications.

    Exploring the implications of this topography, the following section suggests,
    will help us understand the infrapolitics of the spatial imaginaries of mass
    digitization, not only in relation to questions of globalization and late
    sovereignty, but also to cultural imaginaries of knowledge infrastructures.
    Indeed, these two dimensions are far from mutually exclusive, but rather
    belong to the same overarching tale of the politics of mass digitization.
    Thus, while the material spatial infrastructures of mass digitization projects
    may help us appreciate certain important political dynamics of Europeana,
    Google Books, and shadow libraries (such as their territorializing features or
    copyright contestations in relation to knowledge production), only an
    inclusion of the infrastructural imaginaries of knowledge production will help
    us understand the complex politics of mass digitization as it metamorphoses
    from analog buildings, shelves, and cabinets to the circulatory networks of
    digital platforms.

    ## Labyrinthine Imaginaries: Infrastructural Perspectives of Power and
    Knowledge Production

    If the flaneur is a central early figure in the cultural imaginary of the
    observer of cultural texts, the labyrinth has long served as a cultural
    imaginary of the library, and, in larger terms, the spatialized
    infrastructural conditions of knowledge and power. Thus, literature is rife
    with works that draw on libraries and labyrinths to convey stories about
    knowledge production and the power struggles hereof. Think only of the elderly
    monk-librarian in Umberto Eco’s classic, _The Name of the Rose,_ who notes
    that: “the library is a great labyrinth, sign of the labyrinth of the world.
    You enter and you do not know whether you will come out” 46; or consider the
    haunting images of being lost in Jose Luis Borges’s tales about labyrinthine
    libraries.47 This section therefore turns to the infrastructural space of the
    labyrinth, to show that this spatial imaginary, much like the flaneur, is
    loaded with cultural ambivalence, and to explore the ways in which the
    labyrinthine infrastructural imaginary emphasizes and crystallizes the
    infrapolitical tension in mass digitization projects between power and
    perspective, agency and environment, playful innovation and digital labor.

    The labyrinth is a prevalent literary trope, found in authors from Ovid,
    Virgil, and Dante to Dickens and Nietzsche, and it has been used particularly
    in relation to issues of knowledge and agency, and in haunting and nightmarish
    terms in modern literature.48 As the previous section indicates, the labyrinth
    also provides a significant image for understanding our relationship to mass
    digitization projects as sites of both knowledge production and experience.
Indeed, one shadow library is even named _Aleph_ , which refers to the ancient
Hebrew letter and likely also nods to Jorge Luis Borges's short
story, _The Aleph,_ about infinite labyrinthine architectures. Yet, what kind of
    infrastructure is a labyrinth, and how does it relate to the potentials and
    perils of mass digitization?

    In her rich historical study of labyrinths, Penelope Doob argues that the
    labyrinth possesses a dual potentiality: on the one hand, if experienced from
    within, the labyrinth is a sign of confusion; on the other, when viewed from
    above, it is a sign of complex order.49 As Harold Bloom notes, “all of us have
    had the experience of admiring a structure when outside it, but becoming
    unhappy within it.”50 Envisioning the labyrinth from within links to a
    claustrophobic sense of ignorance, while also implying the possibility of
    progress if you just turn the next corner. What better way to describe one’s
    experience in the labyrinthine infrastructures of mass digitization projects
    such as Google Books with its infrastructural conditions and contexts of
    experience and agency? On the one hand, Google Books appears to provide the
    view from above, lending itself as a logistical aid in its information-rich
    environment. On the other hand, Google Books also produces an alienating
    effect of impenetrability on two levels. First, although Google presents
    itself as a compass, its seemingly infinite and constantly rearranging
    universe nevertheless creates a sense of vertigo, only reinforced by the
    almost existential question “Do you feel lucky?” Second, Google Books also
    feels impenetrable on a deeper level, with its black-boxed governing and
    ordering principles, hidden behind complex layers of code, corporate cultures,
    and nondisclosure agreements.51 But even less-commercial mass digitization
    projects such as, for instance, Europeana and Monoskop can produce a sense of
    claustrophobia and alienation in the user. Think only of the frustration
encountered when reaching dead ends in the form of broken links or of access
restrictions set down by European copyright regulations. Or even the alienation and
    dissatisfaction that can well up when there are seemingly no other limits to
    knowledge, such as in Monoskop, than one’s own cognitive shortcomings.

    The figure of the labyrinth also serves as a reminder that informational
    strolling is not only a leisurely experience, but also a laborious process.
    Penelope Doob thus points out the common medieval spelling of labyrinth as
    _laborintus_ , which foregrounds the concept of labor and “difficult process,”
    whether frustrating, useful, or both.52 In an age in which “labor itself is
    now play, just as play becomes more and more laborious,”53 Doob’s etymological
    excursion serves to highlight the fact that in many mass digitization projects
    it is indeed the user’s leisurely information scrolling that in the end
    generates profit, cultural value, and budgetary justification for mass
    digitization platforms. Jose van Dijck’s analysis of the valuation of traffic
    in a digital environment is a timely reminder of how traffic is valued in a
    cultural memory environment that increasingly orients itself toward social
    media, “Even though communicative traffic on social media platforms seems
    determined by social values such as popularity, attention, and connectivity,
    they are impalpably translated into monetary values and redressed in business
    models made possible by digital technology.”54 This is visible, for instance,
    in Europeana’s usage statistic reports, which links the notions of _traffic_
    and _performance_ together in an ontological equation (in this equation poor
    performance inevitably means a mark of death). 55 In a blogpost marking the
    launch of the _Europeana Statistics Dashboard_ , we are told that information
    about mass digitization traffic is “vital information for a modern cultural
    institution for both reporting and planning purposes and for public
    accountability.”56 Thus, although visitors may feel solitary in their digital
    wanderings, their digital footsteps are in fact obsessively traced and tracked
    by mass digitization platforms and often also by numerous third parties.

    Today, then, the user is indeed at work as she makes her way in the
    labyrinthine infrastructures of mass digitization by scrolling, clicking,
    downloading, connecting, and clearing and creating new paths. And while
    “search” has become a keyword in digital knowledge environments, digital
    infrastructures in mass digitization projects in fact distract as much as they
orient. This new economy of cultural memory raises the question: if mass
    digitization projects, as labyrinthine infrastructures, invariably disorient
    the wanderer as much as they aid her, how might we understand their
    infrapolitics? After all, as the previous chapters have shown, mass
    digitization projects often present a wide array of motivations for why
    digitization should happen on a massive scale, with knowledge production and
    cultural enlightenment usually featuring as the strongest arguments. But as
    the spatialized heuristics of the flaneur and the labyrinth show, knowledge
production and navigation are anything but simple concepts. Rather, the
    political dimensions of mass digitization discussed in previous chapters—such
    as standardization, late sovereignty, and network power—are tied up with the
    spatial imaginaries of what knowledge production and cultural memory are and
    how they should and could be organized and navigated.

    The question of the spatial imaginaries of knowledge production and
    imagination has a long philosophic history. As historian David Bates notes,
    knowledge in the Enlightenment era was often imagined as a labyrinthine
    journey. A classic illustration of how this journey was imagined is provided
    by Enlightenment philosopher Jean-Louis Castilhon, whose frustration is
    palpable in this exclamation: “How cruel and painful is the situation of a
    Traveller who has imprudently wandered into a forest where he knows neither
    the winding paths, nor the detours, nor the exits!”57 These Enlightenment
    journeys were premised upon an infrastructural framework that linked error and
    knowledge, but also upon an experience of knowledge quests riddled by loss of
    oversight and lack of a compass. As the previous sections show, the labyrinth
    as a form of knowledge production in relation to truth and error persists as
    an infrastructural trope in the digital. Yet, it has also metamorphosed
    significantly since Castilhon. The labyrinthine infrastructural imaginaries we
    find in digital environments thus differ significantly from more classical
    images, not least under the influence of the rhizomatic metaphors of
    labyrinths developed by Deleuze and Guattari and Eco. If the labyrinth of the
    Renaissance had an endpoint and a truth, these new labyrinthine
    infrastructures, as Kristin Veel points out, had a much more complex
    relationship to the spatial organization of the truth. Eco and Deleuze and
    Guattari thus conceived of their labyrinths as networks “in which all points
    can be connected with one another” with “no center” but “an almost unlimited
    multiplicity of alternative paths,” which makes it “impossible to rise above
    the structure and observe it from the outside, because it transcends the
    graphic two-dimensionality of the two earlier forms of labyrinths.”58 Deleuze
    expressed the senselessness of these contemporary labyrinths as a “theater
    where nothing is fixed, a labyrinth without a thread (Ariadne has hung
    herself).”59

    In mass digitization, this new infrastructural imaginary feeds a looming
    concern over how best to curate and infrastructurate cultural collections. It
is this concern that we see at play in the aforementioned institutional
deliberations over how best to create meaningful paths through cultural collections.
    The main question that resounds is: where should the paths lead if there is no
    longer one truth, that is, if the labyrinth has no center? Some mass
    digitization projects seem to revel in this new reality. As we have seen,
    shadow libraries such as Monoskop and UbuWeb use the affordances of the
    digital to create new cultural connections outside of the formal hierarchies
    of cultural memory institutions. Yet, while embraced by some, predictably the
    new distribution of authority generates anxiety in the cultural memory circles
    that had hitherto been able to hold claim to knowledge organization expertise.
    This is the dizzying perspective that haunts the cultural memory professionals
    faced with Europeana’s data governance model. Thus, as one Europeana
    professional explained to me in 2010, “Europeana aims at an open-linked-data
    model with a number of implications. One implication is that there will be no
    control of data usage, which makes it possible, for instance, to link classics
    with porn. Libraries do not agree to this loss of control which was at the
    base of their self-understanding.”60 The Europeana professional then proceeded
    to recount the profound anxiety experienced and expressed by knowledge
    professionals as they increasingly came face-to-face with a curatorial reality
    that is radically changing what counts as knowledge and context, where a
    search for Courbet could, in theory, not only lead the user to other French
    masters of painting but also to a copy of a porn magazine (provided it is out
    of copyright). The anxiety experienced by knowledge professionals in the new
    cultural memory ecosystem can of course be explained by a rationalized fear of
    job insecurity and territorial concerns. Yet, the fear of knowledge
    infrastructures without a center may also run deeper. As Penelope Doob reminds
    us, the center of the labyrinth historically played a central moral and
    epistemological role in the labyrinthine topos, as the site that held the
    epiphanous key to unravel whatever evils or secrets the labyrinth contained.
    With no center, there is no key, no epiphany.61 From this perspective, then,
    it is not only a job that is lost. It is also the meaning of knowledge
    itself.62

    What, then, can we take from these labyrinthine wanderings as we pursue a
    greater understanding of the infrapolitics of mass digitization? Certainly, as
    this section shows, the politics of mass digitization is entangled in
    spatialized imaginaries that have a long and complex cultural and affective
    trajectory interlinked with ontological and epistemological questions about
    the very nature of knowledge. Cladding the walls of these trajectories are, of
    course, the ever-present political questions of authority and territory, but
    also deeper cultural and affective questions about the nature and meaning of
knowledge as it is bandied about in our cultural imaginaries, between discoveries
    and dead-ends, between freedom and control.

    As the next section will show, one concept has in particular come to
    encapsulate these concerns: the notion of serendipity. While the notion of
    serendipity has a long history, it has gained new relevance with mass
    digitization, where it is used to express the realm of possibilities opened up
    by the new digital infrastructures of knowledge production. As such, it has
    come to play a role, not only as a playful cultural imaginary, but also as an
    architectural ideal in software developments for mass digitization. In the
    following section, we will look at a few examples of these architectures, as
    well as the knowledge politics they are entangled in.

    ## The Architecture of Serendipitous Platforms

Serendipity has long been a cherished word in archival studies, used to
    describe a magical moment of “Eureka!” A fickle and fabulating concept, it
    belongs to the world of discovery, capturing the moment when a meandering
    soul, a flaneur, accidentally stumbles upon a valuable find. As such, the
    moment of serendipity is almost always a happy circumstance of chance, and
never an unfortunate moment of risk. The word's own origins embody this happy
chance. This section outlines those origins and situates the word's
reemergence in theories on libraries and on digital realms of knowledge
production.

    The English aristocrat Horace Walpole coined the word serendipity in a letter
    to Horace Mann in 1754, in which he explained his fascination with a Persian
fairy tale about three princes from the _Isle of Serendip_,63 who possess
    superpowers of observation. In his letter, Walpole linked the contents of the
    fantastical story to his view of how new discoveries are made: “As their
    highnesses travelled, they were always making discoveries, by “accidental
sagacity,” of things which they were not in quest of.”64 And he proposed a
    new word—“serendipity”—to describe this sublime talent for discovery.

    Walpole’s conceptual invention did not immediately catch fire in common
    parlance.65 But a few centuries after its invention, it suddenly took hold.
    Who awakened the notion from its dormant state, and why? Sociologists Robert
    K. Merton and Elinor Barber provided one influential answer in their own
    enjoyable exploration of the word. As they note, serendipity had a particular
    playful tone to it, expressing a sense that knowledge comes about not only
    through sheer willpower and discipline, but also via pleasurable chance. This
    almost hedonistic dimension made it incompatible with the serious ethos of the
    nineteenth century. As Merton and Barber note, “The serious early Victorians
    were not likely to pick up serendipity, except perhaps to point to it as a
    piece of frivolous whimsy. … Although the Victorians, and especially Victorian
    scientists, were familiar with the part played by accident in the process of
    discovery, they were likely neither to highlight that factor nor to clothe the
    phenomenon of accidental discovery in so lighthearted a word as
    serendipity.”66 But in the 1940s and 1950s something happened—the word began
    to catch on. Merton and Barber link this turn of linguistic events not only to
    pure chance, but also a change in scientific networks and paradigms. Traveling
    from the world of letters, as they recount, the word began making its way into
    scientific circles, where attention was increasingly turned to “splashy
    discoveries in lab and field.”67 But as Lorraine Daston notes, “discoveries,
    especially those made by serendipity, depend partly on luck, and scientists
    schooled in probability theory are loathe to ascribe personal merit to the
    merely lucky,” and scientists therefore increasingly began to “domesticate
    serendipity.”68 Daston remarks that while scientists schooled in probability
    were reluctant to ascribe their discoveries to pure chance, the “historians
    and literary scholars who struck serendipitous gold in the archives did not
    seem so eager to make a science out of their good fortune.”69 One tale of how
    literary and historical scholars struck serendipitous gold in the archive is
    provided by Mike Featherstone:

    > Once in the archive, finding the right material which can be made to speak
    may itself be subject to a high degree of contingency—the process not of
    deliberate rational searching, but serendipity. In this context it is
    interesting to note the methods of innovatory historians such as Norbert Elias
    and Michel Foucault, who used the British and French national libraries in
    highly unorthodox ways by reading seemingly haphazardly “on the diagonal,”
    across the whole range of arts and sciences, centuries and civilizations, so
    that the unusual juxtapositions they arrived at summoned up new lines of
    thought and possibilities to radically re-think and reclassify received
    wisdom. Here we think of the flaneur who wanders the archival textual city in
    a half-dreamlike state in order to be open to the half-formed possibilities of
    the material and sensitive to unusual juxtapositions and novel perceptions.70

    English scholar Nancy Schultz in similar terms notes that the archive “in the
    humanities” represents a “prime site for serendipitous discovery.”71 In most
    of these cases, serendipity is taken to mean some form of archival insight,
    and often even a critical intellectual process. Deb Verhoeven, Associate Dean
    of Engagement and Innovation at the University of Technology Sydney, reminds
    us in relation to feminist archival work that “stories of accidental
    discovery” can even take on dimensions of feminist solace, consoling “the
    researcher, and us, with the idea that no system, whatever its claims to
    discipline, comprehensiveness, and structure, is exempt from randomness, flux,
    overflow, and therefore potential collapse.”72

    But with mass digitization processes, their fusion of probability theories and
    archives, and their ideals of combined fun and fact-finding, the questions
    raised in the hard sciences about serendipity, its connotations of freedom and
    chance, engineering and control, now also haunt the archives of historians and
    literary scholars. Serendipity has now often come to be used as a motivating
    factor for digitization in the first place, based on arguments that mass
    digitized archives allow not only for dedicated and target-oriented research,
    but also for new modes of search, of reading haphazardly “on the diagonal”
    across genres and disciplines, as well as across institutional and national
    borders that hitherto kept works and insights apart. As one spokesperson from
    a prominent mass digitization company states, “digital collections have been
    designed both to assist researchers in accessing original primary source
    materials and to enable them to make serendipitous discoveries and unexpected
    connections between sources.”73 And indeed, this sentiment reverberates in all
    mass digitization projects from Europeana and Google Books to smaller shadow
    libraries such as UbuWeb and Monoskop. Some scholars even argue that
    serendipity takes on new forms due to digitization.74

    It seems only natural, then, that mass digitization projects, and their
    actors, have actively adopted the discourse of serendipity, both as a selling
    point and a strategic claim. Talking about Google’s digitization program, Dr.
    Sarah Thomas, Bodley’s Librarian and Director of Oxford University Library
    Services, notes: “Library users have always loved browsing books for the
    serendipitous discoveries they provide. Digital books offer a similar thrill,
    but on multiple levels—deep entry into the texts or the ability to browse the
    virtual shelf of books assembled from the world's great libraries.”75 But it
    has also raised questions for those people who are in charge, not only of
    holding serendipity forth as an ideal, but also building the architecture to
    facilitate it. Dan Cohen, speaking on behalf of the DPLA, thus noted the
    centrality of the concept, but also the challenges that mass digitization
    raised in practical terms: “At DPLA, we’ve been thinking a lot about what’s
    involved with serendipitous discovery. Since we started from scratch and
    didn’t need to create a standard online library catalog experience, we were
    free to experiment and provide novel ways into our collection of over five
    million items. How to arrange a collection of that scale so that different
    users can bump into items of unexpected interest to them?” While adopting the
    language of serendipity is easy, its infrastructural construction is much
    harder to envision. This challenge clearly troubles the strategic team
    developing Europeana’s infrastructure, as it notes in a programmatic tone that
    stands hilariously at odds with the curiosity it must cater to:

    > Reviewing the personas developed for the D6.2 Requirements for Europeana.eu
    deliverable—and in particular those of the “culture vultures”—one finds two
    somewhat-opposed requirements. On the one hand, they need to be able to find
    what they are looking for, and navigate through clear and well-structured
    data. On the other hand, they also come to Europeana looking for
    “inspiration”—that is to say, for something new and unexpected that points
    them towards possibilities they had previously been unaware of; what, in the
    formal literature of user experience and search design, is sometimes referred
    to as “serendipity search.” Europeana’s users need the platform to be
    structured and predictable—but not entirely so.76

    To achieve serendipity, mass digitization projects have often sought to take
    advantage of the labyrinthine infrastructures of digitization, relying not
    only on their own virtual bookshelves, but also on the algorithmic highways
    and back alleys of social media. Twitter, in particular, before it adopted
    personalization methods, became a preferred infrastructure for mass
    digitization projects, which took advantage of Twitter’s lack of personalized
    search to create whimsical bots that injected randomness into the user’s feed.
    One example was the Digital Public Library of America’s DPLA Bot, which
    grabbed a random noun and used the DPLA API to share the first result it
    found. The DPLA Bot aimed to “infuse what we all love about
    libraries—serendipitous discovery—into the DPLA” and thus sought to provide a
    “kind of ‘Surprise me!’ search function for DPLA.”77 It did not take the
    programmer Peter Meyr much
    time to develop a similar bot for Europeana. In an interview with
    EuropeanaPro, Peter Meyr directly related the EuropeanaBot to the
    serendipitous affordances of Twitter and its rewards for mass digitization
    projects, noting that:

    > The presentation of digital resources is difficult for libraries. It is no
    longer possible to just explore, browse the stacks and make serendipitous
    findings. With Europeana, you don't even have a physical library to go to. So
    I was interested in bringing a little bit of serendipity back by using a
    Twitter bot. … If I just wanted to present (semi)random Europeana findings, I
    wouldn’t have needed Twitter—an RSS-Feed or a web page would be enough.
    However, I wanted to infuse EuropeanaBot with a little bit of “Twitter
    culture” and give it a personality.78
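
    To make these mechanics concrete, the following Python fragment offers a
    minimal, hypothetical reconstruction of such a “Surprise me!” bot. It is a
    sketch, not DPLAbot’s actual code: the word list is invented, the API key is
    a placeholder, and the endpoint and response fields are assumed to follow
    DPLA’s public v2 item API.

    ```python
    import random

    import requests

    # Invented stand-in word list; the real bot reportedly draws a random noun
    # from a much larger dictionary (an assumption of this sketch).
    NOUNS = ["labyrinth", "archive", "bicycle", "comet", "harbor", "minotaur"]

    DPLA_ENDPOINT = "https://api.dp.la/v2/items"  # DPLA's public item search
    API_KEY = "YOUR_DPLA_API_KEY"  # hypothetical placeholder


    def serendipitous_find():
        """Pick a random noun and return the first matching DPLA item."""
        noun = random.choice(NOUNS)
        response = requests.get(
            DPLA_ENDPOINT,
            params={"q": noun, "page_size": 1, "api_key": API_KEY},
            timeout=10,
        )
        response.raise_for_status()
        docs = response.json().get("docs", [])
        if not docs:
            return noun, None
        item = docs[0]
        title = item.get("sourceResource", {}).get("title", "Untitled")
        if isinstance(title, list):  # DPLA titles may arrive as lists
            title = title[0]
        return noun, f"{noun}: {title} ({item.get('isShownAt', '')})"


    noun, post = serendipitous_find()
    # A real bot would now post this to a social feed; here we simply print it.
    print(post or f"No serendipity today for '{noun}'.")
    ```

    Notably, chance enters such a design only through the choice of query term;
    everything downstream remains an ordinary ranked search, a tension that
    resurfaces in the criticisms discussed below.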

    The British Library also developed a Twitter bot titled the Mechanical
    Curator, which posts random resources with no customization except a special
    focus on images in the library’s seventeenth- to nineteenth-century
    collections.79 But many projects also existed outside social media platforms
    and operated across several mass digitization initiatives at once. One example
    was the “serendipity engine,” Serendip-o-matic, which first examined the
    user’s research interests and then, based on this data, identified “related
    content in locations such as the Digital Public Library of America (DPLA),
    Europeana, and Flickr Commons.”80 While this initiative was not endorsed by
    any of these mass digitization projects, they nevertheless featured it on
    their blogs, integrating it into the mass digitization ecosystem.
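
    Serendip-o-matic’s two-step workflow, extracting terms from a user’s own text
    and then fanning them out across several collection APIs, can be sketched in
    similarly hypothetical terms. In the Python fragment below, the keyword
    extraction is a deliberately crude frequency count, and the search adapters
    are stand-in callables a caller would have to write for each collection; none
    of these names reproduce the engine’s actual code.

    ```python
    import re
    from collections import Counter
    from typing import Callable, Dict, List

    STOPWORDS = {"the", "and", "that", "with", "from", "this", "have", "were"}


    def extract_keywords(text: str, n: int = 5) -> List[str]:
        """Crude term-frequency keywords from a user's own research text."""
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if len(w) > 3 and w not in STOPWORDS)
        return [word for word, _ in counts.most_common(n)]


    def serendip(source_text: str,
                 adapters: Dict[str, Callable[[str], List[dict]]],
                 ) -> Dict[str, List[dict]]:
        """Fan extracted keywords out across several collection APIs.

        `adapters` maps a collection name ("DPLA", "Europeana", "Flickr
        Commons", ...) to a hypothetical search function supplied by the
        caller; each takes a query string and returns a list of records.
        """
        results: Dict[str, List[dict]] = {name: [] for name in adapters}
        for keyword in extract_keywords(source_text):
            for name, search in adapters.items():
                results[name].extend(search(keyword))
        return results


    # Example with a dummy adapter that merely echoes the query:
    hits = serendip("wandering the labyrinthine archives of cultural memory",
                    {"DPLA": lambda q: [{"query": q}]})
    print(hits)
    ```

    The design choice worth noting is that the engine starts from the user’s own
    text rather than from pure randomness, which is what distinguished this
    “serendipity engine” from the bots described above.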

    Yet, while mass digitization for some represents the opportunity to amplify
    the chance of chance, other scholars increasingly wonder whether the
    engineering processes of mass digitization would take serendipity out of the
    archive. Indeed, to them, the digital is antithetical to chance. One such
    viewpoint is voiced by historian Tristram Hunt in an op-ed directed against
    Google’s British digitization program under the title “Online is fine, but
    history is best hands on.” In it, Hunt argues that the digital, rather than
    providing a new means of chance finding, would impede historical discovery,
    and that only the analog archival environment could foster real discoveries,
    since it is “… only with MS in hand that the real meaning of the
    text becomes apparent: its rhythms and cadences, the relationship of image to
    word, the passion of the argument or cold logic of the case. Then there is the
    serendipity, the scholar’s eternal hope that something will catch his eye.”81
    In similar terms, Graeme Davison describes the lack of serendipitous
    wandering in digital archives, likening digital search engines to driving
    “a high-powered car down a freeway, compared with walking or cycling. It gets
    us there more quickly but we skirt the towns and miss a lot of interesting
    scenery on the way.”82 William McKeen also links the loss of serendipity to
    the acceleration of method in the digital:

    > Think about the library. Do people browse anymore? We have become such a
    directed people. We can target what we want, thanks to the Internet. Put a
    couple of key words into a search engine and you find—with an irritating hit
    or miss here and there—exactly what you’re looking for. It’s efficient, but
    dull. You miss the time-consuming but enriching act of looking through
    shelves, of pulling down a book because the title interests you, or the
    binding. Inside, the book might be a loser, a waste of the effort and calories
    it took to remove it from its place and then return. Or it might be a dark
    chest of wonders, a life-changing first step into another world, something to
    lead your life down a path you didn't know was there.83

    Common to all these statements is the sentiment that the engineering of
    serendipity removes the very chance of serendipity. As Nicholas Carr notes,
    “Once you create an engine—a machine—to produce serendipity, you destroy the
    essence of serendipity. It becomes something expected rather than
    unexpected.”84 It appears, then, that computational methods have introduced
    historians and literary scholars to the same “beaverish efforts”85 to
    domesticate serendipity as the hard sciences had to face at the beginning of
    the twentieth century.

    To my knowledge, few systematic studies exist about whether mass digitization
    projects such as Europeana and Google Books hamper or foster creative and
    original research in empirical terms. How one would go about such a study is
    also an open question. The dichotomy between digital and analog does seem a
    bit contrived, however. As Dan Cohen notes in a blogpost for DPLA, “bookstores
    and libraries have their own forms of ‘serendipity engineering,’ from
    storefront staff picks to behind-the-scenes cataloguing and shelving methods
    that make for happy accidents.”86 Yet there is no doubt that the discourse of
    serendipity has been infused with new life that sometimes veers toward a
    “spectacle of serendipity.”87

    Over the past decade, the digital infrastructures that organize our cultural
    memory have become increasingly integrated into a digital economy that values
    “experience” as a cultural currency that can be exchanged for profit, and our
    affective meanderings as a form of industrial production. This digital economy
    affects the architecture and infrastructure of digital archives. The archival
    discourse on digital serendipity is thus now embroiled in a more deep-seated
    infrapolitics of workspace architecture, influenced by Silicon Valley’s
    obsession with networks, process, and connectivity.88 Think only of the
    increasing importance of Google and Facebook to mass digitization projects:
    most of these projects have a Facebook page on which they showcase their
    material, just as they take pains to make themselves “algorithmically
    recognizable”89 to Google and other search engines in the hope of reaching an
    audience beyond the echo chamber of archives and to distribute their archival
    material on leisurely tidbit platforms such as Pinterest and Twitter.90 If
    serendipity is increasingly thought of as a platform problem, the final
    question we might pose is what kind of infrapolitics this platform economy
    generates and how it affects mass digitization projects.

    ## The Infrapolitics of Platform Power

    As the previous sections show, mass digitization projects rely upon spatial
    metaphors to convey ideas about, and ideals of, cultural memory
    infrastructures, their knowledge production, and their serendipitous
    potential. Thus, for mass digitization projects, the ideal scenario is that
    the labyrinthine errings of the user result in serendipitous finds that in
    turn bring about new forms of cultural value. From the user’s point of view,
    however, being caught up in the labyrinth might just as easily give rise to a
    sense of disorientation and alienation in the alleyways of commodified
    infrastructures. These two scenarios coexist because of what Penelope Doob
    (as noted in the section on labyrinthine imaginaries) refers to as the dual
    potentiality of the labyrinth, which when experienced from within can become
    a sign of confusion, and when viewed from above becomes a sign of complex
    order.91

    In this final section, I will turn to a new spatial metaphor, which appears to
    have resolved this dual potentiality of the spatial perspective of mass
    digitization projects: the platform. The platform has recently emerged as a
    new buzzword in the digital economy, connoting simultaneously a perspective, a
    business strategy, and a political ideology. Ideally the platform provides a
    different perspective than the labyrinth, offering the user the possibility of
    simultaneously constructing the labyrinth and viewing it from above. This
    final section therefore explores how we might understand the infrapolitics of
    the platform, and its role in the digital economy.

    In its recent business strategy, Europeana claimed that it was moving from
    operating as a “portal” to operating as a “platform.”92 The announcement was
    part of a broader infrastructural transition in the field of cultural memory,
    undergirded by a process of opening up and connecting the cultural memory
    sector to wider knowledge ecosystems.93 Indeed, Europeana’s move is part of a
    much larger discursive and material reality of a more fundamental process of
    “platformization” of the web.94 The notion of the platform has thus recently
    become an important heuristic for understanding the cultural development of
    the web and its economy, fusing the computational understanding of the
    platform as an environment in which code is executed95 and the political and
    social understanding of a platform as a site of politics.96

    While the infrapolitics of the platformization of the web has become a central
    discussion in software and communication studies, little attention has been
    paid to the implications of platforms for the politics of cultural memory.
    Yet, Europeana’s business strategy illustrates the significant infrapolitical
    role that platforms are given in mass digitization literature. Citing digital
    historian Tim Sherratt’s claim that “portals are for visiting, platforms for
    building on,”97 Europeana’s strategy argues that if cultural memory sites free
    themselves and their content from the “prison of portals” in favor of more
    openness and flexibility, this will in turn empower users to create their own
    “pathways” through the digital cultural memory, instead of being forced to
    follow predetermined “narrative journeys.”98 The business plan’s reliance on
    Sherratt’s theory of platforms shows that although the platform has a
    technical meaning in computation, Europeana’s discourse goes beyond mere
    computational logic. It instead signifies an infrapolitics that carries with
    it an assumption about the political dynamics of software, standing in for the
    freedom to act in the labyrinthine infrastructures of digital collections.

    Yet, what is a platform, and how might we understand its infrapolitics? As
    Tarleton Gillespie points out, the oldest definition of platform is
    architectural, as a level or near-level surface, often elevated.99 As such,
    there is something inherently simple about platforms. As architect Sverre Fehn
    notes, “the simplest form of architecture is to cultivate the surface of the
    earth, to make a platform.”100 Fehn’s statement conceals a more fundamental
    insight about platforms, however: in the establishment of a low horizontal
    platform, one also establishes a social infrastructure. Platforms are thus not
    only material constructions, they also harbor infrapolitical affordances. The
    etymology of the notion of “platform” evidences this infrapolitical dimension.
    Originally a spatial concept, the notion of platform appeared in
    architectural, figurative, and military formations in the sixteenth century,
    soon developing into specialized discourses of party programs and military and
    building construction,101 religious congregation,102 and architectural vantage
    points.103 Both the architectural and social understandings of the term
    connote a process in which sites of common ground are created in
    contradistinction to other sites. In geology, for instance, platforms emerge
    from abrasive processes that elevate and distinguish one area in relation to
    others. In religious and political discourse, platforms emerge as
    organizational sites of belonging, often in contradistinction to other forms
    of organization. Platforms, then, connote both common ground and demarcated
    borders that emerge out of abrasive processes. In the nineteenth century, a
    third meaning adjoined the notion of platforms, namely trade-related
    cooperation. This introduced a dynamic to the word that is less informed by
    abrasive processes and more by the capture processes of what we might call
    “connective capitalism.” Yet, despite connectivity taking center stage, even
    these platforms were described as territorializing constructs that favor some
    organizations and corporations over others.104

    In the twentieth and twenty-first centuries, as Gilles Deleuze and Felix
    Guattari successfully urged scholars and architects to replace roots with
    rhizomes, the notion of platform began taking on yet another meaning. Deleuze
    and Guattari began fervently arguing for the nonexistence of rooted
    platforms.105 Their vision soon gave rise to a nonfoundational understanding
    of the world as a “limitless multiplicity of positions from which it is
    possible only to erect provisional constructions.”106 Deleuze and Guattari’s
    ontology became widely influential in theorizing the web _in toto_ ; as Rem
    Koolhaas once noted, the “language of architecture—platform, blueprint,
    structure—became almost the preferred language for indicating a lot of
    phenomenon that we’re facing from Silicon Valley.”107 From the singular
    platforms of military and party politics emerged, then, the thousand
    platforms of the digital, where “nearly every surge of research and investment
    pursued by the digital industry—e-commerce, web services, online advertising,
    mobile devices and digital media sales—has seen the term migrate to it.”108

    What infrapolitical logic can we glean from Silicon Valley’s adoption of the
    vernacular notion of the platform? Firstly, it is an infrapolitics of
    temporality. As Tarleton Gillespie points out, the semantic aspects of
    platforms “point to a common set of connotations: a ‘raised level surface’
    designed to facilitate some activity that will subsequently take place. It is
    anticipatory, but not causal.”109 The inscription of platforms into the
    material infrastructures of the Internet thus assumes a value-producing
    futurity. If serendipity is what is craved, then platforms are the site in
    which this is thought to take place.

    Despite its inclusion in the entrepreneurial discourse of Silicon Valley, the
    notion of the platform is also used to signal an infrapolitics of
    collaboration, even subversion. Olga Goriunova, for instance, explores the
    subversive dynamics of critical artistic platforms,110 and Trebor Scholz
    promotes the term “platform cooperativism” to advance worker-based
    cooperatives that would “design their own apps-based platforms, fostering
    truly peer-to-peer ways of providing services and things, and speak truth to
    the new platform capitalists.”111 Shadow libraries such as Monoskop appear as
    perfect examples of such subversive platforms and evidence of Srnicek’s
    reminder that not _all_ social interactions are co-opted into systems of
    profit generation.112 Yet, as the territorial, legal, and social
    infrastructures of mass digitization become increasingly labyrinthine, it
    takes considerable critical consciousness to properly interpret and understand
    their infrapolitics. Engage with the shadow library Library Genesis on
    Facebook, for instance, and you submit to platform capitalism.

    A significant trait of platform-based corporations such as Google and Facebook
    is that they more often than not present themselves as apolitical, neutral,
    and empowering tools of connectivity, passive until picked up by the user.
    Yet, as Lisa Nakamura notes, “reading’s economies, cultures of sharing, and
    circuits of travel have never been passive.”113 One of digital platforms’ most
    important infrapolitical traits is their dependence on network effects and a
    winner-takes-all logic, where the platform owner is not only conferred
    enormous power vis-à-vis other less successful platforms but also vis-à-vis
    the platform user.114 Within this game, the platform owner determines the
    rules of the product and the service on offer. Entering into the discourse of
    platforms implies, then, not only constructing a software platform, but also
    entering into a parasitical game of relational network effects, where
    different platforms challenge and use each other to gain more views and
    activity. This gives successful platforms a great advantage in the digital
    economy. They not only gain access to data, but they also control the rules of
    how the data is to be managed and governed. Therefore, when a user is surfing
    Google Books, Google—and not the library—collects the user’s search queries,
    including results that appeared in searches and pages the user visited from
    the search. The browser, moreover, tracks the user’s activity, including pages
    the user has visited and when, user data, and possibly user login details with
    auto-fill features, user IP address, Internet service provider, device
    hardware details, operating system and browser version, cookies, and cached
    data from websites. The labyrinthine infrastructure of the mass digitization
    ecosystem also means that if you access one platform through another, your
    data will be collected in different ways. Thus, if you visit Europeana through
    Facebook, it will be Facebook that collects your data, including name and
    profile; biographical information such as birthday, hometown, work history,
    and interests; username and unique identifier; subscriptions, location,
    device, activity date, time and time-zone, activities; and likes, check-ins,
    and events.115 As more platforms emerge from which one can access mass
    digitized archives, such as social media sites like Facebook, Google+,
    Pinterest, and Twitter, as well as mobile devices such as Android, gaining an
    overview of who collects one’s data and how becomes more nebulous.

    Europeana’s reminder illustrates the assemblatic infrastructural set-up of
    mass digitization projects and how they operate with multiple entry points,
    each of which may attach its own infrapolitical dynamics. It also illustrates
    the labyrinthine infrastructures of privacy settings, which are increasingly
    difficult to map because of constant changes and reconfigurations. It
    furthermore illustrates the shift from the relatively stable sovereign order
    of human rights obligations to the modulating legal landscape of privacy
    policies.

    How then might we characterize the infrapolitics of the spatial imaginaries of
    mass digitization? As this chapter has sought to convey, writings about mass
    digitization projects are shot through with spatialized metaphors, from the
    flaneur to the labyrinth and the platform, either in literal terms or in the
    imaginaries they draw on. While this section has analyzed these imaginaries in
    a somewhat chronological fashion, with the interactivity of the platform
    increasingly replacing the more passive gaze of the spectator, they coexist in
    that larger complex of spatial digital thinking. While often used to elicit
    uncomplicated visions of empowerment, desire, curiosity, and productivity,
    these infrapolitical imaginaries in fact show the complexity of mass
    digitization projects in their reinscription of users and cultural memory
    institutions in new constellations of power and politics.

    ## Notes

    1. Kelly 1994, p. 263. 2. Connection Machines were developed by the
    supercomputer manufacturer Thinking Machines, a concept that also appeared in
    Jorge Luis Borges’s _The Total Library_. 3. Brewster Kahle, “Transforming Our
    Libraries from Analog to Digital: A 2020 Vision,” _Educause Review_ , March
    13, 2017, from-analog-to-digital-a-2020-vision>. 4. Ibid. 5. Couze Venn, “The
    Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 36. 6. Hacking
    2010. 7. Lefebvre 2009. 8. Blair and Stallybrass 2010, 139–163. 9. Ibid., 143.
    10. Dewey 1926, 311. 11. See, for instance, Lorraine Daston’s wonderful
    account of the different types of historical consciousness we find in archives
    across the sciences: Daston 2012. 12. David Weinberger, “Library as Platform,”
    _Library Journal_ , September 4, 2012, /future-of-libraries/by-david-weinberger/#_>. 13. Nakamura 2002, 89. 14.
    Shannon Mattern, “Library as Infrastructure,” _Places Journal_ , June 2014,
    . 15. Couze
    Venn, “The Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 35–40.
    16. Žižek 2009, 39. 17. Voltaire, “Une grande bibliothèque a cela de bon,
    qu’elle effraye celui qui la regarde,” in _Dictionaire Philosophique_ , 1786,
    265. 18. In his autobiography, Borges asserted that it “was meant as a
    nightmare version or magnification” of the municipal library he worked in up
    until 1946. Borges describes his time at this library as “nine years of solid
    unhappiness,” both because of his co-workers and the “menial” and senseless
    cataloging work he performed in the small library. Interestingly, then, Borges
    translated his own experience of being informationally underwhelmed into a
    tale of informational exhaustion and despair. See “An Autobiographical Essay”
    in _The Aleph and Other Stories_ , 1978, 243. 19. Borges 2001, 216. 20. Yeo
    2003, 32. 21. Cited in Blair 2003, 11. 22. Bawden and Robinson 2009, 186. 23.
    Garrett 1999. 24. Featherstone 2000, 166. 25. Thus, for instance, one
    Europeana-related project with the apt acronym PATHS, argues for the need to
    “make use of current knowledge of personalization to develop a system for
    navigating cultural heritage collections that is based around the metaphor of
    paths and trails through them” (Hall et al. 2012). See also Walker 2006. 26.
    Inspiring texts for (early) spatial thinking of the Internet, see: Hayles
    1993; Nakamura 2002; Chun 2006. 27. Much has been written about whether or not
    it makes sense to frame digital realms and infrastructures in spatial terms,
    and Wendy Chun has written an excellent account of the stakes of these
    arguments, adding her own insightful comments to them; see chapter 1, “Why
    Cyberspace?” in Chun 2013. 28. Cited in Hartmann 2004, 123–124. 29. Goldate
    1996. 30. Featherstone 1998. 31. Dörk, Carpendale, and Williamson 2011, 1216.
    32. Wilson 1992, 108. 33. Benjamin. 1985a, 40. 34. See, for instance, Natasha
    Dow Schüll’s fascinating study of the addictive design of computational
    culture: Schüll 2014. For an industry perspective, see Nir Eyal, _Hooked: How
    to Build Habit-Forming Products_ (Princeton, NJ: Princeton University Press,
    2014). 35. Wilson 1992, 93. 36. Indeed, it would be interesting to explore the
    link between Susan Buck Morss’s reinterpretation of Benjamin’s anesthetic
    shock of phantasmagoria and today’s digital dopamine production, as described
    by Natasha Dow Schüll in _Addicted by Design_ (2014); see Buck-Morss 2006. See
    also Bjelić 2016. 37. Wolff 1985; Pollock 1998. 38. Wilson 1992; Nord 1995;
    Nava and O’Shea 1996, 38–76. 39. Hartmann 1999. 40. Smalls 2003, 356. 41.
    Ibid., 357. 42. Cadogan 2016. 43. Marian Ryan, “The Disabled flaneur,” _New
    York Times_ , December 12, 2017, /the-disabled-flaneur.html>. 44. Benjamin. 1985b, 54. 45. Evgeny Morozov, “The
    Death of the Cyberflaneur,” _New York Times_ , February 4, 2012. 46. Eco 2014,
    169. 47. See also Koevoets 2013. 48. In colloquial English, “labyrinth” is
    generally synonymous with “maze,” but some people observe a distinction, using
    maze to refer to a complex branching (multicursal) puzzle with choices of path
    and direction, and using labyrinth for a single, non-branching (unicursal)
    path, which leads to a center. This book, however, uses the concept of the
    labyrinth to describe all labyrinthine infrastructures. 49. Doob 1994. 50.
    Bloom 2009, xvii. 51. Might this be the labyrinthine logic detected by
    Foucault, which unfolds only “within a hidden landscape,” revealing “nothing
    that can be seen” and partaking in the “order of the enigma”; see Foucault
    2004, 98. 52. Doob 1994, 97. Doob also finds this perspective in the
    fourteenth century in Chaucer’s _House of Fame_ , in which the labyrinth
    “becomes an emblem of the limitations of knowledge in this world, where all we
    can finally do is meditate on _labor intus_ ” (ibid., 313). Lady Mary Wroth’s
    work _Pamphilia to Amphilanthus_ provides the same imagery, telling the story
    of the female heroine, Pamphilia, who fails to escape a maze but nevertheless
    engages her experience within it as a source of knowledge. 53. Galloway 2013a,
    29. 54. van Dijck 2012. 55. “Usage Stats for Europeana Collections,”
    _EuropeanaPro,_ usage-statistics>. 56. Joris Pekel, “The Europeana Statistics Dashboard is
    here,” _EuropeanaPro_ , April 6, 2016, /introducing-the-europeana-statistics-dashboard>. 57. Bates 2002, 32. 58. Veel
    2003, 154. 59. Deleuze 2013, 56. 60. Interview with professor of library and
    information science working with Europeana, Berlin, Germany, 2011. 61. Borges
    mused upon the possible horrendous implications of such a lack, recounting two
    labyrinthine scenarios he once imagined: “In the first, a man is supposed to
    be making his way through the dusty and stony corridors, and he hears a
    distant bellowing in the night. And then he makes out footprints in the sand
    and he knows that they belong to the Minotaur, that the minotaur is after him,
    and, in a sense, he, too, is after the minotaur. The Minotaur, of course,
    wants to devour him, and since his only aim in life is to go on wandering and
    wandering, he also longs for the moment. In the second sonnet, I had a still
    more gruesome idea—the idea that there was no minotaur—that the man would go
    on endlessly wandering. That may have been suggested by a phrase in one of
    Chesterton’s Father Brown books. Chesterton said, ‘What a man is really afraid
    of is a maze without a center.’ I suppose he was thinking of a godless
    universe, but I was thinking of the labyrinth without a minotaur. I mean, if
    anything is terrible, it is terrible because it is meaningless.” Borges and
    Dembo 1970, 319. 62. Borges actually found a certain pleasure in the lack of
    order, however, noting that “I not only feel the terror … but also, well, the
    pleasure you get, let’s say, from a chess puzzle or from a good detective
    novel.” Ibid. 63. Serendib, also spelled Serendip (Arabic Sarandīb), was the
    Persian/Arabic word for the island of Sri Lanka, recorded in use as early as
    AD 361. 64. Letter to Horace Mann, 28 January 1754, in _Walpole’s
    Correspondence_ , vol. 20, 407–411. 65. As Robert Merton and Elinor Barber
    note, it first made it into the OED in 1912 (Merton and Barber 2004, 72). 66.
    Merton and Barber 2004, 40. 67. Lorraine Daston, “Are You Having Fun Today?,”
    _London Review of Books_ , September 23, 2004. 68. Ibid. 69. Ibid. 70.
    Featherstone 2000, 594. 71. Nancy Lusignan Schulz, “Serendipity in the
    Archive,” _Chronicle of Higher Education_ , May 15, 2011,
    . 72.
    Verhoeven 2016, 18. 73. Caley 2017, 248. 74. Bishop 2016. 75. “Oxford-Google
    Digitization Project Reaches Milestone,” Bodleian Library and Radcliffe
    Camera, March 26, 2009.
    . 76. Timothy
    Hill, David Haskiya, Antoine Isaac, Hugo Manguinhas, and Valentine Charles
    (eds.), _Europeana Search Strategy_ , May 23, 2016,
    .
    77. “DPLAbot,” _Digital Public Library of America_ , .
    78. “Q&A with EuropeanaBot developer,” _EuropeanaPro_ , August 20, 2013,
    . 79. There
    are of course many other examples, some of which offer greater interactivity,
    such as the TroveNewsBot, which feeds off of the National Library of
    Australia’s 370 million resources, allowing the user to send the bot any text
    to get the bot digging through the Trove API for a matching result. 80.
    Serendip-o-matic, n.d. . 81. Tristram Hunt,
    “Online Is Fine, but History Is Best Hands On,” _Guardian_ July 3, 2011,
    library-google-history>. 82. Davison 2009. 83. William McKeen, “Serendipity,”
    _New York Times,_ (n.d.),
    . 84. Carr 2006.
    We find this argument once again in Aleks Krotoski, who highlights the man-
    machine dichotomy, noting that the “controlled binary mechanics” of the search
    engine actually make serendipitous findings “more challenging to find” because
    “branching pathways of possibility are too difficult to code and don’t scale”
    (Aleks Krotoski, “Digital serendipity: be careful what you don't wish for,”
    _Guardian_ , August 11, 2011,
    profiling-aleks-krotoski>.) 85. Lorraine Daston, “Are You Having Fun Today?,”
    _London Review of Books_ , September 23, 2004. 86. Dan Cohen, “Planning for
    Serendipity,” _DPLA_ News and Blog, February 7, 2014,
    . 87. Shannon
    Mattern, “Sharing Is Tables,” _e-flux_ , October 17, 2017,
    furniture-for-digital-labor/>. 88. Greg Lindsay, “Engineering Serendipity,”
    _New York Times_ , April 5, 2013,
    serendipity.html>. 89. Gillespie 2017. 90. See, for instance, Milena Popova,
    “Facebook Awards History App that Will Use Europeana’s Collections,”
    _EuropeanaPro_ , March 7, 2014, awards-history-app-that-will-use-europeanas-collections>. 91. Doob 1994. 92.
    “Europeana Strategy Impact 2015–2020,”
    .
    93. Ping-Huang 2016, 53. 94. Helmond 2015. 95. Ian Bogost and Nick Montfort.
    2009. “Platform Studies: Frequently Asked Questions.” _Proceedings of the
    Digital Arts and Culture Conference_.
    . 96. Srnicek 2017; Helmond 2015;
    Gillespie 2010. 97. “While a portal can present its aggregated content in a
    way that invites exploration, the experience is always constrained—pre-
    determined by a set of design decisions about what is necessary, relevant and
    useful. Platforms put those design decisions back into the hands of users.
    Instead of a single interface, there are innumerable ways of interacting with
    the data.” See Tim Sherratt, “From Portals to Platforms; Building New
    Frameworks for User Engagement,” National Library of Australia, November 5,
    2013, platform>. 98. “Europeana Strategy Impact 2015–2020,”
    .
    99. Gillespie 2010, 349. 100. Fjeld and Fehn 2009, 108. 101. Gießmann 2015,
    126. 102. See, for example, C. S. Lewis’s writings on Calvinism in _English
    Literature in the Sixteenth Century Excluding Drama_. Or how about
    Presbyterian minister Lyman Beecher, who once noted in a sermon: “in organizing
    any body, in philosophy, religion, or politics, you must _have_ a platform;
    you must stand somewhere; on some solid ground.” Such a platform could gather
    people, so that they could “settle on principles just as … bees settle in
    swarms on the branches, fragrant with blossoms and flowers.” See Beecher 2012,
    21. 103. “Platform, in architecture, is a row of beams which support the
    timber-work of a roof, and lie on top of the wall, where the entablature ought
    to be raised. This term is also used for a kind of terrace … from whence a
    fair prospect may be taken of the adjacent country.” See Nicholson 1819. 104.
    As evangelist Calvin Colton noted in his work on the US’s public economy, “We
    find American capital and labor occupying a very different position from that
    of the same things in Europe, and that the same treatment applied to both,
    would not be beneficial to both. A system which is good for Great Britain may
    be ruinous to the United States. … Great Britain is the only nation that is
    prepared for Free Trade … on a platform of universal Free Trade, the advanced
    position of Great Britain … in her skill, machinery, capital and means of
    commerce, would make all the tributary to her; and on the same platform, this
    distance between her and other nations … instead of diminishing, would be
    forever increasing, till … she would become the focus of the wealth, grandeur,
    and power of the world.” 105. Deleuze and Guattari 1987. 106. Solá-Morales
    1999, 86. 107. Budds 2016. 108. Gillespie 2010, 351. 109. Gillespie 2010, 350.
    Indeed, it might be worth resurrecting the otherwise-extinct notion of
    “plotform” to reinscribe agency and planning into the word. See Tawa 2012.
    110. As Olga Goriunova points out, platforms have historically played a
    significant role in creative processes as a “set of shared resources that
    might be material, organizational, or intentional that inscribe certain
    practices and approaches in order to develop collaboration, production, and
    the capacity to generate change.” Indeed, platforms form integral
    infrastructures in the critical art world for alternative systems of
    organization and circulation that could be mobilized to “disrupt
    institutional, representational, and social powers.” See Olga Goriunova, _Art
    Platforms and Cultural Production on the Internet_ (New York: Routledge,
    2012), 8. 111. Trebor Scholz, “Platform Cooperativism vs. the Sharing
    Economy,” _Medium_ , December 5, 2016, cooperativism-vs-the-sharing-economy-2ea737f1b5ad>. 112. Srnicek 2017, 28–29.
    113. Nakamura 2013, 243. 114. John Zysman and Martin Kennedy, “The Next Phase
    in the Digital Revolution: Platforms, Automation, Growth, and Employment,”
    _ETLA Reports_ 61, October 17, 2016, /ETLA-Raportit-Reports-61.pdf>. 115. Europeana’s privacy page explicitly notes
    this, reminding the user that, “this site may contain links to other websites
    that are beyond our control. This privacy policy applies solely to the
    information you provide while visiting this site. Other websites which you
    link to may have privacy policies that are different from this Privacy
    Policy.” See “Privacy and Terms,” _Europeana Collections_ ,
    .

    # 6 Concluding Remarks

    I opened this book claiming that the notion of mass digitization has shifted
    from a professional concept to a cultural political phenomenon. If the former
    denotes a technical way of duplicating analog material in digital form, mass
    digitization as a cultural practice is a much more complex apparatus. On the
    one hand, it offers the simple promise of heightened public and private access
    to—and better preservation of—the past; on the other, it raises significant
    political questions about ethics, politics, power, and care in the digital
    sphere. I locate the emergence of these questions within the infrastructures
    of mass digitization and the ways in which they not only offer new ways of
    reading, viewing, and structuring cultural material, but also new models of
    value and its extraction, and new infrastructures of control. The political
    dynamic of this restructuring, I suggest, may meaningfully be referred to as a
    form of infrapolitics, insofar as the political work of mass digitization
    often happens at the level of infrastructure, in the form of standardization,
    dissent, or both. While mass digitization entwines the cultural politics of
    analog artifacts and institutions with the infrapolitical logics of the new
    digital economies and technologies, there is no clear-cut distinction
    between the analog and digital realms in this process. Rather, paraphrasing N.
    Katherine Hayles, I suggest that mass digitization, like a Janus-figure,
    “looks to past and future, simultaneously reinforcing and undermining both.”1

    A persistent challenge in the study of mass digitization is the mutability of
    the analytical object. The unstable nature of cultural memory archives is not
    a new phenomenon. As Derrida points out, they have always been haunted by an
    unintended instability, which he calls “archive fever.” Yet, mass digitization
    appears to intensify this instability even further, both in its material and
    cultural instantiations. Analog preservation practices that seek to stabilize
    objects are in the digital realm replaced with dynamic processes of content
    migration and software updates. Cultural memory objects become embedded in
    what Wendy Chun has referred to as the enduring ephemerality of the digital as
    well as the bleeding edge of obsolescence.2

    Indeed, from the moment when the seed for this book was first planted to the
    time of its publication, the landscape of mass digitization, and the political
    battles waged on its maps, has changed considerably. Google Books—which a
    decade ago attracted the attention, admiration, and animosity of all—recently
    metamorphosed from a giant flood to a quiet trickle. After a spectacle of
    press releases on quantitative milestones, epic legal battles, and public
    criticisms, Google apparently lost interest in Google Books. Google’s gradual
    abandonment of the project resembled more an act of prolonged public ghosting
    than a clear-cut break-up, leaving the public to read in between the lines
    about where the company was headed: scanning activities dwindled; the Google
    Books blog closed along with its Twitter feed; press releases dried up; staff
    was laid off; and while scanning activities are still ongoing, they are
    limited to works in the public domain, changing the scale considerably.3 One
    commentator diagnosed the change of strategy as the demise of “the greatest
    humanistic project of our time.”4 Others acknowledged in less dramatic terms
    that while Google’s scanning activities may have stopped, its legacy lives on
    and is still put to active use.5

    In the present context, the important point to make is that a quiet life does
    not necessarily equal death. Indeed, this is the lesson we learn from
    attending to the subtle workings of infrastructure: the politics of
    infrastructure is the politics of what goes on behind the curtains, not only
    what is launched to the front page. Thus, as one engineer notes when
    confronted with the fate of Google Books, “We’re not focused on shiny features
    and things that are very visible to users. … It’s more like behind-the-scenes
    work and perfecting the technology—acquiring content, processing it properly
    so that we can view the entire book online, and adjusting the search
    algorithm.”6 This is a timely reminder that any analysis of the infrapolitics
    of mass digitization has to tend not only to the visible and loud politics of
    construction, but also the quiet and ongoing politics of infrastructure
    maintenance. It makes no sense to write an obituary for Google Books if the
    infrastructure is still at work. Moreover, the assemblatic nature of mass
    digitization also demands that we do not stop at the immediate borders of a
    project when making analytical claims about their infrapolitics. Thus, while
    Google Books may have stopped in its tracks, other trains of mass digitization
    have pulled up instead, carrying the project of mass digitization forward
    toward new, divergent, and experimental sites. Google’s different engagements
    with cultural digitization show that an analysis of the politics of Google’s
    memory work needs to operate with an assemblatic method, rather than a
    delineating approach.7 Europeana and DPLA are also mutable analytical objects,
    both in economic and cultural form. Europeana, for instance, leads a precarious
    life from one EU budget framework to the next, and its cultural identity and
    software instantiations have transformed from a digital library, to a portal,
    to a platform over the course of only a few decades. Last, but not least,
    shadow libraries are mediating and multiplying cultural memory objects from
    servers and mirror links that sometimes die just as quickly as they emerged.
    The question of institutionalization matters greatly in this respect,
    outlining what we might call a spectrum of contingency. If a mass digitization
    project lives in the margins of institutions, such as in the case of many
    shadow libraries, its infrastructure is often fraught with uncertainties. Less
    precarious, but nonetheless tumultuous, are the corporate institutions with
    their increasingly short market-driven lifespans. And, at the other end of the
    spectrum, we find mass digitization projects embedded in bureaucratic
    apparatuses whose lumbering budget processes provide publicly funded mass
    digitization projects with more stable infrastructures.

    The temporal dimension of mass digitization projects also raises important
    questions about the horizon of cultural memory in material terms. Should mass
    digitization, one might ask, also spell the withering of analog cultural
    memory? This question seems relevant not least in cases where institutions
    consider digitization as a form of preservation that allows them to discard
    analog artifacts once digitized. In digital form, we further have to contend
    with a new temporal horizon of cultural memory itself, based not only on
    remembrance but on anticipation, in the manner of “If you liked this, you
    might also like ….” Thus, while cultural memory objects link to objects of the
    past, mass digitized cultural memory also gives rise to new methods of
    prediction and preemption, for instance in the form of personalization. In
    this anticipatory regime, cultural memory becomes subject to perpetual
    calculatory activities, processing affects, and activities in terms of
    likelihoods and probabilistic outcomes.

    Thus, cultural memory has today become embedded in new glocalized
    infrastructures. On the one hand, these infrastructures present novel
    opportunities. Cultural optimists have suggested that mass digitization has
    the potential to give rise to new cosmopolitan public spheres untethered from
    the straitjackets of national territorializing forces. On the other hand,
    critics argue that there is little evidence that cosmopolitan dynamics are in
    fact at work. Instead, new colonial and neoliberal platforms arise from a
    complex infrastructural apparatus of private and public institutions and
    become shaped by political, financial, and social struggles over
    representation, control, and ownership of knowledge.

    In summary, it is obvious that the scale of mass digitization, public and
    private, licit and illicit, has transformed how we engage with texts, cultural
    works, and cultural memory. People today have instant access to a wealth of
    works that would previously have required large amounts of money, as well as
    effort, to engage with. Most of us enjoy the new cultural freedoms we have
    been given to roam the archives, collecting and exploring oddities along the
    way, and making new connections between works that would previously have been
    held separate by taxonomy, geography, and time in the labyrinthine material
    and social infrastructures of cultural memory.

    A special attraction of mass digitization no doubt lies in its unfathomable
    scale and linked nature, and the fantasy and “spectacle of collecting.”8 The
    new cultural environment allows the user to accelerate the pace of information
    by accessing key works instantly as well as idly rambling in the exotic back
    alleys of digitized culture. Mass digitized archives can be explored to
    functional, hedonistic, and critical ends (sometimes all at the same time),
    and can be used to exhume forgotten works, forgotten authors, and forgotten
    topics. Within this paradigm, the user takes center stage—at least
    discursively. Suddenly, a link made between a porn magazine and a Courbet
    painting could well be a valued cultural connection instead of a frowned-upon
    transgression in the halls of high culture. Users do not just download books;
    they also upload new folksonomies, “ego-documents,” and new cultural
    constellations, which are all welcomed in the name of “citizen science.”
    Digitization also infuses texts with new life due to its new connective
    properties that allow readers and writers to intimately and
    exhibitionistically interact around cultural works, and it provides new ways
    of engaging with texts as digital reading migrates toward service-based rather
    than hardware-based models of consumption. Digitization allows users to
    digitally collect works themselves and indulge in alluring archival riches in
    new ways.

    But mass digitization also gives rise to a range of new ethical, political,
    aesthetic, and methodological questions concerning the spatio-temporality,
    ownership, territoriality, re-use, and dissemination of cultural memory
    artifacts. Some of those dimensions have been discussed in detail in the
    present work and include questions about digital labor, platformization,
    management of visibility, ownership, copyright, and other new forms of control
    and de- and recentralization and privatization processes. Others have only
    been alluded to but continue to gain in relevance as processes of mass
    digitization excavate and make public sensitive and contested archival
    material. Thus, as the cultural memories and artifacts of indigenous
    populations, colonized territories and other marginalized groups are brought
    online, as well as artifacts that attest to the violent regimes of colonialism
    and patriarchy, an attendant need has emerged for an ethics of care that goes
    beyond simplistic calls for the right to access, to instead attend to the
    sensitivity of the digitized material and the ways in which we encounter these
    materials.

    Combined, these issues show that mass digitization is far from a
    straightforward technical affair. Rather, the productive dimensions of mass
    digitization emerge from the rubble of disruptive and turbulent political
    processes that violently dislocate established frontiers and power dynamics
    and give rise to new ones that are yet to be interpreted. Within these
    turbulent processes, the familiar narratives of empowered users collecting and
    connecting works and ideas in new and transgressive ways all too often leave
    out the simultaneous and integrated story of how the labyrinthine
    infrastructures of mass digitization also write themselves on the backs of
    the users, collecting them and their thoughts in the process, and subjecting them
    to new economic logics and political regimes. As Lisa Nakamura reminds us, “by
    availing ourselves of its networked virtual bookshelves to collect and display
    our readerliness in a postprint age, we have become objects to be collected.”9
    Thus, as we gather vintage images on Pinterest, collect books in Google Books,
    and retweet sound files from Europeana, we would do well not only to question the
    cultural logic and ethics of these actions but also to remember that as we
    collect and connect, we are also ourselves collected and connected.

    If the power of mass digitization happens at the level of infrastructure,
    political resistance will have to take the form of infrastructural
    intervention. We play a role in the formulation of the ethics of such
    interventions, and as such we have to be willing to abandon the predominant
    tropes of scale, access, and acceleration in favor of an infrapolitics of
    care—a politics that offers opportunities for mindful, slow, and focused
    encounters.

    ## Notes

    1. Hayles 1999, 17. 2. Chun. 2008; Chun 2017. 3. Murrell 2017. 4. James
    Somers, “Torching the Modern-Day Library of Alexandria,” _The Atlantic_ ,
    April 20, 2017. 5. Jennifer Howard, “What Happened to Google’s Effort to Scan
    Millions of University Library Books?,” _EdSurge_ , August 10, 2017,
    scan-millions-of-university-library-books>. 6. Scott Rosenberg, “How Google
    Books Got Lost,” _Wired_ , November 4, 2017, /how-google-book-search-got-lost>. 7. What to make, for instance, of the new
    trend of employing Google’s neural networks to find one’s museum doppelgänger
    from the company’s image database? Or the fact that Google Cultural Institute
    is consistently turning out new cultural memory hacks such as its cardboard VR
    glasses, its indoor mapping of museum spaces, and its gigapixel Art Camera
    which reproduces artworks in uncanny detail. Or the expansion of their remit
    from cultural memory institutions to also encompass natural history museums?
    See, for example, Adrian Chen, “The Google Arts & Culture App and the Rise of
    the ‘Coded Gaze,’” _New Yorker_ , January 26, 2018,
    the-rise-of-the-coded-gaze-doppelganger>. 8. Nakamura 2013, 240. 9. Ibid.,
    241.

    # References

    1. Abbate, Janet. 2012. _Recoding Gender: Women’s Changing Participation in Computing_. Cambridge, MA: MIT Press.
    2. Abrahamsen, Rita, and Michael C. Williams. 2011. _Security beyond the State: Private Security in International Politics_. Cambridge: Cambridge University Press.
    3. Adler-Nissen, Rebecca, and Thomas Gammeltoft-Hansen. 2008. _Sovereignty Games: Instrumentalizing State Sovereignty in Europe and Beyond_. New York: Palgrave Macmillan.
    4. Agre, Philip E. 2000. “The Market Logic of Information.” _Knowledge, Technology & Policy_ 13 (3): 67–77.
    5. Aiden, Erez, and Jean-Baptiste Michel. 2013. _Uncharted: Big Data as a Lens on Human Culture_. New York: Riverhead Books.
    6. Ambati, Vamshi, N. Balakrishnan, Raj Reddy, Lakshmi Pratha, and C. V. Jawahar. 2006. “The Digital Library of India Project: Process, Policies and Architecture.” _CiteSeer_. .
    7. Amoore, Louise. 2013. _The Politics of Possibility: Risk and Security beyond Probability_. Durham, NC: Duke University Press.
    8. Anderson, Ben, and Colin McFarlane. 2011. “Assemblage and Geography.” _Area_ 43 (2): 124–127.
    9. Anderson, Benedict. 1991. _Imagined Communities: Reflections on the Origin and Spread of Nationalism_. London: Verso.
    10. Arms, William Y. 2000. _Digital Libraries_. Cambridge, MA: MIT Press.
    11. Arvanitakis, James, and Martin Fredriksson. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
    12. Association of Research Libraries. 2009. “ARL Encourages Members to Refrain from Signing Nondisclosure or Confidentiality Clauses.” _ARL News_ , June 5.
    13. Auletta, Ken. 2009. _Googled: The End of the World As We Know It_. New York: Penguin Press.
    14. Baker, Nicholson. 2002. _The Double Fold: Libraries and the Assault on Paper_. London: Vintage Books.
    15. Barthes, Roland. 1977. “From Work to Text” and “The Grain of the Voice.” In _Image Music Text_ , ed. Roland Barthes. London: Fontana Press.
    16. Barthes, Roland. 1981. _Camera Lucida: Reflections on Photography_. New York: Hill and Wang.
    17. Bates, David W. 2002. _Enlightenment Aberrations: Error and Revolution in France_. Ithaca, NY: Cornell University Press.
    18. Batt, William H. 1984. “Infrastructure: Etymology and Import.” _Journal of Professional Issues in Engineering_ 110 (1): 1–6.
    19. Bawden, David, and Lyn Robinson. 2009. “The Dark Side of Information: Overload, Anxiety and Other Paradoxes and Pathologies.” _Journal of Information Science_ 35 (2): 180–191.
    20. Beck, Ulrich. 1996. “World Risk Society as Cosmopolitan Society? Ecological Questions in a Framework of Manufactured Uncertainties.” _Theory, Culture & Society_ 13 (4): 1–32.
    21. Beecher, Lyman. 2012. _Faith Once Delivered to the Saints: A Sermon Delivered at Worcester, Mass., Oct. 15, 1823._ Farmington Hills, MI: Gale, Sabin Americana.
    22. Belder, Lucky. 2015. “Cultural Heritage Institutions as Entrepreneurs.” In _Cultivate!: Cultural Heritage Institutions, Copyright & Cultural Diversity in the European Union & Indonesia_, eds. M. de Cock Buning, R. W. Bruin, and Lucky Belder, 157–196. Amsterdam: DeLex.
    23. Benjamin, Walter. 1985a. “Central Park.” _New German Critique, NGC_ 34 (Winter): 32–58.
    24. Benjamin, Walter. 1985b. “The flaneur.” In _Charles Baudelaire: a Lyric Poet in the Era of High Capitalism_. Translated by Harry Zohn. London: Verso.
    25. Benjamin, Walter. 1999. _The Arcades Project_. Cambridge, MA: Harvard University Press.
    26. Béquet, Gaëlle. 2009. _Digital Library as a Controversy: Gallica vs Google_. Proceedings of the 9th Conference Libraries in the Digital Age (Dubrovnik, Zadar, May 25–29, 2009). .
    27. Berardi, Franco, Gary Genosko, and Nicholas Thoburn. 2011. _After the Future_. Edinburgh, UK: AK Press.
    28. Berk, Hillary L. 2015. “The Legalization of Emotion: Managing Risk by Managing Feelings in Contracts for Surrogate Labor.” _Law & Society Review_ 49 (1): 143–177.
    29. Bishop, Catherine. 2016. “The Serendipity of Connectivity: Piecing Together Women’s Lives in the Digital Archive.” _Women’s History Review_ 26 (5): 766–780.
    30. Bivort, Olivier. 2013. “ _Le romantisme et la ‘langue de Voltaire_.’” Revue Italienne d’études Françaises, 3. DOI: 10.4000/rief.211.
    31. Bjelić, Dušan I. 2016. _Intoxication, Modernity, and Colonialism: Freud’s Industrial Unconscious, Benjamin’s Hashish Mimesis_. New York: Palgrave Macmillan.
    32. Blair, Ann, and Peter Stallybrass. 2010. “Mediating Information, 1450–1800”. In _This Is Enlightenment_ , eds. Clifford Siskin and William B. Warner. Chicago: University of Chicago Press.
    33. Blair, Ann. 2003. “Reading Strategies for Coping with Information Overload ca. 1550–1700.” _Journal of the History of Ideas_ 64 (1): 11–28.
    34. Bloom, Harold. 2009. _The Labyrinth_. New York: Bloom’s Literary Criticism.
    35. Bodó, Balazs. 2015. “The Common Pathways of Samizdat and Piracy.” In _Samizdat: Between Practices and Representations_ , ed. V. Parisi. Budapest: CEU Institute for Advanced Study. Available at SSRN; .
    36. Bodó, Balazs. 2016. “Libraries in the Post-Scarcity Era.” In _Copyrighting Creativity: Creative Values, Cultural Heritage Institutions and Systems of Intellectual Property_ , ed. Helle Porsdam. New York: Routledge.
    37. Bogost, Ian, and Nick Montfort. 2009. “Platform Studies: Frequently Asked Questions.” _Proceedings of the Digital Arts and Culture Conference_. .
    38. Borges, Jorge Luis. 1978. “An Autobiographical Essay.” In _The Aleph and Other Stories, 1933–1969: Together with Commentaries and an Autobiographical Essay_. New York: E. P. Dutton.
    39. Borges, Jorge Luis. 2001. “The Total Library.” In _The Total Library: Non-fiction 1922–1986_. London: Penguin.
    40. Borges, Jorge Luis, and L. S. Dembo. 1970. “An Interview with Jorge Luis Borges.” _Contemporary Literature_ 11 (3): 315–325.
    41. Borghi, Maurizio. 2012. “Knowledge, Information and Values in the Age of Mass Digitisation.” In _Value: Sources and Readings on a Key Concept of the Globalized World_ , ed. Ivo de Gennaro. Leiden, the Netherlands: Brill.
    42. Borghi, Maurizio, and Stavroula Karapapa. 2013. _Copyright and Mass Digitization: A Cross-Jurisdictional Perspective_. Oxford: Oxford University Press.
    43. Borgman, Christine L. 2015. _Big Data, Little Data, No Data: Scholarship in the Networked World_. Cambridge, MA: MIT Press.
    44. Bottando, Evelyn. 2012. _Hedging the Commons: Google Books, Libraries, and Open Access to Knowledge_. Iowa City: University of Iowa.
    45. Bowker, Geoffrey C., Karen Baker, Florence Millerand, and David Ribes. 2010. “Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment.” In _The International Handbook of Internet Research_ , eds. Hunsinger Lisbeth Klastrup Jeremy and Matthew Allen. Dordrecht, the Netherlands: Springer.
    46. Bowker, Geoffrey C, and Susan L. Star. 1999. _Sorting Things Out: Classification and Its Consequences_. Cambridge, MA: MIT Press.
    47. Brin, Sergey. 2009. “A Library to Last Forever.” _New York Times_ , October 8.
    48. Brin, Sergey, and Lawrence Page. 1998. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” _Computer Networks and ISDN Systems_ 30 (1–7): 107. .
    49. Buckholtz, Alison. 2016. “New Ideas for Financing American Infrastructure: A Conversation with Henry Petroski.” _World Bank Group, Public-Private Partnerships Blog_ , March 29.
    50. Buck-Morss, Susan. 2006. “The flaneur, the Sandwichman and the Whore: The Politics of Loitering.” _New German Critique_ (39): 99–140.
    51. Budds, Diana. 2016. “Rem Koolhaas: ‘Architecture Has a Serious Problem Today.’” _CoDesign_ 21 (May). .
    52. Burkart, Patrick. 2014. _Pirate Politics: The New Information Policy Contests_. Cambridge, MA: MIT Press.
    53. Burton, James, and Daisy Tam. 2016. “Towards a Parasitic Ethics.” _Theory, Culture & Society_ 33 (4): 103–125.
    54. Busch, Lawrence. 2011. _Standards: Recipes for Reality_. Cambridge, MA: MIT Press.
    55. Caley, Seth. 2017. “Digitization for the Masses: Taking Users Beyond Simple Searching in Nineteenth-Century Collections Online.” _Journal of Victorian Culture : JVC_ 22 (2): 248–255.
    56. Cadogan, Garnette. 2016. “Walking While Black.” Literary Hub. July 8. .
    57. Callon, Michel, Madeleine Akrich, Sophie Dubuisson-Quellier, Catherine Grandclément, Antoine Hennion, Bruno Latour, Alexandre Mallard, et al. 2016. _Sociologie des agencements marchands: Textes choisis_. Paris: Presses des Mines.
    58. Cameron, Fiona, and Sarah Kenderdine. 2007. _Theorizing Digital Cultural Heritage: A Critical Discourse_. Cambridge, MA: MIT Press.
    59. Canepi, Kitti, Becky Ryder, Michelle Sitko, and Catherine Weng. 2013. _Managing Microforms in the Digital Age_. Association for Library Collections & Technical Services. .
    60. Carey, Quinn Ann. 2015, “Maksim Moshkov and lib.ru: Russia’s Own ‘Gutenberg.’” _TeleRead: Bring the E-Books Home_. December 5. .
    61. Carpentier, Nico. 2011. _Media and Participation: A Site of Ideological-Democratic Struggle_. Bristol, UK: Intellect.
    62. Carr, Nicholas. 2006. “The Engine of Serendipity.” _Rough Type_ , May 18.
    63. Cassirer, Ernst. 1944. _An Essay on Man: An Introduction to a Philosophy of Human Culture_. New Haven, CT: Yale University Press.
    64. Castells, Manuel. 1996a. _The Rise of the Network Society_. Malden, MA: Blackwell Publishers.
    65. Castells, Manuel. 1996b. _The Informational City: Information Technology, Economic Restructuring, and the Urban-Regional Process_. Cambridge: Blackwell.
    66. Castells, Manuel, and Gustavo Cardoso. 2012. “Piracy Cultures: Editorial Introduction.” _International Journal of Communication_ 6 (1): 826–833.
    67. Chabal, Emile. 2013. “The Rise of the Anglo-Saxon: French Perceptions of the Anglo-American World in the Long Twentieth Century.” _French Politics, Culture & Society_ 31 (1): 24–46.
    68. Chartier, Roger. 2004. “Languages, Books, and Reading from the Printed Word to the Digital Text.” _Critical Inquiry_ 31 (1): 133–152.
    69. Chen, Ching-chih. 2005. “Digital Libraries and Universal Access in the 21st Century: Realities and Potential for US-China Collaboration.” In _Proceedings of the 3rd China-US Library Conference, Shanghai, China, March 22–25_ , 138–167. Beijing: National Library of China.
    70. Chrisafis, Angelique. 2008. “Dante to Dialects: EU’s Online Renaissance.” _Guardian_ , November 21. .
    71. Chun, Wendy H. K. 2006. _Control and Freedom: Power and Paranoia in the Age of Fiber Optics_. Cambridge, MA: MIT Press.
    72. Chun, Wendy Hui Kyong. 2008. “The Enduring Ephemeral, or the Future Is a Memory.” _Critical Inquiry_ 35 (1): 148–171.
    73. Chun, Wendy H. K. 2017. _Updating to Remain the Same_. Cambridge, MA: MIT Press.
    74. Clarke, Michael Tavel. 2009. _These Days of Large Things: The Culture of Size in America, 1865–1930_. Ann Arbor: University of Michigan Press.
    75. Cohen, Jerome Bernard. 2006. _The Triumph of Numbers: How Counting Shaped Modern Life_. New York: W.W. Norton.
    76. Conway, Paul. 2010. “Preservation in the Age of Google: Digitization, Digital Preservation, and Dilemmas.” _The Library Quarterly: Information, Community, Policy_ 80 (1): 61–79.
    77. Courant, Paul N. 2006. “Scholarship and Academic Libraries (and Their Kin) in the World of Google.” _First Monday_ 11 (8).
    78. Coyle, Karen. 2006. “Mass Digitization of Books.” _Journal of Academic Librarianship_ 32 (6): 641–645.
    79. Darnton, Robert. 2009. _The Case for Books: Past, Present, and Future_. New York: Public Affairs.
    80. Daston, Lorraine. 2012. “The Sciences of the Archive.” _Osiris_ 27 (1): 156–187.
    81. Davison, Graeme. 2009. “Speed-Relating: Family History in a Digital Age.” _History Australia_ 6 (2). .
    82. Deegan, Marilyn, and Kathryn Sutherland. 2009. _Transferred Illusions: Digital Technology and the Forms of Print_. Farnham, UK: Ashgate.
    83. de la Durantaye, Katharine. 2011. “H Is for Harmonization: The Google Book Search Settlement and Orphan Works Legislation in the European Union.” _New York Law School Law Review_ 55 (1): 157–174.
    84. DeLanda, Manuel. 2006. _A New Philosophy of Society: Assemblage Theory and Social Complexity_. London: Continuum.
    85. Deleuze, Gilles. 1997. “Postscript on Control Societies.” In _Negotiations 1972–1990_ , 177–182. New York: Columbia University Press.
    86. Deleuze, Gilles. 2013. _Difference and Repetition_. London: Bloomsbury Academic.
    87. Deleuze, Gilles, and Félix Guattari. 1987. _A Thousand Plateaus: Capitalism and Schizophrenia_. Minneapolis: University of Minnesota Press.
    88. DeNardis, Laura. 2011. _Opening Standards: The Global Politics of Interoperability_. Cambridge, MA: MIT Press.
    89. DeNardis, Laura. 2014. “The Social Media Challenge to Internet Governance.” In _Society and the Internet: How Networks of Information and Communication Are Changing Our Lives_ , eds. Mark Graham and William H. Dutton. Oxford: Oxford University Press.
    90. Derrida, Jacques. 1996. _Archive Fever: A Freudian Impression_. Chicago: University of Chicago Press.
    91. Derrida, Jacques. 2005. _Paper Machine_. Stanford, CA: Stanford University Press.
    92. Dewey, Melvin. 1926. “Our Next Half-Century.” _Bulletin of the American Library Association_ 20 (10): 309–312.
    93. Dinshaw, Carolyn. 2012. _How Soon Is Now?: Medieval Texts, Amateur Readers, and the Queerness of Time_. Durham, NC: Duke University Press.
    94. Doob, Penelope Reed. 1994. _The Idea of the Labyrinth: From Classical Antiquity Through the Middle Ages_. Ithaca, NY: Cornell University Press.
    95. Dörk, Marian, Sheelagh Carpendale, and Carey Williamson. 2011. “The Information flaneur: A Fresh Look at Information Seeking.” _Conference on Human Factors in Computing Systems—Proceedings_ , 1215–1224.
    96. Doward, Jamie. 2009. “Angela Merkel Attacks Google’s Plans to Create a Global Online Library.” _Guardian_ , October 11. .
    97. Duguid, Paul. 2007. “Inheritance and Loss? A Brief Survey of Google Books.” _First Monday_ 12 (8). .
    98. Earnshaw, Rae A., and John Vince. 2007. _Digital Convergence: Libraries of the Future_. London: Springer.
    99. Easley, David, and Jon Kleinberg. 2010. _Networks, Crowds, and Markets: Reasoning About a Highly Connected World_. New York: Cambridge University Press.
    100. Easterling, Keller. 2014. _Extrastatecraft: The Power of Infrastructure Space_. Verso.
    101. Eckstein, Lars, and Anja Schwarz. 2014. _Postcolonial Piracy: Media Distribution and Cultural Production in the Global South_. London: Bloomsbury.
    102. Eco, Umberto. 2014. _The Name of the Rose_. Boston: Mariner Books.
    103. Edwards, Paul N. 2003. “Infrastructure and Modernity: Force, Time and Social Organization in the History of Sociotechnical Systems.” In _Modernity and Technology_ , eds. Thomas J. Misa, Philip Brey, and Andrew Feenberg. Cambridge, MA: MIT Press.
    104. Edwards, Paul N., Steven J. Jackson, Melissa K. Chalmers, Geoffrey C. Bowker, Christine L. Borgman, David Ribes, Matt Burton, and Scout Calvert. 2012. _Knowledge Infrastructures: Intellectual Frameworks and Research Challenges_. Report of a workshop sponsored by the National Science Foundation and the Sloan Foundation University of Michigan School of Information, May 25–28. .
    105. Ensmenger, Nathan. 2012. _The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise_. Cambridge, MA: MIT Press.
    106. Eyal, Nir. 2014. _Hooked: How to Build Habit-Forming Products_. Princeton, NJ: Princeton University Press.
    107. Featherstone, Mike. 1998. “The flaneur, the City and Virtual Public Life.” _Urban Studies (Edinburgh, Scotland)_ 35 (5–6): 909–925.
    108. Featherstone, Mike. 2000. “Archiving Cultures.” _British Journal of Sociology_ 51 (1): 161–184.
    109. Fiske, John. 1987. _Television Culture_. London: Methuen.
    110. Fjeld, Per Olaf, and Sverre Fehn. 2009. _Sverre Fehn: The Pattern of Thoughts_. New York: Monacelli Press.
    111. Flyverbom, Mikkel, Paul M. Leonardi, Cynthia Stohl, and Michael Stohl. 2016. “The Management of Visibilities in the Digital Age.” _International Journal of Communication_ 10 (1): 98–109.
    112. Foucault, Michel. 2002. _Archaeology of Knowledge_. London: Routledge.
    113. Foucault, Michel. 2004. _Death and the Labyrinth: The World of Raymond Roussel_. Continuum International Publishing Group Ltd.
    114. Foucault, Michel. 2009. _Security, Territory, Population: Lectures at the College de France, 1977–1978_. Basingstoke, UK: Palgrave Macmillan.
    115. Fredriksson, Martin, and James Arvanitakis. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
    116. Freedgood, Elaine. 2013. “Divination.” _PMLA_ 128 (1): 221–225.
    117. Fuchs, Christian. 2014. _Digital Labour and Karl Marx_. New York: Routledge.
    118. Fuller, Matthew, and Andrew Goffey. 2012. _Evil Media_. Cambridge, MA: MIT Press.
    119. Galloway, Alexander R. 2013a. _The Interface Effect_. Cambridge: Polity Press.
    120. Galloway Alexander, R. 2013b. “The Poverty of Philosophy: Realism and Post-Fordism.” _Critical Inquiry_ 39 (2): 347–366.
    121. Gardner, Carolyn Caffrey, and Gabriel J. Gardner. 2017. “Fast and Furious (at Publishers): The Motivations behind Crowdsourced Research Sharing.” _College & Research Libraries_ 78 (2): 131–149.
    122. Garrett, Jeffrey. 1999. “Redefining Order in the German Library, 1775–1825.” _Eighteenth-Century Studies_ 33 (1): 103–123.
    123. Gibbon, Peter, and Lasse F. Henriksen. 2012. “A Standard Fit for Neoliberalism.” _Comparative Studies in Society and History_ 54 (2): 275–307.
    124. Giesler, Markus. 2006. “Consumer Gift Systems.” _Journal of Consumer Research_ 33 (2): 283–290.
    125. Gießmann, Sebastian. 2015. _Medien Der Kooperation_. Siegen, Germany: Universitet Verlag.
    126. Gillespie, Tarleton. 2010. “The Politics of ‘Platforms.’” _New Media & Society_ 12 (3): 347–364.
    127. Gillespie, Tarleton. 2017. “Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem.” _Information Communication and Society_ 20 (1): 63–80.
    128. Gladwell, Malcolm. 2000. _The Tipping Point: How Little Things Can Make a Big Difference_. Boston: Little, Brown.
    129. Goldate, Steven. 1996. “The Cyberflaneur: Spaces and Places on the Internet.” _Art Monthly Australia_ 91:15–18.
    130. Goldsmith, Jack L., and Tim Wu. 2006. _Who Controls the Internet?: Illusions of a Borderless World_. New York: Oxford University Press.
    131. Goldsmith, Kenneth. 2007. “UbuWeb Wants to Be Free.” Last modified July 18, 2007. .
    132. Golumbia, David. 2009. _The Cultural Logic of Computation_. Cambridge, MA: Harvard University Press.
    133. Goriunova, Olga. 2012. _Art Platforms and Cultural Production on the Internet_. New York: Routledge.
    134. Gradmann, Stephan. 2009. “Interoperability: A Key Concept for Large Scale, Persistent Digital Libraries.” 1st DL.org Workshop at 13th European Conference on Digital Libraries (ECDL).
    135. Greene, Mark. 2010. “MPLP: It’s Not Just for Processing Anymore.” _American Archivist_ 73 (1): 175–203.
    136. Grewal, David S. 2008. _Network Power: The Social Dynamics of Globalization_. New Haven, CT: Yale University Press.
    137. Hacking, Ian. 1995. _Rewriting the Soul: Multiple Personality and the Sciences of Memory_. Princeton, NJ: Princeton University Press.
    138. Hacking, Ian. 2010. _The Taming of Chance_. Cambridge: Cambridge University Press.
    139. Hagel, John. 2012. _The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion_. New York: Basic Books.
    140. Haggerty, Kevin D, and Richard V. Ericson. 2000. “The Surveillant Assemblage.” _British Journal of Sociology_ 51 (4): 605–622.
    141. Hall, Gary. 2008. _Digitize This Book!: The Politics of New Media, or Why We Need Open Access Now_. Minneapolis: University of Minnesota Press.
    142. Hall, Mark, et al. 2012. “PATHS—Exploring Digital Cultural Heritage Spaces.” In _Theory and Practice of Digital Libraries. TPDL 2012_ , vol. 7489, 500–503. Lecture Notes in Computer Science. Berlin: Springer.
    143. Hall, Stuart, and Fredric Jameson. 1990. “Clinging to the Wreckage: a Conversation.” _Marxism Today_ (September): 28–31.
    144. Hardt, Michael, and Antonio Negri. 2007. _Empire_. Cambridge, MA: Harvard University Press.
    145. Hardt, Michael, and Antonio Negri. 2009. _Commonwealth_. Cambridge, MA: Harvard University Press.
    146. Hartmann, Maren. 1999. “The Unknown Artificial Metaphor or: The Difficult Process of Creation or Destruction.” In _Next Cyberfeminist International_ , ed. Cornelia Sollfrank. Hamburg, Germany: obn. .
    147. Hartmann, Maren. 2004. _Technologies and Utopias: The Cyberflaneur and the Experience of “Being Online.”_ Munich: Fischer.
    148. Hayles, N. Katherine. 1993. “Seductions of Cyberspace.” In _Lost in Cyberspace: Essays and Far-Fetched Tales_ , ed. Val Schaffner. Bridgehampton, NY: Bridge Works Pub. Co.
    149. Hayles, N. Katherine. 2005. _My Mother Was a Computer: Digital Subjects and Literary Texts_. Chicago: University of Chicago Press.
    150. Helmond, Anne. 2015. “The Platformization of the Web: Making Web Data Platform Ready.” _Social Media + Society_ 1 (2). .
    151. Hicks, Marie. 2018. _Programmed Inequality: How Britain Discarded Women Technologists and Lost its Edge in Computing_. Cambridge, MA: MIT Press.
    152. Higgins, Vaughan, and Wendy Larner. 2010. _Calculating the Social: Standards and the Reconfiguration of Governing_. Basingstoke, UK: Palgrave Macmillan.
    153. Holzer, Boris, and P. S. Mads. 2003. “Rethinking Subpolitics: Beyond the ‘Iron Cage’ of Modern Politics?” _Theory, Culture & Society_ 20 (2): 79–102.
    154. Huyssen, Andreas. 2015. _Miniature Metropolis: Literature in an Age of Photography and Film_. Cambridge, MA: Harvard University Press.
    155. Imerito, Tom. 2009. “Electrifying Knowledge.” _Pittsburgh Quarterly Magazine_. Summer. .
    156. Janssen, Olaf. D. 2011. “Digitizing All Dutch Books, Newspapers and Magazines—730 Million Pages in 20 Years—Storing It, and Getting It Out There.” In _Research and Advanced Technology for Digital Libraries_ , eds. S. Gradmann, F. Borri, C. Meghini, and H. Schuldt, 473–476. TPDL 2011. Lecture Notes in Computer Science, vol. 6966. Berlin: Springer.
    157. Jasanoff, Sheila. 2013. “Epistemic Subsidiarity—Coexistence, Cosmopolitanism, Constitutionalism.” _European Journal of Risk Regulation_ 4 (2) 133–141.
    158. Jeanneney, Jean N. 2007. _Google and the Myth of Universal Knowledge: A View from Europe_. Chicago: University of Chicago Press.
    159. Jones, Elisabeth A., and Joseph W. Janes. 2010. “Anonymity in a World of Digital Books: Google Books, Privacy, and the Freedom to Read.” _Policy & Internet_ 2 (4): 43–75.
    160. Jøsevold, Roger. 2016. “A National Library for the 21st Century—Knowledge and Cultural Heritage Online.” _Alexandria_ _:_ _The_ _Journal of National and International Library and Information Issues_ 26 (1): 5–14.
    161. Kang, Minsoo. 2011. _Sublime Dreams of Living Machines: The Automaton in the European Imagination_. Cambridge, MA: Harvard University Press.
    162. Karaganis, Joe. 2011. _Media Piracy in Emerging Economies_. New York: Social Science Research Council.
    163. Karaganis, Joe. 2018. _Shadow Libraries: Access to Educational Materials in Global Higher Education_. Cambridge, MA: MIT Press.
    164. Kaufman, Peter B., and Jeff Ubois. 2007. “Good Terms—Improving Commercial-Noncommercial Partnerships for Mass Digitization.” _D-Lib Magazine_ 13 (11–12). .
    165. Kelley, Robin D. G. 1994. _Race Rebels: Culture, Politics, and the Black Working Class_. New York: Free Press.
    166. Kelly, Kevin. 1994. _Out of Control: The Rise of Neo-Biological Civilization_. Reading, MA: Addison-Wesley.
    167. Kenney, Anne R, Nancy Y. McGovern, Ida T. Martinez, and Lance J. Heidig. 2003. “Google Meets Ebay: What Academic Librarians Can Learn from Alternative Information Providers." D-lib Magazine, 9 (6) .
    168. Kiriya, Ilya. 2012. “The Culture of Subversion and Russian Media Landscape.” _International Journal of Communication_ 6 (1): 446–466.
    169. Koevoets, Sanne. 2013. _Into the Labyrinth of Knowledge and Power: The Library as a Gendered Space in the Western Imaginary_. Utrecht, the Netherlands: Utrecht University.
    170. Kolko, Joyce. 1988. _Restructuring the World Economy_. New York: Pantheon Books.
    171. Komaromi, Ann. 2012. “Samizdat and Soviet Dissident Publics.” _Slavic Review_ 71 (1): 70–90.
    172. Kramer, Bianca. 2016a. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 1.” _I &M / I&O 2.0_, June 20. .
    173. Kramer, Bianca. 2016b. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 2.” .
    174. Krysa, Joasia. 2006. _Curating Immateriality: The Work of the Curator in the Age of Network Systems_. Brooklyn, NY: Autonomedia.
    175. Kurgan, Laura. 2013. _Close up at a Distance: Mapping, Technology, and Politics_. Brooklyn, NY: Zone Books.
    176. Labi, Aisha. 2005. “France Plans to Digitize Its ‘Cultural Patrimony’ and Defy Google’s ‘Domination.’” _Chronicle of Higher Education_ (March): 21.
    177. Larkin, Brian. 2008. _Signal and Noise: Media, Infrastructure, and Urban Culture in Nigeria_. Durham, NY: Duke University Press.
    178. Latour, Bruno. 2005. _Reassembling the Social: An Introduction to Actor-Network Theory_. Oxford: Oxford University Press.
    179. Latour, Bruno. 2007. “Beware, Your Imagination Leaves Digital Traces.” _Times Higher Literary Supplement_ , April 6.
    180. Latour, Bruno. 2008. _What Is the Style of Matters of Concern?: Two Lectures in Empirical Philosophy_. Assen, the Netherlands: Koninklijke Van Gorcum.
    181. Lavoie, Brian F., and Lorcan Dempsey. 2004. “Thirteen Ways of Looking at Digital Preservation.” _D-Lib Magazine_ 10 (July/August). .
    182. Leetaru, Kalev. 2008. “Mass Book Digitization: The Deeper Story of Google Books and the Open Content Alliance.” _First Monday_ 13 (10). .
    183. Lefebvre, Henri. 2009. _The Production of Space_. Malden, MA: Blackwell.
    184. Lefler, Rebecca. 2007. “‘Europeana’ Ready for Maiden Voyage.” _Hollywood Reporter_ , March 23. .
    185. Lessig, Lawrence. 2005a. “Lawrence Lessig on Interoperability.” _Creative Commons_ , October 19. .
    186. Lessig, Lawrence. 2005b. _Free Culture: The Nature and Future of Creativity_. New York: Penguin Books.
    187. Lessig, Lawrence. 2010. “For the Love of Culture—Will All of Our Literary Heritage Be Available to Us in the Future? Google, Copyright, and the Fate of American Books. _New Republic_ 24\. .
    188. Levy, Steven. 2011. _In the Plex: How Google Thinks, Works, and Shapes Our Lives_. New York: Simon & Schuster.
    189. Lewis, Jane. 1987. _Labour and Love: Women’s Experience of Home and Family, 1850–1940_. Oxford: Blackwell.
    190. Liang, Lawrence. 2009. “Piracy, Creativity and Infrastructure: Rethinking Access to Culture,” July 20.
    191. Liu, Jean. 2013. “Interactions: The Numbers Behind #ICanHazPDF.” _Altmetric_ , May 9. .
    192. Locke, John. 2003. _Two Treatises of Government: And a Letter Concerning Toleration_. New Haven, CT: Yale University Press.
    193. Martin, Andrew, and George Ross. 2004. _Euros and Europeans: Monetary Integration and the European Model of Society_. New York: Cambridge University Press.
    194. Mbembe, Achille. 2002. “The Power of the Archive and its Limits.” In _Refiguring the Archive_ , ed. Carolyn Hamilton. Cape Town, South Africa: David Philip.
    195. McDonough, Jerome. 2009. “XML, Interoperability and the Social Construction of Markup Languages: The Library Example.” _Digital Humanities Quarterly_ 3 (3). .
    196. McPherson, Tara. 2012. “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX.” In _Race After the Internet_ , eds. Lisa Nakamura and Peter Chow-White. New York: Routledge.
    197. Meckler, Alan M. 1982. _Micropublishing: A History of Scholarly Micropublishing in America, 1938–1980_. Westport, CT: Greenwood Press.
    198. Medak, Tomislav, et al. 2016. _The Radiated Book_. .
    199. Merton, Robert K., and Elinor Barber. 2004. _The Travels and Adventures of Serendipity: A Study in Sociological Semantics and the Sociology of Science_. Princeton, NJ: Princeton University Press.
    200. Meunier, Sophie. 2003. “France’s Double-Talk on Globalization.” _French Politics, Culture & Society_ 21:20–34.
    201. Meunier, Sophie. 2007. “The Distinctiveness of French Anti-Americanism.” In _Anti-Americanisms in World Politics_ , eds. Peter J. Katzenstein and Robert O. Keohane. Ithaca, NY: Cornell University Press.
    202. Michel, Jean-Baptiste, et al. 2011. “Quantitative Analysis of Culture Using Millions of Digitized Books.” _Science_ 331 (6014):176–182.
    203. Midbon, Mark. 1980. “Capitalism, Liberty, and the Development of the Development of the Library.” _Journal of Library History (Tallahassee, Fla.)_ 15 (2): 188–198.
    204. Miksa, Francis L. 1983. _Melvil Dewey: The Man and the Classification_. Albany, NY: Forest Press.
    205. Mitropoulos, Angela. 2012. _Contract and Contagion: From Biopolitics to Oikonomia_. Brooklyn, NY: Minor Compositions.
    206. Mjør, Kåre Johan. 2009. “The Online Library and the Classic Literary Canon in Post-Soviet Russia: Some Observations on ‘The Fundamental Electronic Library of Russian Literature and Folklore.’” _Digital Icons: Studies in Russian, Eurasian and Central European New Media_ 1 (2): 83–99.
    207. Montagnani, Maria Lillà, and Maurizio Borghi. 2008. “Promises and Pitfalls of the European Copyright Law Harmonisation Process.” In _The European Union and the Culture Industries: Regulation and the Public Interest_ , ed. David Ward. Aldershot, UK: Ashgate.
    208. Murrell, Mary. 2017. “Unpacking Google’s Library.” _Limn_ (6). .
    209. Nakamura, Lisa. 2002. _Cybertypes: Race, Ethnicity, and Identity on the Internet_. New York: Routledge.
    210. Nakamura, Lisa. 2013. “‘Words with Friends’: Socially Networked Reading on Goodreads.” _PMLA_ 128 (1): 238–243.
    211. Nava, Mica, and Alan O’Shea. 1996. _Modern Times: Reflections on a Century of English Modernity_ , 38–76. London: Routledge.
    212. Negroponte, Nicholas. 1995. _Being Digital_. New York: Knopf.
    213. Neubert, Michael. 2008. “Google’s Mass Digitization of Russian-Language Books.” _Slavic & East European Information Resources_ 9 (1): 53–62.
    214. Nicholson, William. 1819. “Platform.” In _British Encyclopedia: Or, Dictionary of Arts and Sciences, Comprising an Accurate and Popular View of the Present Improved State of Human Knowledge_. Philadelphia: Mitchell, Ames, and White.
    215. Niggemann, Elisabeth. 2011. _The New Renaissance: Report of the “Comité Des Sages.”_ Brussels: Comité des Sages.
    216. Noble, Safiya Umoja, and Brendesha M. Tynes. 2016. _The Intersectional Internet: Race, Sex, Class and Culture Online_. New York: Peter Lang Publishing.
    217. Nord, Deborah Epstein. 1995. _Walking the Victorian Streets: Women, Representation, and the City_. Ithaca, NY: Cornell University Press.
    218. Norvig, Peter. 2012. “Colorless Green Ideas Learn Furiously: Chomsky and the Two Cultures of Statistical Learning.” _Significance_ (August): 30–33.
    219. O’Neill, Paul, and Søren Andreasen. 2011. _Curating Subjects_. London: Open Editions.
    220. O’Neill, Paul, and Mick Wilson. 2010. _Curating and the Educational Turn_. London: Open Editions.
    221. Ong, Aihwa, and Stephen J. Collier. 2005. _Global Assemblages: Technology, Politics, and Ethics As Anthropological Problems_. Malden, MA: Blackwell Pub.
    222. Otlet, Paul, and W. Boyd Rayward. 1990. _International Organisation and Dissemination of Knowledge_. Amsterdam: Elsevier.
    223. Palfrey, John G. 2015. _Bibliotech: Why Libraries Matter More Than Ever in the Age of Google_. New York: Basic Books.
    224. Palfrey, John G., and Urs Gasser. 2012. _Interop: The Promise and Perils of Highly Interconnected Systems_. New York: Basic Books.
    225. Parisi, Luciana. 2004. _Abstract Sex: Philosophy, Bio-Technology and the Mutations of Desire_. London: Continuum.
    226. Patra, Nihar K., Bharat Kumar, and Ashis K. Pani. 2014. _Progressive Trends in Electronic Resource Management in Libraries_. Hershey, PA: Information Science Reference.
    227. Paulheim, Heiko. 2015. “What the Adoption of Schema.org Tells About Linked Open Data.” _CEUR Workshop Proceedings_ 1362:85–90.
    228. Peatling, G. K. 2004. “Public Libraries and National Identity in Britain, 1850–1919.” _Library History_ 20 (1): 33–47.
    229. Pechenick, Eitan A., Christopher M. Danforth, Peter S. Dodds, and Alain Barrat. 2015. “Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution.” _PLoS One_ 10 (10).
    230. Peters, John Durham. 2015. _The Marvelous Clouds: Toward a Philosophy of Elemental Media_. Chicago: University of Chicago Press.
    231. Pfanner, Eric. 2011. “Quietly, Google Puts History Online.” _New York Times_ , November 20.
    232. Pfanner, Eric. 2012. “Google to Announce Venture With Belgian Museum.” _New York Times_ , March 12. .
    233. Philip, Kavita. 2005. “What Is a Technological Author? The Pirate Function and Intellectual Property.” _Postcolonial Studies: Culture, Politics, Economy_ 8 (2): 199–218.
    234. Pine, Joseph B., and James H. Gilmore. 2011. _The Experience Economy_. Boston: Harvard Business Press.
    235. Ping-Huang, Marianne. 2016. “Archival Biases and Cross-Sharing.” _NTIK_ 5 (1): 55–56.
    236. Pollock, Griselda. 1998. “Modernity and the Spaces of Femininity.” In _Vision and Difference: Femininity, Feminism and Histories of Art_ , ed. Griselda Pollock, 245–256. London: Routledge & Kegan Paul.
    237. Ponte, Stefano, Peter Gibbon, and Jakob Vestergaard. 2011. _Governing Through Standards: Origins, Drivers and Limitations_. Basingstoke, UK: Palgrave Macmillan.
    238. Pörksen, Uwe. 1995. _Plastic Words: The Tyranny of a Modular Language_. University Park: Pennsylvania State University Press.
    239. Proctor, Nancy. 2013. “Crowdsourcing—an Introduction: From Public Goods to Public Good.” _Curator_ 56 (1): 105–106.
    240. Puar, Jasbir K. 2007. _Terrorist Assemblages: Homonationalism in Queer Times_. Durham, NC: Duke University Press.
    241. Purdon, James. 2016. _Modernist Informatics: Literature, Information, and the State_. New York: Oxford University Press.
    242. Putnam, Robert D. 1988. “Diplomacy and Domestic Politics: The Logic of Two-Level Games.” _International Organization_ 42 (3): 427–460.
    243. Rabinow, Paul. 2003. _Anthropos Today: Reflections on Modern Equipment_. Princeton, NJ: Princeton University Press.
    244. Rabinow, Paul, and Michel Foucault. 2011. _The Accompaniment: Assembling the Contemporary_. Chicago: University of Chicago Press.
    245. Raddick, M., et al. 2009. “Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers.” _Astronomy Education Review_ 9 (1).
    246. Ratto, Matt, and Boler Megan. 2014. _DIY Citizenship: Critical Making and Social Media_. Cambridge, MA: MIT Press.
    247. Reichardt, Jasia. 1969. _Cybernetic Serendipity: The Computer and the Arts_. New York: Frederick A Praeger. .
    248. Ridge, Mia. 2013. “From Tagging to Theorizing: Deepening Engagement with Cultural Heritage through Crowdsourcing.” _Curator_ 56 (4): 435–450.
    249. Rieger, Oya Y. 2008. _Preservation in the Age of Large-Scale Digitization: A White Paper_. Washington, DC: Council on Library and Information Resources.
    250. Rodekamp, Volker, and Bernhard Graf. 2012. _Museen zwischen Qualität und Relevanz: Denkschrift zur Lage der Museen_. Berlin: G+H Verlag.
    251. Rogers, Richard. 2012. “Mapping and the Politics of Web Space.” _Theory, Culture & Society_ 29:193–219.
    252. Romeo, Fiona, and Lucinda Blaser. 2011. “Bringing Citizen Scientists and Historians Together.” Museums and the Web. .
    253. Russell, Andrew L. 2014. _Open Standards and the Digital Age: History, Ideology, and Networks_. New York: Cambridge University Press.
    254. Said, Edward. 1983. “Traveling Theory.” In _The World, the Text, and the Critic_ , 226–247. Cambridge, MA: Harvard University Press.
    255. Samimian-Darash, Limor, and Paul Rabinow. 2015. _Modes of Uncertainty: Anthropological Cases_. Chicago: The University of Chicago Press.
    256. Samuel, Henry. 2009. “Nicolas Sarkozy Fights Google over Classic Books.” _The Telegraph_ , December 14. .
    257. Samuelson, Pamela. 2010. “Google Book Search and the Future of Books in Cyberspace.” _Minnesota Law Review_ 94 (5): 1308–1374.
    258. Samuelson, Pamela. 2011. “Why the Google Book Settlement Failed—and What Comes Next?” _Communications of the ACM_ 54 (11): 29–31.
    259. Samuelson, Pamela. 2014. “Mass Digitization as Fair Use.” _Communications of the ACM_ 57 (3): 20–22.
    260. Samyn, Jeanette. 2012. “Anti-Anti-Parasitism.” _The New Inquiry_ , September 18.
    261. Sanderhoff, Merethe. 2014. _Sharing Is Caring: Åbenhed Og Deling I Kulturarvssektoren_. Copenhagen: Statens Museum for Kunst.
    262. Sassen, Saskia. 2008. _Territory, Authority, Rights: From Medieval to Global Assemblages_. Princeton, NJ: Princeton University Press.
    263. Schmidt, Henrike. 2009. “‘Holy Cow’ and ‘Eternal Flame’: Russian Online Libraries.” _Kultura_ 1, 4–8. .
    264. Schmitz, Dawn. 2008. _The Seamless Cyberinfrastructure: The Challenges of Studying Users of Mass Digitization and Institutional Repositories_. Washington, DC: Digital Library Federation, Council on Library and Information Resources.
    265. Schonfeld, Roger, and Liam Sweeney. 2017. “Inclusion, Diversity, and Equity: Members of the Association of Research Libraries.” _Ithaka S+R_ , August 30. .
    266. Schüll, Natasha Dow. 2014. _Addiction by Design: Machine Gambling in Las Vegas_. Princeton, NJ: Princeton University Press.
    267. Scott, James C. 2009. _Domination and the Arts of Resistance: Hidden Transcripts_. New Haven, CT: Yale University Press.
    268. Seddon, Nicholas. 2013. _Government Contracts: Federal, State and Local_. Annandale, Australia: The Federation Press.
    269. Serres, Michel. 2013. _The Parasite_. Minneapolis: University of Minnesota Press.
    270. Sherratt, Tim. 2013. “From Portals to Platforms: Building New Frameworks for User Engagement.” National Library of Australia, November 5. .
    271. Shukaitis, Stevphen. 2009. “Infrapolitics and the Nomadic Educational Machine.” In _Contemporary Anarchist Studies: An Introductory Anthology of Anarchy in the Academy_ , ed. Randall Amster. London: Routledge.
    272. Smalls, James. 2003. “‘Race’ As Spectacle in Late-Nineteenth-Century French Art and Popular Culture.” _French Historical Studies_ 26 (2): 351–382.
    273. Snyder, Francis. 2002. “Governing Economic Globalisation: Global Legal Pluralism and EU Law.” In _Regional and Global Regulation of International Trade_ , 1–47. Oxford: Hart Publishing.
    274. Solá-Morales, Rubió I. 1999. _Differences: Topographies of Contemporary Architecture_. Cambridge, MA: MIT Press.
    275. Sollfrank, Cornelia. 2015. “Nothing New Needs to Be Created. Kenneth Goldsmith’s Claim to Uncreativity.” In _No Internet—No Art. A Lunch Byte Anthology_ , ed. Melanie Bühler. Eindhoven: Onomatopee. .
    276. Somers, Margaret R. 2008. _Genealogies of Citizenship: Markets, Statelessness, and the Right to Have Rights_. Cambridge: Cambridge University Press.
    277. Sparks, Peter G. 1992. _A Roundtable on Mass Deacidification._ Report on a Meeting Held September 12–13, 1991, in Andover, Massachusetts. Washington, DC: Association of Research Libraries.
    278. Spivak, Gayatri C. 2000. “Megacity.” _Grey Room_ 1 (1): 8–25.
    279. Srnicek, Nick. 2017. _Platform Capitalism_. Cambridge: Polity Press.
    280. Stanley, Amy D. 1998. _From Bondage to Contract: Wage Labor, Marriage, and the Market in the Age of Slave Emancipation_. Cambridge: Cambridge University Press.
    281. Stelmakh, Valeriya D. 2008. “Book Saturation and Book Starvation: The Difficult Road to a Modern Library System.” _Kultura_ , September 4.
    282. Stiegler, Bernard. n.d. “Amateur.” Ars Industrialis: Association internationale pour une politique industrielle des technologies de l’esprit. .
    283. Star, Susan Leigh. 1999. “The Ethnography of Infrastructure.” _American Behavioral Scientist_ 43 (3): 377–391.
    284. Steyerl, Hito. 2012. “Defense of the Poor Image.” In _The Wretched of the Screen_. Berlin, Germany: Sternberg Presss.
    285. Stiegler, Bernard. 2003. _Aimer, s’aimer, nous aimer_. Paris: Éditions Galilée.
    286. Suchman, Mark C. 2003. “The Contract as Social Artifact.” _Law & Society Review_ 37 (1): 91–142.
    287. Sumner, William G. 1952. _What Social Classes Owe to Each Other_. Caldwell, ID: Caxton Printers.
    288. Tate, Jay. 2001. “National Varieties of Standardization.” In _Varieties of Capitalism: The Institutional Foundations of Comparative Advantage_ , ed. Peter A. Hall and David Soskice. Oxford: Oxford University Press.
    289. Tawa, Michael. 2012. “Limits of Fluxion.” In _Architecture in the Space of Flows_ , eds. Andrew Ballantyne and Chris Smith. Abingdon, UK: Routledge.
    290. Tay, J. S. W., and R. H. Parker. 1990. “Measuring International Harmonization and Standardization.” _Abacus_ 26 (1): 71–88.
    291. Tenen, Dennis, and Maxwell Henry Foxman. 2014. “ _Book Piracy as Peer Preservation_.” Columbia University Academic Commons. doi: 10.7916/D8W66JHS.
    292. Teubner, Gunther. 1997. _Global Law Without a State_. Aldershot, UK: Dartmouth.
    293. Thussu, Daya K. 2007. _Media on the Move: Global Flow and Contra-Flow_. London: Routledge.
    294. Tiffen, Belinda. 2007. “Recording the Nation: Nationalism and the History of the National Library of Australia.” _Australian Library Journal_ 56 (3): 342.
    295. Tsilas, Nicos. 2011. “Open Innovation and Interoperability.” In _Opening Standards: The Global Politics of Interoperability_ , ed. Laura DeNardis. Cambridge, MA: MIT Press.
    296. Tygstrup, Frederik. 2014. “The Politics of Symbolic Forms.” In _Cultural Ways of Worldmaking: Media and Narratives_ , ed. Ansgar Nünning, Vera Nünning, and Birgit Neumann. Berlin: De Gruyter.
    297. Vaidhyanathan, Siva. 2011. _The Googlization of Everything: (and Why We Should Worry)_. Berkeley: University of California Press.
    298. van Dijck, José. 2012. “Facebook as a Tool for Producing Sociality and Connectivity.” _Television & New Media_ 13 (2): 160–176.
    299. Veel, Kristin. 2003. “The Irreducibility of Space: Labyrinths, Cities, Cyberspace.” _Diacritics_ 33:151–172.
    300. Venn, Couze. 2006. “The Collection.” _Theory, Culture & Society_ 23:35–40.
    301. Verhoeven, Deb. 2016. “As Luck Would Have It: Serendipity and Solace in Digital Research Infrastructure.” _Feminist Media Histories_ 2 (1): 7–28.
    302. Vise, David A., and Mark Malseed. 2005. _The Google Story_. New York: Delacorte Press.
    303. Voltaire. 1786. _Dictionaire Philosophique_ (Oeuvres Completes de Voltaire, Tome Trente-Huiteme). Gotha, Germany: Chez Charles Guillaume Ettinger, Librarie.
    304. Vul, Vladimir Abramovich. 2003. “Who and Why? Bibliotechnoye Delo,” _Librarianship_ 2 (2). .
    305. Walker, Kevin. 2006. “Story Structures: Building Narrative Trails in Museums.” In _Technology-Mediated Narrative Environments for Learning_ , eds. G. Dettori, T. Giannetti, A. Paiva, and A. Vaz, 103–114. Dordrecht: Sense Publishers.
    306. Walker, Neil. 2003. _Sovereignty in Transition_. Oxford: Hart.
    307. Weigel, Moira. 2016. _Labor of Love: The Invention of Dating_. New York: Farrar, Straus and Giroux.
    308. Weiss, Andrew, and Ryan James. 2012. “Google Books’ Coverage of Hawai’i and Pacific Books.” _Proceedings of the American Society for Information Science and Technology_ 49 (1): 1–3.
    309. Weizman, Eyal. 2006. “Lethal Theory.” _Log_ 7:53–77.
    310. Wilson, Elizabeth. 1992. “The Invisible flaneur.” _New Left Review_ 191 (January–February): 90–110.
    311. Wolf, Gary. 2003. “The Great Library of Amazonia.” _Wired_ , November.
    312. Wolff, Janet. 1985. “The Invisible Flâneuse. Women and the Literature of Modernity.” _Theory, Culture & Society_ 2 (3): 37–46.
    313. Yeo, Richard R. 2003. “A Solution to the Multitude of Books: Ephraim Chambers’s ‘Cyclopaedia’ (1728) as ‘the Best Book in the Universe.’” _Journal of the History of Ideas_ 64 (1): 61–72.
    314. Young, Michael D. 1988. _The Metronomic Society: Natural Rhythms and Human Timetables_. Cambridge, MA: Harvard University Press.
    315. Yurchak, Alexei. 1997. “The Cynical Reason of Late Socialism: Power, Pretense, and the Anekdot.” _Public Culture_ 9 (2): 161–188.
    316. Yurchak, Alexei. 2006. _Everything Was Forever, Until It Was No More: The Last Soviet Generation_. Princeton, NJ: Princeton University Press.
    317. Yurchak, Alexei. 2008. “Suspending the Political: Late Soviet Artistic Experiments on the Margins of the State.” _Poetics Today_ 29 (4): 713–733.
    318. Žižek, Slavoj. 2009. _The Plague of Fantasies_. London: Verso.
    319. Zuckerman, Ethan. 2008. “Serendipity, Echo Chambers, and the Front Page.” _Nieman Reports_ 62 (4). .


    Weinmayr
Confronting Authorship, Constructing Practices (How Copyright is Destroying Collective Practice)
    2019


# 11. Confronting Authorship, Constructing Practices (How Copyright is
    Destroying Collective Practice)

    Eva Weinmayr

    © 2019 Eva Weinmayr, CC BY 4.0
    [https://doi.org/10.11647/OBP.0159.11](https://doi.org/10.11647/OBP.0159.11)

    This chapter is written from the perspective of an artist who develops models
    of practice founded on the fundamental assumption that knowledge is socially
    constructed. Knowledge, according to this understanding, builds on imitation
    and dialogue and is therefore based on a collective endeavour. Although
    collective forms of knowledge production are common in the sciences, such
modes of working constitute a distinct shift for artistic practice, which has
conventionally been conceived as individual, isolated and subjective. Moreover, the shift
    from the individual to the social in artistic production — what has been
    called art’s ‘social turn’[1](ch11.xhtml#footnote-525)  — also shifts the
    emphasis from the artwork to the social processes of production and therefore
    proposes to relinquish ‘the notion of the “work” as a noun (a static object)’
    and re-conceptualises ‘the “work” as a verb (a communicative
    activity)’.[2](ch11.xhtml#footnote-524) This shift from ‘noun’ to ‘verb’
    promotes collective practices over authored objects and includes work such as
    developing infrastructures, organising events, facilitating, hosting,
    curating, editing and publishing. Such generative practices also question the
    nature of authorship in art.

    Authorship is no doubt a method to develop one’s voice, to communicate and to
    interact with others, but it is also a legal, economic and institutional
    construct, and it is this function of authorship as a framing and measuring
    device that I will discuss in this chapter. Oscillating between the arts and
    academia, I shall examine the concept of authorship from a legal, economic and
    institutional perspective by studying a set of artistic practices that have
    made copyright, intellectual property and authorship into their artistic
    material.

    Copyright’s legal definition combines authorship, originality and property.
    ‘Copyright is not a transcendent moral idea’, as Mark Rose has shown, ‘but a
    specifically modern formation [of property rights] produced by printing
    technology, marketplace economics and the classical liberal culture of
    possessive individualism’.[3](ch11.xhtml#footnote-523) Therefore the author in
    copyright law is unequivocally postulated in terms of liberal and neoliberal
values. Feminist legal scholar Carys Craig argues that copyright law and the
concept of authorship it supports fail to adequately recognise the essential
social nature of human creativity. The law frames relationships qua private
property instead of recognising the author as necessarily socially situated,
and therefore as creating works within a network of social
relations.[4](ch11.xhtml#footnote-522) This chapter tries to reimagine
    authorial activity in contemporary art that is not caught in ‘simplifying
    dichotomies that pervade copyright theory (author/user, creator/copier,
    labourer/free-rider)’,[5](ch11.xhtml#footnote-521) and to examine both the
    blockages that restrict our acknowledgement of the social production of art
    and the social forces that exist within emancipatory collective
    practices.[6](ch11.xhtml#footnote-520)

    Copyright is granted for an ‘original work [that] is fixed in any tangible
    medium of expression’. It is based on the relationship between an
‘originator’, who is imagined as the origin of the
    work,[7](ch11.xhtml#footnote-519) and distinct products, which are fixed in a
    medium, ‘from which they can be perceived, reproduced, or otherwise
    communicated, either directly or with the aid of a machine or
    device.’[8](ch11.xhtml#footnote-518)

    Practices, on the contrary, are not protected under
    copyright.[9](ch11.xhtml#footnote-517) Because practice can’t be fixed into a
    tangible form of expression, intellectual property rights are not created and
    cannot be exploited economically. This inability to profit from practice by
    making use of intellectual property results in a clear privileging of the
    ‘outputs’ of authored works over practice. This value system therefore
    produces ‘divisive hierarchical splits between those who ‘do’ [practices], and
    those who write about, make work about
    [outputs]’.[10](ch11.xhtml#footnote-516)

    Media scholar Kathleen Fitzpatrick observes in her forthcoming book Generous
    Thinking:

[H]owever much we might reject individualism as part and parcel of the
humanist, positivist ways of the past, our working lives — on campus and off —
are overdetermined by it. […] And the drive to compete […] bleeds out into
all areas of the ways we work, even when we’re working together. The
competitive individualism that the academy cultivates makes all of us
painfully aware that even our most collaborative efforts will be assessed
individually, with the result that even those fields whose advancement depends
most on team-based efforts are required to develop careful guidelines for
establishing credit and priority.[11](ch11.xhtml#footnote-515)

    Artist and activist Susan Kelly expands on this experience with her
    observation that this regime of individual merit even inhibits us from
partaking in collective practices. She describes the dilemma that arises for
the academic activist when the demand for ‘outputs’ (designs, objects, texts,
    exhibitions), which can be measured, quantified and exploited by institutions
    (galleries, museums, publishers, research universities), becomes the
    prerequisite of professional survival.

    Take the young academic, for example, who spends evenings and weekends in the
    library fast tracking a book on social movements about which she cares deeply
    and wants to broaden her understanding. She is also desperate for it to be
    published quickly to earn her the university research points that will see her
    teaching contract renewed for the following year. It is likely that the same
    academic is losing touch with the very movements she writes about, and is no
    longer participating in their work because she is exhausted and the book takes
    time to write no matter how fast she works. On publication of the book, her
    work is validated professionally; she gets the university contract and is
    invited to sit on panels in public institutions about contemporary social
    movements. In this hypothetical case, it is clear that the academic’s work has
    become detached from the movements she now writes and talks about, and she no
    doubt sees this. But there is good compensation for this uneasiness in the
    form of professional validation, invitations that flatter, and most
    importantly, an ease of the cycle of hourly paid or precarious nine-month
    contracts.[12](ch11.xhtml#footnote-514)

    Kelly’s and Fitzpatrick’s examples describe the paradoxes that the demand for
    authorship creates for collective practices. But how can we actually escape
    regimes of authorship that are conceptualised and economised as ‘cultural
    capital’?

    Academic authorship, after all, is the basis for employment, promotion, and
    tenure. Also, arguably, artists who stop being ‘authors’ of their own work
    would no longer be considered ‘artists’, because authorship is one of art’s
    main framing devices. In the following I will discuss three artistic practices
    that address this question — with, as we will see, very different
    outcomes.[13](ch11.xhtml#footnote-513)

    ## Authorship Replaces Authorship?

    In 2011, American artist Richard Prince spread a blanket on a sidewalk outside
    Central Park in New York City and sold copies of his latest artwork, a
    facsimile of the first edition of J. D. Salinger’s The Catcher in The
    Rye.[14](ch11.xhtml#footnote-512) He did not make any changes to the text of
    the novel and put substantial effort into producing an exact replica in terms
    of paper quality, colours, typeset and binding, reproducing the original
    publication as much as possible except for several significant details. He
    replaced the author’s name with his own. ‘This is an artwork by Richard
    Prince. Any similarity to a book is coincidental and not intended by the
    artist’, his colophon reads, concluding with ‘© Richard Prince’. Prince also
    changed the publisher’s name, Little Brown, to a made-up publishing house with
    the name AP (American Place) and removed Salinger’s photograph from the back
    of the dust cover.[15](ch11.xhtml#footnote-511)

The artist’s main objective was apparently not to pirate and circulate an
unauthorised reprint of Salinger’s novel: he presented the book not under
Salinger’s name but under his own, and he chose a very limited circulation
figure.[16](ch11.xhtml#footnote-510) Nor is this conventional plagiarism,
because hardly any twentieth-century literature is more widely read and better
known than Salinger’s Catcher. So the question is, why
    would Prince want to recirculate one of the most-read American novels of all
    time, a book available in bookshops around the world, with a total circulation
    of 65 million copies, translated into 30
    languages?[17](ch11.xhtml#footnote-509)

    Prince stated that he loved Salinger’s novel so much that ‘I just wanted to
    make sure, if you were going to buy my Catcher in the Rye, you were going to
    have to pay twice as much as the one Barnes and Noble was selling from J. D.
    Salinger. I know that sounds really kind of shallow and maybe that’s not the
    best way to contribute to something, but in the book-collecting world you pay
    a premium for really collectible books,’ he explained in an interview with
    singer Kim Gordon.[18](ch11.xhtml#footnote-508)

    As intended, the work quickly turned into a
    collectible[19](ch11.xhtml#footnote-507) and attracted lots of applause from
    members of the contemporary art world including, among others, conceptual
    writer Kenneth Goldsmith, who described the work as a ‘terribly ballsy move’.
    Prince was openly ‘pirating what is arguably the most valuable property in
    American literature, practically begging the estate of Salinger to sue
    him.’[20](ch11.xhtml#footnote-506)

    ## Who has the Power to Appropriate?

    We need to examine Goldsmith’s appraisal more closely. What is this ‘ballsy
    move’? And how does it relate to the asserted criticality of appropriation
    artists in the late 1970s, a group of which Prince was part?

    Prince rose to prominence in New York in the late 1970s, associated with the
    Pictures generation of artists[21](ch11.xhtml#footnote-505) whose
    appropriation of images from mass culture and advertising — Prince’s
    photographs of Marlboro Man adverts, for example — examined the politics of
    representation.[22](ch11.xhtml#footnote-504) Theorists and critics, often
    associated with the academic October journal,[23](ch11.xhtml#footnote-503)
    interpreted the Pictures artists’ ‘unabashed usurpations of images as radical
    interrogations of the categories of originality and authenticity within the
    social construction of authorship. […] The author had become irrelevant
    because the original gesture had become unimportant; the copy adequately stood
    in its place and performed its legitimising
    function.’[24](ch11.xhtml#footnote-502)

    Artist Sherrie Levine, one of the leading figures in American appropriation
    art, expresses the core theoretical commitment of this group of artists in her
    1982 manifesto: ‘The world is filled to suffocating. Man has placed his token
    on every stone. Every word, every image, is leased and mortgaged. […] A
    picture is a tissue of quotations drawn from the innumerable centres of
    culture. We can only imitate a gesture that is always anterior, never
original.’[25](ch11.xhtml#footnote-501) This ostensible refusal of originality
    poses, no doubt, a critique of the author who creates ‘ex nihilo’. But does it
    really present a critique of authorship per se? I shall propose three
    arguments from different viewpoints — aesthetic, economic and legal — to
    explore the assumptions of this assertion.

    From the aesthetic perspective, Prince and Levine are making formal choices in
    the process of appropriating already existing work. They re-photograph,
    produce photographic prints, make colour choices; they enlarge or scale down,
    trim the edges and take decisions about framing. Nate Harrison makes this
    point when he argues that ‘Levine and Prince take individual control of the
    mass-authored image, and in so doing, reaffirm the ground upon which the
    romantic author stands.’[26](ch11.xhtml#footnote-500) It is exactly this
    control of, and authority over, the signed and exhibited image that leads
    Prince and Levine to be validated as ‘author[s] par
    excellence’.[27](ch11.xhtml#footnote-499) Prince, for example, has been lauded
    as an artist who ‘makes it new, by making it
    again’.[28](ch11.xhtml#footnote-498) This ‘making it again’, a process that
    Hal Foster names ‘recoding’,[29](ch11.xhtml#footnote-497) creates new meaning
    and must therefore be interpreted as an ‘original’ authorial act.
    Subsequently, this work has been validated by museums, galleries, collectors
    and critics. From an economic perspective one can therefore argue that
    Prince’s numerous solo exhibitions in prestigious museums, his sales figures,
    and affiliation to commercial galleries are evidence that he has been ascribed
    artistic authorship as well as authorial agency by the institutions of the art
    world.[30](ch11.xhtml#footnote-496)

Coming back to Prince’s appropriation of Catcher in the Rye, his conceptual
gesture necessarily employs the very rhetoric and conceptual underpinnings of
the legislation and jurisdiction that he seemingly
critiques.[31](ch11.xhtml#footnote-495) He declares ‘this is an artwork by
Richard Prince, © Richard Prince’ and asserts, by claiming copyright, the
concepts of originality and creativity for his work. By this paradoxical
gesture, he seemingly replaces ‘authorship’ with authorship and ‘ownership’
with ownership. And by doing so, I argue, he reinforces the very concepts he
appears to contest.

    The legal framework remains conceptual, theoretical and untested in this case.
    But on another occasion, Prince’s authorship was tested in court — and
    eventually legally confirmed to belong to him. This is crucial to my inquiry.
What are we to make of the fact that Prince, who challenges copyright
doctrine in his gestures of appropriation, has been ascribed legitimate
authorship by the very courts that rule on copyright law? It seems paradoxical, because
    as Elizabeth Wang rightly claims, ‘if appropriation is legitimized, the
    political dimension of this act is excised’.[32](ch11.xhtml#footnote-494) And
    Cornelia Sollfrank argues ‘the value of appropriation art lies in its
    illicitness. […] Any form of [judicial] legitimisation would not support the
    [appropriation] artists’ claims, but rather undermine
    them.’[33](ch11.xhtml#footnote-493)

    ## Authorship Defined by Market Value and Celebrity Status?

    To illustrate this point I will briefly digress to discuss a controversial
court case about Prince’s authorial legitimacy. In 2009, the New York-based
photographer Patrick Cariou began litigation against Prince, his gallerist
Larry Gagosian and his catalogue publisher Rizzoli. Prince had appropriated
Cariou’s photographs in his series Canal Zone, which went on show at Gagosian
Gallery.[34](ch11.xhtml#footnote-492) A first ruling by a district judge
    stated that Prince’s appropriation was copyright infringement and requested
    him to destroy the unsold paintings on show. The ruling also forbade those
    that had been sold from being displayed publicly in the
    future.[35](ch11.xhtml#footnote-491)

However, Prince’s eventual appeal turned the verdict around. The Second
Circuit appeals court decided that twenty-five of his thirty paintings fell
under the fair use rule. The legal concept of fair use allows for copyright
exceptions in order to balance the interests of exclusive right holders with
the interests of users and the public ‘for purposes such as criticism,
comment, news reporting, teaching (including multiple copies for classroom
use), scholarship, or research’.[36](ch11.xhtml#footnote-490) One requirement
to justify fair use is that the new work should be transformative, understood
as presenting a new expression, meaning or message. The appeals court
considered Prince’s appropriation sufficiently transformative because a
‘reasonable observer’[37](ch11.xhtml#footnote-489) would perceive aesthetic
differences from the original.[38](ch11.xhtml#footnote-488)

    Many artists applauded the appeal court’s verdict, as it seemed to set a
    precedent for a more liberal approach towards appropriation art. Yet attorney
    Sergio Muñoz Sarmiento and art historian Lauren van Haaften-Schick voiced
    concerns about the verdict’s interpretation of ‘transformative’ and the
    ruling’s underlying assumptions.

The questions of ‘aesthetic differences’ perceived by a ‘reasonable observer’
are, as Sarmiento rightly says, significant. After all, Prince did not provide
a statement of intent in his deposition;[39](ch11.xhtml#footnote-487) the
judges therefore had to adopt the role of (quasi) art critics, ‘employing
[their] own artistic judgment[s]’ in a field in which they had not been
trained.[40](ch11.xhtml#footnote-486)

Secondly, in trying to evaluate the markets Cariou and Prince cater for, the
court introduced a controversial distinction between celebrity and
non-celebrity artists. The court opinion reasons: ‘Certain of the Canal Zone
artworks have sold for two million or more dollars. The invitation list for a
dinner that Gagosian hosted in conjunction with the opening of the Canal Zone
show included a number of the wealthy and famous such as the musicians Jay-Z
and Beyoncé Knowles, artists Damien Hirst and Jeff Koons, […] and actors
Robert De Niro, Angelina Jolie, and Brad Pitt’.[41](ch11.xhtml#footnote-485)
    Cariou, on the contrary, so the verdict argues, ‘has not aggressively marketed
    his work’, and has earned just over $8,000 in royalties from Yes Rasta since
    its publication.[42](ch11.xhtml#footnote-484) Furthermore, he made only ‘a
    handful of private sales [of his photographic prints] to personal
    acquaintances’.[43](ch11.xhtml#footnote-483) Prince, by contrast, sold eight
    of his Canal Zone paintings for a total of $10,480,000 and exchanged seven
    others for works by canonical artists such as painter Larry Rivers and
    sculptor Richard Serra.[44](ch11.xhtml#footnote-482)

    The court documents here tend to portray Cariou as a sort of hobby artist or
    ‘lower class amateur’ in Sarmiento’s words,[45](ch11.xhtml#footnote-481)
    whereas Prince is described as a ‘well-known appropriation
    artist’[46](ch11.xhtml#footnote-480) with considerable success in the art
market.[47](ch11.xhtml#footnote-479) Such argumentation is dangerous, because
it brings social class, celebrity status and art market success into play as
legal categories to be considered in future copyright cases, and it dismisses
‘Cariou’s claim as a legitimate author and
artist’.[48](ch11.xhtml#footnote-478) The parties eventually reached an
out-of-court settlement regarding the remaining five paintings, whose
infringement claim had been returned to the district court, meaning that no
ruling on them was ever issued. This pragmatic settlement can be interpreted
as a missed opportunity for further clarification in the interpretation of
fair use. No details about the settlement have been
disclosed.[49](ch11.xhtml#footnote-477)

    Richard Prince presented himself in his court deposition as an artist, who
    ‘do[es]n’t really have a message,’ and was not ‘trying to create anything with
a new meaning or a new message.’[50](ch11.xhtml#footnote-476) Nevertheless, the
appeals court’s ruling transforms the ‘elusive artist not only into a subject,
but also into an [artist] author’[51](ch11.xhtml#footnote-475) — a status he
set out to challenge in the first place. Richard Prince’s ongoing
games[52](ch11.xhtml#footnote-474) might therefore be entertaining or make us
laugh, but they stop short of effectively challenging the conceptualisation of
authorship, originality and property, because his works are assigned the very
properties that are denied to the authors whose works he copies. That is to
say, Prince’s performative toying with the law does not endanger his art’s
operability in the art world. On the contrary, it constructs and affirms his
reputation as a radical and saleable artist-author.

    ## De-Authoring

    A very different approach to copyright law is demonstrated by American artist
    Cady Noland, who employs the law to effectively endanger her art’s operability
    in the art market. Noland is famously concerned with the circulation and
    display of her work with respect to context, installation and photographic
    representation. Relatedly, she has also become very critical of short-term
    speculation on the art market. Noland has apparently not produced any new work
    for over a decade, due to the time she now spends pursuing litigation around
her existing oeuvre.[53](ch11.xhtml#footnote-473) In 2011, she strikingly
demonstrated that an artist need not give up control when her work enters the
commercial art market and turns into a commodity for short-term profit. She
took what is probably one of the most important stands in modern art history
when she ‘de-authored’ her work Cowboys Milking (1990) after it was put up for
auction at Sotheby’s, with the consequence that the work could no longer be
sold as a Cady Noland work.

    Swiss-born dealer Marc Jancou, based in New York and Geneva, had consigned the
    work to Sotheby’s a few months after having purchased it for $106,500 from a
    private collector.[54](ch11.xhtml#footnote-472) Jancou was obviously attracted
    by the fact that one of Noland’s works had achieved the highest price for a
    piece by a living female artist: $6.6m.

    At Noland’s request, on the eve of the auction, Sotheby’s abruptly withdrew
    the piece, a silkscreen print on an aluminium panel. The artist argued that it
    was damaged: ‘The current condition […] materially differs from that at the
    time of its creation. […] [H]er honor and reputation [would] be prejudiced as
    a result of offering [it] for sale with her name associated with
    it.’[55](ch11.xhtml#footnote-471) From a legal point of view, this amounts to
    a withdrawal of Noland’s authorship. The US Visual Artists Rights Act of 1990,
    VARA, grants artists ‘authorship’ rights over works even after they have been
    sold, including the right to prevent intentional modification and to forbid
    the use of their name in association with distorted or mutilated
    work.[56](ch11.xhtml#footnote-470) Such rights are based on the premise that
    the integrity of a work needs to be guaranteed and a work of art has cultural
    significance that extends beyond mere property
    value.[57](ch11.xhtml#footnote-469)

    Noland’s withdrawal of authorship left Jancou with ‘a Cady Noland’ in his
    living room, but not on the market. In an email to Sotheby’s, he complained:
    ‘This is not serious! Why does an auction house ask the advise [sic] of an
    artist that has no gallery representation and has a biased and radical
approach to the art market?’[58](ch11.xhtml#footnote-468) Given that Noland is
a long-standing and outspoken sceptic of speculative dealing in art, Jancou
somewhat naively wonders why she should be able to exercise this degree of
power over an artwork that had entered a system of commercial exchange. His
complaint had no effect. The piece remained withdrawn from the
    auction and Jancou filed a lawsuit in February 2012 seeking $26 million in
    damages from Sotheby’s.[59](ch11.xhtml#footnote-467)

From an economic perspective, both artists, Noland and Prince, powerfully
illustrate how authorship is instituted in the form of the artist’s signature
to construct (Prince’s Catcher in the Rye) or destroy (Noland’s Cowboys
Milking) monetary value. Richard Prince’s stated intention is to double the
book’s price, and by attaching his name to Salinger’s book in a Duchampian
gesture, he turns it into a work of art authored and copyrighted by Prince.
Noland, on the contrary, lowers the value of her artwork by removing her
signature and by asserting the artist-author’s (Noland’s) rights over the
dealer-owner’s (Jancou’s).[60](ch11.xhtml#footnote-466)

    However, from a legal perspective I would argue that both Noland and Prince —
    in their opposite approaches of removing and adding their signatures — affirm
    authorship as it is conceptualised by the law.[61](ch11.xhtml#footnote-465)
After all, ‘copyright law is a system to which the notion of the author appears
    to be central — in defining the right owner, in defining the work, in defining
    infringement.’[62](ch11.xhtml#footnote-464)

    ## Intellectual Property Obsession Running Amok?

    Intellectual property — granted via copyright — has become one of the driving
    forces of the creative economy, being exploited by corporations and
    institutions of the so-called ‘creative industries’. In the governmental
    imagination, creative workers are described as ‘model entrepreneurs for the
new economy’.[63](ch11.xhtml#footnote-463) Shortly after the election of New
Labour in the UK in 1997, the newly formed Department for Culture, Media and
Sport published the Creative Industries Mapping Document (CIMD, 1998), which
defined the ‘Creative Industries’ primarily in relation to creativity and
intellectual property.[64](ch11.xhtml#footnote-462) According to the
    Department for Culture Media and Sport the creative industries have ‘their
    origin in individual creativity, skill and talent, which have a potential for
    wealth and job creation through the generation and exploitation of
intellectual property.’[65](ch11.xhtml#footnote-461) This exploitation of
intellectual property as intangible capital has been taken on board by
institutions and public management policymakers, who not only turn creative
practices into private property, but also trigger working policies that
produce precarious self-entrepreneurship and sacrifice in pursuit of
gratification.[66](ch11.xhtml#footnote-460)

    We find this kind of thinking reflected for instance on the website built by
    the University of the Arts London to give advice on intellectual property —
    which was until recently headlined ‘Own It’.[67](ch11.xhtml#footnote-459)
    Here, institutional policies privilege the privatisation and propertisation of
    creative student work over the concept of sharing and fair use.

There is evidence that this line of thought creates a self-inflicted
impediment for cultural workers inside and outside art colleges. The College
Art Association, a US-based organization of about fourteen thousand artists,
arts professionals, students and scholars, released a report in 2015 on the
state of fair use in the visual arts.[68](ch11.xhtml#footnote-458) The survey
    reveals that ‘visual arts communities of practice share a great deal of
    confusion about and misunderstanding of the nature of copyright law and the
    availability of fair use. […] Formal education on copyright, not least at art
    colleges, appears to increase tendencies to overestimate risk and underuse
    fair use.’ As a result, the report states, the work of art students ‘is
    constrained and censored, most powerfully by themselves, because of that
    confusion and the resulting fear and anxiety.’[69](ch11.xhtml#footnote-457)

    This climate even results in outright self-censorship. The interviewees of
    this study ‘repeatedly expressed a pre-emptive decision not to pursue an
    idea’[70](ch11.xhtml#footnote-456) because gaining permission from right
    holders is often difficult, time consuming or expensive. The authors of this
    report called this mindset a ‘permissions culture’, giving some examples. ‘I
    think of copyright as a cudgel, and I have been repeatedly forestalled and
    censored because I have not been able to obtain copyright permission’, stated
    one academic, whose research did not get approval from an artist’s estate. He
    added: ‘For those of us who work against the grain of [the] market-driven arts
    economy, their one recourse for controlling us is copyright.’ Another said:
    ‘In many cases I have encountered artists’ estates and sometimes artists who
    refuse rights to publish (even when clearly fair use) unless they like the
    interpretation in the text. This is censorship and very deleterious to
    scholarship and a free public discourse on
    images.’[71](ch11.xhtml#footnote-455) One scholar declared that copyright
    questions overshadowed his entire work process: ‘In my own writing, I’m
    worrying all the time.’[72](ch11.xhtml#footnote-454) In such a climate of
    anxiety ‘editors choose not to publish books that they believe might have
    prohibitive permission costs; museums delay or abandon digital-access
projects’, as Ben Mauk comments in The New
Yorker.[73](ch11.xhtml#footnote-453)

    The language of law does harm because it has the rhetorical power to foreclose
    debate. Legal and political science scholar Jennifer Nedelsky traces the
    problem to the fact ‘that many right claims, such as “it’s my property”, have
    a conclusory quality. They are meant to end, not to open up debate’, therefore
    ‘treating as settled, what should be debated’.[74](ch11.xhtml#footnote-452)

    In a similar vein, political scientist Deborah Halbert describes how her
    critique of intellectual property took her on a journey to study the details
of the law. The more she got into it, so she says, the more her own thinking
was ‘co-opted’ by the law. ‘The more I read the case law and law
    journals, the more I came to speak from a position inside the status quo. My
    ability to critique the law became increasingly bounded by the law itself and
    the language used by those within the legal profession to discuss issues of
    intellectual property. I began to speak in terms of incentives and public
    goods. I began to start any discussion of intellectual property by what was
    and was not allowed under the law. It became clear that the very act of
    studying the subject had transformed my standpoint from an outsider to an
    insider.’[75](ch11.xhtml#footnote-451)

    ## The Piracy Project — Multiple Authorship or ‘Unsolicited Collaborations’?

    A similar question of language applies to the term
‘pirate’.[76](ch11.xhtml#footnote-450) Media and communication scholar Ramon
Lobato asks whether the language of piracy used in critical intellectual
property discourse ‘should be embraced, rejected, recuperated or
rearticulated’. He contends that reducing ‘piracy’ to a mere legal category —
    of conforming, or not, with the law — tends to neglect the generative forces
    of piracy, which ‘create its own economies, exemplify wider changes in social
    structure, and bring into being tense and unusual relationships between
    consumers, cultural producers and governments.’[77](ch11.xhtml#footnote-449)

When the word pirate first appeared in ancient Greek texts, it was closely
related to the noun ‘peira’, which means trial or attempt. ‘The “pirate” would
then be the one who “tests”, “puts to proof”, “contends with”, and “makes an
attempt”’.[78](ch11.xhtml#footnote-448) Further etymological research shows
that from the same root stem pira: experience, practice [πείρα]; pirama:
experiment [πείραμα]; piragma: teasing [πείραγμα]; and pirazo: tease, give
trouble [πειράζω].[79](ch11.xhtml#footnote-447)

This ‘contending with’, ‘making an attempt’ and ‘teasing’ is at the core of
the Piracy Project’s practice, whose aim is twofold: firstly, to gather and
study a vast array of piratical practices (to test and negotiate the
complexities and paradoxes created by intellectual property for artistic
practice); and secondly, to build a practice that is itself collaborative and
generative on many different levels.[80](ch11.xhtml#footnote-446)

    The Piracy Project explores the philosophical, legal and social implications
    of cultural piracy and creative modes of dissemination. Through an open call,
workshops, reading rooms and performative debates, as well as through our
research into international pirate book markets,[81](ch11.xhtml#footnote-445)
we gathered a collection of roughly 150 copied, emulated, appropriated and
    modified books from across the world. Their approaches to copying vary widely,
    from playful strategies of reproduction, modification and reinterpretation of
    existing works; to acts of civil disobedience circumventing enclosures such as
    censorship or market monopolies; to acts of piracy generated by commercial
    interests. This vast and contradictory spectrum of cases, from politically
    motivated bravery as well as artistic statements to cases of hard-edged
    commercial exploitation, serves as the starting point to explore the
    complexities and contradictions of authorship in debates, workshops, lectures
    and texts, like this one.

    In an attempt to rearticulate the language of piracy we call the books in the
    collection ‘unsolicited collaborations’.[82](ch11.xhtml#footnote-444)
    Unsolicited indicates that the makers of the books in the Piracy Project did
    not ask for permission — Richard Prince’s ‘Catcher in the Rye’ is one
    example.[83](ch11.xhtml#footnote-443) Collaboration refers to a relational
    activity and re-imagines authorship not as proprietary and stable, but as a
    dialogical and generative process. Here, as feminist legal scholar Carys Craig
    claims, ‘authorship is not originative but participative; it is not internal
    but interactive; it is not independent but interdependent. In short, a
    dialogic account of authorship is equipped to appreciate the derivative,
    collaborative, and communicative nature of authorial activity in a way that
    the Romantic [individual genius] account never
    can.’[84](ch11.xhtml#footnote-442)

    Such a participatory and interdependent conceptualisation of authorship is
    illustrated and tested in the Piracy Project’s research into reprinting,
    modifying, emulating and commenting on published books. As such it revisits —
    through material practice — Michel Foucault’s critical concept of the ‘author
    function’ as the triggering of a discourse, rather than a proprietary
    right.[85](ch11.xhtml#footnote-441)

This becomes clearer when we consider that digital print technologies, such
as print on demand and desktop publishing, allow for a constant
    re-printing and re-editing of existing files. The advent and widespread
    accessibility of the photocopy machine in the late 1960s allowed the reader to
    photocopy books and collate selected chapters, pages or images in new and
    customised compilations. These new reproduction technologies undermine to an
    extent the concept of the printed book as a stable and authoritative
    work,[86](ch11.xhtml#footnote-440) which had prevailed since the mass
    production of books on industrial printing presses came into being. Eva
    Hemmungs Wirtén describes how the widespread availability of the
    photocopier[87](ch11.xhtml#footnote-439) has been perceived as a threat to the
    authority of the text and cites Marshall McLuhan’s address at the Vision 65
    congress in 1965:

    Xerography is bringing a reign of terror into the world of publishing because
    it means that every reader can become both author and publisher. […]
    Authorship and readership alike can become production-oriented under
    xerography. Anyone can take a book apart, insert parts of other books and
    other materials of his own interest, and make his own book in a relatively
fast time. Any teacher can take any ten textbooks on any subject and
custom-make a different one by simply xeroxing a chapter from this one and
from that one.[88](ch11.xhtml#footnote-438)

One example of a reprinted and modified book in the Piracy Project is No se
diga a nadie (‘Don’t tell anyone’).[89](ch11.xhtml#footnote-437) It is an
autobiographical novel by the Peruvian journalist and TV presenter Jaime
Bayly. The pirate copy, found by Andrea Francke on Lima’s pirate book markets,
is almost identical in size, weight and format, and the cover image is only
slightly cropped. However, this pirate copy has two extra chapters. Somebody
has infiltrated the named author’s work and sneaked in two fictionalised
chapters about the author’s life. These extra chapters are well written, good
enough to blend in, and not noticeable at first glance by the
reader.[90](ch11.xhtml#footnote-436)

The pirates cannot gain any cultural capital here, as the pirating author
remains an anonymous ghost. Equally, there is no financial profit to be made,
as long as the pirate version is not pointed out to readers as an extended
version. Nor is the act framed as a conceptual gesture, as is the case with
Prince’s Catcher in the Rye. It rather operates under everyone’s radar, and,
importantly, any revelation of this intervention or any claim of authorship
would be counterproductive.

    This example helps us to think through concepts of the authoritative text and
    the stability of the book. Other cases in the Piracy Project find similar ways
    to queer the category of authorship and the dominant modes of production and
    dissemination.[91](ch11.xhtml#footnote-435) Our practice consists of
    collecting; setting up temporary reading rooms to house the collection; and
    organising workshops and debates in order to find out about the reasons and
    intentions for these acts of piracy, to learn from their strategies and to
    track their implications for dominant modes of production and
    dissemination.[92](ch11.xhtml#footnote-434)

    This discursive practice distinguishes the Piracy Project from radical online
    libraries, such as aaaaarg.fail or
    [memoryoftheworld.org](http://memoryoftheworld.org).[93](ch11.xhtml#footnote-433)
    While we share similar concerns, such as distribution monopolies, enclosure
    and the streamlining of knowledge, these peer-to-peer (p2p) platforms mainly
    operate as distribution platforms, developing strategies to share intact
    copies of authoritative texts. Marcell Mars, for example, argues against
    institutional and corporate distribution monopolies when he states ‘when
    everyone is a librarian, [the] library is everywhere’. Mars invites users of
    the online archive [memoryoftheworld.org](http://memoryoftheworld.org) to
    upload their scanned books to share with others. Similarly, Sean Dockray, who
    initiated aaaaarg.fail, a user generated online archive of books and texts,
    said in an interview: ‘the project wasn’t about criticising institutions,
    copyright, authority, and so on. It was simply about sharing knowledge. This
    wasn’t as general as it sounds; I mean literally the sharing of knowledge
    between various individuals and groups that I was in correspondence with at
    the time but who weren’t necessarily in correspondence with each
    other.’[94](ch11.xhtml#footnote-432)

    ## Practising Critique — Queering Institutional Categories

    In contrast to online p2p sharing platforms, the Piracy Project took off in a
    physical space, in the library of Byam Shaw School of Art in London. Its
    creation was a response to restrictive university policies when, in 2010, the
    management announced the closure of the art college library due to a merger
    with the University of the Arts London. A joint effort by students and staff,
    supported by the acting principal, turned Byam Shaw’s art college library into
    a self-organised library that remained public, as well as intellectually and
    socially generative.[95](ch11.xhtml#footnote-431)

As a result of the college taking collective ownership of the library and
its books, the space opened up. The library had been a resource controlled and
validated by institutional policies that shaped crucial decisions about what
went on the shelves; it became an assemblage of knowledge that potentially
obscure, self-published materials lacking institutional validation were able
to enter.

    For example, artist and writer Neil Chapman’s handmade facsimile of Gilles
    Deleuze’s Proust and Signs[96](ch11.xhtml#footnote-430) explored the
    materiality of print and related questions about the institutional policies of
    authorisation. Chapman produced a handmade facsimile of his personal paperback
    copy of Deleuze’s work, including binding mistakes in which a few pages were
    bound upside down, by scanning and printing the book on his home inkjet
printer. The book is close to the original in format, cover and weight. However,
    it has a crafty feel to it: the ink soaks into the paper creating a blurry
    text image very different from a mass-produced offset printed text. It has
    been assembled in DIY style and speaks the language of amateurism and
    makeshift. The transformation is subtle, and it is this subtlety that makes
    the book subversive in an institutional library context. How do students deal
    with their expectations that they will access authoritative and validated
    knowledge on library shelves and instead encounter a book that was printed and
assembled by hand?[97](ch11.xhtml#footnote-429) Such publications circumvent
the chain of institutional validation: from the author, to the publisher, the
book trade, and lastly the librarian purchasing and cataloguing the book
according to standard bibliographic
practices.[98](ch11.xhtml#footnote-428) A similar challenge to the stability
of the printed book and the related hierarchy of knowledge occurred when
students at Byam Shaw sought a copy of Jacques Rancière’s The Ignorant
Schoolmaster and found three copied and modified versions. In accordance with,
or as a response to, Rancière’s pedagogical proposal, one copy featured
deleted passages that left blank spaces for the reader to fill and to
construct their own meaning in lieu of Rancière’s
text.[99](ch11.xhtml#footnote-427)

This queering of the authority of the book, as well as of normative
institutional frameworks, felt like a liberating practice. It involved an open
call for pirated books, a set of workshops and a series of
lectures,[100](ch11.xhtml#footnote-426) which built a structure that allowed
the Piracy Project to share concerns about the wider developments at the
university and the government’s funding cuts in education, while at the same
time playfully subverting the dire and frustrating situation of a library that
was earmarked for closure.

    The fact that the library’s acquisition budget was cut made the pirating
    action even more meaningful. Many books were produced on the photocopy machine
    in the college. Other copies were sent to the project by artists, writers,
    curators and critics who responded to the international call. The initial
    agreement was to accept any submission, no matter how controversial, illegal
or unethical it might be. This invited a variety of approaches and
contradictory voices, which were muted neither by the self-censorship of their
originators, nor by the context in which they circulated. By resisting
    generalised judgments, the project tried to practice critique in Judith
    Butler’s sense. For Butler ‘judgments operate […] as ways to subsume a
    particular under an already constituted category, whereas critique asks after
    the occlusive constitution of the field of categories themselves. […] Critique
    is able to call foundations into question, denaturalise social and political
    hierarchy, and even establish perspectives by which a certain distance on the
    naturalised world can be had.’[101](ch11.xhtml#footnote-425)

To create such a space for the critique of the naturalisation of authorship as
intellectual property was one of the aims of the Piracy Project: firstly,
through the understanding, gained by discovering and exploring how other
cultures and nations deal with (or deliberately suspend) Western copyright,
that there is always a choice; and secondly, through the project’s collective
practice itself.

    ## Collective Authorship, Institutional Framing

The collaborative mode and collectivity within the Piracy Project
differentiate its artistic strategy in principle from the approaches of Prince
and Noland, who both operate as individuals claiming individual authorship for
their work.

    But how did the Piracy Project deal with the big authorship question? There
    was an interesting shift here: when the project still operated within the art
    college library, there was not much need for the articulation of authorship
    because it was embedded in a community who contributed in many different ways.
    Once the library was eventually shut after two years and the project was
    hosted by art institutions, a demand for the definition and framing of
    authorship arose.[102](ch11.xhtml#footnote-424) Here the relationship between
    the individual and the collective requires constant and careful
negotiation.[103](ch11.xhtml#footnote-423) Members of collectives naturally
develop different priorities, and the differences in time, labour and thought
invested by individuals make one contributor want to claim ‘more authorship’
than another. These conflicts require trust, transparency and a decision to
    value the less glamorous, more invisible and supportive work needed to
    maintain the project as much as the authoring of a text or speaking on a
    panel.[104](ch11.xhtml#footnote-422) We also do not necessarily speak with one
    voice. Andrea grew up in Peru and Brazil, and I in Germany, so we have
    different starting points and experiences: ‘we’ was therefore sometimes a
    problematic category.

    ## Our Relationships Felt Temporarily Transformed

    Walter Benjamin, in his text ‘The Author as Producer’, rightly called on
    intellectuals to take into account the means of production as much as the
    radical content of their writings.[105](ch11.xhtml#footnote-421) In
    theoretical writing, modes of production are too often ignored, which means in
    practice that theorists uncritically comply with the conventional
    micropolitics of publishing and dissemination. In other words, radical men and
    women write radical thoughts in books that are not radical at all in the way
    they are produced, published and disseminated. Cultural philosopher Gary Hall
    recounts with surprise a discussion headlined ‘Radical Publishing: What Are We
    Struggling For?’ that was held at the Institute of Contemporary Arts (ICA) in
    London in 2011. The invited panel speakers — Franco ‘Bifo’ Berardi, David
    Graeber, Peter Hallward, and Mark Fisher among others — were mostly concerned
    with, as Hall remembers,

political transformations elsewhere: in the past, the future, Egypt, […] but
    there was very little discussion of anything that would actually affect the
    work, business, role, and practices of the speakers themselves: radical ideas
    of publishing with transformed modes of production, say. As a result, the
    event in the end risked appearing mainly to be about a few publishers,
    including Verso, Pluto, and Zero Books, that may indeed publish radical
    political content but in fact operate according to quite traditional business
    models […] promoting their authors and products and providing more goods for
    the ticket-paying audience to buy. If the content of their publications is
    politically transformative, their publishing models certainly are not, with
    phenomena such as the student protests and ideas of communism all being turned
    into commodities to be marketed and sold.[106](ch11.xhtml#footnote-420)

That truly radical practices are possible is demonstrated by Susan Kelly when
she reflects on her involvement in collective practices of creative dissent
during the austerity protests in the UK in 2010, roughly at the same time, and
in the same climate, as the ICA panel took
place.[107](ch11.xhtml#footnote-419) Kelly describes occasions when artists
    and activists who were involved in political organising, direct action,
    campaigning, and claiming and organising alternative social and cultural
    spaces, came together. She sees these occasions as powerful moments that
    provided a glimpse into what the beginnings of a transversal and overarching
    movement might look like.[108](ch11.xhtml#footnote-418) It was an attempt to

    devise the new modes of action, and new kinds of objects from our emerging
    analyses of the situation while keeping the format open, avoiding the
    replication of given positions, hierarchies and roles of teachers, students,
    artists, onlookers and so on. […] We met people we had never met before, never
    worked with or known, and for many of us, our relationships felt temporarily
    transformed, our vulnerabilities exposed and prior positions and defenses left
    irrelevant, or at least suspended.[109](ch11.xhtml#footnote-417)

Exactly because these moments of protest produced actions and props that
escaped authorship, it was all the more alienating for the participants when a
collectively fabricated prop for a demonstration, a large papier-mâché
carrot[110](ch11.xhtml#footnote-416) that became a notorious image in the
press at the time, was retrospectively presented in an Artforum interview as
the ‘authored’ work of an individual artist.[111](ch11.xhtml#footnote-415)

    Kelly, correctly, is highly critical of such designation, which re-erects the
    blockages and boundaries connected to regimes of authorship that collective
    action aimed to dismantle in the first place. It is vital not to ignore the
    ‘complex set of open and contingent relationships, actions and manifestations
    that composed this specific collective political work.’ We would have to ask,
    to which of the activities in the making of the papier-mâché carrot would we
    attribute authorship? Is it the paper sourcing, the gluing, the painting, the
    carrying or the communicative work of organising the gatherings? What if the
    roles and practices are fluid and cannot be delimited like this?

    ## How Not to Assign Authorship?

    What about this text you are reading now? It is based on a five-year
collaboration to which numerous people contributed. Pirated books were given
to the Piracy Project, as were arguments, ideas, questions, knowledge and
practices, in the form of conversations and workshops.

    In that regard, this text is informed by a myriad of encounters in panel
    discussions and debates, as well as in the classrooms supported by
    institutions, activist spaces and art spaces.[112](ch11.xhtml#footnote-414)
    All these people donated their valuable ideas to its writing. Various drafts
    have been read and commented on by friends, PhD supervisors and an anonymous
    peer reviewer, and it has been edited by the publishers in the process of
    becoming part of the anthology you now hold in your hands or read on a screen.
    In that light, do I simply and uncritically affirm the mechanisms I am
    criticising by delivering a single-authored text to be printed and validated
    within the prevailing audit culture?

    What if I did not add my name to this text? If it went unsigned, so to speak?
    If anonymity replaced the designation of authorship? The text has not been
    written collectively or collaboratively, despite the conventional processes of
    seeking comments from friendly and critical readers. This is my text, but what
    would happen if I did not assert my right to be its named author?

    How would the non-visibility of the author matter to the reader? We are used
    to making judgements that are at least partially based on the gender, status,
    authority and reputation of a writer. There are also questions of liability
    and accountability with respect to the content of the
text.[113](ch11.xhtml#footnote-413) Given the long struggle of women writers
and writers of colour to gain the right to be acknowledged as authors, the act
of not signing my text might be controversial or even counterproductive. It
would also go against the grain of scholarship that aims to decolonise the
canon or to fight the prevailing gender inequality in scholarly
publishing.[114](ch11.xhtml#footnote-412) What is more, we have to ask who is
actually in a position to afford not to assign individual names to works,
given that authorship — as discussed above — is used as a marker for
professional survival and advancement.

In this specific context, however, and as practice-based research, it would be
worth testing practically what such an orphaned text would trigger within
dominant infrastructures of publishing and validation. How would
bibliographers catalogue such a text? How could it be referenced and cited?
And how would it live online with respect to search engines, if there is no
searchable name attached to it? Most of our current research repositories
don’t allow the upload of author-less texts, instead returning error messages
such as ‘The author field must be completed’. Or they require a personalised
login, which automatically tags the registered username to the uploaded text.

What if I used a pseudonym, a common practice throughout literary
history?[115](ch11.xhtml#footnote-411) Multiple-identity pseudonyms such as
‘Karen Eliot’ or ‘Monty Cantsin’, used by the Neoist movement in the 1980s and
1990s, could be interesting, as they provide a joint name under which anybody
could sign her or his work without revealing the author’s
identity.[116](ch11.xhtml#footnote-410) This strategy of using a
multi-identity avatar is currently practiced by a decentralised, international
collective of hacktivists operating under the name ‘Anonymous’. The
    ‘elimination of the persona [of the author], and by extension everything
    associated with it, such as leadership, representation, and status, is’,
    according to Gabriella Coleman, ‘the primary ideal of
    Anonymous.’[117](ch11.xhtml#footnote-409)

    What if we adopted such models for academia? If we unionised and put in place
    a procedure to collectively publish our work anonymously, for example under a
    multi-identity avatar instead of individual names — how would such a text,
    non-attributable as it is, change the policies of evaluation and assessment
    within the knowledge economy? Would the lack of an identifiable name allow the
    text to resist being measured as (or reduced to) a quantifiable auditable
    ‘output’ and therefore allow the issue of individualistic authorship to be
    politicised? Or would it rather, as an individual and solitary act, be
    subjected — again — to the regimes of individualisation? It seems that only if
    not assigning individual authorship became a widespread and unionised practice
    could procedures be put in place that acknowledged non-authored, collective,
    non-competitive practices.[118](ch11.xhtml#footnote-408)

    However, as tempting and urgent as such a move might appear in order to allow
    individualistic authorship to be politicised, such a step also produces a
challenging double bind. According to Sara Ahmed, it actually does matter who
is speaking. ‘The “who” does make a difference, not in the form of an
ontology of the individual, but as a marker of a specific location from which
the subject writes’.[119](ch11.xhtml#footnote-407)

    From a feminist and postcolonial perspective, the detachment of writing from
    the empirical body is problematic. Ahmed points out: ‘The universalism of the
    masculine perspective relies precisely on being disembodied, on lacking the
    contingency of a body. A feminist perspective would surely emphasise the
    implication of writing in embodiment, in order to re-historicise this supposed
    universalism, to locate it, and to expose the violence of its contingency and
    particularity (by declaring some-body wrote this text, by asking which body
wrote this text).’[120](ch11.xhtml#footnote-406) Gayatri Spivak, for example,
insists on marking the positionality of a speaking subject in order to account
for the often unacknowledged Eurocentrism of Western
philosophy.[121](ch11.xhtml#footnote-405)

    If we acknowledged this double bind, we might eventually be able to invent
    modes of being and working together that recognise the difference of the ’who’
    that writes, and at the same time might be able to move on from the question
    ‘how can we get rid of the author’ to inventing processes of subjectivation
    that we want to support and instigate.

    ## Works Cited

    (2 March 2012), ‘Etymology of Pirate’, in English Words of (Unexpected) Greek
    Origin,

    Ahmed, Sara (2004) Differences That Matter, Feminist Theory and Postmodernism
    (Cambridge: Cambridge University Press).

    Alarcon, Daniel (14 January 2010) ‘Life Among the Pirates’, Granta Magazine,


Albanese, Andrew (11 January 2011) ‘J. D. Salinger Estate, Swedish Author
Settle Copyright Suit’ in Publishers Weekly.

    Allen, Greg, ed. (2012) The Deposition of Richard Prince in the Case of Cariou
    v. Prince et al. (Zurich: Bookhorse).

AND Publishing (4 May 2011) ‘AND Publishing announces The Piracy Lectures’,
Art Agenda.

    Andersson, Jonas (2009) ‘For the Good of the Net: The Pirate Bay as a
    Strategic Sovereign’, Culture Machine 10, 64–108.

    Aufderheide, Patricia, Peter Jaszi, Bryan Bello and Tijana Milosevic (2014)
    Copyright, Permissions, and Fair Use Among Visual Artists and the Academic and
    Museum Visual Arts Communities: An Issues Report (New York: College Art
    Association).

    Barron, Anne (1998) ‘No Other Law? Author–ity, Property and Aboriginal Art’,
    in Lionel Bently and Spyros Maniatis (eds.), Intellectual Property and Ethics
    (London: Sweet and Maxwell), pp. 37–88.

    Barthes, Roland (1967) ‘The Death of the Author’, Aspen, [n.p.],


    Benjamin, Walter (1970) ‘The Author as Producer’ in New Left Review 1.62,
    83–96.

    Bently, Lionel (1994) ‘Copyright and the Death of the Author in Literature and
    Law’, Modern Law Review 57, 973–86.

    — Andrea Francke, Sergio Muñoz Sarmiento, Prodromos Tsiavos and Eva Weinmayr
    (2014) ‘A Day at the Courtroom’, in Andrea Francke and Eva Weinmayr (eds.),
    Borrowing, Poaching, Plagiarising, Pirating, Stealing, Gleaning, Referencing,
    Leaking, Copying, Imitating, Adapting, Faking, Paraphrasing, Quoting,
    Reproducing, Using, Counterfeiting, Repeating, Translating, Cloning (London:
    AND Publishing), pp. 91–133.

    Biagioli, Mario (2014) ‘Plagiarism, Kinship and Slavery’, Theory Culture
    Society 31.2/3, 65–91,

    Buchloh, Benjamin (2009) ‘Pictures’, in David Evans (ed.), Appropriation,
    Documents of Contemporary Art (London: Whitechapel Gallery), originally
    published in October 8 (1979), 75–88.

Buskirk, Martha (9 December 2013) ‘Marc Jancou, Cady Noland, and the Case of
the Authorless Artwork’, Hyperallergic.

    Butler, Judith (2001) ‘What is Critique? An Essay on Foucault’s Virtue’,
    Transversal 5,

    Cariou, Patrick (2009) Yes Rasta (New York: powerHouse Books).

    Chan, Sewell (1 July 2009) ‘Judge Rules for J. D. Salinger in “Catcher”
    Copyright Suit’, New York Times,


    Coleman, Gabriella (2014) Hacker, Hoaxer, Whistleblower, Spy: The Many Faces
    of Anonymous (London and New York: Verso).

Corbett, Rachel (14 November 2012) ‘New York Supreme Court Judge Dismisses
Marc Jancou’s Lawsuit Against Sotheby’s’.

Cariou v Prince, et al., No. 11–1197-cv,
http://www.ca2.uscourts.gov/decisions/isysquery/f6e88b8b-48af-401c-96a0-54d5007c2f33/1/doc/11-1197_complete_opn.pdf

    Craig, Carys J. (2007) ‘Symposium: Reconstructing the Author-Self: Some
    Feminist Lessons for Copyright Law’, American University Journal of Gender,
    Social Policy & the Law 15.2, 207–68.

    Di Franco, Karen (2014) ‘The Library Medium’, in Andrea Francke and Eva
    Weinmayr (eds.), Borrowing, Poaching, Plagiarising, Pirating, Stealing,
    Gleaning, Referencing, Leaking, Copying, Imitating, Adapting, Faking,
    Paraphrasing, Quoting, Reproducing, Using, Counterfeiting, Repeating,
    Translating, Cloning (London: AND Publishing), pp. 77–90.

Fitzpatrick, Kathleen (2018) ‘Generous Thinking: The University and the Public
Good’, Humanities Commons.

    Foster, Hal (1985) ‘(Post)modern Polemics’, in Recodings: Art, Spectacle,
    Cultural Politics (Port Townsend, WA: Bay Press), pp. 121–38.

Foucault, Michel (1977) ‘What Is an Author?’, in Donald F. Bouchard (ed.),
Language, Counter-Memory, Practice: Selected Essays and Interviews
(Ithaca, NY: Cornell University Press), pp. 113–38.

Genette, Gérard (1997) Paratexts, Thresholds of Interpretation (Cambridge:
Cambridge University Press).

Goldsmith, Kenneth (19 April 2012) ‘Richard Prince’s Latest Act of
Appropriation: The Catcher in the Rye’, Harriet, A Poetry Blog.

Gordon, Kim (18 June 2012) ‘Band Paintings: Kim Gordon Interviews Richard
Prince’, Interview Magazine,
http://www.interviewmagazine.com/art/kim-gordon-richard-prince

    Halbert, Deborah J. (2005) Resisting Intellectual Property (London:
    Routledge).

    Hall, Gary (2016) Pirate Philosophy, for a Digital Posthumanities (Cambridge,
    MA and London: The MIT Press).

Harrison, Nate (29 June 2012) ‘The Pictures Generation, the Copyright Act of
1976, and the Reassertion of Authorship in Postmodernity’, art&education.net.

    Heller-Roazen, Daniel (2009) The Enemy of All: Piracy and the Law of Nations
    (New York: Zone Books).

    Hemmungs Wirtén, Eva (2004) No Trespassing, Authorship, Intellectual Property
    Rights and the Boundaries of Globalization (Toronto: University of Toronto
    Press).

    Home, Stewart and Florian Cramer (1995) House of Nine Squares: Letters on
    Neoism, Psychogeography & Epistemological Trepidation,


    Kelly, Susan (2005) ‘The Transversal and the Invisible: How do You Really Make
    a Work of Art that Is not a Work of Art?’, Transversal 1,


Kelly, Susan (2013) ‘“But that was my idea!” Problems of Authorship and
Validation in Contemporary Practices of Creative Dissent’, Parallax 19.2,
53–69, https://doi.org/10.1080/13534645.2013.778496

Kennedy, Randy (18 March 2014) ‘Richard Prince Settles Copyright Suit With
Patrick Cariou Over Photographs’, New York Times,
https://artsbeat.blogs.nytimes.com/2014/03/18/richard-prince-settles-copyright-suit-with-patrick-cariou-over-photographs/

    Klinger, Cornelia (2009) ‘Autonomy-Authenticity-Alterity: On the Aesthetic
    Ideology of Modernity’ in Modernologies: Contemporary Artists Researching
    Modernity and Modernism (Barcelona: Museu d’Art Contemporani de Barcelona),
    pp. 26–28.

    Krauss, Annette (2017) ‘Sites for Unlearning: On the Material, Artistic and
    Political Dimensions of Processes of Unlearning’, PhD thesis, Academy of Fine
    Arts Vienna.

    Krupnick, Mark (28 January 2010) ‘JD Salinger Obituary’, The Guardian,


Kuo, Michelle and David Graeber (Summer 2012) ‘Michelle Kuo Talks with David
Graeber’, Artforum International.

    Levine, Sherrie (2009) ‘Statement//1982’, in David Evans (ed.), Appropriation,
    Documents of Contemporary Art (London: Whitechapel Gallery), p. 81.

Lobato, Ramon (2014) ‘The Paradoxes of Piracy’, in Lars Eckstein and Anja
Schwarz (eds.), Postcolonial Piracy: Media Distribution and Cultural
Production in the Global South (London and New York: Bloomsbury), pp. 121–34.

Lorey, Isabell (2015) State of Insecurity: Government of the Precarious
(London: Verso).

    Lovink, Geert and Ross, Andrew (eds.) (2007) ‘Organic Intellectual Work’, in
    My Creativity Reader: A Critique of Creative Industries (Amsterdam: Institute
    of Network Cultures), pp. 225–38,

Marc Jancou Fine Art Ltd. v Sotheby’s, Inc. (13 November 2012) New York State
Unified Court System, 2012 NY Slip Op 33163(U).

Mauk, Ben (2014) ‘Who Owns This Image?’, The New Yorker, 12 February,


    McLuhan, Marshall (1966) ‘Address at Vision 65’, American Scholar 35, 196–205.

Memory of the World, http://memoryoftheworld.org

    Muñoz Sarmiento, Sergio and Lauren van Haaften-Schick (2013–2014) ‘Cariou v.
    Prince: Toward a Theory of Aesthetic-Judicial Judgements’, in Texas A&M Law
    Review, vol. 1.

Munro, Cait (10 November 2014) ‘Is Cady Noland More Difficult To Work With
Than Richard Prince?’, artNet news.

Myers, Julian (26 August 2009) Four Dialogues 2: On AAAARG, San Francisco
Museum of Modern Art — Open Space.

Nedelsky, Jennifer (1993) ‘Reconceiving Rights as Relationship’, Review of
Constitutional Studies / Revue d’études constitutionnelles 1.1, 1–26,


    Open Book Publishers Authors’ Guide,


Piracy Project Catalogue / No se diga a nadie,
http://andpublishing.org/PublicCatalogue/PCat_record.php?cat_index=99

Piracy Project Catalogue / Camille Bondon, Jacques Rancière: le maître
ignorant,


    Piracy Project Catalogue / Neil Chapman, Deleuze, Proust and Signs,


    Piracy Project (19 April 2012) ‘The Impermanent Book’, Rhizome,


    Policante, Amedeo (2015) The Pirate Myth, Genealogies of an Imperial Concept
    (Oxford and New York: Routledge).

Precarious Workers Brigade (24 April 2011) ‘Fragments Toward an Understanding
of a Week that Changed Everything…’, e-flux.

    Prince, Richard (13 April 2015) Birdtalk,


    Rancière, Jacques (2010) Education, Truth and Emancipation (London:
    Continuum).

— (2008) The Ignorant Schoolmaster: Five Lessons in Intellectual Emancipation
(Stanford: Stanford University Press).

    Raunig, Gerald (2002) ‘Transversal Multitudes’, Transversal 9,


    Rose, Mark (1993) Authors and Owners, The Invention of Copyright (Cambridge MA
    and London: Harvard University Press).

    Schor, Naomi (1989) ‘Dreaming Dissymmetry: Barthes, Foucault and Sexual
    Difference’, in Elizabeth Weed (ed.), Coming to Terms: Feminism, Theory,
    Politics (London: Routledge), pp. 47–58.

Sollfrank, Cornelia (2012) ‘Copyright Cowboys Performing the Law’, Journal of
New Media Caucus 8.2.

Spivak, Gayatri Chakravorty (1988) ‘Can the Subaltern Speak?’, in Cary Nelson
    and Lawrence Grossberg (eds.), Marxism and the Interpretation of Culture
    (Urbana: University of Illinois Press), pp. 271–313.

    Strathern, Marilyn (2005) Kinship, Law, and the Unexpected: Relatives Are
    Always a Surprise (Cambridge: Cambridge University Press).

    Thoburn, Nicholas (2016) Anti-Book, On the Art and Politics of Radical
    Publishing (Minneapolis and London: University of Minnesota Press).

UK Government, Department for Digital, Culture, Media and Sport (2015)
Creative Industries Economic Estimates January 2015.

— (1998) The Creative Industries Mapping Document.

    US Copyright Act (1976, amended 2016),

Wang, Elizabeth H. (1990) ‘(Re)Productive Rights: Copyright and the Postmodern
Artist’, Columbia-VLA Journal of Law & the Arts 14.2, 261–81,
https://heinonline.org/HOL/Page?handle=hein.journals/cjla14&div=10&g_sent=1&casa_token=&collection=journals

Waugh, Seth (2007) ‘Sponsor Statement’, in The Solomon R. Guggenheim
Foundation (ed.), Richard Prince (Ostfildern: Hatje Cantz).

    Wellmon, Chad and Andrew Piper (21 July 2017) ‘Publication, Power, Patronage:
    On Inequality and Academic Publishing’, Critical Inquiry,


    Wright, Stephen (2013) Towards a Lexicon of Usership (Eindhoven: Van
    Abbemuseum).

Zwick, Tracy (29 August 2013) ‘Sotheby’s Wins in Dispute with Jancou Gallery
over Cady Noland Artwork’, Art in America,
https://www.artinamericamagazine.com/news-features/news/sothebys-wins-in-dispute-with-jancou-gallery-over-cady-noland-artwork/

    * * *

    [1](ch11.xhtml#footnote-525-backlink) /social-turn>

    [2](ch11.xhtml#footnote-524-backlink) Carys J. Craig, ‘Symposium:
    Reconstructing the Author-Self: Some Feminist Lessons for Copyright Law’,
American University Journal of Gender, Social Policy & the Law 15.2 (2007),
207–68 (p. 224).

    [3](ch11.xhtml#footnote-523-backlink) Mark Rose, Authors and Owners, The
    Invention of Copyright (Cambridge, MA and London: Harvard University Press,
    1993), p. 142.

    [4](ch11.xhtml#footnote-522-backlink) Craig, ‘Symposium: Reconstructing the
    Author-Self’, p. 261.

    [5](ch11.xhtml#footnote-521-backlink) Ibid., p. 267.

    [6](ch11.xhtml#footnote-520-backlink) See also cultural theorist Gary Hall’s
discussion of Pirate Philosophy, as a potential way forward to overcome such
simplifying dichotomies. ‘How can we [theorists] operate differently with
    regard to our own work, business, roles, and practices to the point where we
    actually begin to confront, think through, and take on (rather than take for
    granted, forget, repress, ignore, or otherwise marginalize) some of the
    implications of the challenge that is offered by theory to fundamental
    humanities concepts such as the human, the subject, the author, the book,
    copyright, and intellectual property, for the ways in which we create,
    perform, and circulate knowledge and research?’ Gary Hall, Pirate Philosophy,
    for a Digital Posthumanities (Cambridge, MA and London: The MIT Press, 2016),
    p. 16.

[7](ch11.xhtml#footnote-519-backlink) Here ‘the producer is being imagined as
the origin of the product’ (Strathern, p. 156). Therefore ‘in law,
    originality is simply the description of a causal relationship between a
    person and a thing: to say that a work is original in law is to say nothing
    more than that it originates from [can be attributed to] its creator’ (Barron,
    p. 56). And conversely, in law ‘there can be no ‘copyright work’ […] without
    some author who can be said to originate it’ (ibid., p. 55). Anne Barron, ‘No
    Other Law? Author–ity, Property and Aboriginal Art’, in Lionel Bently and
    Spyros Maniatis (eds.), Intellectual Property and Ethics (London: Sweet and
    Maxwell, 1998), pp. 37–88, and Marilyn Strathern, Kinship, Law, and the
    Unexpected: Relatives Are Always a Surprise (Cambridge: Cambridge University
    Press, 2005).

See also Mario Biagioli’s and Marilyn Strathern’s discussion of the author-
work relationship as kinship in Mario Biagioli, ‘Plagiarism, Kinship and
Slavery’, Theory, Culture & Society 31.2–3 (2014), 65–91.


[8](ch11.xhtml#footnote-518-backlink) US Copyright Law, Article 17, §102 (a),
amendment 2016, https://www.copyright.gov/title17/

[9](ch11.xhtml#footnote-517-backlink) ‘In no case does copyright protection
for an original work of authorship extend to any idea, procedure, process,
system, method of operation, concept, principle, or discovery, regardless of
the form in which it is described, explained, illustrated, or embodied in such
work.’ US Copyright Law, Article 17, §102 (b), amendment 2016,
https://www.copyright.gov/title17/


[10](ch11.xhtml#footnote-516-backlink) Susan Kelly, ‘“But that was my idea!”
Problems of Authorship and Validation in Contemporary Practices of Creative
Dissent’, Parallax 19.2 (2013), 53–69,
https://doi.org/10.1080/13534645.2013.778496. All references to this text
refer to the slightly different version published on
[academia.edu](http://academia.edu), p. 6.

[11](ch11.xhtml#footnote-515-backlink) Kathleen Fitzpatrick’s working method
with her book Generous Thinking: A Radical Approach to Saving the University
(Baltimore: Johns Hopkins University Press, 2019) presents an interesting
alternative to standard procedures in scholarly publishing. She published the
draft of her book online, inviting readers to comment. This could potentially
become a model for multiple authorship as well as an alternative to the
standard peer review procedures. I am quoting from the published draft
version: Kathleen Fitzpatrick, ‘Critique and Competition’, in Generous
Thinking: The University and the Public Good (Humanities Commons, 2018),
paragraph 1.

    [12](ch11.xhtml#footnote-514-backlink) Kelly, ‘“But that was my idea!”’, p. 6.

[13](ch11.xhtml#footnote-513-backlink) I refer in this chapter to US copyright
law, unless indicated otherwise.

    [14](ch11.xhtml#footnote-512-backlink) He also released the book with Printed
    Matter at the New York Art Book Fair in 2011.

[15](ch11.xhtml#footnote-511-backlink) It took Prince and his collaborator
John McWhinnie over a year to find a printer with the guts to print this
facsimile. The one he eventually found was based in Iceland.

[16](ch11.xhtml#footnote-510-backlink) Prince states in his blog entry ‘Second
Thoughts on Being Original’ (Birdtalk, 13 April 2015) that he made 300 copies:
‘My plan was to show up once a week, same day, same time, same place, until
all three hundred copies were gone.’ Booksellers’ web pages, such as those of
Printed Matter, N.Y. and
[richardprincebooks.com](http://richardprincebooks.com), list an edition of
500.

[17](ch11.xhtml#footnote-509-backlink) Mark Krupnick, ‘JD Salinger Obituary’,
The Guardian, 28 January 2010.

[18](ch11.xhtml#footnote-508-backlink) Kim Gordon, ‘Band Paintings: Kim Gordon
Interviews Richard Prince’, Interview Magazine, 18 June 2012,
http://www.interviewmagazine.com/art/kim-gordon-richard-prince

[19](ch11.xhtml#footnote-507-backlink) The inside flap of his replica stated a
price of $62. On this afternoon on the sidewalk outside Central Park, he sold
his copies for $40. When I was browsing the shelves at the New York art
bookshop Printed Matter in 2012, I saw copies for $200; in 2018 it is priced
at $1200, and $3500 for a signed copy, on Abebooks,
https://www.abebooks.co.uk/servlet/SearchResults?isbn=&an=richard%20prince&tn=catcher%20rye&n=100121503&cm_sp=mbc-_-ats-_-used

[20](ch11.xhtml#footnote-506-backlink) Kenneth Goldsmith, ‘Richard Prince’s
Latest Act of Appropriation: The Catcher in the Rye’, Harriet: A Poetry Blog,
19 April 2012.

    [21](ch11.xhtml#footnote-505-backlink) In 1977 Douglas Crimp curated the
    exhibition ‘Pictures’ at Artists’ Space in New York with artists Troy
    Brauntuch, Jack Goldstein, Sherrie Levine, Robert Longo and Philip Smith.
    Artist Cornelia Sollfrank interprets ‘the non-specific title of the show’ as a
    first indication of the aesthetic strategies presented in the exhibition. The
    presentation of reproduced visual materials marked, according to Sollfrank, ‘a
major challenge to the then predominant modernist discourse.’ Cornelia
Sollfrank, ‘Copyright Cowboys Performing the Law’, Journal of New Media Caucus
8.2 (2012).

    [22](ch11.xhtml#footnote-504-backlink) As Benjamin Buchloh writes ‘these
    processes of quotation, excerption, framing and staging that constitute the
    strategies of the work […] necessitate [the] uncovering strata of
    representation. Needless to say we are not in search of sources of origin, but
    of structures of signification: underneath each picture there is always
    another picture.’ Benjamin Buchloh, ‘Pictures’, in David Evans (ed.),
    Appropriation, Documents of Contemporary Art (London: Whitechapel Gallery,
2009), p. 78. Originally published in October 8 (1979), 75–88.

    [23](ch11.xhtml#footnote-503-backlink) October’s editors — including among
    others Rosalind Krauss, Hal Foster, Craig Owens, and Benjamin Buchloh —
    provided a theoretical context for this emerging art by introducing French
    structuralist and poststructuralist theory, i.e. the writings of Roland
Barthes, Michel Foucault, and Jacques Derrida to the English-speaking world.

[24](ch11.xhtml#footnote-502-backlink) Nate Harrison, ‘The Pictures
Generation, the Copyright Act of 1976, and the Reassertion of Authorship in
Postmodernity’, art&education.net, 29 June 2012.

    [25](ch11.xhtml#footnote-501-backlink) Sherrie Levine, ‘Statement//1982’, in
    David Evans (ed.), Appropriation, Documents of Contemporary Art (London:
    Whitechapel Gallery, 2009), p. 81.

[26](ch11.xhtml#footnote-500-backlink) Nate Harrison, ‘The Pictures
Generation, the Copyright Act of 1976, and the Reassertion of Authorship in
Postmodernity’, art&education.net, 29 June 2012.

    [27](ch11.xhtml#footnote-499-backlink) Ibid.

[28](ch11.xhtml#footnote-498-backlink) Quoting this line from Prince’s book
Why I Go to the Movies Alone (New York: Barbara Gladstone Gallery, 1994), the
sponsor statement in the catalogue for Prince’s solo show Spiritual America at
The Guggenheim Museum in New York continues: ‘although his [work is] primarily
appropriated […] from popular culture, [it] convey[s] a deeply personal
vision. His selection of mediums and subject matter […] suggest a uniquely
individual logic […] with wit and an idiosyncratic eye, Richard Prince has
that rare ability to analyze and translate contemporary experience in new and
unexpected ways.’ Seth Waugh, ‘Sponsor Statement’, in The Solomon R.
Guggenheim Foundation (ed.), Richard Prince (Ostfildern: Hatje Cantz, 2007).

    [29](ch11.xhtml#footnote-497-backlink) See Hal Foster, ‘(Post)modern
    Polemics’, in Recodings: Art, Spectacle, Cultural Politics (Port Townsend, WA:
    Bay Press, 1985).

    [30](ch11.xhtml#footnote-496-backlink) See note 47.

    [31](ch11.xhtml#footnote-495-backlink) One might argue that this performative
    act of claiming intellectual property is an attempt to challenge J. D.
    Salinger’s notorious protectiveness about his writing. Salinger sued the
    Swedish writer Fredrik Colting successfully for copyright infringement. Under
    the pseudonym John David California, Colting had written a sequel to The
    Catcher in the Rye. The sequel, 60 Years Later Coming Through The Rye, depicts
    the protagonist Holden Caulfield’s adventures as an old man. In 2009, the US
    District Court Judge in Manhattan, Deborah A. Batts, issued a preliminary
    injunction indefinitely barring the publication, advertising or distribution
    of the book in the US. See Sewell Chan, ‘Judge Rules for J. D. Salinger in
“Catcher” Copyright Suit’, The New York Times, 1 July 2009.


    ‘In a settlement agreement reached between Salinger and Colting in 2011,
    Colting has agreed not to publish or otherwise distribute the book, e-book, or
    any other editions of 60 Years Later in the U.S. or Canada until The Catcher
    in the Rye enters the public domain. Notably, however, Colting is free to sell
    the book in other international territories without fear of interference, and
    a source has told Publishers Weekly that book rights have already been sold in
    as many as a half-dozen territories, with the settlement documents included as
    proof that the Salinger Estate will not sue. In addition, the settlement
    agreement bars Colting from using the title “Coming through the Rye”; forbids
    him from dedicating the book to Salinger; and would prohibit Colting or any
    publisher of the book from referring to The Catcher in the Rye, Salinger, the
    book being “banned” by Salinger, or from using the litigation to promote the
book.’ Andrew Albanese, ‘J. D. Salinger Estate, Swedish Author Settle
Copyright Suit’, Publishers Weekly, 11 January 2011.

    [32](ch11.xhtml#footnote-494-backlink) Elizabeth H. Wang, ‘(Re)Productive
    Rights: Copyright and the Postmodern Artist’, Columbia-VLA Journal of Law &
    the Arts 14.2 (1990), 261–81 (p. 281),
    [https://heinonline.org/HOL/Page?handle=hein.journals/cjla14&div=10&g_sent=1&casa_token=&collection=journals](https://heinonline.org/HOL/Page?handle=hein.journals/cjla14&div=10&g_sent=1&casa_token=&collection=journals)

    [33](ch11.xhtml#footnote-493-backlink) Sollfrank, ‘Copyright Cowboys’.

    [34](ch11.xhtml#footnote-492-backlink) Thirty paintings created by Prince
    contained forty-one of Cariou’s photographs. The images had been taken from
    Cariou’s book Yes Rasta (Brooklyn: powerHouse Books, 2000) and used by Prince
    in his painting series Canal Zone, which was shown at Gagosian Gallery, New
    York, in 2008.

    [35](ch11.xhtml#footnote-491-backlink) It might be no coincidence (or then
    again, it might) that the district court judge in this case, Deborah Batts, is
    the same judge who ruled in the 2009 case in which Salinger successfully
    brought suit for copyright infringement against Swedish author Fredrik Colting
    for 60 Years Later Coming Through the Rye, a sequel to Salinger’s book. See
    note 31.

[36](ch11.xhtml#footnote-490-backlink) ‘In determining whether the use made of
a work in any particular case is a fair use the factors to be considered shall
include — (1) the purpose and character of the use, including whether such use
is of a commercial nature or is for nonprofit educational purposes; (2) the
nature of the copyrighted work; (3) the amount and substantiality of the
portion used in relation to the copyrighted work as a whole; and (4) the
effect of the use upon the potential market for or value of the copyrighted
work.’ US Copyright Act of 1976, amended 2016,
https://www.copyright.gov/title17/


[37](ch11.xhtml#footnote-489-backlink) ‘What is critical is how the work in
question appears to the reasonable observer, not simply what an artist might
say about a particular piece or body of work.’ Cariou v Prince, et al., court
document, No. 11–1197-cv, p. 14,
http://www.ca2.uscourts.gov/decisions/isysquery/f6e88b8b-48af-401c-96a0-54d5007c2f33/1/doc/11-1197_complete_opn.pdf

    [38](ch11.xhtml#footnote-488-backlink) The court opinion states: ‘These
    twenty-five of Prince’s artworks manifest an entirely different aesthetic from
    Cariou’s photographs. Where Cariou’s serene and deliberately composed
    portraits and landscape photographs depict the natural beauty of Rastafarians
    and their surrounding environs, Prince’s crude and jarring works, on the other
    hand, are hectic and provocative. Cariou’s black-and-white photographs were
    printed in a 9 1/2” x 12” book. Prince has created collages on canvas that
    incorporate color, feature distorted human and other forms and settings, and
    measure between ten and nearly a hundred times the size of the photographs.
    Prince’s composition, presentation, scale, color palette, and media are
    fundamentally different and new compared to the photographs, as is the
    expressive nature of Prince’s work.’ Ibid., pp. 12–13.

    [39](ch11.xhtml#footnote-487-backlink) Prince’s deposition testimony stated
    that he ‘do[es]n’t really have a message,’ that he was not ‘trying to create
    anything with a new meaning or a new message,’ and that he ‘do[es]n’t have any
[…] interest in [Cariou’s] original intent.’ Court opinion, p. 13. For full
deposition see Greg Allen (ed.), The Deposition of Richard Prince in the Case
of Cariou v. Prince et al. (Zurich: Bookhorse, 2012).

[40](ch11.xhtml#footnote-486-backlink) The court opinion includes a dissent by
Circuit Judge Clifford Wallace, sitting by designation from the US Court of
Appeals for the Ninth Circuit: ‘I, for one, do not believe that I am in a
position to make these fact- and opinion-intensive decisions on the twenty-
five works that passed the majority’s judicial observation. […] nor am I
trained to make art opinions ab initio.’ Ibid., p. 5.

    ‘Furthermore, Judge Wallace questions the majority’s insistence on analyzing
    only the visual similarities and differences between Cariou’s and Prince’s art
    works, “Unlike the majority, I would allow the district court to consider
    Prince’s statements reviewing fair use … I see no reason to discount Prince’s
    statements as the majority does.” In fact, Judge Wallace remarks that he views
    Prince’s statements as “relevant to the transformativeness analysis.” Judge
    Wallace does not believe that a simple visual side-by-side analysis is enough
    because this would call for judges to “employ [their] own artistic
Judgment[s].”’ Sergio Muñoz Sarmiento and Lauren van Haaften-Schick, citing
court documents, in ‘Cariou v. Prince: Toward a Theory of Aesthetic-Judicial
Judgements’, Texas A&M Law Review 1 (2013–2014), p. 948.

    [41](ch11.xhtml#footnote-485-backlink) Court opinion, p. 18.

    [42](ch11.xhtml#footnote-484-backlink) Ibid., p. 17.

    [43](ch11.xhtml#footnote-483-backlink) Ibid., pp. 4–5.

    [44](ch11.xhtml#footnote-482-backlink) Ibid., p. 18.

    [45](ch11.xhtml#footnote-481-backlink) Muñoz Sarmiento and van Haaften-Schick,
    ‘Aesthetic-Judicial Judgements’, p. 945.

    [46](ch11.xhtml#footnote-480-backlink) Court opinion, p. 15.

    [47](ch11.xhtml#footnote-479-backlink) The court opinion states: ‘He is a
    leading exponent of this genre and his work has been displayed in museums
    around the world, including New York’s Solomon R. Guggenheim Museum and
    Whitney Museum, San Francisco’s Museum of Modern Art, Rotterdam’s Museum
    Boijmans van Beuningen, and Basel’s Museum für Gegenwartskunst.’ Ibid., p. 5.

    [48](ch11.xhtml#footnote-478-backlink) Muñoz Sarmiento and van Haaften-Schick,
    ‘Aesthetic-Judicial Judgements’, p. 945.

[49](ch11.xhtml#footnote-477-backlink) The New York Times reports that Prince
did not have to destroy the five paintings at issue. Randy Kennedy, ‘Richard
Prince Settles Copyright Suit With Patrick Cariou Over Photographs’, New York
Times, 18 March 2014,
https://artsbeat.blogs.nytimes.com/2014/03/18/richard-prince-settles-copyright-suit-with-patrick-cariou-over-photographs/?_php=true&_type=blogs&_r=0

    [50](ch11.xhtml#footnote-476-backlink) Court opinion, p. 13.

    [51](ch11.xhtml#footnote-475-backlink) Sollfrank, ‘Copyright Cowboys’.

[52](ch11.xhtml#footnote-474-backlink) In 2016 photographer Donald Graham
filed a lawsuit against Prince with regard to Prince’s use of Graham’s
Instagram pictures. Again, the image shows a photographic representation of
Rastafarians, and, as in the Cariou case, Prince appropriates Graham’s and
Cariou’s cultural appropriation of Rastafarian culture.

[53](ch11.xhtml#footnote-473-backlink) Cait Munro quotes Cady Noland from
Sarah Thornton’s book 33 Artists in 3 Acts. Noland gave Thornton her first
interview for twenty-four years: ‘Noland, an extremely talented artist, has
become so obsessed with her old work that she’s been unable to create anything
new in years. She admits to Thornton that “I’d like to get into a studio and
start making work,” but that tracking the old work has become a “full-time
thing”.’ Cait Munro, ‘Is Cady Noland More Difficult To Work With Than Richard
Prince?’, artNet news, 10 November 2014.

[54](ch11.xhtml#footnote-472-backlink) Martha Buskirk, ‘Marc Jancou, Cady
Noland, and the Case of the Authorless Artwork’, Hyperallergic, 9 December
2013.

[55](ch11.xhtml#footnote-471-backlink) Marc Jancou Fine Art Ltd. v Sotheby’s,
Inc., New York State Unified Court System, 2012 NY Slip Op 33163(U), 13
November 2012.

[56](ch11.xhtml#footnote-470-backlink) ‘The author of a work of visual art —
(1) shall have the right — (A) to claim authorship of that work, and (B) to
prevent the use of his or her name as the author of any work of visual art
which he or she did not create; (2) shall have the right to prevent the use of
his or her name as the author of the work of visual art in the event of a
distortion, mutilation, or other modification of the work which would be
prejudicial to his or her honor or reputation; and (3) subject to the
limitations set forth in section 113(d), shall have the right — (A) to prevent
any intentional distortion, mutilation, or other modification of that work
which would be prejudicial to his or her honor or reputation, and any
intentional distortion, mutilation, or modification of that work is a
violation of that right, and (B) to prevent any destruction of a work of
recognized stature, and any intentional or grossly negligent destruction of
that work is a violation of that right’, from US Code, Title 17, § 106A, Legal
Information Institute, Cornell Law School.


    [57](ch11.xhtml#footnote-469-backlink) Buskirk, ‘Marc Jancou, Cady Noland’.

    [58](ch11.xhtml#footnote-468-backlink) Ibid.

[59](ch11.xhtml#footnote-467-backlink) Jancou’s claim was dismissed by the New
York Supreme Court in the same year. The Court’s decision was based on the
language of Jancou’s consignment agreement with Sotheby’s, which gave
Sotheby’s the right to withdraw Cowboys Milking ‘at any time before the sale’
if, in Sotheby’s judgment, ‘there is doubt as to its authenticity or
attribution.’ Tracy Zwick, ‘Art in America’, 29 August 2013,
https://www.artinamericamagazine.com/news-features/news/sothebys-wins-in-dispute-with-jancou-gallery-over-cady-noland-artwork/

[60](ch11.xhtml#footnote-466-backlink) It might be important here to recall
that both Richard Prince and Cady Noland can afford the high costs of a court
case thanks to their success in the art market.

[61](ch11.xhtml#footnote-465-backlink) The legal grounds for Noland’s move,
the federal Visual Artists Rights Act of 1990, are based on French moral
rights or author’s rights (droit d’auteur), which are inspired by the
humanistic and individualistic values of the French Revolution and form part
of European copyright law. They conceive the work as an intellectual and
creative expression that is directly connected to its creator. Legal scholar
Lionel Bently observes ‘the prominence of romantic conceptions of authorship’
in the recognition of moral rights, which are based on concepts of the
originality and authenticity of the modern subject (Lionel Bently, ‘Copyright
and the Death of the Author in Literature and Law’, Modern Law Review 57
(1994), 973–86 (p. 977)). ‘Authenticity is the pure expression, the
expressivity, of the artist, whose soul is mirrored in the work of art.’
(Cornelia Klinger, ‘Autonomy-Authenticity-Alterity: On the Aesthetic Ideology
of Modernity’, in Modernologies: Contemporary Artists Researching Modernity
and Modernism, exhibition catalogue (Barcelona: Museu d’Art Contemporani de
Barcelona, 2009), pp. 26–28 (p. 29)). Moral rights are the personal rights of
authors, which cannot be surrendered fully to somebody else because they
conceptualize authorship as an authentic extension of the subject. They are
‘rights of authors and artists to be named in relation to the work and to
control alterations of the work’ (Bently, ‘Copyright and the Death of the
Author’, p. 977). In contrast to copyright, moral rights are granted in
perpetuity, and fall to the estate of an artist after his or her death.

Anglo-American copyright, employed in Prince’s case, by contrast builds the
concept of intellectual property mainly on economic and distribution rights,
protecting against unauthorised copying, adaptation, distribution and display.
Copyright lasts for a certain amount of time, after which the work enters the
public domain. In most countries the copyright term expires seventy years
after the death of the author. Non-perpetual copyright attempts to strike a
balance between the needs of the author to benefit economically from his or
her work and the interests of the public, who benefit from the use of new
work.

    [62](ch11.xhtml#footnote-464-backlink) Bently, ‘Copyright and the Death of the
    Author’, p. 974.

[63](ch11.xhtml#footnote-462-backlink) Geert Lovink and Andrew Ross, ‘Organic
Intellectual Work’, in Geert Lovink and Ned Rossiter (eds.), My Creativity
Reader: A Critique of Creative Industries (Amsterdam: Institute of Network
Cultures, 2007), pp. 225–38 (p. 230).


[64](ch11.xhtml#footnote-458-backlink) UK Government, Department for Digital,
Culture, Media and Sport, The Creative Industries Mapping Document, 1998.

[65](ch11.xhtml#footnote-461-backlink) UK Government, Department for Culture,
Media & Sport, Creative Industries Economic Estimates January 2015.

[66](ch11.xhtml#footnote-460-backlink) For a critical discussion of the
creative industries paradigm and the effects of related systems of governance
on the precarisation of the individual, see Lovink and Rossiter, My
Creativity, and Isabell Lorey, State of Insecurity: Government of the
Precarious (London: Verso, 2015).

[67](ch11.xhtml#footnote-459-backlink) University of the Arts London,
‘Intellectual Property Know-How for the Creative Sector’. This site was
initially accessed on 30 March 2015. In 2018 it was taken down and integrated
into the UAL Intellectual Property Advice pages. Their downloadable PDFs still
show the ‘Own-it’ logo.

    [68](ch11.xhtml#footnote-458-backlink) Patricia Aufderheide, Peter Jaszi,
    Bryan Bello, and Tijana Milosevic, Copyright, Permissions, and Fair Use Among
    Visual Artists and the Academic and Museum Visual Arts Communities: An Issues
    Report (New York: College Art Association, 2014).

    [69](ch11.xhtml#footnote-457-backlink) Ibid., p. 5.

    [70](ch11.xhtml#footnote-456-backlink) Sixty-six percent of all those who
    reported that they had abandoned or avoided a project because of an actual or
    perceived inability to obtain permissions said they would be ‘very likely’ to
    use copyrighted works of others more than they have in the past were
    permissions not needed. Ibid., p. 50.

    [71](ch11.xhtml#footnote-455-backlink) The Copyright, Permissions, and Fair
    Use Report gives some intriguing further observations: ‘Permissions roadblocks
    result in deformed or even abandoned work. Exhibition catalogues may be issued
    without relevant images because rights cannot be cleared. Editors of art
    scholarship reported journal articles going to print with blank spots where
    reproductions should be, because artists’ representatives disagreed with the
    substance of the article; and one book was published with last-minute
    revisions and deletions of all images because of a dispute with an estate —
    with disastrous results for sales. Journal editors have had to substitute
    articles or go without an article altogether because an author could not
    arrange permissions in time for publication. In one case, after an author’s
    manuscript was completed, an estate changed position, compelling the author
    both to rewrite and to draw substitute illustrations. Among other things, the
    cost of permissions leads to less work that features historical overviews and
    comparisons, and more monographs and case studies. Scholarship itself is
    distorted and even censored by the operation of the permissions culture. […]
    In some cases, the demands of rights holders have extended to altering or
    censoring the scholarly argument about a work. Catalogue copy sometimes is
    altered because scholarly arguments and perspectives are unacceptable to
    rights holders.’ These actions are in some cases explicitly seen as
    censorship. Ibid., p. 52.

    [72](ch11.xhtml#footnote-454-backlink) Ibid., p. 51.

[73](ch11.xhtml#footnote-453-backlink) Ben Mauk, ‘Who Owns This Image?’, The
New Yorker, 12 February 2014.

[74](ch11.xhtml#footnote-452-backlink) Jennifer Nedelsky, ‘Reconceiving Rights
as Relationship’, Review of Constitutional Studies / Revue d’études
constitutionnelles 1.1 (1993), 1–26 (p. 16).


    [75](ch11.xhtml#footnote-451-backlink) Deborah J. Halbert, Resisting
    Intellectual Property (London: Routledge, 2005), pp. 1–2.

[76](ch11.xhtml#footnote-450-backlink) See for example Amedeo Policante’s
examination of the relationship between empire and pirate, which claims that
the pirate can exist only in a relationship with imperial foundations: ‘Upon
the naming
    of the pirate, in fighting it and finally in celebrating its triumph over it,
    Empire erects itself. There is no Empire without a pirate, a terrorizing
    common enemy, an enemy of all. At the same time, there is no pirate without
    Empire. In fact, pirates as outlaws cannot be understood in any other way but
    as legal creatures. In other words, they exist only in a certain extreme,
    liminal relationship with the law.’ Amedeo Policante, The Pirate Myth,
    Genealogies of an Imperial Concept (Oxford and New York: Routledge, 2015), p.
    viii.

    [77](ch11.xhtml#footnote-449-backlink) Ramon Lobato, ‘The Paradoxes of
    Piracy’, in Lars Eckstein and Anja Schwarz (eds.), Postcolonial Piracy: Media
    Distribution and Cultural Production in the Global South (London and New York:
    Bloomsbury, 2014), pp. 121–34 (pp. 121, 123).

    [78](ch11.xhtml#footnote-448-backlink) Daniel Heller-Roazen, The Enemy of All:
    Piracy and the Law of Nations (New York: Zone Books, 2009), p. 35, as cited by
    Gary Hall, Pirate Philosophy, p. 16.

[79](ch11.xhtml#footnote-447-backlink) ‘Etymology of Pirate’, in English Words
of (Unexpected) Greek Origin, 2 March 2012.


    [80](ch11.xhtml#footnote-446-backlink) The Piracy Project is a collaboration
    between AND Publishing and Andrea Francke initiated in London in 2010.

    [81](ch11.xhtml#footnote-445-backlink) Andrea Francke visited pirate book
    markets in Lima, Peru in 2010. The Red Mansion Prize residency enabled us to
    research book piracy in Beijing and Shanghai in 2012. A research residency at
    SALT Istanbul in 2012 facilitated field research in Turkey.

    [82](ch11.xhtml#footnote-444-backlink) See also Stephen Wright’s Towards a
    Lexicon of Usership (Eindhoven: Van Abbemuseum, 2013) proposing to replace the
    term (media) ‘piracy’ with ‘usership’. He explains: ‘On the one hand, the most
    notorious and ruthless cultural pirates today are Google and its subsidiaries
    like YouTube (through the institutionalized rip-off of user-generated value
    broadly known as Page-Rank), Facebook, and of course Warner Bros etc., but
    also academic publishers such as the redoubtable Routledge. On the other hand,
    all the user-run and user-driven initiatives like aaaaarg, or
    [pad.ma](http://pad.ma), or until recently the wonderful Dr Auratheft. But,
    personally, I would hesitate to assimilate such scaled-up, de-creative, user-
    propelled examples with anything like “cultural piracy”. They are, through
    usership, enriching what would otherwise fall prey to cultural piracy.’ Email
    to the author, 1 August 2012.

    See also: Andrea Francke and Eva Weinmayr (eds.), Borrowing, Poaching,
    Plagiarising, Pirating, Stealing, Gleaning, Referencing, Leaking, Copying,
    Imitating, Adapting, Faking, Paraphrasing, Quoting, Reproducing, Using,
    Counterfeiting, Repeating, Translating, Cloning (London: AND Publishing,
    2014).

    [83](ch11.xhtml#footnote-443-backlink) Richard Prince’s ‘Catcher in the Rye’
    forms part of the Piracy Collection. Not the book copy priced at £1,500, just
    an A4 colour printout of the cover, downloaded from the Internet. On the shelf
    it sits next to Salinger’s copy, which we bought at Barnes and Noble for £20.

    [84](ch11.xhtml#footnote-442-backlink) Craig, ‘Symposium: Reconstructing the
    Author-Self’, p. 246.

[85](ch11.xhtml#footnote-441-backlink) Michel Foucault, ‘What Is an Author?’,
in Donald F. Bouchard (ed.), Language, Counter-Memory, Practice: Selected
Essays and Interviews (Ithaca, NY: Cornell University Press, 1977), pp.
113–38.

[86](ch11.xhtml#footnote-440-backlink) See The Piracy Project, ‘The
Impermanent Book’, Rhizome, 19 April 2012.


[87](ch11.xhtml#footnote-439-backlink) It might be no coincidence that Roland
Barthes’ seminal short essay ‘The Death of the Author’ was published in the
magazine Aspen at the very moment when photocopy machines were beginning to be
widely used in libraries and offices.

[88](ch11.xhtml#footnote-438-backlink) Eva Hemmungs Wirtén, No Trespassing:
Authorship, Intellectual Property Rights and the Boundaries of Globalization
(Toronto: University of Toronto Press, 2004), p. 66.

[89](ch11.xhtml#footnote-437-backlink) See No se diga a nadie, The Piracy
Project Catalogue.


[90](ch11.xhtml#footnote-436-backlink) In an essay in Granta Magazine, Daniel
Alarcon attributes the popularity of book piracy in Peru to the lack of formal
distribution. ‘Outside Lima, the pirate book industry is the only one that
matters’, explains Alarcon. Iquitos, the largest city in the Peruvian Amazon,
with nearly 400,000 residents, had no formal bookstore until 2007 and in 2010
only two. Trujillo, the country’s third largest city, has one. According to
Alarcon, an officially produced book costs twenty percent of an average
worker’s weekly income; the pirate printing industry therefore fills this gap
— an activity that is not seriously restricted by the state. In fact, Alarcon
claims that the government is involved in the pirate printing industry as a
way to control what is being read. Pirated books are openly sold in book
markets and by street vendors at traffic crossings; they therefore ‘reach
sectors of the market that formal book publishers cannot or don’t care to
access’. In a similar vein, at the few prestigious private universities the
book check-out time is exactly twenty-four hours, precisely the turnaround the
copy shops in the neighbourhood need to make a photocopied version of the
checked-out library books. Daniel Alarcon, ‘Life Amongst the Pirates’, Granta
Magazine, 14 January 2010.

[91](ch11.xhtml#footnote-435-backlink) A discussion of the vast variety of
approaches here would exceed the scope of this text. If you are interested,
please visit our searchable Piracy Collection catalogue, which provides short
descriptions of the pirates’ approaches and strategies.


    [92](ch11.xhtml#footnote-434-backlink) For the performative debate A Day at
    the Courtroom hosted by The Showroom in London, the Piracy Project invited
    three copyright lawyers from different cultural and legal backgrounds to
    discuss and assess selected cases from the Piracy Project from the perspective
    of their differing jurisdictions. The final verdict was given by the audience,
    who positioned the ‘case’ on a colour scale ranging from illegal (red) to
    legal (blue). The scale replaced the law’s fundamental binary of legal —
    illegal, allowing for greater complexity and nuance. The advising scholars and
    lawyers were Lionel Bently (Professor of Intellectual Property at the
    University of Cambridge), Sergio Muñoz Sarmiento (Art and Law, New York),
    Prodromos Tsiavos (Project lead for Creative Commons, England, Wales and
    Greece). A Day at the Courtroom, The Showroom London, 15 June 2013. See a
    transcript of the debate in Francke and Weinmayr, Borrowing, Poaching,
    Plagiarising.

[93](ch11.xhtml#footnote-433-backlink) Aaaaaarg.fail operates on an
invitation-only basis; [memoryoftheworld.org](http://memoryoftheworld.org) is
openly accessible.

[94](ch11.xhtml#footnote-432-backlink) Julian Myers, Four Dialogues 2: On
AAAARG, San Francisco Museum of Modern Art — Open Space, 26 August 2009. This
constructive approach has been observed by Jonas Andersson generally with p2p
sharing networks, which ‘have begun to appear less as a reactive force (i.e.
breaking the rules) and more as a proactive one (setting the rules). […]
Rather than complain about the conservatism of established forms of
distribution they simply create new, alternative ones.’ Jonas Andersson, ‘For
the Good of the Net: The Pirate Bay as a Strategic Sovereign’, Culture Machine
10 (2009), p. 64.

    [95](ch11.xhtml#footnote-431-backlink) This process was somewhat fraught,
    because at the same time David Cameron launched his perfidious ‘Big Society’
    concept, which proposed that members of the community should volunteer at
    institutions, such as local public libraries, which otherwise could not
    survive because of government cuts.

[96](ch11.xhtml#footnote-430-backlink) See the Piracy Project catalogue: Neil
Chapman, Deleuze, Proust and Signs.


[97](ch11.xhtml#footnote-429-backlink) Of course unconventional publications
can be and are being collected, but these are often more arty objects (flimsy,
oversized, undersized, etc.) and frequently end up in the special collections,
framed and categorised ‘as different’ from the main stack of the collections.

[98](ch11.xhtml#footnote-428-backlink) When The Piracy Project was invited to
create a reading room at the New York Art Book Fair in 2012, a librarian from
the Pratt Institute dropped by every single day, because she was so fixated on
the questions that the pirate books, with their complex strategies of queering
the category of authorship, posed to standardised bibliographic practices.
Based on these questions we organised a cataloguing workshop, ‘Putting the
Piracy Collection on the shelf’, at Grand Union in Birmingham, where we
developed a new cataloguing vocabulary for cases in the collection. See
https://grand-union.org.uk/gallery/putting-the-piracy-collection-on-the-shelves/

See also Karen Di Franco’s reflection on the cataloguing workshop, ‘The
Library Medium’, in Francke and Weinmayr, Borrowing, Poaching, Plagiarising.

[99](ch11.xhtml#footnote-427-backlink) See Piracy Project catalogue: Camille
Bondon, Jacques Rancière: le maître ignorant. Rancière’s pedagogical proposal
suggests that ‘the most important quality of a schoolmaster is the virtue of
ignorance’ (Rancière, 2010, p. 1). In his book The Ignorant Schoolmaster: Five
Lessons in Intellectual Emancipation Jacques Rancière uses the historic case
of the French teacher Joseph Jacotot, who was exiled in Belgium and taught
French classes to Flemish students whose language he did not know, and vice
versa. Reportedly he gave his students a French text to read alongside its
translation and, without mediation or explanation, let the students figure out
the relationship between the two texts themselves. By intentionally using his
ignorance as a teaching method, Rancière claims, Jacotot removed himself as
the centre of the classroom, as the one who knows. This teaching method
arguably destabilises the hierarchical relationship of knowledge (between
student and teacher) and therefore ‘establishes equality as the centre of the
educational process’. Annette Krauss, ‘Sites for Unlearning: On the Material,
Artistic and Political Dimensions of Processes of Unlearning’, PhD thesis,
Academy of Fine Arts Vienna, 2017, p. 113. Jacques Rancière, Education, Truth
and Emancipation (London: Continuum, 2010). Jacques Rancière, The Ignorant
Schoolmaster: Five Lessons in Intellectual Emancipation (Stanford: Stanford
University Press, 1987).

[100](ch11.xhtml#footnote-426-backlink) ‘AND Publishing announces The Piracy
Lectures’, Art Agenda, 4 May 2011.

[101](ch11.xhtml#footnote-425-backlink) Judith Butler, ‘What is Critique? An
Essay on Foucault’s Virtue’, Transversal 5 (2001).


    [102](ch11.xhtml#footnote-424-backlink) Institutions that hosted long and
    short-term reading rooms or invited us for workshops included: The Showroom
    London, Grand Union Birmingham, Salt Istanbul, ZKM Academy for Media Arts
Cologne, Kunstverein Munich, The Bluecoat Liverpool, Truth is Concrete,
    Steirischer Herbst Graz, Printed Matter New York, New York Art Book Fair at
    MoMA PS1, 281 Vancouver, Rum 46 Aarhus, Miss Read, Kunstwerke Berlin.
    Institutions that invited us for talks or panel discussions included:
    Whitechapel Art Gallery, Open Design Conference Barcelona, Institutions by
    Artists Vancouver, Academy of Fine Arts Leipzig, Freie University Berlin, and
    various art academies and universities across Europe.

    [103](ch11.xhtml#footnote-423-backlink) At times, we signed ‘the Piracy
    Project’ (the title) under our own names (the artist-authors), because it felt
    suitable to take the credit for all our personal work, instead of
    strengthening the ‘umbrella organisation’ AND. When the editor of Rhizome
    asked us to write about the project, we authored the jointly written text as
    ‘by Piracy Project’. On other occasions we framed it ‘The Piracy Project is a
    collaboration of the artists x and y, as part of AND Publishing’s research
    program.’ At some point, the Piracy Project outgrew AND Publishing because it
    took up all our time, and we began to question whether the Piracy Project was
    part of AND, or whether AND was part of the Piracy Project.

[104](ch11.xhtml#footnote-422-backlink) This less glamorous work includes
    answering emails, booking flights, organising rooms and hosting, in short the
    administrative work required to run and maintain such a project. The feminist
    discourse of domestic and reproductive labour is relevant here, but a more
    detailed discussion exceeds the scope of this text.

    [105](ch11.xhtml#footnote-421-backlink) Walter Benjamin, ‘The Author as
    Producer’, New Left Review 1.62 (1970), 83–96. See also Hall, Pirate
    Philosophy, pp. 127–232.

    [106](ch11.xhtml#footnote-420-backlink) Ibid., p. 129.

    [107](ch11.xhtml#footnote-419-backlink) Several gatherings, such as ‘Direct
    Weekend’ and ‘Long Weekend’ at various art colleges in London involved
    Precarious Workers Brigade, Carrot Workers, tax evasion campaigners, UK Uncut,
    alternative media groups, feminist alliances, anti-poverty groups. See
    Precarious Workers Brigade, ‘Fragments Toward an Understanding of a Week that
Changed Everything…’, e-flux 24 (April 2011).

    [108](ch11.xhtml#footnote-418-backlink) Susan Kelly describes Felix Guattari’s
    use of the term transversality ‘as a conceptual tool to open hitherto closed
    logics and hierarchies and to experiment with relations of interdependency in
    order to produce new assemblages and alliances […] and different forms of
    (collective) subjectivity that break down oppositions between the individual
    and the group.’ Susan Kelly, ‘The Transversal and the Invisible: How do You
Really Make a Work of Art that Is not a Work of Art?’, Transversal 1 (2005).
See also Gerald Raunig’s description of transversal activist practice: ‘There
is no longer any
    artificially produced subject of articulation; it becomes clear that every
    name, every linkage, every label has always already been collective and must
    be newly constructed over and over again. In particular, to the same extent to
    which transversal collectives are only to be understood as polyvocal groups,
    transversality is linked with a critique of representation, with a refusal to
    speak for others, in the name of others, with abandoning identity, with a loss
    of a unified face, with the subversion of the social pressure to produce
faces.’ Gerald Raunig, ‘Transversal Multitudes’, Transversal 9 (2002).


[109](ch11.xhtml#footnote-417-backlink) Kelly, ‘“But that was my idea!”’, p. 3.

    [110](ch11.xhtml#footnote-416-backlink) The carrot is used as ‘a symbol of the
    promise of paid work and future fulfilment made to those working under
    conditions of free labour in the cultural sector.’ Ibid.

    [111](ch11.xhtml#footnote-415-backlink) In an interview published in Artforum,
    David Graeber says: ‘Another artist I know, for example, made a sculpture of a
    giant carrot used during a protest at Millbank; I think it was actually thrown
    through the window of Tory headquarters and set on fire. She feels it was her
    best work, but her collective, which is mostly women, insisted on collective
    authorship, and she feels unable to attach her name to the work.’ ‘Another
World: Michelle Kuo Talks with David Graeber’, Artforum International (Summer
2012), p. 270.

    [112](ch11.xhtml#footnote-414-backlink) Artist Rosalie Schweiker, who read a
    draft of this text, suggested that I make a list of the name of every person
    involved in the project in order to demonstrate this generative and expansive
    mode of working.

    [113](ch11.xhtml#footnote-413-backlink) Such an action might even infringe
    legal requirements or contracts. Open Book Publishers’ contract, for example,
    states: ‘The author hereby asserts his/her right to be identified in relation
    to the work on the title page and cover and the publisher undertakes to comply
    with this requirement. A copyright notice in the Author’s name will be printed
in the front pages of the Work.’ Open Book Publishers, Authors’ Guide, p. 19.


[114](ch11.xhtml#footnote-412-backlink) For a discussion of gender inequality
in recent scholarly publishing see Chad Wellmon and Andrew Piper,
‘Publication, Power, Patronage: On Inequality and Academic Publishing’,
Critical Inquiry (21 July 2017).

[115](ch11.xhtml#footnote-411-backlink) See Gérard Genette’s discussion of the
‘pseudonym effect’ as a conceptual device. He distinguishes between the reader
not knowing about the use of the pseudonym and the conceptual effect of the
reader having information about the use of a pseudonym. Gérard Genette,
Paratexts: Thresholds of Interpretation (Cambridge: Cambridge University
Press, 1997).

    [116](ch11.xhtml#footnote-410-backlink) The Neoist movement developed in
    Canada, North America and Europe in the late 1970s. It selected one signature
    name for multiple identities and authors, who published, performed and
    exhibited under this joint name. It is different from a collective name, as
    any person could sign her or his work with these joint names without revealing
    the author’s identity. See letter exchanges between cultural theorist Florian
    Cramer and artist and writer Stewart Home: ‘I would like to describe “Monty
    Cantsin” as a multiple identity, “Karen Eliot” as a multiple pen-name and,
    judging from the information I have, “Luther Blissett” as a collective
phantom.’ Florian Cramer, 2 October 1995, in Stewart Home and Florian Cramer,
House of Nine Squares: Letters on Neoism, Psychogeography & Epistemological
Trepidation. See also
    Nicholas Thoburn’s research into the political agency of anonymous authorship.
    Nicholas Thoburn, Anti-Book, On the Art and Politics of Radical Publishing
    (Minneapolis and London: University of Minnesota Press, 2016) pp. 168–223.

    [117](ch11.xhtml#footnote-409-backlink) Anonymous started on 4chan, an online
    imageboard where users post anonymously. ‘The posts on 4chan have no names or
    any identifiable markers attached to them. The only thing you are able to
    judge a post by is its content and nothing else.’ Gabriella Coleman, Hacker,
    Hoaxer, Whistleblower, Spy: The Many Faces of Anonymous (London and New York:
    Verso, 2014), p. 47.

    [118](ch11.xhtml#footnote-408-backlink) I thank Susan Kelly for making this
    point while reviewing my text.

[119](ch11.xhtml#footnote-407-backlink) It is interesting to come back to
Foucault’s text ‘What Is an Author?’ and complicate his own position as
authorial subject. Referring to Naomi Schor and Gayatri Spivak, Sara Ahmed
suggests that ‘Foucault effaces the sexual specificity of his own narrative
and perspective as a male philosopher. The refusal to enter the discourse as
an empirical subject, a subject which is both sexed and European, may finally
translate into a universalising mode of discourse, which negates the
specificity of its own inscription (as a text)’. See Naomi Schor, ‘Dreaming
Dissymmetry: Barthes, Foucault and Sexual Difference’, in Elizabeth Weed
(ed.), Coming to Terms: Feminism, Theory, Politics (London: Routledge, 1989),
pp. 47–58; and Gayatri Chakravorty Spivak, ‘Can the Subaltern Speak?’, in Cary
Nelson and Lawrence Grossberg (eds.), Marxism and the Interpretation of
Culture (Urbana, IL: University of Illinois Press, 1988), pp. 271–313.

[120](ch11.xhtml#footnote-406-backlink) Sara Ahmed, Differences That Matter:
Feminist Theory and Postmodernism (Cambridge: Cambridge University Press,
2004), p. 125.

    [121](ch11.xhtml#footnote-405-backlink) Spivak, ‘Can the Subaltern Speak?’,
    pp. 271–313.


    WHW
    There Is Something Political in the City Air
    2016


    What, How & for Whom / WHW

    “There is something political in the city air”*

The curatorial collective What, How & for Whom / WHW, based in Zagreb and Berlin, examine the interconnections between contemporary art and political and social strata, including the role of art institutions in contemporary society. In the present essay, their discussion of recent projects they curated highlights the struggle for access to knowledge and the free distribution of information, which in Croatia also means confronting the pressures of censorship and revisionism in the writing of history and the construction of the future.

    Contemporary art’s attempts to come to terms with its evasions in delivering on the promise of its own intrinsic capacity to propose alternatives, and
    to do better in the constant game of staying ahead of institutional closures
    and marketization, are related to a broader malady in leftist politics. The
    crisis of organizational models and modes of political action feels especially acute nowadays, after the latest waves of massive political mobilization
    and upheaval embodied in such movements as the Arab Spring and Occupy and the widespread social protests in Southern Europe against austerity
    measures – and the failure of these movements to bring about structural
    changes. As we witnessed in the dramatic events that unfolded through the
    spring and summer of 2015, even in Greece, where Syriza was brought to
    power, the people’s will behind newly elected governments proved insufficient to change the course of austerity politics in Europe. Simultaneously,
    a series of conditional gains and effective defeats gave rise to the alarming
    ascent of radical right-wing populism, against which the left has failed to
provide any real vision or driving force.

Both the practice of political articulation and the political practices of
    art have been affected by the hollowing and disabling of democracy related
    to the ascendant hegemony of the neoliberal rationale that shapes every
    domain of our lives in accordance with a specific image of economics,1
    as well as the problematic “embrace of localism and autonomy by much
    of the left as the pure strategy”2 and the left’s inability to destabilize the
    dominant world-view and reclaim the future.3 Consequently, art practices
    increasingly venture into novel modes of operation that seek to “expand
    our collective imagination beyond what capitalism allows”.4 They not only
    point to the problems but address them head on. By negotiating art’s autonomy and impact on the social, and by conceptualizing the whole edifice
    of art as a social symptom, such practices attempt to do more than simply
    squeeze novel ideas into exhausted artistic formats and endow them with
    political content that produces “marks of distinction”,5 which capital then
    exploits for the enhancement of its own reproduction.
* David Harvey, Rebel Cities: From the Right to the City to the Urban Revolution, Verso, London and New York, 2012, p. 117.
1 See Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution, Zone Books, New York, 2015.
2 Harvey, Rebel Cities, p. 83.
3 See Nick Srnicek and Alex Williams, Inventing the Future: Postcapitalism and a World Without Work, Verso, London and New York, 2015.
4 Ibid., p. 495.
5 See Harvey, Rebel Cities, especially pp. 103–109.

The two projects visited in this text both work toward building truly accessible public spaces. Public Library, launched by Marcell Mars and Tomislav Medak in 2012, is an ongoing media and social project based on ideas from the open-source software movement, while Autonomy Cube, by artist Trevor Paglen and the hacker and computer security researcher Jacob Appelbaum, centres on anonymized internet usage in the post–Edward Snowden world of unprecedented institutionalized surveillance. Both projects operate in tacit alliance with art institutions that more often than not
    are suffering from a kind of “mission drift” under pressure to align their
    practices and structures with the profit sector, a situation that in recent
    decades has gradually become the new norm.6 By working within and with
    art institutions, both Public Library and Autonomy Cube induce the institutions to return to their initial mission of creating new common spaces
    of socialization and political action. The projects develop counter-publics
    and work with infrastructures, in the sense proposed by Keller Easterling:
    not just physical networks but shared standards and ideas that constitute
    points of contact and access between people and thus rule, govern, and
    control the spaces in which we live.7
    By building a repository of digitized books, and enabling others to do this
    as well, Public Library promotes the idea of the library as a truly public institution that offers universal access to knowledge, which “together with
    free public education, a free public healthcare, the scientific method, the
    Universal Declaration of Human Rights, Wikipedia, and free software,
    among others – we, the people, are most proud of ”, as the authors of the
    project have said.8 Public Library develops devices for the free sharing of
    books, but it also functions as a platform for advocating social solidarity
    in free access to knowledge. By ignoring and avoiding the restrictive legal
    regime for intellectual property, which was brought about by decades of
    neoliberalism, as well as the privatization or closure of public institutions,
    spatial controls, policing, and surveillance – all of which disable or restrict
    possibilities for building new social relations and a new commons – Public
    Library can be seen as part of the broader movement to resist neoliberal
    austerity politics and the commodification of knowledge and education
    and to appropriate public spaces and public goods for common purposes.
While Public Library is fully engaged with the movement to oppose the copyright regime – which developed as a kind of rent for expropriating the commons and reintroducing an artificial scarcity of cognitive goods that could be reproduced virtually for free – the project is not under the spell of digital fetishism, which until fairly recently celebrated a new digital commons as a non-frictional space of smooth collaboration where a new political and economic autonomy would be forged that would spill over and undermine the real economy and permeate all spheres of life.9 As Matteo Pasquinelli argues in his critique of “digitalism” and its celebration of the virtues of the information economy with no concern about the material basis of production, the information economy is a parasite on the material economy and therefore “an accurate understanding of the common must be always interlinked with the real physical forces producing it and the material economy surrounding it.”10

6 See Brown, Undoing the Demos.
7 Keller Easterling, Extrastatecraft: The Power of Infrastructure Space, Verso, London and New York, 2014.
8 Marcell Mars, Manar Zarroug, and Tomislav Medak, “Public Library”, in Public Library, ed. Marcell Mars, Tomislav Medak, and What, How & for Whom / WHW, exh. publication, What, How & for Whom / WHW and Multimedia Institute, Zagreb, 2015, p. 78.
9 See Matteo Pasquinelli, Animal Spirits: A Bestiary of the Commons, NAi Publishers, Rotterdam, and Institute of Network Cultures, Amsterdam, 2008.
    Public Library emancipates books from the restrictive copyright regime
    and participates in the exchange of information enabled by digital technology, but it also acknowledges the labour and energy that make this possible. There is labour that goes into the cataloguing of the books, and labour
    that goes into scanning them before they can be brought into the digital
    realm of free reproduction, just as there are the ingenuity and labour of
    the engineers who developed a special scanner that makes it easier to scan
    books; also, the scanner needs to be installed, maintained, and fed books
    over hours of work. This is where the institutional space of art comes in
    handy by supporting the material production central to the Public Library
    endeavour. But the scanner itself does not need to be visible. In 2014, at
    the Museo Nacional Centro de Arte Reina Sofia in Madrid, we curated the
    exhibition Really Useful Knowledge, which dealt with conflicts triggered by
    struggles over access to knowledge and the effects that knowledge, as the
    basis of capital reproduction, has on the totality of workers’ lives. In the
    exhibition, the production funds allocated to Public Library were used to
    build the book scanner at Calafou, an anarchist cooperative outside Barcelona. The books chosen for scanning were relevant to the exhibition’s
    themes – methods of reciprocal learning and teaching, forms of social and
    political organization, the history of the Spanish Civil War, etc. – and after
    being scanned, they were uploaded to the Public Library website. All that
    was visible in the exhibition itself was a kind of index card or business card
    with a URL link to the Public Library website and a short statement (fig. 1):
    A public library is:
    • free access to books for every member of society
    • library catalog
    • librarian
    With books ready to be shared, meticulously cataloged, everyone is a
    librarian. When everyone is librarian, the library is everywhere.11
    Public Library’s alliance with art institutions serves to strengthen the
    cultural capital both for the general demand to free books from copyright
    restrictions on cultural goods and for the project itself – such cultural capital could be useful in a potential lawsuit. Simultaneously, the presence and
    realization of the Public Library project within an exhibition enlists the host
    institution as part of the movement and exerts influence on it by taking
the museum’s public mission seriously and extending it into a grey zone of questionable legality.

10 Ibid., p. 29.
11 Mars, Zarroug, and Medak, “Public Library”, p. 85.

The defence of the project becomes possible by making the traditional claim of the “autonomy” of art, which is not supposed
    to assert any power beyond the museum walls. By taking art’s autonomy
    at its word, and by testing the truth of the liberal-democratic claim that
    the field of art is a field of unlimited freedom, Public Library engages in a
    kind of “overidentification” game, or what Keller Easterling, writing about
    the expanded activist repertoire in infrastructure space, calls “exaggerated
    compliance”.12 Should the need arise, as in the case of a potential lawsuit
    against the project, claims of autonomy and artistic freedom create a protective shroud of untouchability. And in this game of liberating books from
    the parochial capitalist imagination that restricts their free circulation, the
    institution becomes a complicit partner. The long-acknowledged insight
    that institutions embrace and co-opt critique is, in this particular case, a
    win-win situation, as Public Library uses the public status of the museum
    as a springboard to establish the basic message of free access and the free
    circulation of books and knowledge as common sense, while the museum
    performs its mission of bringing knowledge to the public and supporting
    creativity, in this case the reworking, rebuilding and reuse of technology
    for the common good. The fact that the institution is not naive but complicit produces a synergy that enhances potentialities for influencing and
    permeating the public sphere. The gesture of not exhibiting the scanner in
    the museum has, among other things, a practical purpose, as more books
    would be scanned voluntarily by the members of the anarchist commune
    in Calafou than would be by the overworked museum staff, and employing
    somebody to do this during the exhibition would be too expensive (and the
    mantra of cuts, cuts, cuts would render negotiation futile). If there is a flirtatious nod to the strategic game of not exposing too much, it is directed less
    toward the watchful eyes of the copyright police than toward the exhibition
    regime of contemporary art group shows in which works compete for attention, the biggest scarcity of all. Public Library flatly rejects identification
    with the object “our beloved bookscanner” (as the scanner is described on
    the project website13), although it is an attractive object that could easily
    be featured as a sculpture within the exhibition. But its efficacy and use
    come first, as is also true of the enigmatic business card–like leaflet, which
    attracts people to visit the Public Library website and use books, not only to
    read them but also to add books to the library: doing this in the privacy of
    one’s home on one’s own computer is certainly more effective than doing
    it on a computer provided and displayed in the exhibition among the other
    art objects, films, installations, texts, shops, cafés, corridors, exhibition
    halls, elevators, signs, and crowds in a museum like Reina Sofia.
    For the exhibition to include a scanner that was unlikely to be used or
    a computer monitor that showed the website from which books might be
downloaded, but probably not read, would be the embodiment of what philosopher Robert Pfaller calls “interpassivity”, the appearance of activity or a stand-in for it that in fact replaces any genuine engagement.14

12 Easterling, Extrastatecraft, p. 492.
13 See https://www.memoryoftheworld.org/blog/2012/10/28/our-beloved-bookscanner-2/ (accessed July 4, 2016).

For
    Pfaller, interpassivity designates a flight from engagement, a misplaced libidinal investment that under the mask of enjoyment hides aversion to an
    activity that one is supposed to enjoy, or more precisely: “Interpassivity is
    the creation of a compromise between cultural interests and latent cultural
    aversion.”15 Pfaller’s examples of participation in an enjoyable process that
    is actually loathed include book collecting and the frantic photocopying of
    articles in libraries (his book was originally published in 2002, when photocopying had not yet been completely replaced by downloading, bookmarking, etc.).16 But he also discusses contemporary art exhibitions as sites of
    interpassivity, with their overabundance of objects and time-based works
    that require time that nobody has, and with the figure of the curator on
    whom enjoyment is displaced – the latter, he says, is a good example of
    “delegated enjoyment”. By not providing the exhibition with a computer
    from which books can be downloaded, the project ensures that books are
    seen as vehicles of knowledge acquired by reading and not as immaterial
    capital to be frantically exchanged; the undeniable pleasure of downloading and hoarding books is, after all, just one step removed from the playground of interpassivity that the exhibition site (also) is.
    But Public Library is hardly making a moralistic statement about the
    virtues of reading, nor does it believe that ignorance (such as could be
    overcome by reading the library’s books) is the only obstacle that stands
    in the way of ultimate emancipation. Rather, the project engages with, and
    contributes to, the social practice that David Harvey calls “commoning”:
“an unstable and malleable social relation between a particular self-defined social group and those aspects of its actually existing or yet-to-be-created social and/or physical environment deemed crucial to its life and
    livelihood”.17 Public Library works on the basis of commoning and tries to
    enlist others to join it, which adds a distinctly political dimension to the
    sabotage of intellectual property revenues and capital accumulation.
    The political dimension of Public Library and the effort to form and
publicize the movement were expressed more explicitly in the Public Library exhibition in 2015 at Gallery Nova in Zagreb, where we have been directing the programme since 2003.

14 Robert Pfaller, On the Pleasure Principle in Culture: Illusions Without Owners, Verso, London and New York, 2014.
15 Ibid., p. 76.
16 Pfaller’s book, which first appeared in German, was published in English only in 2014. His ideas have gained greater relevance over time, not only as the shortcomings of immensely popular social media activism became apparent – where, as many critics have noted, participation in political organizing and the articulation of political tasks and agendas are often replaced by a click on an icon – but also because of Pfaller’s broader argument about the self-deception at play in interpassivity and its role in eliciting enjoyment from austerity measures and other calamities imposed on the welfare state by the neoliberal regime, which since the early 2000s has exceeded even the most sober (and pessimistic) expectations.
17 Ibid., p. 73.

If the Public Library project were not
    such an eminently collective practice that pays no heed to the author function, the Gallery Nova show might be considered something like a solo exhibition. As it was realized, the project again used art as an infrastructure
    and resource to promote the movement of freeing books from copyright
    restrictions while collecting legitimization points from the art world as enhanced cultural capital that could serve as armour against future attacks
    by the defenders of the holy scripture of copyright laws. But here the more
    important tactic was to show the movement as an army of many and to
    strengthen it through self-presentation. The exhibition presented Public
Library as a collection of collections, and the repertory form (used in archival science to describe a collection) was taken as the basic narrative procedure. It mobilized and activated several archives and open digital repositories, such as MayDay Rooms from London, The Ignorant Schoolmaster and
    His Committees from Belgrade, Library Genesis and Aaaaaarg.org, Catalogue
    of Free Books, (Digitized) Praxis, the digitized work of the Midnight Notes
    Collective, and Textz.com, with special emphasis on activating the digital
    repositories UbuWeb and Monoskop. Not only did the exhibition attempt to
    enlist the gallery audience but, equally important, the project was testing
    its own strength in building, articulating, announcing, and proposing, or
    speculating on, a broader movement to oppose the copyright of cultural
    goods within and adjacent to the art field.
    Presenting such a movement in an art institution changes one of the
    basic tenets of art, and for an art institution the project’s main allure probably lies in this kind of expansion of the art field. A shared politics is welcome, but nothing makes an art institution so happy as the sense of purpose that a project like Public Library can endow it with. (This, of course,
    comes with its own irony, for while art institutions nowadays compete for
    projects that show emphatically how obsolete the aesthetic regime of art is,
    they continue to base their claims of social influence on knowledge gained
    through some form of aesthetic appreciation, however they go about explaining and justifying it.) At the same time, Public Library’s nonchalance
    about institutional maladies and anxieties provides a homeopathic medicine whose effect is sometimes so strong that discussion about placebos
    becomes, at least temporarily, beside the point. One occasion when Public
    Library’s roving of the political terrain became blatantly direct was the exhibition Written-off: On the Occasion of the 20th Anniversary of Operation
    Storm, which we organized in the summer of 2015 at Gallery Nova (figs.
    2–4).
    The exhibition/action Written-off was based on data from Ante Lesaja’s
    extensive research on “library purification”, which he published in his book
    Knjigocid: Uništavanje knjige u Hrvatskoj 1990-ih (Libricide: The Destruction
    of Books in Croatia in the 1990s).18 People were invited to bring in copies of
books that had been removed from Croatian public libraries in the 1990s.

18 Ante Lesaja, Knjigocid: Uništavanje knjige u Hrvatskoj 1990-ih, Profil and Srpsko narodno vijeće, Zagreb, 2012.
    The books were scanned and deposited in a digital archive; they then became available on a website established especially for the project. In Croatia during the 1990s, hundreds of thousands of books were removed from
    schools and factories, from public, specialized, and private libraries, from
    former Yugoslav People’s Army centres, socio-political organizations, and
    elsewhere because of their ideologically inappropriate content, the alphabet they used (Serbian Cyrillic), or the ethnic or political background of the
    authors. The books were mostly thrown into rubbish bins, discarded on
    the street, destroyed, or recycled. What Lesaja’s research clearly shows is
    that the destruction of the books – as well as the destruction of monuments
    to the People’s Liberation War (World War II) – was not the result of individuals running amok, as official accounts preach, but a deliberate and systematic action that symbolically summarizes the dominant politics of the
    1990s, in which war, rampant nationalism, and phrases about democracy
    and sovereignty were used as a rhetorical cloak to cover the nakedness of
    the capitalist counter-revolution and criminal processes of dispossession.
    Written-off: On the Occasion of the 20th Anniversary of Operation Storm
    set up scanners in the gallery, initiated a call for collecting and scanning
    books that had been expunged from public institutions in the 1990s, and
    outlined the criteria for the collection, which corresponded to the basic
    domains in which the destruction of the books, as a form of censorship,
    was originally implemented: books written in the Cyrillic alphabet or in
    Serbian regardless of the alphabet; books forming a corpus of knowledge
    about communism, especially Yugoslav communism, Yugoslav socialism,
    and the history of the workers’ struggle; and books presenting the anti-Fascist and revolutionary character of the People’s Liberation Struggle during
    World War II.
    The exhibition/action was called Written-off because the removal and
    destruction of the books were often presented as a legitimate procedure
    of library maintenance, thus masking the fact that these books were unwanted, ideologically unacceptable, dangerous, harmful, unnecessary, etc.
    Written-off unequivocally placed “book destruction” in the social context
    of the period, when the destruction of “unwanted” monuments and books
    was happening alongside the destruction of homes and the killing of “unwanted” citizens, outside of and prior to war operations. For this reason,
    the exhibition was dedicated to the twentieth anniversary of Operation
    Storm, the final military/police operation in what is called, locally, the
    Croatian Homeland War.19
    The exhibition was intended as a concrete intervention against a political logic that resulted in mass exile and killing, the history of which is
    glossed over and critical discussion silenced, and also against the official
celebrations of the anniversary, which glorified militarism and proclaimed the ethical purity of the victory (resulting in the desired ethnic purity of the nation).

19 Known internationally as the Croatian War of Independence, the war was fought between Croatian forces and the Serb-controlled Yugoslav People’s Army from 1991 to 1995.
    As both symbolic intervention and real-life action, then, the exhibition
    Written-off took place against a background of suppressed issues relating
    to Operation Storm – ethno-nationalism as the flip side of neoliberalism,
justice and the present status of the victims and refugees, and the overall character of the war known officially as the Homeland War, in which discussion of its prominent traits as a civil war is actively silenced and increasingly prosecuted. In protest against the official celebrations
    and military parades, the exhibition marked the anniversary of Operation
Storm with a collective action that evoked books as symbolic of a “knowledge society” in which knowledge becomes the location of conflictual engagement. It pointed toward the struggle over collective symbolic capital
    and collective memory, in which culture as a form of the commons has a
    direct bearing on the kind of place we live in. The Public Library project,
    however, is engaged not so much with cultural memory and remembrance
    as a form of recollection or testimony that might lend political legitimation
    to artistic gestures; rather, it engages with history as a construction and
    speculative proposition about the future, as Peter Osborne argues in his
    polemical hypotheses on the notion of contemporary art that distinguishes
    between “contemporary” and “present-day” art: “History is not just a relationship between the present and the past – it is equally about the future.
    It is this speculative futural moment that definitively separates the concept
    of history from memory.”20 For Public Library, the future that participates
    in the construction of history does not yet exist, but it is defined as more
    than just a project against the present as reflected in the exclusionary, parochially nationalistic, revisionist and increasingly fascist discursive practices of the Croatian political elites. Rather, the future comes into being as
    an active and collective construction based on the emancipatory aspects of
    historical experiences as future possibilities.
    Although defined as an action, the project is not exultantly enthusiastic
    about collectivity or the immediacy and affective affinities of its participants, but rather it transcends its local and transient character by taking
    up the broader counter-hegemonic struggle for the mutual management
    of joint resources. Its endeavour is not limited to the realm of the political
    and ideological but is rooted in the repurposing of technological potentials
    from the restrictive capitalist game and the reutilization of the existing infrastructure to build a qualitatively different one. While the culture industry adapts itself to the limited success of measures that are geared toward
    preventing the free circulation of information by creating new strategies
    for pushing information into a form of property and expropriating value

through the control of metadata (information about information),21 Public Library shifts the focus away from aesthetic intention – from unique, closed, and discrete works – to a database of works and the metabolism of the database. It creates values through indexing and connectivity, imagined communities and imaginative dialecticization. The web of interpenetration and determination activated by Public Library creates a pedagogical endeavour that also includes a propagandist thrust, if the notion of propaganda can be recast in its original meaning as “things that must be disseminated”.

20 Peter Osborne, Anywhere or Not at All: Philosophy of Contemporary Art, Verso, London and New York, 2013, p. 194.
21 McKenzie Wark, “Metadata Punk”, in Public Library, pp. 113–117 (see n. 9).

fig. 1
Marcell Mars, Art as Infrastructure: Public Library, installation view, Really Useful Knowledge, curated by WHW, Museo Nacional Centro de Arte Reina Sofia, Madrid, 2014. Photo by Joaquín Cortés and Román Lores / MNCARS.

fig. 2
Public Library, exhibition view, Gallery Nova, Zagreb, 2015. Photo by Ivan Kuharić.

fig. 3
Written-off: On the Occasion of the 20th Anniversary of Operation Storm, exhibition detail, Gallery Nova, Zagreb, 2015. Photo by Ivan Kuharić.

fig. 4
Written-off: On the Occasion of the 20th Anniversary of Operation Storm, exhibition detail, Gallery Nova, Zagreb, 2015. Photo by Ivan Kuharić.

fig. 5
Trevor Paglen and Jacob Appelbaum, Autonomy Cube, installation view, Really Useful Knowledge, curated by WHW, Museo Nacional Centro de Arte Reina Sofia, Madrid, 2014. Photo by Joaquín Cortés and Román Lores / MNCARS.
    A similar didactic impetus and constructivist praxis is present in the work
    Autonomy Cube, which was developed through the combined expertise of
    artist and geographer Trevor Paglen and internet security researcher, activist and hacker Jacob Appelbaum. This work, too, we presented in the
    Reina Sofia exhibition Really Useful Knowledge, along with Public Library
    and other projects that offered a range of strategies and methodologies
    through which the artists attempted to think through the disjunction between concrete experience and the abstraction of capital, enlisting pedagogy as a crucial element in organized collective struggles. Autonomy Cube
offers a free, open-access, encrypted internet hotspot that routes internet traffic over Tor, a volunteer-run global network of servers, relays, and services that provides anonymous and unsurveilled communication. By anonymizing this traffic, Autonomy Cube prevents so-called traffic analysis – the tracking, analysis, and theft of metadata for the purpose of anticipating people’s behaviour and relationships. In the hands of the surveillance
    state this data becomes not only a means of steering our tastes, modes of
    consumption, and behaviours for the sake of making profit but also, and
    more crucially, an effective method and weapon of political control that
    can affect political organizing in often still-unforeseeable ways that offer
    few reasons for optimism. Visually, Autonomy Cube references minimalist
    sculpture (fig. 5) (specifically, Hans Haacke’s seminal piece Condensation
    Cube, 1963–1965), but its main creative drive lies in the affirmative salvaging of technologies, infrastructures, and networks that form both the leading organizing principle and the pervasive condition of complex societies,
    with the aim of supporting the potentially liberated accumulation of collective knowledge and action. Aesthetic and art-historical references serve
    as camouflage or tools for a strategic infiltration that enables expansion of
    the movement’s field of influence and the projection of a different (contingent) future. Engagement with historical forms of challenging institutions
    becomes the starting point of a poetic praxis that materializes the object of
    its striving in the here and now.
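The routing mechanism at work here can be made concrete with a minimal sketch – an illustration of ours, not part of the artwork. Assuming a standard Tor client is running locally and listening on its default SOCKS port (9050), a few lines of Python suffice to send a request through the volunteer-run network, so that the destination sees only the address of a Tor exit relay:

```python
# Illustrative sketch only: route an HTTP request through a locally
# running Tor client, as Autonomy Cube does for an entire Wi-Fi hotspot.
# Assumes the standard Tor daemon is listening on its default SOCKS port
# (9050) and that 'requests' is installed with SOCKS support:
#   pip install requests[socks]
import requests

# "socks5h" (as opposed to "socks5") resolves DNS names inside the Tor
# network as well, so lookups are not leaked to the local resolver.
TOR_PROXY = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

# check.torproject.org reports whether the request arrived via a Tor exit
# relay; the IP address it sees belongs to the exit, not to the client.
response = requests.get("https://check.torproject.org/api/ip",
                        proxies=TOR_PROXY, timeout=60)
print(response.json())  # e.g. {"IsTor": true, "IP": "<exit relay address>"}
```

What such routing withholds is precisely the metadata discussed above: even when content is encrypted, an unproxied request still reveals who is talking to whom, and when.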
Both Public Library and Autonomy Cube build their autonomy on the dedication and effort of the collective body, without which they would not
    exist, rendering this interdependence not as some consensual idyll of cooperation but as conflicting fields that create further information and experiences. By doing so, they question the traditional edifice of art in a way
    that supports Peter Osborne’s claim that art is defined not by its aesthetic
    or medium-based status, but by its poetics: “Postconceptual art articulates a post-aesthetic poetics.”22 This means going beyond criticality and
    bringing into the world something defined not by its opposition to the real,
    but by its creation of the fiction of a shared present, which, for Osborne,
    is what makes art truly contemporary. And if projects like these become a
    kind of political trophy for art institutions, the side the institutions choose
    nevertheless affects the common sense of our future.

22 Osborne, Anywhere or Not at All, p. 33.

     
