USDC
Complaint: Elsevier v. SciHub and LibGen
2015



UNITED STATES DISTRICT COURT
SOUTHERN DISTRICT OF NEW YORK

ELSEVIER INC., ELSEVIER B.V., ELSEVIER LTD.,
Plaintiffs,

v.

SCI-HUB d/b/a WWW.SCI-HUB.ORG, THE LIBRARY GENESIS PROJECT d/b/a LIBGEN.ORG, ALEXANDRA ELBAKYAN, JOHN DOES 1-99,
Defendants.

Index No. 15-cv-4282 (RWS)

COMPLAINT

Plaintiffs Elsevier Inc., Elsevier B.V., and Elsevier Ltd. (collectively “Elsevier”), by their attorneys DeVore & DeMarco LLP, for their complaint against www.sci-hub.org, www.libgen.org, Alexandra Elbakyan, and John Does 1-99 (collectively the “Defendants”), allege as follows:

NATURE OF THE ACTION

1. This is a civil action seeking damages and injunctive relief for: (1) copyright infringement under the copyright laws of the United States (17 U.S.C. § 101 et seq.); and (2) violations of the Computer Fraud and Abuse Act, 18 U.S.C. § 1030, based upon Defendants’ unlawful access to, use, reproduction, and distribution of Elsevier’s copyrighted works. Defendants’ actions in this regard have caused and continue to cause irreparable injury to Elsevier and its publishing partners (including the scholarly societies for which it publishes certain journals).


PARTIES

2. Plaintiff Elsevier Inc. is a corporation organized under the laws of Delaware, with its principal place of business at 360 Park Avenue South, New York, New York 10010.

3. Plaintiff Elsevier B.V. is a corporation organized under the laws of the Netherlands, with its principal place of business at Radarweg 29, Amsterdam, 1043 NX, Netherlands.

4. Plaintiff Elsevier Ltd. is a corporation organized under the laws of the United Kingdom, with its principal place of business at 125 London Wall, London EC2Y 5AS, United Kingdom.

5. Upon information and belief, Defendant Sci-Hub is an individual or organization engaged in the operation of the website accessible at the URL “www.sci-hub.org,” and related subdomains, including but not limited to the subdomains “www.sciencedirect.com.sci-hub.org,”
“www.elsevier.com.sci-hub.org,” “store.elsevier.com.sci-hub.org,” and various subdomains
incorporating the company and product names of other major global publishers (collectively with www.sci-hub.org the “Sci-Hub Website”). The sci-hub.org domain name is registered by
“Fundacion Private Whois,” located in Panama City, Panama, to an unknown registrant. As of
the date of this filing, the Sci-Hub Website is assigned the IP address 31.184.194.81. This IP address is part of a range of IP addresses assigned to Petersburg Internet Network Ltd., a web-hosting company located in Saint Petersburg, Russia.

6. Upon information and belief, Defendant Library Genesis Project is an organization which operates an online repository of copyrighted materials accessible through the website located at the URL “libgen.org” as well as a number of other “mirror” websites
(collectively the “Libgen Domains”). The libgen.org domain is registered by “Whois Privacy
Corp.,” located at Ocean Centre, Montagu Foreshore, East Bay Street, Nassau, New Providence, Bahamas, to an unknown registrant. As of the date of this filing, libgen.org is assigned the IP address 93.174.95.71. This IP address is part of a range of IP addresses assigned to Ecatel Ltd., a web-hosting company located in Amsterdam, the Netherlands.

7. The Libgen Domains include “elibgen.org,” “libgen.info,” “lib.estrorecollege.org,” and “bookfi.org.”

8. Upon information and belief, Defendant Alexandra Elbakyan is the principal owner and/or operator of Sci-Hub. Upon information and belief, Elbakyan is a resident of Almaty, Kazakhstan.

9. Elsevier is unaware of the true names and capacities of the individuals named as Does 1-99 in this Complaint (together with Alexandra Elbakyan, the “Individual Defendants”), and their residence and citizenship are also unknown. Elsevier will amend its Complaint to allege the names, capacities, residence, and citizenship of the Doe Defendants when their identities are learned.

10. Upon information and belief, the Individual Defendants are the owners and operators of numerous websites, including Sci-Hub and the websites located at the various Libgen Domains, and a number of e-mail addresses and accounts at issue in this case.

11. The Individual Defendants have participated in, exercised control over, and benefited from the infringing conduct described herein, which has resulted in substantial harm to the Plaintiffs.

JURISDICTION AND VENUE

12. This is a civil action arising from the Defendants’ violations of the copyright laws of the United States (17 U.S.C. § 101 et seq.) and the Computer Fraud and Abuse Act (“CFAA”), 18 U.S.C. § 1030. Therefore, the Court has subject matter jurisdiction over this action pursuant to 28 U.S.C. § 1331.

13. Upon information and belief, the Individual Defendants own and operate computers and Internet websites and engage in conduct that injures Plaintiff in this district, while
also utilizing instrumentalities located in the Southern District of New York to carry out the acts complained of herein.

14. Defendants have affirmatively directed actions at the Southern District of New York by utilizing computer servers located in the District without authorization and by
unlawfully obtaining access credentials belonging to individuals and entities located in the
District, in order to unlawfully access, copy, and distribute Elsevier's copyrighted materials
which are stored on Elsevier’s ScienceDirect platform.

15. Defendants have committed the acts complained of herein through unauthorized access to Plaintiffs’ copyrighted materials which are stored and maintained on computer servers located in the Southern District of New York.

16. Defendants have undertaken the acts complained of herein with knowledge that such acts would cause harm to Plaintiffs and their customers in both the Southern District of New York and elsewhere. Defendants have caused Plaintiffs injury while deriving revenue from interstate or international commerce by committing the acts complained of herein. Therefore, this Court has personal jurisdiction over Defendants.

17. Venue in this District is proper under 28 U.S.C. § 1391(b) because a substantial part of the events giving rise to Plaintiffs’ claims occurred in this District and because the property that is the subject of Plaintiffs’ claims is situated in this District.


FACTUAL ALLEGATIONS
Elsevier’s Copyrights in Publications on ScienceDirect
18. Elsevier is a world-leading provider of professional information solutions in the Science, Medical, and Health sectors. Elsevier publishes, markets, sells, and licenses academic textbooks, journals, and examinations in the fields of science, medicine, and health. The majority of Elsevier’s institutional customers are universities, governmental entities, educational institutions, and hospitals that purchase physical and electronic copies of Elsevier’s products and access to Elsevier’s digital libraries. Elsevier distributes its scientific journal articles and book chapters electronically via its proprietary subscription database “ScienceDirect” (www.sciencedirect.com). In most cases, Elsevier holds the copyright and/or exclusive distribution rights to the works available through ScienceDirect. In addition, Elsevier holds trademark rights in “Elsevier,” “ScienceDirect,” and several other related trade names.

19. The ScienceDirect database is home to almost one-quarter of the world’s peer-reviewed, full-text scientific, technical, and medical content. The ScienceDirect service features sophisticated search and retrieval tools for students and professionals which facilitate access to over 10 million copyrighted publications. More than 15 million researchers, health care professionals, teachers, students, and information professionals around the globe rely on ScienceDirect as a trusted source of nearly 2,500 journals and more than 26,000 book titles.

20. Authorized users are provided access to the ScienceDirect platform by way of non-exclusive, non-transferable subscriptions between Elsevier and its institutional customers. According to the terms and conditions of these subscriptions, authorized users of ScienceDirect must be users affiliated with the subscriber (e.g., full-time and part-time students, faculty, staff, and researchers of subscriber universities, and individuals using computer terminals within the library facilities at the subscriber for personal research, education, or other non-corporate use).

21. A substantial portion of American research universities maintain active subscriptions to ScienceDirect. These subscriptions, under license, allow the universities to provide their faculty and students access to the copyrighted works within the ScienceDirect database.

22. Elsevier stores and maintains the copyrighted material available on ScienceDirect on servers owned and operated by a third party whose servers are located in the Southern District of New York and elsewhere. In order to optimize performance, these third-party servers collectively operate as a distributed network which serves cached copies of Elsevier’s copyrighted materials by way of particular servers that are geographically close to the user. For example, a user who accesses ScienceDirect from a university located in the Southern District of New York will likely be served that content from a server physically located in the District.

Authentication of Authorized University ScienceDirect Users
23. Elsevier maintains the integrity and security of the copyrighted works accessible on ScienceDirect by allowing only authenticated users access to the platform. Elsevier authenticates educational users who access ScienceDirect through their affiliated university’s subscription by verifying that they are able to access ScienceDirect from a computer system or network previously identified as belonging to a subscribing university.

24. Elsevier does not track individual educational users’ access to ScienceDirect. Instead, Elsevier verifies only that the user has authenticated access to a subscribing university.

25. Once an educational user authenticates his computer with ScienceDirect on a university network, that computer is permitted access to ScienceDirect for a limited amount of time without re-authenticating. For example, a student could access ScienceDirect from their laptop while sitting in a university library, then continue to access ScienceDirect using that laptop from their dorm room later that day. After a specified period of time has passed, however, a user will have to re-authenticate his or her computer’s access to ScienceDirect by connecting to the platform through a university network.

26. As a matter of practice, educational users access university networks, and thereby authenticate their computers with ScienceDirect, primarily through one of two methods. First, the user may be physically connected to a university network, for example by taking their computer to the university’s library. Second, the user may connect remotely to the university’s network using a proxy connection. Universities offer proxy connections to their students and faculty so that those users may access university computing resources – including access to research databases such as ScienceDirect – from remote locations which are unaffiliated with the university. This practice facilitates the use of ScienceDirect by students and faculty while they are at home, travelling, or otherwise off-campus.
Defendants’ Unauthorized Access to University Proxy Networks to Facilitate Copyright
Infringement
27. Upon information and belief, Defendants are reproducing and distributing unauthorized copies of Elsevier’s copyrighted materials, unlawfully obtained from ScienceDirect, through Sci-Hub and through various websites affiliated with the Library Genesis Project. Specifically, Defendants utilize their websites located at sci-hub.org and at the Libgen Domains to operate an international network of piracy and copyright infringement by circumventing legal and authorized means of access to the ScienceDirect database. Defendants’ piracy is supported by persistent intrusion into and unauthorized access to the computer networks of Elsevier and its institutional subscribers, including universities located in the Southern District of New York.

28. Upon information and belief, Defendants have unlawfully obtained and continue to unlawfully obtain student or faculty access credentials which permit proxy connections to universities which subscribe to ScienceDirect, and use these credentials to gain unauthorized access to ScienceDirect.

29. Upon information and belief, Defendants have used and continue to use such access credentials to authenticate access to ScienceDirect and, subsequently, to obtain copyrighted scientific journal articles therefrom without valid authorization.

30. The Sci-Hub website requires user interaction in order to facilitate its illegal copyright infringement scheme. Specifically, before a Sci-Hub user can obtain access to copyrighted scholarly journals, articles, and books that are maintained by ScienceDirect, he must first perform a search on the Sci-Hub page. A Sci-Hub user may search for content using either (a) a general keyword-based search, or (b) a journal, article, or book identifier (such as a Digital Object Identifier, PubMed Identifier, or the source URL).

31. When a user performs a keyword search on Sci-Hub, the website returns a proxied version of search results from the Google Scholar search database.1 When a user selects one of the search results, if the requested content is not available from the Library Genesis Project, Sci-Hub unlawfully retrieves the content from ScienceDirect using the access previously obtained. Sci-Hub then provides a copy of that article to the requesting user, typically in PDF format. If, however, the requested content can be found in the Library Genesis Project repository, upon information and belief, Sci-Hub obtains the content from the Library Genesis Project repository and provides that content to the user.

1 Google Scholar provides its users the capability to search for scholarly literature, but does not provide the full text of copyrighted scientific journal articles accessible through paid subscription services such as ScienceDirect. Instead, Google Scholar provides bibliographic information concerning such articles along with a link to the platform through which the article may be purchased or accessed by a subscriber.

32. When a user searches on Sci-Hub for an article available on ScienceDirect using a journal or article identifier, the user is redirected to a proxied version of the ScienceDirect page where the user can download the requested article at no cost. Upon information and belief, Sci-Hub facilitates this infringing conduct by using unlawfully-obtained access credentials to university proxy servers to establish remote access to ScienceDirect through those proxy servers. If, however, the requested content can be found in the Library Genesis Project repository, upon information and belief, Sci-Hub obtains the content from it and provides it to the user.

33. Upon information and belief, Sci-Hub engages in no activity other than the illegal reproduction and distribution of digital copies of Elsevier’s copyrighted works and the copyrighted works of other publishers, and the encouragement, inducement, and material contribution to the infringement of the copyrights of those works by third parties – i.e., the users of the Sci-Hub website.

34. Upon information and belief, in addition to the blatant and rampant infringement of Elsevier’s copyrights as described above, the Defendants have also used the Sci-Hub website to earn revenue from the piracy of copyrighted materials from ScienceDirect. Sci-Hub has at various times accepted funds through a variety of payment processors, including PayPal, Yandex, WebMoney, QiWi, and Bitcoin.
Sci-Hub’s Use of the Library Genesis Project as a Repository for Unlawfully-Obtained
Scientific Journal Articles and Books
35. Upon information and belief, when Sci-Hub pirates and downloads an article from ScienceDirect in response to a user request, in addition to providing a copy of that article to that user, Sci-Hub also provides a duplicate copy to the Library Genesis Project, which stores the article in a database accessible through the Internet. Upon information and belief, the Library Genesis Project is designed to be a permanent repository of this and other illegally obtained content.

36. Upon information and belief, in the event that a Sci-Hub user requests an article which has already been provided to the Library Genesis Project, Sci-Hub may provide that user access to a copy provided by the Library Genesis Project rather than re-download an additional copy of the article from ScienceDirect. As a result, Defendants Sci-Hub and Library Genesis Project act in concert to engage in a scheme designed to facilitate the unauthorized access to and wholesale distribution of Elsevier’s copyrighted works legitimately available on the ScienceDirect platform.
The Library Genesis Project’s Unlawful Distribution of Plaintiff’s Copyrighted Works
37. Access to the Library Genesis Project’s repository is facilitated by the website “libgen.org,” which provides its users the ability to search, download content from, and upload content to, the repository. The main page of libgen.org allows its users to perform searches in various categories, including “LibGen (Sci-Tech)” and “Scientific articles.” In addition to searching by keyword, users may also search for specific content by various other fields, including title, author, periodical, publisher, or ISBN or DOI number.

38. The libgen.org website indicates that the Library Genesis Project repository contains approximately 1 million “Sci-Tech” documents and 40 million scientific articles. Upon information and belief, the large majority of these works are subject to copyright protection and are being distributed through the Library Genesis Project without the permission of the applicable rights-holder. Upon information and belief, the Library Genesis Project serves primarily, if not exclusively, as a scheme to violate the intellectual property rights of the owners of millions of copyrighted works.

39. Upon information and belief, Elsevier owns the copyrights in a substantial number of copyrighted materials made available for distribution through the Library Genesis Project. Elsevier has not authorized the Library Genesis Project or any of the Defendants to copy, display, or distribute through any of the complained-of websites any of the content stored on ScienceDirect to which it holds the copyright. Among the works infringed by the Library Genesis Project are the “Guyton and Hall Textbook of Medical Physiology” and the article “The Varus Ankle and Instability” (published in Elsevier’s journal “Foot and Ankle Clinics of North America”), each of which is protected by Elsevier’s federally registered copyrights.

40. In addition to the Library Genesis Project website accessible at libgen.org, users may access the Library Genesis Project repository through a number of “mirror” sites accessible through other URLs. These mirror sites are similar, if not identical, in functionality to libgen.org. Specifically, the mirror sites allow their users to search and download materials from the Library Genesis Project repository.
FIRST CLAIM FOR RELIEF
(Direct Infringement of Copyright)
41. Elsevier incorporates by reference the allegations contained in paragraphs 1-40 above.

42. Elsevier’s copyright rights and exclusive distribution rights to the works available on ScienceDirect (the “Works”) are valid and enforceable.

43. Defendants have infringed on Elsevier’s copyright rights to these Works by knowingly and intentionally reproducing and distributing these Works without authorization.

44. The acts of infringement described herein have been willful, intentional, and purposeful, in disregard of and indifferent to Plaintiffs’ rights.

45. Without authorization from Elsevier, or right under law, Defendants are directly liable for infringing Elsevier’s copyrighted Works pursuant to 17 U.S.C. §§ 106(1) and/or (3).

46. As a direct result of Defendants’ actions, Elsevier has suffered and continues to suffer irreparable harm for which Elsevier has no adequate remedy at law, and which will continue unless Defendants’ actions are enjoined.

47. Elsevier seeks injunctive relief and costs and damages in an amount to be proven at trial.
SECOND CLAIM FOR RELIEF
(Secondary Infringement of Copyright)
48. Elsevier incorporates by reference the allegations contained in paragraphs 1-40 above.

49. Elsevier’s copyright rights and exclusive distribution rights to the works available on ScienceDirect (the “Works”) are valid and enforceable.

50. Defendants have infringed on Elsevier’s copyright rights to these Works by knowingly and intentionally reproducing and distributing these Works without license or other authorization.

51. Upon information and belief, Defendants intentionally induced, encouraged, and materially contributed to the reproduction and distribution of these Works by third party users of websites operated by Defendants.

52. The acts of infringement described herein have been willful, intentional, and purposeful, in disregard of and indifferent to Elsevier’s rights.

53. Without authorization from Elsevier, or right under law, Defendants are directly liable for third parties’ infringement of Elsevier’s copyrighted Works pursuant to 17 U.S.C. §§ 106(1) and/or (3).

54. Upon information and belief, Defendants profited from third parties’ direct infringement of Elsevier’s Works.

55. Defendants had the right and the ability to supervise and control their websites and the third party infringing activities described herein.

56. As a direct result of Defendants’ actions, Elsevier has suffered and continues to suffer irreparable harm for which Elsevier has no adequate remedy at law, and which will continue unless Defendants’ actions are enjoined.

57. Elsevier seeks injunctive relief and costs and damages in an amount to be proven at trial.
THIRD CLAIM FOR RELIEF
(Violation of the Computer Fraud & Abuse Act)
58. Elsevier incorporates by reference the allegations contained in paragraphs 1-40 above.

59. Elsevier’s computers and servers, the third-party computers and servers which store and maintain Elsevier’s copyrighted works for ScienceDirect, and Elsevier’s customers’ computers and servers which facilitate access to Elsevier’s copyrighted works on ScienceDirect, are all “protected computers” under the Computer Fraud and Abuse Act (“CFAA”).

60. Defendants (a) knowingly and intentionally accessed such protected computers without authorization and thereby obtained information from the protected computers in a transaction involving an interstate or foreign communication (18 U.S.C. § 1030(a)(2)(C)); and (b) knowingly and with an intent to defraud accessed such protected computers without authorization and obtained information from such computers, which Defendants used to further the fraud and obtain something of value (18 U.S.C. § 1030(a)(4)).

61. Defendants’ conduct has caused, and continues to cause, significant and irreparable damages and loss to Elsevier.

62. Defendants’ conduct has caused a loss to Elsevier during a one-year period aggregating at least $5,000.

63. As a direct result of Defendants’ actions, Elsevier has suffered and continues to suffer irreparable harm for which Elsevier has no adequate remedy at law, and which will continue unless Defendants’ actions are enjoined.

64. Elsevier seeks injunctive relief, as well as costs and damages in an amount to be proven at trial.
PRAYER FOR RELIEF
WHEREFORE, Elsevier respectfully requests that the Court:
A. Enter preliminary and permanent injunctions, enjoining and prohibiting Defendants,
their officers, directors, principals, agents, servants, employees, successors and
assigns, and all persons and entities in active concert or participation with them, from
engaging in any of the activity complained of herein or from causing any of the injury
complained of herein and from assisting, aiding, or abetting any other person or
business entity in engaging in or performing any of the activity complained of herein
or from causing any of the injury complained of herein;
B. Enter an order that, upon Elsevier’s request, those in privity with Defendants and those with notice of the injunction, including any Internet search engines, Web Hosting and Internet Service Providers, domain-name registrars, and domain name registries or their administrators that are provided with notice of the injunction, cease facilitating access to any or all domain names and websites through which Defendants engage in any of the activity complained of herein;
C. Enter an order that, upon Elsevier’s request, those organizations which have
registered Defendants’ domain names on behalf of Defendants shall disclose
immediately to Plaintiffs all information in their possession concerning the identity of
the operator or registrant of such domain names and of any bank accounts or financial
accounts owned or used by such operator or registrant;
D. Enter an order that, upon Elsevier’s request, the TLD Registries for the Defendants’ websites, or their administrators, shall place the domain names on registryHold/serverHold as well as serverUpdate, serverDelete, and serverTransfer prohibited statuses, for the remainder of the registration period for any such website;
E. Enter an order canceling or deleting, or, at Elsevier’s election, transferring the domain
name registrations used by Defendants to engage in the activity complained of herein
to Elsevier’s control so that they may no longer be used for illegal purposes;
F. Enter an order awarding Elsevier its actual damages incurred as a result of Defendants’ infringement of Elsevier’s copyright rights in the Works and all profits Defendants realized as a result of their acts of infringement, in amounts to be determined at trial; or, in the alternative, awarding Elsevier, pursuant to 17 U.S.C. § 504, statutory damages for the acts of infringement committed by Defendants, enhanced to reflect the willful nature of the Defendants’ infringement;
G. Enter an order disgorging Defendants’ profits;


Thylstrup
The Politics of Mass Digitization
2019


The Politics of Mass Digitization

Nanna Bonde Thylstrup

The MIT Press

Cambridge, Massachusetts

London, England

# Table of Contents

1. Acknowledgments
2. I Framing Mass Digitization
1. 1 Understanding Mass Digitization
3. II Mapping Mass Digitization
1. 2 The Trials, Tribulations, and Transformations of Google Books
2. 3 Sovereign Soul Searching: The Politics of Europeana
3. 4 The Licit and Illicit Nature of Mass Digitization
4. III Diagnosing Mass Digitization
1. 5 Lost in Mass Digitization
2. 6 Concluding Remarks
5. References
6. Index

## List of figures

1. Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.
2. Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R. Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S. Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

# Acknowledgments

I am very grateful to all those who have contributed to this book in various
ways. I owe special thanks to Bjarki Valtysson, Frederik Tygstrup, and Peter
Duelund, for their supervision and help thinking through this project, its
questions, and its forms. I also wish to thank Andrew Prescott, Tobias Olsson,
and Rune Gade for making my dissertation defense a memorable and thoroughly
enjoyable day of constructive critique and lively discussions. Important parts
of the research for this book further took place during three visiting stays
at Cornell University, Duke University, and Columbia University. I am very
grateful to N. Katherine Hayles, Andreas Huyssen, Timothy Brennan, Lydia
Goehr, Rodney Benson, and Fredric Jameson, who generously welcomed me across
the Atlantic and provided me with invaluable new perspectives, as well as
theoretical insights and challenges. Beyond the aforementioned, three people
in particular have been instrumental in terms of reading through drafts and in
providing constructive challenges, intellectual critique, moral support, and
fun times in equal proportions—thank you so much Kristin Veel, Henriette
Steiner, and Daniela Agostinho. Marianne Ping-Huang has further offered
invaluable support to this project and her theoretical and practical
engagement with digital archives and academic infrastructures continues to be
a source of inspiration. I am also immensely grateful to all the people
working on or with mass digitization who generously volunteered their time to
share with me their visions for, and perspectives on, mass digitization.

This book has further benefited greatly from dialogues taking place within the
framework of two larger research projects, which I have been fortunate enough
to be involved in: Uncertain Archives and The Past’s Future. I am very
grateful to all my colleagues in both these research projects: Kristin Veel,
Daniela Agostinho, Annie Ring, Katrine Dirkinck-Holmfeldt, Pepita Hesselberth,
Kristoffer Ørum, Ekaterina Kalinina, Anders Søgaard, as well as Helle Porsdam,
Jeppe Eimose, Stina Teilmann, John Naughton, Jeffrey Schnapp, Matthew Battles,
and Fiona McMillan. I am further indebted to La Vaughn Belle, George Tyson,
Temi Odumosu, Mathias Danbolt, Mette Kia, Lene Asp, Marie Blønd, Mace Ojala,
Renee Ridgway, and many others for our conversations on the ethical issues of
the mass digitization of colonial material. I have also benefitted from the
support and insights offered by other colleagues at the Department of Arts and
Cultural Studies, University of Copenhagen.

A big part of writing a book is also about keeping sane, and for this you need
great colleagues that can pull you out of your own circuit and launch you into
other realms of inquiry through collaboration, conversation, or just good
times. Thank you Mikkel Flyverbom, Rasmus Helles, Stine Lomborg, Helene
Ratner, Anders Koed Madsen, Ulrik Ekman, Solveig Gade, Anna Leander, Mareile
Kaufmann, Holger Schulze, Jakob Kreutzfeld, Jens Hauser, Nan Gerdes, Kerry
Greaves, Mikkel Thelle, Mads Rosendahl Thomsen, Knut Ove Eliassen, Jens-Erik
Mai, Rikke Frank Jørgensen, Klaus Bruhn Jensen, Marisa Cohn, Rachel Douglas-
Jones, Taina Bucher, and Baki Cakici. To this end you also need good
friends—thank you Thomas Lindquist Winther-Schmidt, Mira Jargil, Christian
Sønderby Jepsen, Agnete Sylvest, Louise Michaëlis, Jakob Westh, Gyrith Ravn,
Søren Porse, Jesper Værn, Jacob Thorsen, Maia Kahlke, Josephine Michau, Lærke
Vindahl, Chris Pedersen, Marianne Kiertzner, Rebecca Adler-Nissen, Stig
Helveg, Ida Vammen, Alejandro Savio, Lasse Folke Henriksen, Siine Jannsen,
Rens van Munster, Stephan Alsman, Sayuri Alsman, Henrik Moltke, Sean Treadway,
and many others. I also have to thank Christer and all the people at
Alimentari and CUB Coffee who kept my caffeine levels replenished when I tired
of the ivory tower.

I am furthermore very grateful for the wonderful guidance and support from MIT
Press, including Noah Springer, Marcy Ross, and Susan Clark—and of course for
the many inspiring conversations with and feedback from Doug Sery. I also want
to thank the anonymous peer reviewers whose insightful and constructive
comments helped improve this book immensely. Research for this book was
supported by grants from the Danish Research Council and the Velux Foundation.

Last, but not least, I wish to thank my loving partner Thomas Gammeltoft-
Hansen for his invaluable and critical input, optimistic outlook, and perfect
morning cappuccinos; my son Georg and daughter Liv for their general
awesomeness; and my extended family—Susanne, Bodil, and Hans—for their support
and encouragement.

I dedicate this book to my parents, Karen Lise Bonde Thylstrup and Asger
Thylstrup, without whom neither this book nor I would have materialized.

# I Framing Mass Digitization

# 1 Understanding Mass Digitization

## Introduction

Mass digitization is first and foremost a professional concept. While it has
become a disciplinary buzzword used to describe large-scale digitization
projects of varying scope, it enjoys little circulation beyond the confines of
information science and such projects themselves. Yet, as this book argues, it
has also become a defining concept of our time. Indeed, it has even attained
the status of a cultural and moral imperative and obligation.1 Today, anyone
with an Internet connection can access hundreds of millions of digitized
cultural artifacts from the comfort of their desk—or many other locations—and
cultural institutions and private bodies add thousands of new cultural works
to the digital sphere every day. The practice of mass digitization is forming
new nexuses of knowledge, and new ways of engaging with that knowledge. What
at first glance appears to be a simple act of digitization (the transformation
of singular books from boundary objects to open sets of data), reveals, on
closer examination, a complex process teeming with diverse political, legal,
and cultural investments and controversies.

This volume asks why mass digitization has become such a “matter of concern,”2
and explores its implications for the politics of cultural memory. In
practical terms, mass digitization is digitization on an industrial scale. But
in cultural terms, mass digitization is much more than this. It is the promise
of heightened access to—and better preservation of—the past, and of more
original scholarship and better funding opportunities. It also promises
entirely new ways of reading, viewing, and structuring archives, new forms of
value and their extraction, and new infrastructures of control. This volume
argues that the shape-shifting quality of mass digitization, and its social
dynamics, alters the politics of cultural memory institutions. Two movements
simultaneously drive mass digitization programs: the relatively new phenomenon
of big data gold rushes, and the historically more familiar archival
accumulative imperative. Yet despite these prospects, mass digitization
projects are also uphill battles. They are costly and speculative processes,
with no guaranteed rate of return, and they are constantly faced by numerous
limitations and contestations on legal, social, and cultural levels.
Nevertheless, both public and private institutions adamantly emphasize the
need to digitize on a massive scale, motivating initiatives around the
globe—from China to Russia, Africa to Europe, South America to North America.
Some of these initiatives are bottom-up projects driven by highly motivated
individuals, while others are top-down and governed by complex bureaucratic
apparatuses. Some are backed by private money, others publicly funded. Some
exist as actual archives, while others figure only as projections in policy
papers. As the ideal of mass digitization filters into different global
empirical situations, the concept of mass digitization attains nuanced
political hues. While all projects formally seek to serve the public interest,
they are in fact infused with much more diverse, and often conflicting,
political and commercial motives and dynamics. The same mass digitization
project can even be imbued with different and/or contradictory investments,
and can change purpose and function over time, sometimes rapidly.

Mass digitization projects are, then, highly political. But they are not
political in the sense that they transfer the politics of analog cultural
memory institutions into the digital sphere 1:1, or even liberate cultural
memory artifacts from the cultural politics of analog cultural memory
institutions. Rather, mass digitization presents a new political cultural
memory paradigm, one in which we see strands of technical and ideological
continuities combine with new ideals and opportunities; a political cultural
memory paradigm that is arguably even more complex—or at least appears more
messy to us now—than that of analog institutions, whose politics we have had
time to get used to. In order to grasp the political stakes of mass
digitization, therefore, we need to approach mass digitization projects not as
a continuation of the existing politics of cultural memory, or as purely
technical endeavors, but rather as emerging sociopolitical and sociotechnical
phenomena that introduce new forms of cultural memory politics.

## Framing, Mapping, and Diagnosing Mass Digitization

Interrogating the phenomenon of mass digitization, this book asks the question
of how mass digitization affects the politics of cultural memory institutions.
As a matter of practice, something is clearly changing in the conversion of
bounded—and scarce—historical material into ubiquitous ephemeral data. In
addition to the technical aspects of digitization, mass digitization is also
changing the political territory of cultural memory objects. Global commercial
platforms are increasingly administering and operating their scanning
activities in favor of the digital content they reap from the national “data
tombs” of museums and libraries and the feedback loops these generate. This
integration of commercial platforms into the otherwise primarily public
institutional set-up of cultural memory has produced a reconfiguration of the
political landscape of cultural memory from the traditional symbolic politics
of scarcity, sovereignty, and cultural capital to the late-sovereign
infrapolitics of standardization and subversion.

The empirical outlook of the present book is predominantly Western. Yet, the
overarching dynamics that have been pursued are far from limited to any one
region or continent, nor limited solely to the field of cultural memory.
Digitization is a global phenomenon and its reliance on late-sovereign
politics and subpolitical governance forms are shared across the globe.

The central argument of this book is that mass digitization heralds a new kind
of politics in the regime of cultural memory. Mass digitization of cultural
memory is neither a neutral technical process nor a transposition of the
politics of analog cultural heritage to the digital realm on a 1:1 scale. The
limitations of using conventional cultural-political frameworks for
understanding mass digitization projects become clear when working through the
concepts and regimes of mass digitization. Mass digitization brings together
so many disparate interests and elements that any mono-theoretical lens would
fail to account for the numerous political issues arising within the framework
of mass digitization. Rather, mass digitization should be approached as an
_infrapolitical_ process that brings together a multiplicity of interests
hitherto foreign to the realm of cultural memory.

The first part of the book, “framing,” outlines the theoretical arguments in
the book—that the political dynamics of mass digitization organize themselves
around the development of the technical infrastructures of mass digitization
in late-sovereign frameworks. Fusing infrastructure theory and theories on the
political dynamics of late sovereignty allows us to understand mass
digitization projects as cultural phenomena that are highly dependent on
standardization and globalization processes, while also recognizing that their
resultant infrapolitics can operate as forms of both control and subversion.

The second part of the book, “mapping,” offers an analysis of three different
mass digitization phenomena and how they relate to the late-sovereign politics
that gave rise to them. The part thus examines the historical foundation,
technical infrastructures, and (il)licit status and ideological underpinnings
of three variations of mass digitization projects: primarily corporate,
primarily public, and primarily private. While these variations may come
across as reproductions of more conventional societal structures, the chapters
in part two nevertheless also present us with a paradox: while the different
mass digitization projects that appear in this book—from Google’s privatized
endeavor to Europeana’s supranational politics to the unofficial initiatives
of shadow libraries—have different historical and cultural-political
trajectories and conventional regimes of governance, they also undermine these
conventional categories as they morph and merge into new infrastructures and
produce a new form of infrapolitics. The case studies featured in this book
are not to be taken as exhaustive examples, but rather as distinct, yet
nevertheless entangled, examples of how analog cultural memory is taken online
on a digital scale. They have been chosen with the aim of showing the
diversity of mass digitization, but also how it, as a phenomenon, ultimately
places the user in the dilemma of digital capitalism with its ethos of access,
speed, and participation (in varying degrees). The choices also have their
limitations, however. In their Western bias, which is partly rooted in this
author’s lack of language skills (specifically in Russian and Chinese), for
instance, they fail to capture the breadth and particularities of the
infrapolitics of mass digitization in other parts of the world. Much more
research is needed in this area.

The final part of the book, “diagnosing,” zooms in on the pathologies of mass
digitization in relation to affective questions of desire and uncertainty.
This part argues that instead of approaching mass digitization projects as
rationalized and instrumental projects, we should rather acknowledge them as
ambivalent spatio-temporal projects of desire and uncertainty. Indeed, as the
third part concludes, it is exactly uncertainty and desire that organize the
new spatio-temporal infrastructures of cultural memory institutions, where
notions such as serendipity and the infrapolitics of platforms have taken
precedence over accuracy and sovereign institutional politics. The third part
thus calls into question arguments that imagine mass digitization as
instrumentalized projects that either undermine or produce values of
serendipity, as well as overarching narratives of how mass digitization
produces uncomplicated forms of individualized empowerment and freedom.
Instead, the chapter draws attention to the new cultural logics of platforms
that affect the cultural politics of mass digitization projects.

Crucially, then, this book seeks neither to condemn nor celebrate mass
digitization, but rather to unpack the phenomenon and anchor it in its
contemporary political reality. It offers a story of the ways in which mass
digitization produces new cultural memory institutions online that may be
entwined in the cultural politics of their analog origins, but also raises new
political questions for the collections.

## Setting the Stage: Assembling the Motley Crew of Mass Digitization

The dream and practice of mass digitizing cultural works has been around for
decades and, as this section attests, the projects vary significantly in
shape, size, and form. While rudimentary and nonexhaustive, this section
gathers a motley collection of mass digitization initiatives, from some of the
earliest digitization programs to later initiatives. The goal of this section
is thus not so much to meticulously map mass digitization programs, but rather
to provide examples of projects that might illuminate the purpose of this book
and its efforts to highlight the infrastructural politics of mass
digitization. As the section attests, mass digitization is anything but a
streamlined process. Rather, it is a painstakingly complex process mired in
legal, technical, personal, and political challenges and problems, and it is a
vision whose grand rhetoric often works to conceal its messy reality.

It is pertinent to note that mass digitization suffers from the combined
gendered and racialized reality of cultural institutions, tech corporations,
and infrastructural projects: save a few exceptions, there is precious little
diversity in the official map of mass digitization, even in those projects
that emerge bottom-up. This does not mean that women and minorities have not
formed a crucial part of mass digitization, selecting cultural objects,
prepping them (for instance ironing newspapers to ensure that they are flat),
scanning them, and constructing their digital infrastructures. However, more
often than not, their contributions fade into the background as tenders of the
infrastructures of mass digitization rather than as the (predominantly white,
male) “face” of mass digitization. As such, an important dimension of the
politics of these infrastructural projects is their reproduction of
established gendered and racialized infrastructures already present in both
cultural institutions and the tech industry.3 This book hints at these crucial
dimensions of mass digitization, but much more work is needed to change the
familiar cast of cultural memory institutions, both in the analog and digital
realms.

With these introductory remarks in place, let us now turn to the long and
winding road to mass digitization as we know it today. Locating the exact
origins of this road is a subjective task that often ends up trapping the
explorer in the mirror halls of technology. But it is worth noting that of
course there existed, before the Internet, numerous attempts at capturing and
remediating books in scalable forms, for the purposes both of preservation and
of extending the reach of library collections. One of the most revolutionary
of such technologies before the digital computer or the Internet was
microfilm, which was first held forth as a promising technology of
preservation and remediation in the middle of the 1800s.4 At the beginning of
the twentieth century, the Belgian author, entrepreneur, visionary, lawyer,
peace activist, and one of the founders of information science, Paul Otlet,
brought the possibilities of microfilm to bear directly on the world of
libraries. Otlet authored two influential think pieces that outlined the
benefits of microfilm as a stable and long-term remediation format that could,
ultimately, also be used to extend the reach of literature, just as he and his
collaborator, inventor and engineer Robert Goldschmidt, co-authored a work on
the new form of the book through microphotography, _Sur une forme nouvelle du
livre: le livre microphotographique_.5 In his analyses, Otlet suggested that
the most important transformations would not take place in the book itself,
but in substitutes for it. Some years later, beginning in 1927 with the
Library of Congress microfilming more than three million pages of books and
manuscripts in the British Library, the remediation of cultural works in
microformat became a widespread practice across the world, and microfilm is
still in use to this day.6 Otlet did not confine himself to thinking only
about microphotography, however, but also pursued a more speculative vein,
inspired by contemporary experiments with electromagnetic waves, arguing that
the most radical change of the book would be wireless technology. Moreover, he
also envisioned and partly realized a physical space, _Mundaneum_ , for his
dreams of a universal archive. Paul Otlet and Nobel Peace Prize Winner Henri
La Fontaine conceived of Mundaneum in 1895 as part of their work on
documentation science. Otlet called the Mundaneum “… an Idea, an Institution,
a Method, a Body of work materials and collections, a Building, a Network.” In
more concrete, but no less ambitious terms, the Mundaneum was to gather
together all the world’s knowledge and classify it according to a universal
system they developed called the “Universal Decimal Classification.” In 1910,
Otlet and La Fontaine found a place for their work in the Palais du
Cinquantenaire, a government building in Brussels. Later, Otlet commissioned
Le Corbusier to design a building for the Mundaneum in Geneva. The cooperation
ended unsuccessfully, however, and the Mundaneum later led a nomadic life, moving from The
Hague to Brussels and then in 1993 to the city of Mons in Belgium, where it
now exists as a museum called the Mundaneum Archive Center. Fatefully, Mons, a
former mining district, also houses Google’s largest data center in Europe and
it did not take Google long to recognize the cultural value in entering a
partnership with the Mundaneum, the two parties signing a contract in 2013.
The contract entailed among other things that Google would sponsor a traveling
exhibit on the Mundaneum, as well as a series of talks on Internet issues at
the museum and the university, and that the Mundaneum would use Google’s
social networking service, Google Plus, as a promotional tool. An article in
the _New York Times_ described the partnership as “part of a broader campaign
by Google to demonstrate that it is a friend of European culture, at a time
when its services are being investigated by regulators on a variety of
fronts.”7 The collaboration not only spurred international interest, but also
inspired a group of influential tech activists and artists closely associated
with the creative work of shadow libraries to create the critical archival
project Mondotheque.be, a platform for “discussing and exploring the way
knowledge is managed and distributed today in a way that allows us to invent
other futures and different narrations of the past,”8 and a resulting digital
publication project, _The Radiated Book_, authored by an assembly of
activists, artists, and scholars such as Femke Snelting, Tomislav Medak,
Dusan Barok, Geraldine Juarez, Shin Joung Yeo, and Matthew Fuller.9

Another early precursor of mass digitization emerged with Project Gutenberg,
often referred to as the world’s oldest digital library. Project Gutenberg was
the brainchild of author Michael S. Hart, who in 1971, using technologies such
as ARPANET, Bulletin Board Systems (BBS), and Gopher protocols, experimented
with publishing and distributing books in digital form. As Hart reminisced in
his later text, “The History and Philosophy of Project Gutenberg,”10 Project
Gutenberg emerged out of a donation he received as an undergraduate in 1971,
which consisted of $100 million worth of computing time on the Xerox Sigma V
mainframe at the University of Illinois at Urbana-Champaign. Wanting to make
good use of the donation, Hart, in his own words, “announced that the greatest
value created by computers would not be computing, but would be the storage,
retrieval, and searching of what was stored in our libraries.”11 He therefore
committed himself to converting analog cultural works into digital text in a
format not only available to, but also accessible/readable to, almost all
computer systems: “Plain Vanilla ASCII” (ASCII for “American Standard Code for
Information Interchange”). While Project Gutenberg only converted about 50
works into digital text in the 1970s and the 1980s (the first was the
Declaration of Independence), it today hosts up to 56,000 texts in its
distinctly lo-fi manner.12 Interestingly, Michael S. Hart noted very early on
that the intention of the project was never to reproduce authoritative
editions of works for readers—“who cares whether a certain phrase in
Shakespeare has a ‘:’ or a ‘;’ between its clauses”—but rather to “release
etexts that are 99.9% accurate in the eyes of the general reader.”13 As the
present book attests, this early statement captures one of the central points
of contestation in mass digitization: the trade-off between accuracy and
accessibility, raising questions both of the limits of commercialized
accelerated digitization processes (see chapter 2 on Google Books) and of
class-based and postcolonial implications (see chapter 4 on shadow libraries).

If Project Gutenberg spearheaded the efforts of bringing cultural works into
the digital sphere through manual conversion of analog text into lo-fi digital
text, a French mass digitization project affiliated with the construction of
the Bibliothèque nationale de France (BnF) initiated in 1989 could be
considered one of the earliest examples of actually digitizing cultural works
on an industrial scale.14 The French were thus working on blueprints of mass
digitization programs before mass digitization became a widespread practice,
as part of the construction of a new national library, under the guidance of
Alain Giffard and initiated by François Mitterrand. In a letter sent in 1990 to
Prime Minister Michel Rocard, President Mitterrand outlined his vision of a
digital library, noting that “the novelty will be in the possibility of using
the most modern computer techniques for access to catalogs and documents of
the Bibliothèque nationale de France.”15 The project managed to digitize a
body of 70,000–80,000 titles, a sizeable amount of works for its time. As
Alain Giffard noted in hindsight, “the main difficulty for a digitization
program is to choose the books, and to choose the people to choose the
books.”16 Explaining in a conversation with me how he went about this task,
Giffard emphasized that he chose “not librarians but critics, researchers,
etc.” This choice, he underlined, could be made only because the digitization
program was “the last project of the president and a special mission” and thus
not formally a civil service program.17 The work process was thus as follows:

> I asked them to prepare a list. I told them, “Don’t think about what exists.
I ask of you a list of books that would be logical in this concept of a
library of France.” I had the first list and we showed it to the national
library, which was always fighting internally. So I told them, “I want this
book to be digitized.” But they would never give it to us because of
territory. Their ship was not my ship. So I said to them, “If you don’t give
me the books I shall buy the books.” They said I could never buy them, but
then I started buying the books from antiques suppliers because I earned a lot
of money at that time. So in the end I had a lot of books. And I said to them,
“If you want the books digitized you must give me the books.” But of the
80,000 books that were digitized, half were not in the collection. I used the
staff’s garages for the books, 80,000 books. It is an incredible story.18

Incredible indeed. And a wonderful anecdote that makes clear that mass
digitization, rather than being just a technical challenge, is also a
politically contingent process that raises fundamental questions of territory
(institutional as well as national), materiality, and culture. The integration
of the digital _très grande bibliothèque_ into the French national mass
digitization project Gallica, later in 1997, also foregrounds the
infrastructural trajectory of early national digitization programs into later
glocal initiatives.19

The question of pan-national digitization programs was precisely at the
forefront of another early prominent mass digitization project, namely the
Universal Digital Library (UDL), which was launched in 1995 by Carnegie Mellon
computer scientist Raj Reddy and developed by linguist Jaime Carbonell,
physicist Michael Shamos, and Carnegie Mellon Foundation dean of libraries
Gloriana St. Clair. In 1998, the project launched the Thousand Book Project.
Later, the UDL scaled its initial efforts up to the Million Book Project,
which they successfully completed in 2007.20 Organizationally, the UDL stood
out from many of the other digitization projects by including initial
participation from three non-Western entities in addition to the Carnegie
Mellon Foundation—the governments of India, China, and Egypt.21 Indeed, India
and China invested about $10 million in the initial phase, employing several
hundred people to find books, bring them in, and take them back. While the
project ambitiously aimed to provide access “to all human knowledge, anytime, anywhere,” it ended its scanning activities in 2008. As such, the Universal
Digital Library points to another central infrastructural dimension of mass
digitization: its highly contingent spatio-temporal configurations that are
often posed in direct contradistinction to the universalizing discourse of
mass digitization. Across the board, mass digitization projects employ a discourse of universality while in practice confining themselves to a limited target of books to digitize, alluding only vaguely, and in highly uncertain terms, to how long such an endeavor will take (see chapters 3 and 5 in particular).

No exception to the universalizing discourse, another highly significant mass digitization project, the Internet Archive, emerged around the same time
as the Universal Digital Library. The Internet Archive was founded by open
access activist and computer engineer Brewster Kahle in 1996, and although it
was primarily oriented toward preserving born-digital material, in particular
the Internet ( _Wired_ calls Brewster Kahle “the Internet’s de facto
librarian”22), the Archive also began digitizing books in 2005, supported by
a grant from the Alfred P. Sloan Foundation. Later that year, the Internet Archive created the infrastructural initiative Open Content Alliance (OCA) and was thereby embedded in an infrastructure that included over 30 major US libraries, as well as major search engine companies (Yahoo! and Microsoft),
technology companies (Adobe and Xerox), a commercial publisher (O’Reilly
Media, Inc.), and a not-for-profit membership organization of more than 150
institutions, including universities, research libraries, archives, museums,
and historical societies.23 The Internet Archive’s mass digitization
infrastructure was thus from the beginning a mesh of public and private
cooperation, where libraries made their collections available to the Alliance
for scanning, and corporate sponsors or the Internet Archive in turn funded the digitization processes. As such, the infrastructures of the Internet
Archive and Google Books were rather similar in their set-ups.24 Nevertheless,
the initiative of the Internet Archive’s mass digitization project and its
attendant infrastructural alliance, OCA, should be read as both a technical
infrastructure responding to the question of _how_ to mass digitize in
technical terms, and as an infrapolitical reaction to the forces of the commercial world that were beginning to gather around mass digitization, such as Amazon25 and Google. The Internet Archive thus
positioned itself as a transparent open source alternative to the closed doors
of corporate and commercial initiatives. Yet, as Kalev Leetaru notes, the case
was more complex than that. Indeed, while the OCA was often foregrounded as
more transparent than Google, their technical infrastructural components and
practices were in fact often just as shrouded in secrecy.26 As such, the
Internet Archive and the OCA draw attention to the important infrapolitical
question in mass digitization, namely how, why, and when to manage
visibilities in mass digitization projects.

Although the media sometimes picked up stories on mass digitization projects
already outlined, it wasn’t until Google entered the scene that mass
digitization became a headline-grabbing enterprise. In 2004, Google founders
Larry Page and Sergey Brin traveled to Frankfurt to make a rare appearance at
the Frankfurt Book Fair. Google was at that time still considered a “scrappy”
Internet company in some quarters, as compared with tech giants such as
Microsoft.27 Yet Page and Brin went to Frankfurt to deliver a monumental
announcement: Google would launch a ten-year plan to make available
approximately 15 million digitized books, both in- and out-of-copyright
works.28 They baptized the program “Google Print,” a project that consisted of
a series of partnerships between Google and five English-language libraries:
the University of Michigan at Ann Arbor, Stanford, Harvard, Oxford (Bodleian
Library), and the New York Public Library. While Page’s and Brin’s
announcement was surprising to some, many had anticipated it; as already
noted, advances toward mass digitization proper had already been made, and
some of the partnership institutions had been negotiating with Google since
2002.29 As with many of the previous mass digitization projects, Google found
inspiration for their digitization project in the long-lived utopian ideal of
the universal library, and in particular the mythic library of Alexandria.30
As with other Google endeavors, it seemed that Page was intent on realizing a
utopian ideal that scholars (and others) had long dreamed of: a library
containing everything ever written. It would be realized, however, not with
traditional human-centered means drawn from the world of libraries, but rather
with an AI approach. Google Books would exceed human constraints, taking the
seemingly impossible vision of digitizing all the books in the world as a
starting point for constructing an omniscient Artificial Intelligence that
would know the entire human symbol system and allow flexible and intuitive
recollection. These constraints were physical (how to digitize and organize
all this knowledge in physical form); legal (how to do it in a way that
suspends existing regulation); and political (how to transgress territorial
systems). The invocation of the notion of the universal library was not a
neutral action. Rather, the image of Google Books as a library worked as a
symbolic form in a cultural scheme that situated Google as a utopian, and even
ethical, idealist project. Google Books seemingly existed by virtue of
Goethe’s famous maxim that “To live in the ideal world is to treat the
impossible as if it were possible.”31 At the time, the industry magazine
_The Bookseller_ wrote in response to Google’s digitization plans: “The prospect is both thrilling and frightening for the book industry, raising a host of technical and theoretical issues.”32 And indeed, while some reacted with
enthusiasm and relief to the prospect of an organization being willing to
suffer the cost of mass digitization, others expressed economic and ethical
concerns. The Authors Guild, a New York–based association, promptly filed a
copyright infringement suit against Google. And librarians were forced to
revisit core ethical principles such as privacy and public access.

The controversies of Google Books initially played out only in US territory.
However, another set of concerns of a more territorial and political nature
soon came to light. The French President at the time, Jacques Chirac, called
France to cultural-political arms, urging his culture minister, Renaud
Donnedieu de Vabres, and Jean-Noël Jeanneney, then-head of France’s
Bibliothèque nationale, to do the same with French texts as Google planned to
do with their partner libraries, but by means of a French search engine.33
Jeanneney initially framed this French cultural-political endeavor as a
European “contre-attaque” against Google Books, which, according to Jeanneney,
could pose “une domination écrasante de l'Amérique dans la définition de
l'idée que les prochaines générations se feront du monde.” (“a crushing
American domination of the formation of future generations’ ideas about the
world”)34 Other French officials insisted that the French digitization project
should be seen not primarily as a cultural-political reaction _against_
Google, but rather as a cultural-political incentive within France and Europe
to make European information available online. “I really stress that it's not
anti-American,” an official at France’s Ministry of Culture and Communication,
speaking on the condition of anonymity, noted in an interview. “It is not a
reaction. The objective is to make more material relevant to European heritage
available. … Everybody is working on digitization projects.” Furthermore, the
official did not rule out potential cooperation between Google and the
European project. 35 There was no doubt, however, that the move to mass
digitization “was a political drive by the French,” as Stephen Bury, head of
European and American collections at the British Library, emphasized.36

Despite its mixed messages, the French reaction nevertheless underscored the
controversial nature of mass digitization as a symbolic, as well as technical,
aspiration: mass digitization was not a process that merely and neutrally scanned and represented books; it could also produce a new mode of world-making, actively structuring archives as well as their users.37 Now questions began to
surface about where, or with whom, to place governance over this new archive:
who would be the custodian of the keys to this new library? And who would be
the librarians? A series of related questions could also be asked: who would
determine the archival limits, the relations between the secret and the non-
secret or the private and the public, and whether these might involve property
or access rights, publication or reproduction rights, classification, and
putting into order? France soon managed to rally other EU countries (Spain, Poland, Hungary, Italy, and Germany) to back, in writing, its recommendation to the European Commission (EC) to construct a European alternative to Google’s search engine and archive. Occasioned by the
French recommendation, the EC promptly adopted the idea of Europeana—the name
of the proposed alternative—as a “flagship project” for the budding EU
cultural policy.38 Soon after, in 2008, the EC launched Europeana, giving
access to some 4.5 million digital objects from more than 1,000 institutions.

Europeana’s Europeanizing discourse presents a territorializing approach to
mass digitization that stands in contrast to the more universalizing tone of
the Mundaneum, Project Gutenberg, Google Books, and the Universal Digital Library. As
such, it ties in with our final examples, namely the sovereign mass digitization projects that have in fact always been among the primary drivers of mass digitization. To this day, the map of mass digitization is
populated with sovereign mass digitization efforts from Holland and Norway to
France and the United States. One of the most impressive projects is the
Norwegian mass digitization project at the National Library of Norway, which
since 2004 has worked systematically to develop a digital National Library
that encompasses text, audio, video, image, and websites. Impressively, the
National Library of Norway offers digital library services that provide online
access (to all with a Norwegian IP address) to full-text versions of all books
published in Norway up until the year 2001, access to digital newspaper
collections from the major national and regional newspapers in all libraries
in the country, and opportunities for everyone with Internet access to search
and listen to more than 40,000 radio programs recorded between 1933 and the
present day.39 Another ambitious national mass digitization project is the
Dutch National Library’s effort to digitize all printed publications since
1470 and to create a National Platform for Digital Publications, which is to
act both as a content delivery platform for its mass digitization output and
as a national aggregator for publications. To this end, the Dutch National
Library made deals with Google Books and ProQuest to digitize 42 million pages, while also entering into partnerships with cross-domain aggregators such as
Europeana.40 Finally, it is imperative to mention the Digital Public Library
of America (DPLA), a national digital library conceived of in 2010 and
launched in 2013, which aggregates digital collections of metadata from around
the United States, pulling in content from large institutions like the
National Archives and Records Administration and HathiTrust, as well as from
smaller archives. The DPLA is in great part the fruit of the intellectual work
of Harvard University’s Berkman Center for Internet and Society and the work
of its Steering Committee, which consisted of influential names from the
digital, legal, and library worlds, such as Robert Darnton, Maura Marx, and
John Palfrey from Harvard University; Paul Courant of the University of
Michigan; Carla Hayden, then of Baltimore’s Enoch Pratt Free Library and
subsequently the Librarian of Congress; Brewster Kahle; Jerome McGann; Amy
Ryan of the Boston Public Library; and Doron Weber of the Sloan Foundation.
Key figures in the DPLA have often, to great rhetorical effect, positioned the DPLA vis-à-vis Google Books, partly as a question of public versus private
infrastructures.41 Yet, as the then-Chairman of DPLA John Palfrey conceded,
the question of what constitutes “public” in a mass digitization context
remains a critical issue: “The Digital Public Library of America has its
critics. One counterargument is that investments in digital infrastructures at
scale will undermine support for the traditional and the local. As the
chairman of the DPLA, I hear this critique in the question-and-answer period
of nearly every presentation I give. … The concern is that support for the
DPLA will undercut already eroding support for small, local public
libraries.”42 While Palfrey offers good arguments for why the DPLA could
easily work in unison with, rather than jeopardize, smaller public libraries,
and while the DPLA is building infrastructures to support this claim,43 the
discussion nevertheless highlights the difficulties with determining when
something is “public,” and even national.

While the highly publicized and institutionalized projects I have just
recounted have taken center stage in the early and later years of mass
digitization, they neither constitute the full cast, nor the whole machinery,
of mass digitization assemblages. Indeed, as chapter 4 in this book charts, at
the margins of mass digitization another set of actors have been at work
building new digital cultural memory assemblages, including projects such as
Monoskop and Lib.ru. These actors, referred to in this book as shadow library
projects (see chapter 4), at once challenge and confirm the broader
infrapolitical dimensions of mass digitization, including its logics of
digital capitalism, network power, and territorial reconfigurations of
cultural memory between universalizing and glocalizing discourses. Within this
new “ecosystem of access,” unauthorized archives such as Libgen, Gigapedia, and
Sci-Hub have successfully built “shadow libraries” with global reach,
containing massive aggregations of downloadable text material of both
scholarly and fictional character.44 As chapter 4 shows, these initiatives
further challenge our notions of public good, licit and illicit mass
digitization, and the territorial borders of mass digitization, just as they
add another layer of complexity to the question of the politics of mass
digitization.

Today, then, the landscape of mass digitization has evolved considerably, and
we can now begin to make out the political contours that have shaped, and
continue to shape, the emergent contemporary knowledge infrastructures of mass
digitization, rife as they are with contestation, cooperation, and
competition. From this perspective, mass digitization appears as a preeminent
example of how knowledge politics are configured in today’s world of
“assemblages” as “multisited, transboundary networks” that connect
subnational, national, supranational, and global infrastructures and actors,
without, however, necessarily doing so through formal interstate systems.45 We
can also see that mass digitization projects did not arise as a result of a
sovereign decision, but rather emerged through a series of contingencies
shaped by late-capitalist and late-sovereign forces. Furthermore, mass
digitization presents us with an entirely new cultural memory paradigm—a
paradigm that requires a shift in thinking about cultural works, collections,
and contexts, from cultural records to be preserved and read by humans, to
ephemeral machine-readable entities. This change requires a shift in thinking
about the economy of cultural works, collections, and contexts, from scarce
institutional objects to ubiquitous flexible information. Finally, it requires a shift from thinking about these same issues as belonging to national-global domains to conceiving of them in terms of a set of political processes that may
well be placed in national settings, but are oriented toward global agendas
and systems.

## Interrogating Mass Digitization

Mass digitization is often elastic in definition and elusive in practice.
Concrete attempts have been made to delimit what mass digitization is, but
these rarely go into specifics. The two characteristics most commonly
associated with mass digitization are the relative lack of selectivity of
materials, as compared to smaller-scale digitization projects, and the high
speed and high volume of the process in terms of both digital conversion and
metadata creation, which are made possible through a high level of
automation.46 Mass digitization is thus concerned not only with preservation,
but also with what kind of knowledge practices and values technology allows
for and encourages, for example, in relation to de- and recontextualization,
automation, and scale.47

Studies of mass digitization are commonly oriented toward technology or
information policy issues close to libraries, such as copyright, the quality
of digital imagery, long-term preservation responsibility, standards and
interoperability, and economic models for libraries, publishers, and
booksellers, rather than, as here, the exploration of theory.48 This is not to
say that existing work on mass digitization is not informed by theoretical
considerations, but rather that the majority of research emphasizes policy and
technical implementation at the expense of a more fundamental understanding of
the cultural implications of mass digitization. In part, the reason for this
is the relative novelty of mass digitization as an identifiable field of
practice and policy, and its significant ramifications in the fields of law
and information science.49 In addition to scholarly elucidations, mass
digitization has also given rise to more ideologically fueled critical books
and articles on the topic.50

Despite its disciplinary branching, work on mass digitization has mainly taken
place in the fields of information science, law, and computer science, and has
primarily problematized the “hows” of mass digitization and not the “whys.”51
As with technical work on mass digitization, most nontechnical studies of mass
digitization are “problem-solving” rather than “critical,” and this applies in
particular to work originating from within the policy analysis community. This
body of work seeks to solve problems within the existing social order—for example,
copyright or metadata—rather than to interrogate the assumptions that underlie
mass digitization programs, which would include asking what kinds of knowledge
production mass digitization gives rise to. How does mass digitization change
the ideological infrastructures of cultural heritage institutions? And from
what political context does the urge to digitize on an industrial scale
emerge? While the technical and problem-solving corpus on mass digitization is
highly valuable in terms of outlining the most important stakeholders and
technical issues of the field, it does not provide insight into the deeper
structures, social mechanisms, and political implications of mass
digitization. Moreover, it often fails to account for digitization as a force
that is deeply entwined with other dynamics that shape its development and
uses. It is this lack that the present volume seeks to mitigate.

## Assembling Mass Digitization

Mass digitization is a composite and fluctuating infrastructure of
disciplines, interests, and forces rooted in public-private assemblages,
driven by ideas of value extraction and distribution, and supported by new
forms of social organization. Google Books, for instance, is both a commercial
project covered by nondisclosure agreements _and_ an academic scholarly
project open for all to see. Similarly, Europeana is both a public
digitization project directed at “citizens” _and_ a public-private partnership
enterprise rife with profit motives. Nevertheless, while it is tempting to
speak about specific mass digitization projects such as Google Books and
Europeana in monolithic and contrastive terms, mass digitization projects are
anything but tightly organized, institutionally delineated, coherent wholes
that produce one dominant reading. We do not find one “essence” in mass
digitized archives. They are not “enlightenment projects,” “library services,”
“software applications,” “interfaces,” or “corporations.” Nor are they rooted
in one central location or single ideology. Rather, mass digitization is a
complex material and social infrastructure performed by a diverse
constellation of cultural memory professionals, computer scientists,
information specialists, policy personnel, politicians, scanners, and
scholars. Hence, this volume approaches mass digitization projects as
“assemblages,” that is, as contingent arrangements consisting of humans,
machines, objects, subjects, spaces and places, habits, norms, laws, politics,
and so on. These arrangements cross national-global and public-private lines,
producing what this volume calls “late-sovereign,” “posthuman,” and “late-
capitalist” assemblages.

To give an example, we can look at how the national and global aspects of
cultural memory institutions change with mass digitization. The national
museums and libraries we frequent today were largely erected during eras of
high nationalism, as supreme acts of cultural and national territoriality.
“The early establishment of a national collection,” as Belinda Tiffen notes,
“was an important step in the birth of the new nation,” since it signified
“the legitimacy of the nation as a political and cultural entity with its own
heritage and culture worthy of being recorded and preserved.”52 Today, as the
initial French incentive to build Europeana shows, we find similar
nationalization processes in mass digitization projects. However,
nationalizing a digital collection often remains more a performative gesture than a
practical feat, partly because the information environment in the digital
sphere differs significantly from that of the analog world in terms of
territory and materiality, and partly because the dichotomy between national
and global, an agreed-upon construction for centuries, is becoming more and
more difficult to uphold in theory and practice.53 Thus, both Google Books and
Europeana link to sovereign frameworks such as citizens and national
representation, while also undermining them with late-capitalist transnational
economic agreements.

A related example is the posthuman aspect of cultural memory politics.
Cultural memory collections have always been thought of as profoundly human, in the sense that they were created by and for human minds and human meaning-making. Previously, humans also organized collections. But with
the invention of computers, most cultural memory institutions also introduced
a machine element to the management of accelerating amounts of information,
such as computerized catalog systems and recollection systems. With the advent
of mass digitization, machines have gained a whole new role in the cultural
memory ecosystem, not only as managers, but also as interpreters. Thus,
collections are increasingly digitized to be read by machines instead of
humans, just as metadata is now becoming a question of machine analysis rather
than of human contextualization. Machines are taking on more and more tasks in
the realm of cultural memory that require a substantial amount of cognitive
insight (just as mass digitization has created the need for new robot-like,
and often poorly paid, human tasks, such as the monotonous work of book
scanning). Mass digitization has thereby given rise to an entirely new
cultural-legal category termed “non-consumptive research,” used to describe the large-scale computational analysis of texts, and formalized in the Google Books Settlement, for instance, in the following way: “research in
which computational analysis is performed on one or more books, but not
research in which a researcher reads or displays.”54
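
To make this category concrete, the following is a minimal sketch (in Python, with a purely hypothetical corpus directory; nothing here reflects Google’s actual systems) of what non-consumptive analysis might look like in practice: the program computes aggregate word frequencies across a digitized corpus, while no passage is ever read or displayed.

```python
from collections import Counter
from pathlib import Path
import re

# Hypothetical corpus layout: one plain-text file per digitized book.
CORPUS_DIR = Path("corpus")

def token_counts(corpus_dir: Path) -> Counter:
    """Aggregate word frequencies across all books. Only statistics leave
    the system; no passage is read or displayed ("non-consumptive")."""
    counts = Counter()
    for book in corpus_dir.glob("*.txt"):
        text = book.read_text(encoding="utf-8", errors="ignore")
        counts.update(re.findall(r"[a-z]+", text.lower()))
    return counts

if __name__ == "__main__":
    # The researcher sees only aggregate output, never the books themselves.
    for word, n in token_counts(CORPUS_DIR).most_common(10):
        print(word, n)
```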

Lastly, mass digitization connects the politics of cultural memory to
transnational late capitalism, and to one of its expressions in particular:
digital capitalism.55 Of course, cultural memory collections have a long
history with capitalism. In the nineteenth century, the boundaries between the cultural functions of libraries and the commercial interests that surrounded them were very fuzzy, and, as historian of libraries Francis Miksa notes, Melvil Dewey, inventor of the Dewey Decimal System, was a great admirer of the corporate ideal and was eager to apply it to the library system.56 Indeed,
library development in the United States was greatly advanced by the
philanthropy of capitalism, most notably by Andrew Carnegie.57 The question,
then, is not so much whether mass digitization has brought cultural memory
institutions, and their collections and users, into a capitalist system, but
_what kind_ of capitalist system mass digitization has introduced cultural
memory to: digital capitalism.

Today, elements of the politics of cultural memory are being reassembled into
novel knowledge configurations. As a consequence, their connections and
conjugations are being transformed, as are their institutional embeddings.
Indeed, mass digitization assemblages are a product of our time. They are new
forms of knowledge institutions arising from a sociopolitical environment
where vertical territorial hierarchies and horizontal networks entwine in a
new political mesh: where solid things melt into air, and clouds materialize
as material infrastructures, where boundaries between experts and laypeople
disintegrate, and where machine cognition operates on a par with human
cognition on an increasingly large scale. These assemblages enable new types
of political actors—networked assemblages—which hold particular forms of power
despite their informality vis-à-vis the formal political system; and in turn,
through their practices, these actors partly build and shape those
assemblages.

Since concepts always respond to “a specific social and historical situation
of which an intellectual occasion is part,”58 it is instructive to revisit the
1980s, when the theoretical notion of assemblage emerged and slowly gained
cross-disciplinary purchase.59 Around this time, the stable structures of
modernist institutions began to give ground to postmodern forces: sovereign
systems entered into supra-, trans-, and international structures,
“globalization” became a buzzword, and privatizing initiatives drove wedges
into the foundations of state structures. The centralized power exercised by
disciplinary institutions was increasingly distributed along more and more
lines, weakening the walls of circumscribed centralized authority.60 This
disciplinary decomposition took place on all levels and across all fields of
society, including institutional cultural memory containers such as libraries
and museums. The forces of privatization, globalization, and digitization put
pressures not only on the authority of these institutions but also on a host
of related authoritative cultural memory elements, such as “librarians,”
“cultural works,” and “taxonomies,” and cultural memory practices such as
“curating,” “reading,” and “ownership.” Librarians were “disintermediated” by
technology, cultural works fragmented into flexible data, and curatorial
principles were revised and restructured just as reading was now beginning to
take place in front of screens, meaning-making to be performed by machines,
and ownership of works to be substituted by contractual renewals.

Thinking about mass digitization as an “assemblage” allows us to abandon the
image of a circumscribed entity in favor of approaching it as an aggregate of
many highly varied components and their contingent connections: scanners,
servers, reading devices, cables, algorithms; national, EU, and US
policymakers; corporate CEOs and employees; cultural heritage professionals
and laypeople; software developers, engineers, lobby organizations, and
unsalaried labor; legal settlements, academic conferences, position papers,
and so on. It gives us pause—every time we say “Google” or “Europeana,” we
might reflect on what we actually mean. Does the researcher employed by a
university library and working with Google Books also belong to Google Books?
Do the underpaid scanners? Do the users of Google? Or, when we refer to Google
Books, do we rather only mean to include the founders and CEOs of Google? Or
has Google in fact become a metaphor that expresses certain characteristics of
our time? The present volume suggests that all these components enter into the
new phenomenon of mass digitization and produce a new field of potentiality,
while at the same time they retain their original qualities and value systems,
at least to some extent. No assemblage is whole and imperturbable, nor
entirely reducible to its parts, but is simultaneously an accumulation of
smaller assemblages and a member of larger ones.61 Thus Google Books, for
example, is both an aggregation of smaller assemblages such as university
libraries, scanners (both humans and machines), and books, _and_ a member of
larger assemblages such as Google, Silicon Valley, neoliberal lobbies, and the
Internet, to name but a few.

While representations of assemblages such as the analyses performed in this
volume are always doomed to misrepresent empirical reality on some level, this
approach nevertheless provides a tool for grasping at least some of mass
digitization’s internal heterogeneity, and the mechanisms and processes that
enable each project’s continued assembled existence. The concept of the
assemblage allows us to grasp mass digitization as comprised of ephemeral
projects that are uncertain by nature, and sometimes even made up of
contradictory components.62 It also allows us to recognize that they are more
than mere networks: while ephemeral and networked, something enables them to
cohere. Bruno Latour writes, “Groups are not silent things, but rather the
provisional product of a constant uproar made by the millions of contradictory
voices about what is a group and who pertains to what.”63 It is the “taming
and constraining of this multivocality,” in particular by communities of
knowledge and everyday practices, that enables something like mass
digitization to cohere as an assemblage.64 This book is, among other things,
about those communities and practices, and the politics they produce and are
produced by. In particular, it addresses the politics of mass digitization as
an infrapolitical activity that retreats into, and emanates from, digital
infrastructures and the network effects they produce.

## Politics in Mass Digitization: Infrastructure and Infrapolitics

If the concept of “assemblage” allows us to see the relational set-up of mass
digitization, it also allows us to inquire into its political infrastructures.
In political terms, assemblage thinking is partly driven by dissatisfaction
with state-centric dominant ontologies, including reified units such as state,
society, or capitalism, and the unilinear focus on state-centric politics over
other forms of politics.65 The assemblage perspective is therefore especially
useful for understanding the politics of late-sovereign and late-capitalist
data projects such as mass digitization. As we will see in part 2, the
epistemic frame of sovereignty continues to provide an organizing frame for the constitution and regulation of mass digitization and the virtues associated
with it (such as national representation and citizen engagement). However, at
the same time, mass digitization projects are in direct correspondence with
neoliberal values such as privatization, consumerism, globalization, and
acceleration, and its technological features allow for a complete
restructuring of the disciplinary spaces of libraries to form vaster and even
global scales of integration and economic organization on a multinational
stage.

Mass digitization is a concrete example of what cultural memory projects look
like in a “late-sovereign” age, where globalization tests the political and
symbolic authority of sovereign cultural memory politics to its limits, while
sovereignty as an epistemic organizing principle for the politics of cultural
memory nonetheless persists.66 The politics of cultural memory, in particular
those practiced by cultural heritage institutions, often still cling to fixed
sovereign taxonomies and epistemic frameworks. This focus is partly determined
by their institutional anchoring in the framework of national cultural
policies. In mass digitization, however, the formal political apparatus of
cultural heritage institutions is adjoined by a politics that plays out in the
margins: in lobbies, software industries, universities, social media, etc.
Those evaluating mass digitization assemblages in macropolitical terms, that
is, those who are concerned with political categories, will glean little of
the real politics of mass digitization, since such politics at the margins
would escape this analytic matrix.67 Assemblage thinking, by contrast, allows
us to acknowledge the political mechanisms of mass digitization beyond
disciplinary regulatory models, in societies where “forces … not categories, clash.”68

As Ian Hacking and many others have noted, the capacious usage of the notion
of “politics” threatens to strip the word of meaning.69 But talk of a politics
of mass digitization is no conceptual gimmick, since what is taking place in
the construction and practice of mass digitization assemblages plainly is
political. The question, then, is how best to describe the politics at work in
mass digitization assemblages. The answer advanced by the present volume is to
think of the politics of mass digitization as “infrapolitics.”

The notion of infrapolitics has until now primarily been advanced as a concept of hidden dissent or contestation (Scott, 1990).70 This volume suggests shifting the lens, however, to focus on a different kind of infrapolitics, one that takes the shape not only of resistance but also of maintenance and conformity, since the story of mass digitization is both the story of contestation _and_ the politics of mundane and standard-seeking practices.71 The infrapolitics of mass digitization is, then, a kind
of politics “premised not on a subject, but on the infra,” that is, the
“underlying rules of the world,” organized around glocal infrastructures.72
The infrapolitics of mass digitization is the building and living of
infrastructures, both as spaces of contestation and processes of
naturalization.

Geoffrey Bowker and Susan Leigh Star have argued that the establishment of
standards, categories, and infrastructures “should be recognized as the
significant site of political and ethical work that they are.”73 This applies
not least in the construction and development of knowledge infrastructures
such as mass digitization assemblages, structures that are upheld by
increasingly complex sets of protocols and standards. Attaching “politics” to
“infrastructure” endows the term—and hence mass digitization under this
rubric—with a distinct organizational form that connects various stages and
levels of politics, as well as a distinct temporality that relates mass
digitization to the forces and ideas of industrialization and globalization.

The notion of infrastructure has a surprisingly brief history. It first entered the French language in 1875 in relation to the excavation of
railways.74 Over the following decades, it primarily designated fixed
installations designed to facilitate and foster mobility. It did not enter
English vocabulary until 1927, and as late as 1951, the word was still
described by English sources as “new” (OED).75 When NATO adopted the term in
the 1950s, it gained a military tinge. Since then, “infrastructure” has
proliferated into ever more contexts and disciplines, becoming a “plastic
word”76 often used to signify any vital and widely shared human-constructed
resource.77

What makes infrastructures central for understanding the politics of mass
digitization? Primarily, they are crucial to understanding how industrialism
has affected the ways in which we organize and engage with knowledge, but the
politics of infrastructures are also becoming increasingly significant in the
late-sovereign, late-capitalist landscape.

The infrastructures of mass digitization mediate, combine, connect, and converge upon different institutions, social networks, and devices, augmenting the actors that take part in them with new agential possibilities: they expand the radius of their action, strengthen and prolong the reach of their performance, and, through their accelerating effects, set them free for other activities, time that is often reinvested in other infrastructures such as social media. The infrastructures of mass digitization also increase the demand for globalization and mobility, since they expand the radius of using, reading, and working.

The infrastructures of mass digitization are thus media of polities and politics, at times visible and at others barely legible or felt, and home to dissent as well as to standardizing measures. These include legal
infrastructures such as copyright, privacy, and trade law; material
infrastructures such as books, wires, scanners, screens, server parks, and
shelving systems; disciplinary infrastructures such as metadata, knowledge
organization, and standards; cultural infrastructures such as algorithms,
searching, reading, and downloading; societal infrastructures such as the
realms of the public and private, national and global. These infrastructures
are, depending on the context, both the prerequisites for and the results of interactions between the spatial, temporal, and social classes that take part in the construction of mass digitization. The infrapolitics of mass digitization is thus geared toward interoperability and standardization, as well as toward variation.78

Often when thinking of infrastructures, we conceive of them in terms of
durability and stability. Yet, while some infrastructures, such as railways
and Internet cables, are fairly solid and rigid constructions, others—such as
semantic links, time-limited contracts, and research projects—are more
contingent entities which operate not as “fully coherent, deliberately
engineered, end-to-end processes,” but rather as amorphous, contingent
assemblages, as “ecologies or complex adaptive systems” consisting of
“numerous systems, each with unique origins and goals, which are made to
interoperate by means of standards, socket layers, social practices, norms,
and individual behaviors that smooth out the connections among them.”79 This
contingency has direct implications for infrapolitics, which become equally
flexible and adaptive. These characteristics endow mass digitization
infrastructures with vulnerabilities but also with tremendous cultural power,
allowing them to distribute agency, and to create and facilitate new forms of
sociality and culture.

Building mass digitization infrastructures is a costly endeavor, and hence
mass digitization infrastructures are often backed by public-private
partnerships. Indeed, infrastructures—and mass digitization infrastructures are no exception—are often so costly that a certain mixture of political or
individual megalomania, state reach, and private capital is present in their
construction.80 This mixed foundation means that many of the political decisions regarding mass digitization literally take place _beneath_ the radar of “the representative institutions of the political system of nation-states,” while also more or less aggressively filling out “gaps” in nation-state systems, and even creating transnational zones with their own policies.81
Hence the notion of “infra”: the infrapolitics of mass digitization hover at a frequency that lies _below_ and beyond the formal sovereign state apparatus,
organized, as they are, around glocal—and often private or privatized—material
and social infrastructures.

While distinct from the formalized sovereign political system, infrapolitical
assemblages nevertheless often perform as late-sovereign actors by engaging in
various forms of “sovereignty games.”82 Take Google, for instance, a private
corporation that often defines itself as at odds with state practice, yet also
often more or less informally meets with state leaders, engages in diplomatic
discussions, and enters into agreements with state agencies and local
political councils. The infrapolitical forces of Google in these sovereignty
games can on the one hand exert political pressure on states—for instance in
the name of civic freedom—but in Google’s embrace of politics, its
infrapolitical forces can on the other hand also squeeze the life out of
existing parliamentary ways, promoting instead various forms of apolitical or
libertarian modes of life. The infrapolitical apparatus thus stands apart from more formalized politics, not only in terms of political arena, but also in terms of the constraints placed upon it in the form, for instance, of public accountability.83 What is described here can in general terms be called the
infrapolitics of neoliberalism, whose scenery consists of lobby rooms, policy-
making headquarters, financial zones, public-private spheres, and is populated
by lobbyists, bureaucrats, lawyers, and CEOs.

But the infrapolitical dynamics of mass digitization also operate in more
mundane and less obvious settings, such as software design offices and
standardization agencies, and are enacted by engineers, statisticians,
designers, and even users. Infrastructures are—increasingly—essential parts of
our everyday lives, not only in mass digitization contexts, but in all walks
of life, from file formats and software programs to converging transportation
systems, payment systems, and knowledge infrastructures. Yet, what is most
significant about the majority of infrapolitical institutions is that they are
so mundane; if we notice them at all, they appear to us as boring “lists of
numbers and technical specifications.”84 And their maintenance and
construction often occurs “behind the scenes.”85 There is a politics to these
naturalizing processes, since they influence and frame our moral, scientific,
and aesthetic choices. This is to say that these kinds of infrapolitical
activities often retire or withdraw into a kind of self-evidence in which the
values, choices, and influences of infrastructures are taken for granted and
accorded a kind of obviousness, which is universally accepted. It is therefore
all the more “politically and ethically crucial”86 to recognize the
infrapolitics of mass digitization, not only as contestation and privatized
power games, but also as a mode of existence that values professionalized
standardization measures and mundane routines, not least because these
infrapolitical modes of existence often outlast their material circumstances
(“software outlasts hardware” as John Durham Peters notes).87 In sum,
infrastructures and the infrapolitics they produce yield subtle but
significant world-making powers.

## Power in Mass Digitization

If mass digitization is a product of a particular social configuration and
political infrastructure, it is also, ultimately, a site and an instrument of
power. In a sense, mass digitization is an event that stages a fundamental
confrontation between state and corporate power, while pointing to the
reconfigurations of both as they become increasingly embedded in digital
infrastructures. For instance, such confrontation takes place at the
negotiating table, where cultural heritage directors face the seductive and
awe-inspiring riches of Silicon Valley, as well as its overwhelmingly
intricate contractual layouts and its intimidating entourage of lawyers.
Confrontation also takes place at the level of infrastructural ideology, in
the meeting between twentieth-century standardization ideals and the playful
and flexible network dynamics of the twenty-first century, as seen for
instance in the conjunction of institutionally fixed taxonomies and
algorithmic retrieval systems that include feedback mechanisms. And it takes
place at the level of users, as they experience a gain in some powers and the
loss of others in their identity transition from national patrons of cultural
memory institutions to globalized users of mass digitization assemblages.

These transformations are partly the results of society’s increasing reliance
on network power and its effects. Political theorists Michael Hardt and
Antonio Negri suggested almost two decades ago that, among other things, global
digital systems enabled a shift in power infrastructures from robust national
economies and core industrial sectors to interactive networks and flexible
accumulation, creating a “form of network power, which requires the wide
collaboration of dominant nation-states, major corporations, supra-national
economic and political institutions, various NGOs, media conglomerates and a
series of other powers.”88 From this landscape, according to their argument,
emerged a new system of power in which morphing networks took precedence over
reliable blocs. Hardt and Negri’s diagnosis was one of several similar
arguments across the political spectrum that were formed within such a short
interval that “the network” arguably became the “defining concept of our
epoch.”89 Within this new epoch, the old centralized blocs of power crumbled
to make room for new forms of decentralized “bastard” power phenomena, such as
the extensive corporate/state mass surveillance systems revealed by Edward
Snowden and others, and new forms of human rights such as “the right to be
forgotten,” a right for which a more appropriate name would be “the right to
not be found by Google.”90 Network power and network effects are therefore
central to understanding how mass digitization assemblages operate, and why
some mass digitization assemblages are more powerful than others.

The power dynamics we find in Google Books, for instance, are directly related
to the ways in which digital technologies harness network effects: the power
of Google Books grows exponentially as its network expands.91 Indeed, as Siva
Vaidhyanathan noted in his critical work on Google’s role in society, what he
referred to as the “Googlization of books” was ultimately deeply intertwined
with the “Googlization of everything.”92 The networks of Google were thus not external to Google’s successes and challenges, but deeply endemic to them, from portals and ranking systems to anchoring (elite) institutions, and so on. The better Google Books becomes at harnessing network effects, the more
fundamental its influence is in the digital sphere. And Google Books is very
good at harnessing digital network power. Indeed, Google Books reached its
“tipping point” almost before it launched: it had by then already attracted so
many stakeholders that its mere existence decreased the power of any competing
entities—and the fact that its heavy user traffic is embedded in Google only
strengthened its network effects. Google Books’s tipping point tells us little
about its quality in an abstract sense: “tipping points” are more often
attained by proprietary measures, lobbying, expansion, and most typically by a
mixture of all of the above, than by sheer quality.93 This explains not only
the success of Google Books, but also its traction with even its critics:
although Google Books was initially criticized heavily for its poor imagery
and faulty metadata,94 its possible harmful impact on the public sphere,95 and
later, for privacy concerns,96 it had already created a power hub to which masses of people were increasingly drawn, even though they could have navigated around it.

Network power is endemic not only to concrete digital networks, but also to
globalization at large as a process that simultaneously gives rise to feelings
of freedom of choice and loss of choice.97 Mass digitization assemblages, and
their globalization of knowledge infrastructures, thus crystallize the more
general tendencies of globalization as a process in which people participate
by choice, but not necessarily voluntarily; one in which we are increasingly
pushed into a game of social coordination, where common standards allow more
effective coordination yet also entrap us in their pull for convergence.
Standardization is therefore a key technique of network power: on the one
hand, standardization is linked with globalization (and various neoliberal
regimes) and the attendant widespread contraction of the state, while on the
other hand, standardization implies a reconfiguration of everyday life.98
Standards allow for both minute data analytics and overarching political
systems that “govern at a distance.”99 Standardization understood in this way
is thus a mode of capturing, conceptualizing, and configuring reality, rather
than simply an economic instrument or lubricant. In a sense, standardization
could even be said to be habit forming: through standardization, “inventions
become commonplace, novelties become mundane, and the local becomes
universal.”100

To be sure, standardization has long been a crucial tool of world-making
power, spanning both the early and late-capitalist eras.101 “Standard time,”
as John Durham Peters notes, “is a sine qua non for international
capitalism.”102 Without the standardized infrastructure of time there would be
no global transportation networks, no global trade channels, and no global
communication networks. Indeed, globalization is premised on standardization
processes.

What kind of standardization processes do we find, then, in mass digitization
assemblages? Internet use alone involves direct engagement with hundreds of global standards, from Bluetooth and Wi-Fi to protocol standards such as HTTP and file standards such as Word and MP4.103 Moreover, mass
digitization assemblages confront users with a series of additional standards,
from cultural standards of tagging to technical standards of interoperability,
such as the Europeana Data Model (EDM) and Google’s schema.org, or legal
standards such as copyright and privacy regulations. Yet, while these
standards share affinities with the standardization processes of
industrialization, in many respects they also deviate from them. Instead, we
experience in mass digitization “a new form of standardization,”104 in which
differentiation and flexibility gain increasing influence without, however,
dispensing with standardization processes.

Today’s standardization is increasingly coupled with demands for flexibility
and interoperability. Flexibility, as Joyce Kolko has shown, is a term that
gained traction in the 1970s, when it was employed to describe putative
solutions to the problems of Fordism.105 It was seen as an antidote to Fordist
“rigidity”—a serious offense in the neoliberal regime. Thus, while the digital
networks underlying mass digitization are geared toward standardization and
expansion, since “information technology rewards scale, but only to the extent
that practices are standardized,”106 they are also becoming increasingly
flexible, since too-rigid standards hinder network effects, that is, the
growth of additional networks. This is one reason why mass digitization
assemblages increasingly and intentionally break down the so-called “silo”
thinking of cultural memory institutions, and implement flexible standards and interoperability to increase their range.107 One area of such
reconfiguration in mass digitization is the taxonomic field, where stable
institutional taxonomic structures are converted to new flexible modes of
knowledge organization like linked data.108 Linked data can connect cultural
memory artifacts as well as metadata in new ways, and the move from a cultural
memory web of interlinked documents to a cultural memory web of interlinked
data can potentially “amplify the impact of the work of libraries and
archives.”109 However, in order to work effectively, linked data demands
standards and shared protocols.
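
To illustrate the shift from a web of interlinked documents to a web of interlinked data, here is a minimal sketch in Python using the rdflib library; the namespaces and identifiers are hypothetical examples, not the schema of any actual institution. Each catalog statement becomes a machine-readable triple that other datasets can link to and query, which is also why shared standards and protocols are a precondition for the approach:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import DC, RDF

# Hypothetical namespaces and identifiers, for illustration only.
EX = Namespace("http://example.org/catalog/")
SCHEMA = Namespace("http://schema.org/")

g = Graph()
book = EX["book/1234"]

# Instead of a flat catalog card, each statement is a triple
# (subject, predicate, object) that other datasets can reference.
g.add((book, RDF.type, SCHEMA.Book))
g.add((book, DC.title, Literal("An Example Title")))
g.add((book, DC.creator, EX["person/42"]))
g.add((book, SCHEMA.inLanguage, Literal("en")))

# Serialize the graph as Turtle, a standard linked-data format.
print(g.serialize(format="turtle"))
```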

Flexibility allows the user a freer range of actions, and thus potentially
also the possibility of innovation. These affordances often translate into
user freedom or empowerment. Yet flexibility does not necessarily equal
fundamental user autonomy or control. On the contrary, flexibility is often
achieved through decomposition, modularization, and black-boxing, allowing
some components to remain stable while others are changed without implications
for the rest of the system.110 These components are made “fluid” in the sense
that they are divested of clear boundaries and allowed multiple identities,
and in that they enable continuity and dissolution.

While these new flexible standard-setting mechanisms are often localized in
national and subnational settings, they are also globalized systems “oriented
towards global agendas and systems.”111 Indeed, they are “glocal”
configurations with digital networks at their cores. The increasing
significance of these glocal configurations has not only cultural but also
democratic consequences, since they often leave users powerless when it comes
to influencing their cores.112 This more fundamental problematic also pertains
to mass digitization, a phenomenon that operates in an environment that
constructs and encourages less Habermasian public spheres than “relations of
sociability,” from which “aggregate outcomes emerge not from an act of
collective decision-making, but through the accumulation of decentralized,
individual decisions that, taken together, nonetheless conduce to a
circumstance that affects the entire group.”113 For example, despite the
flexibility Google Books allows us in terms of search and correlation, we have
very little sway over its construction, even though we arguably influence its
dynamics. The limitations of our influence on the cores of mass digitization
assemblages have implications not only for how we conceive of institutional
power, but also for our own power within these matrixes.

## Notes

1. Borghi 2012, 420. 2. Latour 2008. 3. For more on this, see Hicks 2018;
Abbate 2012; Ensmenger 2012. In the case of libraries, (white) women still
make up the majority of the workforce, but there is a disproportionate number of men in senior positions, in comparison with their overall representation;
see, for example, Schonfeld and Sweeney 2017. 4. Meckler 1982. 5. Otlet and
Rayward 1990, chaps. 6 and 15. 6. For a historical and contemporary overview
of some milestones in the use of microfilms in a library context, see Canepi
et al. 2013, specifically “Historic Overview.” See also chap. 10 in Baker
2002. 7. Pfanner 2012. 8.
. 9. Medak et al.
2016. 10. Michael S. Hart, “The History and Philosophy of Project Gutenberg,”
Project Gutenberg, August 1992,
.
11. Ibid. 12. . 13. Ibid. 14. Bruno Delorme,
“Digitization at the Bibliotheque Nationale De France, Including an Interview
with Bruno Delorme,” _Serials_ 24 (3) (2011): 261–265. 15. Alain Giffard,
“Dilemmas of Digitization in Oxford,” _AlainGiffard’s Weblog_ , posted May 29,
2008, in-oxford>. 16. Ibid. 17. Author’s interview with Alain Giffard, Paris, 2010.
18. Ibid. 19. Later, in 1997, François Mitterrand demanded that the digitized
books should be brought online, accessible as text from everywhere. This,
then, was what became known as Gallica, the digital library of BnF, which was
launched in 1997. Gallica contains documents primarily out of copyright from
the Middle Ages to the 1930s, with priority given to French-speaking culture,
hosting about 4 million documents. 20. Imerito 2009. 21. Ambati et al. 2006;
Chen 2005. 22. Ryan Singel, “Stop the Google Library, Net’s Librarian Says,”
_Wired_ , May 19, 2009, library-nets-librarian-says>. 23. Alfred P. Sloan Foundation, Annual Report,
2006,
.
24. Leetaru 2008. 25. Amazon was also a major player in the early years of
mass digitization. In 2003 they gave access to a digital archive of more than
120,000 books with the professed goal of adding Amazon’s multimillion-title
catalog in the following years. As with all other mass digitization
initiatives, Jeff Bezos faced a series of copyright and technological
challenges. He met these with legal rhetorical ingenuity and the technical
skills of Udi Manber, who later became the lead engineer with Google, see, for
example, Wolf 2003. 26. Leetaru 2008. 27. John Markoff, “The Coming Search
Wars,” _New York Times_ , February 1, 2004,
. 28.
Google press release, “Google Checks out Library Books,” December 14, 2004,
.
29. Vise and Malseed 2005, chap. 21. 30. Auletta 2009, 96. 31. Johann Wolfgang
Goethe, _Sprüche in Prosa_ , “Werke” (Weimer edition), vol. 42, pt. 2, 141;
cited in Cassirer 1944. 32. Philip Jones, “Writ to the Future,” _The
Bookseller_ , October 22, 2015, future-315153>. 33. “Jacques Chirac donne l’impulsion à la création d’une
bibliothèque numérique,” _Le Monde_ , March 16, 2005,
donne-l-impulsion-a-la-creation-d-une-bibliotheque-
numerique_401857_3246.html>. 34. “An overwhelming American dominance in
defining future generations’ conception about the world” (author’s own
translation). Ibid. 35. Labi 2005; “The worst scenario we could achieve would
be that we had two big digital libraries that don’t communicate. The idea is
not to do the same thing, so maybe we could cooperate, I don’t know. Frankly,
I’m not sure they would be interested in digitizing our patrimony. The idea is
to bring something that is complementary, to bring diversity. But this doesn’t
mean that Google is an enemy of diversity.” 36. Chrisafis 2008. 37. Béquet
2009. For more on the political potential of archives, see Foucault 2002;
Derrida 1996; and Tygstrup 2014. 38. “Comme vous soulignez, nos bibliothèques
et nos archives contiennent la mémoire de nos culture européenne et de
société. La numérisation de leur collection—manuscrits, livres, images et
sons—constitue un défi culturel et économique auquel il serait bon que
l’Europe réponde de manière concertée.” (As you point out, our libraries and
archives contain the memory of our European culture and society. Digitization
of their collections—manuscripts, books, images, and sounds—is a cultural and
economic challenge it would be good for Europe to meets in a concerted
manner.) Manuel Barroso, open letter to Jacques Chirac, July 7, 2007,
[http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1](http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1).
39. Jøsevold 2016. 40. Janssen 2011. 41. Robert Darnton, “Google’s Loss: The
Public’s Gain,” _New York Review of Books_ , April 28, 2011,
. 42.
Palfrey 2015, __ 104. 43. See, for example, DPLA’s Public Library
Partnership’s Project, partnerships>. 44. Karaganis, 2018. 45. Sassen 2008, 3. 46. Coyle 2006; Borghi
and Karapapa, _Copyright and Mass Digitization_ ; Patra, Kumar, and Pani,
_Progressive Trends in Electronic Resource Management in Libraries_. 47.
Borghi 2012. 48. Beagle et al. 2003; Lavoie and Dempsey 2004; Courant 2006;
Earnshaw and Vince 2007; Rieger 2008; Leetaru 2008; Deegan and Sutherland
2009; Conway 2010; Samuelson 2014. 49. The earliest textual reference to the
mass digitization of books dates to the early 1990s. Richard de Gennaro,
Librarian of Harvard College, in a panel on funding strategies, argued that an
existing preservation program called “brittle books” should take precedence
over other preservation strategies such as mass deacidification; see Sparks,
_A Roundtable on Mass Deacidification_ , 46. Later the word began to attain
the sense we recognize today, as referring to digitization on a large scale.
In 2010 a new word popped up, “ultramass digitization,” a concept used to
describe the efforts of Google vis-à-vis more modest large-scale digitization
projects; see Greene 2010 _._ 50. Kevin Kelly, “Scan This Book!,” _New York
Times_ , May 14, 2006, ; Hall 2008; Darnton 2009;
Palfrey 2015. 51. As Alain Giffard notes, “I am not very confident with the
programs of digitization full of technical and economical considerations, but
curiously silent on the intellectual aspects” (Alain Giffard, “Dilemmas of
Digitization in Oxford,” _AlainGiffard’s Weblog_ , posted May 29, 2008,
oxford>). 52. Tiffen 2007. 344. See also Peatling 2004. 53. Sassen 2008. 54.
See _The Authors Guild et al. vs. Google, Inc._ , Amended Settlement Agreement
05 CV 8136, United States District Court, Southern District of New York,
(2009) sec 7(2)(d) (research corpus), sec. 1.91, 14. 55. Informational
capitalism is a variant of late capitalism, which is based on cognitive,
communicative, and cooperative labor. See Christian Fuchs, _Digital Labour and
Karl Marx_ (New York: Routledge, 2014), 135–152. 56. Miksa 1983, 93. 57.
Midbon 1980. 58. Said 1983, 237. 59. For example, the diverse body of
scholarship that employed the notion of “assemblage” as a heuristic and/or
ontological device for grasping and formulating these changing relations of
power and control; in sociology: Haggerty and Ericson 2000; Rabinow 2003; Ong
and Collier 2005; Callon et al. 2016; in geography: Anderson and McFarlane
2011, 124–127; in philosophy: Deleuze and Guattari 1987; DeLanda 2006; in
cultural studies: Puar 2007; in political science: Sassen 2008. The
theoretical scope of these works ranged from close readings of and ontological
alignments with Deleuze and Guattari’s work (e.g., DeLanda), to more
straightforward descriptive employments of the term as outlined in the OED
(e.g., Sassen). What the various approaches held in common was the effort to
steer readers away from thinking in terms of essences and stability toward
thinking about more complex and unstable structures. Indeed, the “assemblage”
seems to have become a prescriptive as much as a diagnostic tool (Galloway
2013b; Weizman 2006). 60. Deleuze 1997; Foucault 2009; Hardt and Negri 2007.
61. DeLanda 2006; Paul Rabinow, “Collaborations, Concepts, Assemblages,” in
Rabinow and Foucault 2011, 113–126, at 123. 62. Latour 2005, __ 28. 63. Ibid.,
35. 64. Tim Stevens, _Cyber Security and the Politics of Time_ (Cambridge:
Cambridge University Press, 2015), 33. 65. Abrahamsen and Williams 2011. 66.
Walker 2003. 67. Deleuze and Guattari 1987, 116. 68. Parisi 2004, 37. 69.
Hacking 1995, 210. 70. Scott 2009. In James C. Scott’s formulation,
infrapolitics is a form of micropolitics, that is, the term refers to
political acts that evade the formal political apparatus. This understanding
was later taken up by Robin D. G. Kelley and Alberto Moreires, and more
recently by Stevphen Shukaitis and Angela Mitropolous. See Kelley 1994;
Shukaitis 2009; Mitropoulos 2012; Alterbo Moreiras, _Infrapolitics: the
Project and Its Politics. Allegory and Denarrativization. A Note on
Posthegemony_. eScholarship, University of California, 2015. 71. James C.
Scott also concedes as much when he briefly links his notion of infrapolitics
to infrastructure, as the “cultural and structural underpinning of the more
visible political action on which our attention has generally been focused”;
Scott 2009, 184. 72. Mitropoulos 2012, 115. 73. Bowker and Star 1999, 319. 74.
Centre National de Ressource Textuelle et Lexicales,
. 75. For an English
etymological examination, see also Batt 1984, 1–6. 76. This is on account of
their malleability and the uncanny way they are used to fit every
circumstance. For more on the potentials and problems of plastic words, see
Pörksen 1995. 77. Edwards 2003, 186–187. 78. Mitropoulos 2012, 117. 79.
Edwards et al. 2012. 80. Peters 2015, at 31. 81. Beck 1996, 1–32, at 18;
Easterling 2014. 82. Adler-Nissen and Gammeltoft-Hansen 2008. 83. Holzer and
Mads 2003. 84. Star 1999, 377. 85. Ibid. 86. Bowker and Star 1999, 326. 87.
Peters 2015, 35. 88. Hardt and Negri 2009, 205. 89. Chun 2017. 90. As argued
by John Naughton at the _Negotiating Cultural Rights_ conference, National
Museum, Copenhagen, Denmark, November 13–14, 2015,
.
91. The “tipping point” is a metaphor for sudden change first introduced by
Morton Grodzins in 1960, later used by sociologists such as Thomas Schelling
(for explaining demographic changes in mixed-race neighborhoods), before
becoming more generally familiar in urbanist studies (used by Saskia Sassen,
for instance, in her analysis of global cities), and finally popularized by
mass psychologists and trend analysts such as Malcolm Gladwell, in his
bestseller of that name; see Gladwell 2000. 92. “Those of us who take
liberalism and Enlightenment values seriously often quote Sir Francis Bacon’s
aphorism that ‘knowledge is power.’ But, as the historian Stephen Gaukroger
argues, this is not a claim about knowledge: it is a claim about power.
‘Knowledge plays a hitherto unrecognized role in power,’ Gaukroger writes.
‘The model is not Plato but Machiavelli.’1 Knowledge, in other words, is an
instrument of the powerful. Access to knowledge gives access to that
instrument of power, but merely having knowledge or using it does not
automatically confer power. The powerful always have the ways and means to use
knowledge toward their own ends. … How can we connect the most people with the
best knowledge? Google, of course, offers answers to those questions. It’s up
to us to decide whether Google’s answers are good enough.” See Vaidhyanathan
2011, 149–150. 93. Easley and Kleinberg 2010, 528. 94. Duguid 2007; Geoffrey
Nunberg, “Google’s Book Search: A Disaster for Scholars,” _Chronicle of Higher
Education,_ August 31, 2009; _The Idea of Order: Transforming Research
Collections for 21st Century Scholarship_ (Washington, DC: Council on Library
and Information Resources, 2010), 106–115. 95. Robert Darnton, “Google’s Loss:
The Public’s Gain,” _New York Review of Books_ , April 28, 2011,
. 96.
Jones and Janes 2010. 97. David S. Grewal, _Network Power: The Social Dynamics
of Globalization_ (New Haven: Yale University Press, 2008). 98. Higgins and
Larner, _Calculating the Social: Standards and the Reconfiguration of
Governing_ (Basingstoke: Palgrave Macmillan, 2010). 99. Ponte, Gibbon, and
Vestergaard 2011; Gibbon and Henriksen 2012. 100. Russell 2014. See also Wendy
Chun on the correlation between habit and standardization: Chun 2017. 101.
Busch 2011. 102. Peters 2015, 224. 103. DeNardis 2011. 104. Hall and Jameson
1990. 105. Kolko 1988. 106. Agre 2000. 107. For more on the importance of
standard flexibility in digital networks, see Paulheim 2015. 108. Linked data
captures the intellectual information users add to information resources when
they describe, annotate, organize, select, and use these resources, as well as
social information about their patterns of usage. On one hand, linked data
allows users and institutions to create taxonomic categories for works on a
par with cultural memory experts—and often in conflict with such experts—for
instance by linking classical nudes with porn; and on the other hand, it
allows users and institutions to harness social information about patterns of
use. Linked data has ideological and economic underpinnings as much as
technical ones. 109.  _The National Digital Platform: for Libraries, Archives
and Museums_ , 2015, report-national-digital-platform>. 110. Petter Nielsen and Ole Hanseth, “Fluid
Standards. A Case Study of a Norwegian Standard for Mobile Content Services,”
under review,
.
111. Sassen 2008, 3. 112. Grewal 2008. 113. Ibid., 9.

# II
Mapping Mass Digitization

# 2
The Trials, Tribulations, and Transformations of Google Books

## Introduction

In a 2004 article in the cultural theory journal _Critical Inquiry_ , book
historian Roger Chartier argued that the electronic world had created a triple
rupture in the world of text: by providing new techniques for inscribing and
disseminating the written word, by inspiring new relationships with texts, and
by imposing new forms of organization onto them. Indeed, Chartier foresaw that
“the originality and the importance of the digital revolution must therefore
not be underestimated insofar as it forces the contemporary reader to
abandon—consciously or not—the various legacies that formed it.”1 Chartier’s
premonition was inspired by the ripples that digitization was already
spreading across the sea of texts. People were increasingly writing and
distributing electronically, interacting with texts in new ways, and operating
and implementing new textual economies.2 These textual transformations gave
rise to a range of emotional reactions in readers and publishers, from
catastrophizing attitudes and pessimism about "the end of the book" to the
triumphalist mythologizing of liquid virtual books that were shedding their
analog ties like butterflies shedding their cocoons.

The most widely publicized mass digitization project to date, Google Books,
precipitated the entire emotional spectrum that could arise from these textual
transversals: from fears that control over culture was slipping from authors
and publishers into the hands of large tech companies, to hopeful ideas about
the democratizing potential of bringing knowledge once locked up in dusty
tomes at places like Harvard and Stanford to a wider public, and to a utopian
mythologizing of the transcendent potential of mass digitization. Moreover,
Google Books also effected legal and professional transformations of the
infrastructural set-up of the book, creating new precedents and a new
professional ethos.
Books, whether positive or negative, not only emphasizes its fundamental role
in shaping current knowledge landscapes, it also allows us to see Google Books
as a prism that reflects more general political tendencies toward
globalization, privatization, and digitization, such as modulations in
institutional infrastructures, legal landscapes, and aesthetic and political
conventions. But how did the unlikely marriage between a tech company and
cultural memory institutions even come about? Who drove it forward, and around
and within which infrastructures? And what kind of cultural memory politics
did it produce? The following sections of this chapter will address some of
these problematics.

## The New Librarians

It was in the midst of a turbulent restructuring of the world of text, in
October 2004 at the Frankfurt International Book Fair, that Larry Page and
Sergey Brin of Google announced the launch of Google Print, a cooperation
between Google and leading Anglophone publishers. Google Print, which later
became Google Partner Program, would significantly alter the landscape and
experience of cultural memory, as well as its regulatory infrastructures. A
decade later, the traditional practices of reading, and the guardianship of
text and cultural works, had acquired entirely new meanings. In October 2004,
however, the publishing world was still unaware of Google’s pending influence
on the institutional world of cultural memory. Indeed, at that time, Amazon’s
mounting dominance in the field of books, which began a decade earlier in
1995, appeared to carry much more significant implications. The majority of
publishers therefore greeted Google’s plans in Frankfurt as a welcome
alternative to Jeff Bezos’s growing online behemoth.

Larry Page and Sergey Brin withheld a few details from their announcement at
Frankfurt, however; Google’s digitization plans would involve not only
cooperation with publishers, but also with libraries. As such, what would
later become Google Books would in fact consist of two separate, yet
interrelated, programs: Google Print (which would later become Google Partner
Program) and Google Library Project. In all secrecy, Google had for many
months prior to the Frankfurt Book Fair worked with select libraries in the US
and the UK to digitize their holdings. And in December 2004 the true scope of
Google's mass digitization plans was revealed: what Page and Brin were
building was the foundation of a groundbreaking cultural memory archive,
inspired by the myth of Alexandria.3 The invocation of Alexandria situated the
nascent Google Books project in a cultural schema that historicized the
project as a utopian, even moral and idealist, project that could finally,
thanks to technology, exceed existing human constraints—legal, political, and
physical.4

Google’s utopian discourse was not foreign to mass digitization enthusiasts.
Indeed, it was the _langue du jour_ underpinning most large-scale digitization
projects, a discourse nurtured and influenced by the seemingly borderless
infrastructure of the web itself (which was often referred to in
universalizing terms).5 Yet, while the universalizing discourse of mass
digitization was familiar, it had until then seemed like aspirational talk at
best, and strategic policy talk in the face of limited public funding, complex
copyright landscapes, and lumbering infrastructures, at worst. Google,
however, faced the task with a fresh attitude of determination and a will to
disrupt, as well as a very different form of leverage in terms of
infrastructural set-up. Google was already the world’s preferred search
engine, having mastered the tactical skill of navigating its users through
increasingly complex information landscapes on the web, and harvesting their
metadata in the process to continuously improve Google’s feedback systems.
Essentially, ever-larger amounts of information (understood here as "users")
were passing through Google’s crawling engines, and as the masses of
information in Google’s server parks grew, so did their computational power.
Google Books, then, as opposed to most existing digitization projects, which
were conceived mainly in terms of “access,” was embedded in the larger system
of Google that understood the power and value of “feedback,” collecting
information and entering it into feedback loops between users, machines, and
engineers. Google also understood that information power didn’t necessarily
lie in owning all the information it gave access to, but rather in
controlling the informational processes themselves.

Yet, despite Google's advances in information-seeking behaviors, the idea of
Google Books appeared as an odd marriage. Why was a private company in Silicon
Valley, working in the futuristic and accelerating world of software and fluid
information streams, intent on partnering up with the slow-paced world of
cultural memory institutions, traditionally more concerned with the past?
Despite the apparent clash of temporal and cultural regimes, however, Google
was in fact returning home to its point of inception. Google was born of a
research project titled the Stanford Integrated Digital Library Project, which
was part of the NSF’s Digital Libraries Initiative (1994–1999). Larry Page and
Sergey Brin were students then, working on the Stanford component of this
project, intending to develop the base technologies required to overcome the
most critical barriers to effective digital libraries, of which there were
many.6 Page’s and Brin’s specific project, titled Google, was presented as a
technical solution to the increasing amount of information on the World Wide
Web.7 At Stanford, Larry Page also tried to facilitate a serious discussion of
mass digitization, and of whether or not it was feasible. But his
ideas received little support, and he was forced to leave the idea on the
drawing board in favor of developing search technologies.8

In September 1998, Sergey Brin and Larry Page left the library project to
found Google as a company and became immersed in search engine technologies.
However, a few years later, Page resuscitated the idea of mass digitization as
a part of their larger self-professed goal to change the world of information
by increasing access, scaling the amount of information available, and
improving computational power. They convinced Eric Schmidt, the new CEO of
Google, that the mass digitization of cultural works made sense not only from
an information perspective, but also from a business perspective, since the
vast amounts of information Google could extract from books would improve
Google’s ability to deliver information that was hitherto lacking, and this
new content would eventually also result in an increase in traffic and clicks
on ads.9

## The Scaling Techniques of Mass Digitization

A series of experiments followed on how to best approach the daunting task.
The emergence and decay of these experiments highlight the ways in which mass
digitization assemblages consist not only of thoughts, ideals, and materials,
but also a series of cultural techniques that entwine temporality,
materiality, and even corporeality. This perspective on mass digitization
emphasizes the mixed nature of mass digitization assemblages: what at first
glance appears as a relatively straightforward story about new technical
inventions, at a closer look emerges as complex entanglements of human and
nonhuman actors, with implications not only for how we approach it as a legal-
technical entity but also as an infrapolitical phenomenon. As the following
section shows, attending to the complex cultural techniques of mass
digitization (its “how”) enables us to see that its “minor” techniques are not
excluded from or irrelevant to, but rather are endemic to, larger questions of
the infrapolitics of digital capitalism. Thus, Google’s simple technique of
scaling scanning to make the digitization processes go faster becomes
entangled in the creation of new habits and techniques of acceleration and
rationalization that tie in with the politics of digital culture and digital
devices. The industrial scaling of mass digitization becomes a crucial part of
the industrial apparatus of big data, which provides new modes of inscription
for both individuals and digital industries that in turn can be capitalized on
via data-mining, just as it raises questions of digital labor and copyright.

Yet in those early years, it was still unclear to Google what kinds of scaling
techniques—and what kinds of investments—it would have to leverage to achieve
its initial goals. Larry Page and co-worker Marissa Mayer therefore
began to experiment with the best ways to proceed. First, they created a
makeshift scanning device, whereby Marissa Mayer would turn the page and Larry
Page would click the shutter of the camera, guided by the pace of a
metronome.10 These initial mass digitization experiments signaled the
industrial nature of the mass digitization process, providing a metronomic
rhythm governed by the implacable regularity of the machine, in addition to
the temporal horizon of eternity in cultural memory institutions (or at least
of material decay).11 After some experimentation with scale and time, Google
bought a consignment of books from a second-hand book store in Arizona. They
scanned them and subsequently experimented with how to best index these works
not only by using information from the book, but also by pulling data about
the books from various other sources on the web. These extractions allowed
them to calculate a work’s relevance and importance, for instance by looking
at the number of times it had been referred to.12
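To make this indexing logic concrete, consider a minimal, purely illustrative sketch in Python of ranking works by how often other sources refer to them. This is not Google's actual method (note 12 records that a workable PageRank-like algorithm for books proved elusive); the function name and the data are hypothetical, and real systems would match bibliographic references far more robustly than by title strings.

```python
from collections import Counter
from typing import Iterable

def rank_by_references(titles: Iterable[str], sources: Iterable[str]) -> list:
    """Rank scanned works by how often external sources mention them.

    A naive proxy for "the number of times a work has been referred to":
    count case-insensitive title mentions across a corpus of web pages.
    """
    counts: Counter = Counter({title: 0 for title in titles})
    for text in sources:
        lowered = text.lower()
        for title in titles:
            counts[title] += lowered.count(title.lower())
    # works referred to most often are presumed most relevant/important
    return counts.most_common()

# hypothetical usage: two scanned works, a tiny corpus of web snippets
pages = [
    "As argued in Moby-Dick, the whale is no mere symbol ...",
    "A comparison of Moby-Dick and Walden shows ...",
]
print(rank_by_references(["Moby-Dick", "Walden"], pages))
# [('Moby-Dick', 2), ('Walden', 1)]
```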

In 2004 Google was also granted patent rights to a scanner that would be able
to scan the pages of works without destroying them, and which would make them
searchable thanks to sophisticated 3D scanning and complex algorithms.13
Google’s new scanner used infrared camera technology that detected the three-
dimensional shape and angle of book pages when the book was placed in the
scanner. This shape information was then passed to the optical character
recognition (OCR) stage, where it was used to adjust the page images and
allowed the OCR software to read images of curved surfaces more accurately.

![11404_002_fig_001.jpg](images/11404_002_fig_001.jpg)

Figure 2.1 François-Marie Lefevere and Marin Saric. “Detection of grooves in
scanned images.” U.S. Patent 7508978B1. Assigned to Google LLC.
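
The underlying geometric idea can be sketched in a few lines. Assuming, purely for illustration, that a page curves only along the horizontal axis and that the scanner yields one depth value per image column, the columns can be resampled by arc length so that text foreshortened by the curvature is stretched back to its true width before OCR. This is a toy sketch of the general dewarping idea, not the patented method, which handles full three-dimensional page shapes:

```python
import numpy as np

def dewarp_page(image: np.ndarray, depth: np.ndarray) -> np.ndarray:
    """Flatten a curved page image for OCR (toy model).

    image: 2D grayscale page scan (rows x columns).
    depth: hypothetical per-column surface height from the depth
           sensor, in the same horizontal units as the pixel grid.
    """
    h, w = image.shape
    # local slope of the page surface between neighboring columns
    slope = np.gradient(depth)
    # cumulative arc length along the curved surface for each column;
    # flat regions advance by ~1 pixel, curved regions by more
    arc = np.cumsum(np.sqrt(1.0 + slope ** 2))
    arc -= arc[0]
    # resample each row so columns are evenly spaced in arc length,
    # undoing the foreshortening introduced by the curvature
    targets = np.linspace(0.0, arc[-1], w)
    flattened = np.empty_like(image)
    for row in range(h):
        flattened[row] = np.interp(targets, arc, image[row])
    return flattened
```

The sketch only illustrates why knowing the page's three-dimensional shape helps OCR; the patented system's detection and correction are considerably more sophisticated.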

These new scanning technologies allowed Google to unsettle the fixed content
of cultural works on an industrial scale and enter them into new distribution
systems. The untethering and circulation of text already existed, of course,
but now text would mutate on an industrial scale, bringing into coexistence a
multiplicity of archiving modes and textual accumulation. Indeed, Google’s
systematic scaling-up of already existing technologies on an industrial and
accelerated scale constituted a new paradigm in mass digitization, to a much
larger extent than, for instance, the invention of new technologies did.14 Thus, while
Google’s new book scanners did expand the possibilities of capturing
information, Google couldn’t solve the problem of automating the process of
turning the pages of the books. For that they had to hire human scanners who
were asked to manually turn pages. The work of these human scanners was
largely invisible to the public, who could only see the books magically
appearing online as the digital archive accumulated. The scanners nevertheless
left ghostly traces, in the form of scanning errors such as pink fingers and
missing and crumpled pages—visual traces that underlined the historically
crucial role of human labor in industrializing and automating processes.15
Indeed, the question of how to solve human errors in the book scanning process
led to a series of inventive systems, such as the patent granted to Google in
2009 (filed in 2003), which describes a system that would minimize scanning
errors with the help of music.16 Later, Google open sourced plans for a book
scanner named “Linear Book Scanner” that would turn the pages automatically
with the help of a vacuum cleaner and a cleverly designed sheet metal
structure, after passing them over two image sensors taken from a desktop
scanner.17

Eventually, after much experimentation, Google consolidated its mass
digitization efforts in collaboration with select libraries.18 While some
institutions immediately and enthusiastically welcomed Google’s aspirations as
aligning with their own mission to improve access to information, others were
more hesitant, an institutional vacillation that hinted ominously at
controversy to come. Some libraries, such as the University of Michigan,
greeted the initiative with enthusiasm, whereas others, such as the Library of
Congress, saw a red flag pop up: copyright, one of the most fundamental
elements in the rights of texts and authors.19 The Library of Congress
questioned whether it was legal to scan and index books without a rights
holder’s permission. Google, in response, argued that it was within the fair
use provisions of the law, but the argument was speculative insofar as there
was no precedent for what Google was going to do. Some universities
agreed with Google's views on copyright, shared its desire to disrupt
existing copyright practices, and allowed Google to make digital copies of
their holdings (a precondition for creating an index of them). Hence, some
libraries gave full access, others allowed only the scanning of books in the
public domain (published before 1923), and still others denied access
altogether. While the reticence of libraries was scattered, it was also a
precursor of a much more zealous resistance to Google Books, an opposition
that was mounted by powerful voices in the cultural world, namely publishers
and authors, and other commercial infrastructures of cultural memory.

![11404_002_fig_002.jpg](images/11404_002_fig_002.jpg)

Figure 2.2 Joseph K. O’Sullivan, Alexander Proudfoot, and Christopher R.
Uhlik. “Pacing and error monitoring of manual page turning operator.” U.S.
Patent 7619784B1. Assigned to Google LLC, Google Technology Holdings LLC.

While Google’s announcement of its cooperation with publishers at the
Frankfurt Book Fair was received without drama—even welcomed by many—the
announcement of its cooperation with libraries a few months later caused a
commercial uproar. The most publicized point of contestation was the fact that
Google was now not only displaying books in cooperation with publishers, but
also building a library of its own, without remunerating publishers and
authors. Why would readers buy books if they could read them free online?
Moreover, the Authors Guild worried that Google’s digital library would
increase the risk of piracy. At a deeper level, the case also emphasized
authors’ and publishers’ desire to retain control over their copyrighted works
in the face of the threat that the Library Project (unlike the Partner
Program) was posing: Google was digitizing without the copyright holder’s
permission. Thus, to them, the Library Project fundamentally threatened their
copyrights and, on a more fundamental level, existing copyright systems. Both
factors, they argued, would make book buying a superfluous activity.20 The
harsher criticisms framed Google Books as a book thief rather than as a global
philanthropist.21 Google, for its part, launched a defense of its actions
based on the notion of “fair use,” which, as the following section shows,
eventually became the fundamental legal question.

## Infrastructural Transformations

Google Books became the symbol of the painful confusion and territorial
battles that marred the publishing world as it underwent a transformation from
analog to digital. The mounting and diverse opposition to Google Books was
thus not an isolated affair, but rather a persistent symptom—increasingly loud
stress signals emanating from the infrastructural joints of the analog realm of
books as it buckled under the strain of digital logic. As media theorist John
Durham Peters (drawing on media theorist Harold Innis) notes, the history of
media is also an “occupational history” that tells the tales of craftspeople
mastering medium-specific skills tactically battling for monopolies of
knowledge and guarding their access.22 And in the occupational history of
Google Books, the craftspeople of the printed book were being challenged by a
new breed of artificers who were excelling not so much in how to print, which
book sellers to negotiate with, or how to sell books to people, but rather in
the medium-specific tactical skills of the digital, such as building software
and devising search technologies, skills they were leveraging to their own
gain to create new “monopolies of knowledge” in the process.

As previously mentioned, the concerns expressed by publishers and authors in
regard to remuneration were accompanied by a more abstract sense of a loss of
control over their works and how this loss of control would affect the
copyrights. These concerns did not arise out of thin air, but were part of a
more general discourse on digital information as something that _cannot_ be
secured and controlled in the same way as analog commodities can. Indeed, it
seemed that authors and publishers were part of a world entirely different
from Google Books: while publishers and authors were still living in and
defending a “regime of scarcity,”23 Google Books, by contrast, was busy
building a “realm of plenitude and infinite replenishment.” As such, the clash
between the traditional infrastructures of the analog book and the new
infrastructures of Google Books was symptomatic of the underlying radical
reorganization of information from a state of trade and exchange to a state of
constant transmission and contagion.24

Foregrounding the fair use defense,25 Google argued that the public benefits
of scanning outweighed the negative consequences for authors.26 Influential
legal scholars such as Lawrence Lessig supported this argument,
suggesting that inclusion in a search engine in a way that does not erode the
value of the book was of such societal importance that it should be deemed
legal.27 The copyright owners, however, insisted that the burden should be on
Google to request permission to scan each work.28

Google and copyright owners reached a proposed settlement on October 28, 2008.
The proposal would allow Google not only to continue its scanning activities
and to show free snippets online, but would also give Google exclusive rights
to sell digital copies of out-of-print books. In return, Google would provide
all libraries in the United States with one free subscription to the digital
database, but Google could also sell additional subscriptions. Moreover,
Google was to pay $125 million, part of which would go toward the construction
of a Book Rights Registry that would identify rights holders and handle
payments, and toward legal fees.29 Yet before the settlement was even formally
reviewed by the court, a mounting opposition to it was launched in public.

The proposed settlement was harshly received, for instance by Internet
archivist Brewster Kahle and legal scholar Lawrence Lessig, who opposed it in
terms ranging from “insanity” to “cultural
asphyxiation” and “information monopoly.”30 Privacy proponents also spoke out
against Google Books, bringing attention to the implications of Google being
able to follow and track reading habits, among other things.31 The
organization Privacy Authors, including writers such as Jonathan Lethem, Bruce
Schneier, and Michael Chabon, and publishers, argued that although Google
Books was an “extremely exciting” project, it failed in its current form to
protect the privacy of readers, thus creating a “real risk of disclosure” of
sensitive information to “prying governmental entities and private litigants,”
potentially giving rise to a “chilling effect,” hurting not only readers but
also authors and publishers, not least those writing about sensitive or
controversial topics.32 The Association of Libraries also raised a set of
concerns, such as the cost of library subscriptions and privacy.33 And most
predictably, companies such as Amazon and Microsoft, who also had a stake in
mass digitization, opposed the settlement; Microsoft even funded some nuanced
research efforts into its implications.34 Finally, and most damningly, the
Department of Justice decided to get involved with an antitrust argument.

By this point, opposition to the Google Books project, as it was outlined in
the proposed settlement, wasn’t only motivated by commercial concerns; it was
now also motivated by a public that framed Google’s mass digitization project
as a parasitical threat to the public sphere itself. The framing of Google as
a potential menace was a jarring image that stood in stark contrast to Larry
Page’s and Sergey Brin’s philanthropic attitudes and to Google’s famous “Don’t
be evil” slogan. The public reaction thus signaled a change in Google’s
reputation as the company metamorphosed in the public eye from a small
underdog company to a multinational corporation with a near-monopoly in the
search industry. Google’s initially inspiring approach to information as a
realm of plenitude now looked, in the public view, more like the actions of a
megalomaniac land-grabber.

Google, however, while maintaining its universalizing mission regarding
information, also countered the accusations of monopoly building, arguing that
potential competitors could just step up, since nothing in the agreements
entered into by the libraries and Google “precludes any other company or
organization from pursuing their own similar effort.”35 Nevertheless, Judge
Denny Chin rejected the settlement in March 2011 with the following statement:
“The question presented is whether the ASA is fair, adequate, and reasonable.
I conclude that it is not.”36 Google left the proposed settlement behind and
returned to litigating its initial case, supported by new amicus briefs,
focusing on its argument that book scanning was fair use. Google argued that
it was not demanding exclusivity on the information it scanned, that it did
not prohibit other actors from digitizing the same works, and that its main
goal was to enrich the public sphere with more information, not to build an
information monopoly. In November 2013, Judge Denny Chin issued a new
opinion confirming that Google Books was indeed fair use.37 Chin’s opinion was
later consolidated in a major victory for Google in 2015, when Judge Pierre
Leval in the Second Circuit Court held Google Books to be lawful with the
words: “Google’s unauthorized digitizing of copyright-protected works,
creation of a search functionality, and display of snippets from those works
are non-infringing fair uses.”38 Leval’s decision marked a new direction, not only for
Google Books, but also for mass digitization in general, as it signaled a
shift in cultural expectations about what it means to experience and
disseminate cultural artifacts.

Once again, the story of Google Books took a new turn. What was first
presented as a gift to cultural memory institutions and the public, and later
as theft from and threat to these same entities, on closer inspection revealed
itself as a much more complex circulatory system of expectations, promises,
risks, and blame. Google Books thus instigated a dynamic and forceful
connection between Google and cultural memory institutions, where the roles of
giver and receiver, and the first giver and second giver/returner, were
difficult to decode. Indeed, the binding nature of the relationship between
Google Books and cultural memory institutions proved to be much more complex
than the simple physical exchange of books and digital files. As the next
section outlines, this complex system of cultural production was held together
by contractual arrangements—central joints, as it were, connecting data and
works, public and private, local and global, in increasingly complex ways. For
Google Books, these contractual relations appear as the connective tissues
that make these assemblages possible, and which are therefore fundamental to
their affective dimensions.

## The Infrapolitics of Contract

In common parlance a contract is a legal tool that formalizes a “mutual
agreement between two or more parties that something shall be done or forborne
by one or both,” often enforceable by law.39 Contractual systems emerged with
the medieval merchant regime and, with classical liberalism, later evolved
into an ideological revolt against paternalist systems: contract came to be
seen as nothing less than freedom, a legal construct that could destroy the
sentimental bonds of
personal dependence.40 As the classic liberal social scientist William Graham
Sumner argued, “[c]ontract … is rational … realistic, cold, and matter-of-
fact.” The rational nature of contracts also affected their temporality, since
a contract endures only “so long as the reason for it endures,” and their
spatiality, relegating any form of sentiment from the public sphere to “the
sphere of private and personal relations.”41

Sentiments prevailed, however, as the contracts tying together Google and
cultural memory institutions emerged. Indeed, public and professional
evaluations of the agreements often took an affective, even sexualized, form.
The economist Paul Courant situated libraries “in bed with Google”42; library
consultants and media experts Jeff Ubois and Peter B. Kaufman recounted _how_
they got in bed with Google—“[w]e were approached singly, charmed in
confidence, the stranger was beguiling, and we embraced” 43; communication
scholar Evelyn Bottando announced that “libraries not only got in bed with
Google. They got married”44; and librarian Jessamyn West finally pondered the
ruins of the relationship: “[s]till not sure, after all that, how we got this all
so wrong. Didn’t we both want the same thing? Maybe it really wasn’t us, it
was them. Most days it’s hard to remember what we saw in Google. Why did we
think we’d make good partners?”45

The evaluative discourse around Google Books dispels the idea of contracts as
dispassionate transactions for services and labor, showing rather that
contracts are infrapolitical apparatuses that give rise to emotions and
affect; and that, moreover, they are systems of doctrines, relations, and
social artifacts that organize around specific ideologies, temporalities,
materialities, and techniques.46 First and foremost, contracts give rise to
new kinds of infrastructures in the field of cultural memory: they mediate,
connect, and converge cultural memory institutions globally, giving rise to
new institutional networks, in some cases increasing globalization and
mobility for both users and objects, and in other cases restricting the same.
The Google Books contracts display both technical and symbolic aspects: as
technical artifacts they establish intricate frameworks of procedures,
commitments, rights, and incentives for governing the transactions of cultural
memory artifacts and their digitized copies. As symbolic artifacts they evoke
normative principles, expressing different measures of good will toward
libraries, but also—as all contracts do—introduce the possibility of distrust,
conflict, and betrayal.47

Despite their centrality to mass digitization assemblages, and although some
of them have been made available to the public,48 the content of these
particular contracts still suffers from the epistemic gap incurred in practical
and symbolic form by Google’s Agreements and Non-Disclosure Agreements (NDAs),
which most libraries are required to sign when entering the partnership. Like
all contracts, the individual contracts signed by the
partnership libraries vary in nature and have different implications. While
many of Google’s agreements may be publicly available, they have often only
been made public through requests and transparency mechanisms such as the
Freedom of Information Act. As the Open Rights Alliance notes in their
publication of the agreement entered between the British Library and Google,
“We asked the British Library for a copy of the agreement with Google, which
was not uploaded to their transparency website with other similar contracts,
as it didn’t involve monetary exchange. This may be a loophole transparency
activists want to look at. After some toing and froing with the Freedom of
Information Act we got a copy.”49

While the culture of contractual secrecy is native to the business world, with
its safeguarding of business processes, and is easily navigated by business
partners, it is often opposed to the ethos of state-subsidized cultural
institutions, which “draw their financial and moral support from a public that
expects transparency in their activities, ranging from their materials
acquisitions to their business deals.”50 For these reasons, library
organizations have recommended that nondisclosure agreements should be avoided
if possible, and minimized if they are necessary.51 Google, in response, noted
on its website that: “[t]hough not all of the library contracts have been made
public, we can say that all of them are non-exclusive, meaning that all of our
library partners are free to continue their own scanning projects or work with
others while they work with Google to digitize their books.”52

Regardless of their contractual content and later publication, the contracts
are a vital instrument in Google’s broader management of visibility. As Mikkel
Flyverbom, Clare Birchall, and others have argued, this practice of visibility
management—which they define as “the many ways in which organizations seek to
curate and control their presence, relations, and comprehension vis-à-vis
their surroundings” through practices of transparency, secrecy, opacity,
surveillance, and disclosure—is in the digital age a complex issue closely
tied to the question of governance and power. While each publication act may
serve to create an uncomplicated picture of transparency, it nevertheless
happens in a paradoxical global regulatory environment that on the one hand
encourages “sunshine” laws that demand that governments, corporations, and
civil-sector organizations provide access to information, yet on the other
hand also harbors regulatory agencies that seek mechanisms and rules by which
to keep information hidden. Thus, as Flyverbom et al. conclude, the “everyday
practices of organizing invariably implicate visibility management,” whose
valences are “attached to transparency and opacity” that are not simple and
straightforward, but rather remain “dependent upon the actor, the context, and
the purpose of organizations and individuals.”53

Steven Levy recounts how Google began its scanning operations in “near-total
stealth,” a “cloak-and-dagger” approach that stood in contrast to Google’s
public promotion of transparency as a new mode of existence. As Levy argues,
“[t]he secrecy was yet another expression of the paradox of a company that
sometimes embraced transparency and other times seemed to model itself on the
NSA.”54 Yet, while secrecy practices may have suited some of Google’s
operations, they sat much more uneasily with its book scanning programs: “If
Google had a more efficient way to scan books, sharing the improved techniques
could benefit the company in the long run—inevitably, much of the output would
find its way onto the web, bolstering Google’s indexes. But in this case,
paranoia and a focus on short-term gain kept the machines under wraps.”55 The
nondisclosure agreements show that while boundaries may be blurred between
Google Books and libraries, we may still identify different regulatory models
and modes of existence within their networks, including the explicit _library
ethos_ (in the Weberian sense of the term) of public access, not only to the
front end but also to some areas of the back end, and the business world’s
secrecy practices.56

Entering into a mass digitization public-private partnership (PPP) with a
corporation such as Google is thus not only a logical and pragmatic next step
for cultural memory institutions, it is also a political step. As already
noted, Google Books, through its embedding in Google, injects cultural memory
objects into new economic and cultural infrastructures. These infrastructures
are governed less by the hierarchical world of curators, historians, and
politicians, and more by feedback networks of tech companies, users, and
algorithms. Moreover, they forge ever closer connections to data-driven market
logics, where computational rather than representational power counts. Mass
digitization PPPs such as Google Books are thus also symptoms of a much more
pervasive infrapolitical situation, in which cultural memory institutions are
increasingly forced to alter their identities from public caretakers of
cultural heritage to economic actors in the EU internal market, controlled by
the framework of competition law, time-limited contracts, and rules on state
aid.57 Moreover, mastering the rules of these new infrastructures is not
necessarily an easy feat for public institutions.58 Thus, while Google claims
to hold a core commitment regarding free digital access to information, and
while its financial apparatus could be construed as making Google an eligible
partner in accordance with the EU’s policy objectives toward furthering
public-private partnerships in Europe,59 it is nevertheless, as legal scholar
Maurizio Borghi notes, relevant to take into account Google’s previous
monopoly-building history.60

## The Politics of Google Books

A final aspect of Google Books relates to the universal aspiration of its
collection, its infrapolitics, and what it empirically produces in
territorial terms. As this chapter’s previous sections have outlined, it was
an aspiration of Google Books to transcend the cultural and political
limitations of physical cultural memory collections by gathering the written
material of cultural memory institutions into one massive digitized
collection. Yet, while the collection spans millions of works in hundreds of
languages from hundreds of countries,61 it is also clear that even large-scale
mass digitization processes still entail procedures of selection on multiple
levels from libraries to works. These decisions produce a political reality
that in some respects reproduces and accentuates the existing politics of
cultural memory institutions in terms of territorial and class-based
representations, and in other respects gives rise to new forms of cultural
memory politics that part ways with the political regimes of traditional
curatorial apparatuses.

One obvious area in which to examine the politics produced by the Google Books
assemblage is in the selection of libraries that Google chooses to partner
with.62 While the full list of Google Books partners is not disclosed on
Google’s own webpage, it is clear from the available list that, up to now,
Google Books has mainly partnered with “great libraries,” such as elite
university libraries and national libraries. The rationale for choosing these
libraries has no doubt been to partner up with cultural memory institutions
that preside over as much material as possible, and which are therefore able
to provide more pieces of the puzzle than, say, a small-town public library
that presides over only a fraction of such holdings. Yet, while these
libraries provide Google Books with an impressive and extensive collection of
rare and valuable artifacts that give the impression of a near-universal
collection, they nevertheless also contain epistemological and historical
gaps. Historian and digital humanist Andrew Prescott notes, for example, the
limited collections of literature written by workers and other lower-class
people in the early eighteenth century in elite libraries. This institutional
lack creates a pre-filtered collection in Google Books, favoring “[t]hose
writers of working class origins who had a success story to report, who had
become distinguished statesmen, successful businessmen, religious leaders and
so on,” that is, the people who were “able to find commercial publishers who
were interested in their story.”63 Google’s decision to partner with elite
libraries thus inadvertently reproduces the class-based biases of analog
cultural memory institutions.

In addition to the reproduction of analog class-based bias in its digital
collection, the Google Books corpus also displays a genre bias, veering
heavily toward scientific publications. As mathematicians Eitan Pechenick et
al. show, the contents of the Google Books corpus over the course of the 1900s
are “increasingly dominated by scientific publications rather than popular
works,” and “even the first data set specifically labeled as fiction appears
to be saturated with medical literature.”64 The fact that Google Books is
constellated in such a manner thus challenges a “vast majority of existing
claims drawn from the Google Books corpus,” just as it points to the need “to
fully characterize the dynamics of the corpus before using these data sets to
draw broad conclusions about cultural and linguistic evolution.”65

Last but not least, Google Books’s collection still bespeaks its beginnings:
it primarily covers Anglophone ground. There is hardly any literature
reviewing the geographic scope of Google Books, but existing work does
suggest that Google is still heavily oriented toward US-based libraries.66
This orientation does not necessarily give rise to an Anglophone linguistic
hegemony, as some have feared, since many of the Anglophone libraries hold
considerable collections of foreign language books. But it does invariably
limit its collections to the works in foreign languages that the elite
libraries deemed worthy of preserving. The gaps and biases of Google Books
reveal it to be less of a universal and monolithic collection, and more of an
impressive, but also specific and contingent, assemblage of works, texts, and
relations that is determined by the relations Google Books has entered into in
terms of class, discipline, and geographical scope.

Google Books is not only the result of selection processes on the level of
partnering institutions, but also on the level of organizational
infrastructure. While the infrastructures of Google Books in fact depart from
those of its parent company in many regards to avoid copyright infringement
charges, there is little doubt that people working actively on
Google’s digitization activities (included here are both users and Google
employees) are also globally distributed in networked constellations. The
central organization for cultural digitization, the Google Cultural Institute,
is located in Paris, France. Yet the people affiliated with this hub are
working across several countries. Moreover, people working on various aspects
of Google Books, from marketing to language technology, to software
developments and manual scanning processes, are dispersed across the globe.
And it is perhaps in this way that we tend to think of Google in general—as a
networked global company—and for good reasons. Google has been operating
internationally for almost as long as it has been around. It has offices in
countries all over the globe, and works in numerous languages. Today it is one
of the most important global information institutions, and as more and more
people turn to Google for its services, Google also increasingly reflects
them—indeed they enter into a complex cognitive feedback system.
Google depends on the growing diversity of its “inhabitants” and on its
financial and cultural leverage on a global scale, and to this effect it is
continuously fine-tuning its glocalization strategies, blending the universal
and the particular. This glocal strategy does not necessarily create a
universal company, however; it would be more correct to say that Google’s
glocality brings the globe to Google, redefining it as an “American”
company.67 Hence, while there is little doubt that Google, and in effect
Google Books, increasingly tailors its services to specific consumers,68 and that this
tailoring allows for a more complex global representation generated by
feedback systems, Google’s core nevertheless remains lodged on American soil.
This is underlined by the fact that Google Books still effectively belongs to
US jurisdiction.69 Google Books is thus on the one hand a globalized company
in terms of both content and institutional framework; yet it also remains an
_American_ multinational corporation, constrained by US regulation and social
standards, and ultimately reinforcing the capacities of the American state.
While Google Books operates as a networked glocal project with universal
aspirations, then, it also remains fenced in by its legal and cultural
apparatuses.

In sum, just as a country’s regulatory and political apparatus affects the
politics of its cultural memory institutions in the analog world, so is the
politics of Google Books co-determined by the operations of Google. Thus,
curatorial choices are made not only on the basis of content, but also of the
location of server parks, existing company units, lobbying efforts, public
policy concerns, and so on. And the institutional identity of Google Books is
profoundly late-sovereign in this regard: on one hand it thrives on and
operates with horizontal network formations; on the other, it still takes into
account and has to operate with, and around, sovereign epistemologies and
political apparatuses. These vertical and horizontal lines ultimately rewire
the politics of cultural memory, shifting the stakes from sovereign
territorial possessions to more functional, complex, and effective means of
control.

## Notes

1. Chartier 2004.
2. As philosopher Jacques Derrida noted anecdotally on his colleagues’ way of reading, “some of my American colleagues come along to seminars or to lecture theaters with their little laptops. They don’t print out; they read out directly, in public, from the screen. I saw it being done as well at the Pompidou Center [in Paris] a few days ago. A friend was giving a talk there on American photography. He had this little Macintosh laptop there where he could see it, like a prompter: he pressed a button to scroll down his text. This assumed a high degree of confidence in this strange whisperer. I’m not yet at that point, but it does happen.” (Derrida 2005, 27).
3. As Ken Auletta recounts, Eric Schmidt remembers when Page surprised him in the early 2000s by showing off a book scanner he had built which was inspired by the great library of Alexandria, claiming that “We’re going to scan all the books in the world,” and explaining that for search to be truly comprehensive “it must include every book ever published.” Page literally wanted Google to be a “super librarian” (Auletta 2009, 96).
4. Constraints of a physical character (how to digitize and organize all this knowledge in physical form); legal character (how to do it in a way that suspends existing regulation); and political character (how to transgress territorial systems).
5. Take, for instance, project Bibliotheca Universalis, comprising American, Japanese, German, and British libraries among others, whose professed aim was “to exploit existing digitization programs in order to … make the major works of the world’s scientific and cultural heritage accessible to a vast public via multimedia technologies, thus fostering … exchange of knowledge and dialogue over national and international borders.” It was a joint project of the French Ministry of Culture, the National Library of France, the Japanese National Diet Library, the Library of Congress, the National Library of Canada, Discoteca di Stato, Deutsche Bibliothek, and the British Library. The project took its name from the groundbreaking sixteenth-century publication _Bibliotheca Universalis_ (1545–1549), a four-volume alphabetical bibliography that listed all the known books printed in Latin, Greek, or Hebrew. Obviously, the dream of the total archive is not limited to the realm of cultural memory institutions, but has a much longer and more generalized lineage; for a contemporary exploration of these dreams see, for instance, issue six of _Limn Magazine_, March 2016.
6. As the project noted in its research summary, “One of these barriers is the heterogeneity of information and services. Another impediment is the lack of powerful filtering mechanisms that let users find truly valuable information. The continuous access to information is restricted by the unavailability of library interfaces and tools that effectively operate on portable devices. A fourth barrier is the lack of a solid economic infrastructure that encourages providers to make information available, and give users privacy guarantees”; Summary of the Stanford Digital Library Technologies Project.
7. Brin and Page 1998.
8. Levy 2011, 347.
9. Levy 2011, 349.
10. Levy 2011, 349.
11. Young 1988.
12. They had a hard time, however, creating a new PageRank-like algorithm for books; see Levy 2011, 349.
13. Google Inc., “Detection of Grooves in Scanned Images,” March 24, 2009, [https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA](https://www.google.ch/patents/US7508978?dq=Detection+Of+Grooves+In+Scanned+Images&hl=da&sa=X&ved=0ahUKEwjWqJbV3arMAhXRJSwKHVhBD0sQ6AEIHDAA).
14. See, for example, Jeffrey Toobin, “Google’s Moon Shot,” _New Yorker_, February 4, 2007, shot>.
15. The ghostly traces scanners still leave in digitized books today are evidenced by a curious little blog collecting scanners’ artful mistakes, _The Art of Google Books_. For a more thorough and general introduction to the historical relationship between humans and machines in labor processes, see Kang 2011.
16. The abstract from the patent reads as follows: “Systems and methods for pacing and error monitoring of a manual page turning operator of a system for capturing images of a bound document are disclosed. The system includes a speaker for playing music having a tempo and a controller for controlling the tempo based on an imaging rate and/or an error rate. The operator is influenced by the music tempo to capture images at a given rate. Alternative or in addition to audio, error detection may be implemented using OCR to determine page numbers to track page sequence and/or a sensor to detect errors such as object intrusion in the image frame and insufficient light. The operator may be alerted of an error with audio signals and signaled to turn back a certain number of pages to be recaptured. When music is played, the tempo can be adjusted in response to the error rate to reduce operator errors and increase overall throughput of the image capturing system. The tempo may be limited to a maximum tempo based on the maximum image capture rate.” See Google Inc., “Pacing and Error Monitoring of Manual Page Turning Operator,” November 17, 2009.
17. Google, “linear-book-scanner,” _Google Code Archive_, August 22, 2012.
18. The libraries of Harvard, the University of Michigan, Oxford, Stanford, and the New York Public Library.
19. Levy 2011, 351.
20. _The Authors Guild et al. vs. Google, Inc._, Class Action Complaint 05 CV 8136, United States District Court, Southern District of New York, September 20, 2005, /settlement-resources.attachment/authors-guild-v-google/Authors%20Guild%20v%20Google%2009202005.pdf>.
21. As the Authors Guild notes, “The problem is that before Google created Book Search, it digitized and made many digital copies of millions of copyrighted books, which the company never paid for. It never even bought a single book. That, in itself, was an act of theft. If you did it with a single book, you’d be infringing.” Authors Guild v. Google: Questions and Answers.
22. Peters 2015, 21.
23. Hayles 2005.
24. Purdon 2016, 4.
25. Fair use constitutes an exception to the exclusive right of the copyright holder under the United States Copyright Act; if the use of a copyright work is a “fair use,” no permission is required. For a court to determine if a use of a copyright work is fair use, four factors must be considered: (1) the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes; (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and (4) the effect of the use upon the potential market for or value of the copyrighted work.
26. “Do you really want … the whole world not to have access to human knowledge as contained in books, because you really want opt out rather than opt in?” as quoted in Levy 2011, 360.
27. “It is an astonishing opportunity to revive our cultural past, and make it accessible. Sure, Google will profit from it. Good for them. But if the law requires Google (or anyone else) to ask permission before they make knowledge available like this, then Google Print can’t exist” (Farhad Manjoo, “Indexing the Planet: Throwing Google at the Book,” _Spiegel Online International_, November 9, 2005, /indexing-the-planet-throwing-google-at-the-book-a-383978.html>). Technology lawyer Jonathan Band also expressed his support: Jonathan Band, “The Google Print Library Project: A Copyright Analysis,” _Journal of Internet Banking and Commerce_, December 2005, google-print-library-project-a-copyright-analysis.php?aid=38606>.
28. According to Patricia Schroeder, president of the Association of American Publishers (AAP), Google’s opt-out procedure “shifts the responsibility for preventing infringement to the copyright owner rather than the user, turning every principle of copyright law on its ear.” BBC News, “Google Pauses Online Books Plan,” _BBC News_, August 12, 2005.
29. Professor of law Pamela Samuelson has conducted numerous progressive and detailed academic and popular analyses of the legal implications of the copyright discussions; see, for instance, Pamela Samuelson, “Why Is the Antitrust Division Investigating the Google Book Search Settlement?,” _Huffington Post_, September 19, 2009, divi_b_258997.html>; Samuelson 2010; Samuelson 2011; Samuelson 2014.
30. Levy 2011, 362; Lessig 2010; Brewster Kahle, “How Google Threatens Books,” _Washington Post_, May 19, 2009, dyn/content/article/2009/05/18/AR2009051802637.html>.
31. EFF, “Google Book Search Settlement and Reader Privacy,” Electronic Frontier Foundation, n.d.
32. _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, United States District Court, Southern District of New York, March 22, 2011, [http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
33. Brief of Amicus Curiae, American Library Association et al. in relation to _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, filed on August 1, 2012.
34. Steven Levy, “Who’s Messing with the Google Books Settlement? Hint: They’re in Redmond, Washington,” _Wired_, March 3, 2009.
35. Sergey Brin, “A Library to Last Forever,” _New York Times_, October 8, 2009.
36. _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, United States District Court, Southern District of New York, March 22, 2011, [http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=115).
37. “Google does, of course, benefit commercially in the sense that users are drawn to the Google websites by the ability to search Google Books. While this is a consideration to be acknowledged in weighing all the factors, even assuming Google’s principal motivation is profit, the fact is that Google Books serves several important educational purposes. Accordingly, I conclude that the first factor strongly favors a finding of fair use.” _The Authors Guild et al. vs. Google Inc._, 05 Civ. 8136-DC, United States District Court, Southern District of New York, November 14, 2013, [http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355](http://www.nysd.uscourts.gov/cases/show.php?db=special&id=355).
38. _Authors Guild v. Google, Inc._, 13–4829-cv, December 16, 2015, 81c0-23db25f3b301/1/doc/13-4829_opn.pdf>. In the aftermath of Pierre Leval’s decision, the Authors Guild filed yet another petition for the Supreme Court to reverse the appeals court decision, and publicly reiterated the framing of Google as a parasite rather than a benefactor. A brief supporting the Guild’s petition, signed by a diverse group of authors including Malcolm Gladwell, Margaret Atwood, J. M. Coetzee, Ursula Le Guin, and Yann Martel, noted that the legal framework used to assess Google knew nothing about “the digital reproduction of copyrighted works and their communication on the Internet or the phenomenon of ‘mass digitization’ of vast collections of copyrighted works”; nor, they argued, was the fair-use doctrine ever intended “to permit a wealthy for-profit entity to digitize millions of works and to cut off authors’ licensing of their reproduction, distribution, and public display rights.” Amicus Curiae filed on behalf of Authors Guild Petition, No. 15–849, February 1, 2016, content/uploads/2016/02/15-849-tsac-TAA-et-al.pdf>.
39. Oxford English Dictionary, [http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140](http://www.oed.com/view/Entry/40328?rskey=bCMOh6&result=1&isAdvanced=false#eid8462140).
40. The contract as we know it today developed within the paradigm of Lex Mercatoria; see Teubner 1997. The contract is therefore a device of global reach that has developed “mainly outside the political structures of nation-states and international organisations for exchanges primarily in a market economy” (Snyder 2002, 8). In the contract theory of John Locke, the signification of contracts developed from a mere trade tool to a distinction between the free man and the slave. Here, the societal benefits of contracts were presented as a matter of time, where the bounded delineation of work was characterized as contractual freedom; see Locke 2003 and Stanley 1998.
41. Sumner 1952, 23.
42. Paul Courant, “On Being in Bed with Google,” _Au Courant_, November 4, 2007, google>.
43. Kaufman and Ubois 2007.
44. Bottando 2012.
45. Jessamyn West, “Google’s Slow Fade With Librarians: Maybe They’re Just Not That Into Us,” _Medium_, February 2, 2015, with-librarians-fddda838a0b7>.
46. Suchman 2003. The lack of research into contracts and emotions is noted by Hillary M. Berk in her fascinating research on contracts in the field of surrogacy: “Despite a rich literature in law and society embracing contracts as exchange relations, empirical work has yet to address their emotional dimensions” (Berk 2015).
47. Suchman 2003, 100.
48. See a selection on the Public Index and the Internet Archive. You may also find contracts from the University of Michigan ( /michigan-digitization-project>), the University of California, the Committee on Institutional Cooperation ( google-agreement>), and the British Library ( google-books-and-the-british-library>), to name but a few.
49. Javier Ruiz, “Is the Deal between Google and the British Library Good for the Public?,” Open Rights Group, August 24, 2011, /access-to-the-agreement-between-google-books-and-the-british-library>.
50. Kaufman and Ubois 2007.
51. Association of Research Libraries, “ARL Encourages Members to Refrain from Signing Nondisclosure or Confidentiality Clauses,” _ARL News_, June 5, 2009, encourages-members-to-refrain-from-signing-nondisclosure-or-confidentiality-clauses#.Vriv-McZdE4>.
52. Google, “About the Library Project,” _Google Books Help_, n.d., [https://support.google.com/books/partner/faq/3396243?hl=en&rd=1](https://support.google.com/books/partner/faq/3396243?hl=en&rd=1).
53. Flyverbom, Leonardi, Stohl, and Stohl 2016.
54. Levy 2011, 354.
55. Levy 2011, 352.
56. To be sure, however, the practice of secrecy is no stranger to libraries. Consider only the closed stacks that the public is never given access to; the bureaucratic routines that are kept from the public eye; and the historic relation between libraries and secrecy so beautifully explored by Umberto Eco in many of his works. Yet the motivations for nondisclosure agreements on the one hand and public sector secrets on the other differ significantly, the former lodged in a commercial logic and the latter in an idea, however abstract, about “the public good.”
57. Belder 2015. For insight into the societal impact of contractual regimes on civil rights regimes, see Somers 2008. For insight into relations between neoliberalism and contracts, see Mitropoulos 2012.
58. As engineer and historian Henry Petroski notes, for a PPP contract to be successful it must be written “properly,” but “the public partners are not often very well versed in these kinds of contracts and they don’t know how to protect themselves.” See Buckholtz 2016.
59. As argued by Lucky Belder in “Cultural Heritage Institutions as Entrepreneurs,” 2015.
60. Borghi 2013, 92–115.
61. Stephen Heyman, “Google Books: A Complex and Controversial Experiment,” _New York Times_, October 28, 2015, and-controversial-experiment.html>.
62. Google, “Library Partners,” _Google Books_.
63. Andrew Prescott, “How the Web Can Make Books Vanish,” _Digital Riffs_, August 2013.
64. Pechenick, Danforth, Dodds, and Barrat 2015.
65. What Pechenick et al. refer to here is of course the claims of Erez Aiden and Jean-Baptiste Michel, among others, who promote “culturomics,” that is, the use of huge amounts of digital information—in this case the corpus of Google Books—to track changes in language, culture, and history. See Aiden and Michel 2013; and Michel et al. 2011.
66. Neubert 2008; and Weiss and James 2012, 1–3.
67. I am indebted to Gayatri Spivak here, who makes this argument about New York in the context of globalization; see Spivak 2000.
68. In this respect Google mirrors the glocalization strategies of media companies in general; see Thussu 2007, 19.
69. Although the decisions of foreign legislators of course also affect the workings of Google, as is clear from the growing body of European regulatory casework on Google, such as the right to be forgotten, competition law, tax, etc.

# 3
Sovereign Soul Searching: The Politics of Europeana

## Introduction

In 2008, the European Commission launched the European mass digitization
project, Europeana, to great fanfare. Although the EC’s official
communications framed the project as a logical outcome of years of work on
converging European digital library infrastructures, the project was received
in the press as a European counterresponse to Google Books.1 The popular media
framings of Europeana were focused in particular on two narratives: that
Europeana was a public response to Google’s privatization of cultural memory,
and that Europeana was a territorial response to American colonization of
European information and culture. This chapter suggests that while both of
these sentiments were present in Europeana’s early years, the politics of what
Europeana was—and is—paints a more complicated picture. A closer glance at
Europeana’s social, economic, and legal infrastructures thus shows that the
European mass digitization project is neither an attempt to replicate Google’s
glocal model, nor is it a continuation of traditional European cultural
policies. Rather, Europeana produces a new form of cultural memory politics
that converges national and supranational imaginaries with global information
infrastructures.

If global information infrastructures and national politics today seemingly go
hand in hand in Europeana, it wasn’t always so. In fact, in the 1990s,
networked technologies and national imaginaries appeared to be mutually
exclusive modes of existence. The fall of the Berlin Wall in 1989 nourished a
new antisovereign sentiment, which gave way to recurring claims in the 1990s
that the age of sovereignty had passed into an age of post-sovereignty. These
claims were fueled by a globalized set of economic, political, and
technological forces, not least of which the seemingly ungovernable nature of
the Internet—which appeared to unbuckle the nation-state’s control and voice
in the process of globalization and gave rise to a sense of plausible anarchy,
which in turn made John Perry Barlow’s (in)famous ‘‘Declaration of the
Independence of Cyberspace’’ appear not as pure utopian fabulation, but rather
as a prescient diagnosis.2 Yet, while it seemed in the early 2000s that the
Internet and the cultural and economic forces of globalization had made the
notion and practice of the nation-state redundant on both practical and
cultural levels, the specter of the nation nevertheless seemed to linger.
Indeed, the nation-state remained a fixed point in political and
cultural discourses. In fact, it not only lingered as a specter, but borders
were also beginning to reappear as regulatory forces. The borderless world
was, as Tim Wu and Jack Goldsmith noted in 2006, an illusion;3 geography had
revenged itself, not least in the digital environment.4

Today, no one doubts the cultural-political import of the national imaginary.
The national imaginary has fueled antirefugee movements, the surge of
nationalist parties, the EU’s intensified crisis, and the election of Donald
Trump, to name just a few critical political events in the 2010s. Yet, while
the nationalist imaginary is becoming ever stronger, paradoxically its
communicative infrastructures are simultaneously becoming ever more
globalized. Thus, globally networked digital infrastructures are quickly
supplementing, and in many cases even substituting, those national
communicative infrastructures that were instrumental in establishing a
national imagined community in the first place—infrastructures such as novels
and newspapers.5 The convergence of territorially bounded imaginaries and
global networks creates new cultural-political constellations of cultural
memory where the centripetal forces of nationalism operate alongside,
sometimes with and sometimes against, the centrifugal forces of digital
infrastructures. Europeana is a preeminent example of these complex
infrastructural and imaginary dynamics.

## A European Response

When Google announced its digitization program at the Frankfurt Book Fair in
2004, it instantly created ripples in the European cultural-political
landscape, in France in particular. Upon hearing the news about Google’s
plans, Jacques Chirac, president of France at the time, promptly urged the
then-culture minister, Renaud Donnedieu de Vabres, and Jean-Noël Jeanneney,
head of France’s Bibliothèque nationale, to commence a similar digitization
project and to persuade other European countries to join them.6 The seeds for
Europeana were sown by France, “the deepest, most sedimented reservoir of
anti-American arguments,”7 as an explicitly political reaction to Google
Books.

Europeana was thus from its inception laced with the ambiguous political
relationship between two historically competing universalist-exceptionalist
nations: the United States and France.8 It is a relationship that France
sometimes pictures as a question of Americanization and at other times extends
to an image of a more diffuse Anglo-Saxon constellation. Highlighting the effects
Google Books would have on French culture, Jeanneney argued that Google’s mass
digitization efforts would pose several possible dangers to French cultural
memory, such as bias in the collecting and organizing practices of Google Books
and an Anglicization of the cultural memory regulatory system. Explaining why
Google Books should be seen not only as an American, but also as an Anglo-
Saxon project, Jeanneney noted that while Google Books “was obviously an
American project,” it was nevertheless also one “that reached out to the
British.” The alliance between the Bodleian Library at Oxford and Google Books
was thus not only a professional partnership in Jeanneney’s eyes, but also a
symbolic bond where “the familiar Anglo-Saxon solidarity” manifested once
again vis-à-vis France, only this time in the digital sphere. Jeanneney even
paraphrased Churchill’s comment to Charles de Gaulle, noting that Oxford’s
alliance with Google Books yet again evidenced how British institutions,
“without consulting anyone on the other side of the English Channel,” favored
US-UK alliances over UK-Continental alliances “in search of European
patriotism for the adventure under way.”9

How can we understand Jeanneney’s framing of Google Books as an Anglo-Saxon
project and the function of this framing in his plea for a nation-based
digitization program? As historian Emile Chabal suggests, the concept of the
Anglo-Saxon mentality is a preeminently French construct that has a clear and
rich rhetorical function to strengthen the French self-understanding vis-à-vis
a stereotypical “other.”10 While fuzzy in its conceptual infrastructure, the
French rhetoric of the Anglo-Saxon is nevertheless “instinctively understood
by the vast majority of the French population” to denote “not simply a
socioeconomic vision loosely inspired by market liberalism and
multiculturalism” but also (and sometimes primarily) “an image of
individualism, enterprise, and atomization.”11 All these dimensions were at
play in Jeanneney’s anti-Google Books rhetoric. Indeed, Jeanneney suggested,
Google’s mass digitization project was not only Anglo-Saxon in its collecting
practices and organizational principles, but also in its regulatory framework:
“We know how Anglo-Saxon law competes with Latin law in international
jurisdictions and in those of new nations. I don’t want to see Anglo-Saxon law
unduly favored by Google as a result of the hierarchy that will be
spontaneously established on its lists.”12

What did Jeanneney suggest as infrastructural protection against the network
power of the Anglo-Saxon mass digitization project? According to Jeanneney,
the answer lay in territorial digitization programs: rather than simply
accepting the colonizing forces of the Anglo-Saxon matrix, Jeanneney argued, a
national digitization effort was needed. Such a national digitization project
would be a “ _contre-attaque_ ” against Google Books that should protect three
dimensions of French cultural sovereignty: its language, the role of the state
in cultural policy, and the cultural/intellectual order of knowledge in the
cultural collections.13 Thus Jeanneney suggested that any Anglo-Saxon mass
digitization project should be met with competing and complementary mass
digitization projects from other nations and cultures, to ensure that cultural
works are embedded in meaningful cultural contexts and languages. While the
nation was the central base of mass digitization programs, Jeanneney noted,
such digitization programs necessarily needed to be embedded in a European, or
Continental, infrastructure. Thus, while Jeanneney’s rallying cry to protect
the French cultural memory was voiced from France, he gave it a European
signature, frequently addressing and including the rest of Europe as a natural
ally in his _contre-attaque_ against Google Books.14 Jeanneney’s extension of
French concerns to a European level was characteristic of France, which had
historically played a leadership role in formulating and shaping the EU.15
The EU, Jeanneney argued, could provide a resilient supranational
infrastructure that would enable French diversity to exist within the EU while
also providing a protective shield against unhampered Anglo-Saxon
globalization.

Other French officials took on a less combative tone, insisting that the
French digitization project should be seen not merely as a reaction to Google
but rather in the context of existing French and European efforts to make
information available online. “I really stress that it’s not anti-American,”
stated one official at the Ministry of Culture and Communication. Rather than
framing the French national initiatives as a reaction to Google Books, the
official instead noted that the prime objective was to “make more material
relevant to European patrimony available,” noting also that the national
digitization efforts were neither unique nor exclusionary—not even to
Google.16 The disjunction between Jeanneney’s discursive claims to mass
digitization sovereignty and the anonymous bureaucrat’s pragmatic and
networked approach to mass digitization indicates the late-sovereign landscape
of mass digitization as it unfolded between identity politics and pragmatic
politics, between discursive claims to sovereignty and economic global
cooperation. And as the next section shows, the intertwinement of these
discursive, ideological, and economic infrastructures produced a memory
politics in Europeana that was neither sovereign nor post-sovereign, but
rather late-sovereign.

## The Infrastructural Reality of Late-Sovereignty

Politically speaking, Europeana was always more than just an empty
countergesture or emulating response to Google. Rather, as soon as the EU
adopted Europeana as a prestige project, Europeana became embedded in the
political project of Europeanization and began to produce a political logic of
its own. Latching on to (rather than countering) a sovereign logic, Europeana
strategically deployed the European imaginary as a symbolic demarcation of its
territory. But the means by which Europeana was constructed and distributed
its territorial imaginaries nevertheless took place by means of globalized
networked infrastructures. The circumscribed cultural imaginary of Europeana
was thus made interoperable with the networked logic of globalization. This
combination of a European imaginary and neoliberal infrastructure in Europeana
produced an uneasy balance between national and supranational infrastructural
imaginaries on the one hand and globalized infrastructures on the other.

If France saw Europeana primarily through the prism of sovereign competition,
the European Commission emphasized a different dispositive: economic
competition. In his 2005 response to Jacques Chirac, José Manuel Barroso
acknowledged that the digitization of European cultural heritage was an
important task not only for nation-states but also for the EU as a whole.
Instead of the defiant tone of Jeanneney and Donnedieu de Vabres, Barroso and
the EU institutions opted for a more neutral, pragmatic, and diplomatic mass
digitization discourse. Instead of focusing on Europeana as a lever to prop up
the cultural sovereignty of France, and by extension Europe, in the face of
Americanization, Barroso framed Europeana as an important economic element in
the construction of a knowledge economy.17

Europeana was thus still a competitive project, but it was now reframed as one
that would be much more easily aligned with, and integrated into, a global
market economy.18 One might see the difference in the French and the EU
responses as a question of infrastructural form and affordance. If French mass
digitization discourses were concerned with circumscribing the French cultural
heritage within the territory of the nation, the EC was in practice more
attuned to the networked aspects of the global economy and an accompanying
discourse of competition and potentiality. The infrastructural shift from
delineated sphere to globalized network changed the infrapolitics of cultural
memory from traditional nation-based issues such as identity politics
(including the formation of canons) to more globally aligned trade-related
themes such as copyright and public-private governance.

The shift from canon to copyright did not mean, however, that national
concerns dissipated. On the contrary, ministers from the European Union’s
member countries called for an investigation into the way Google Books handled
copyright in 2008.19 In reality, Google Books had very little to do with
Europe at that time, in the sense that Google Books was governed by US
copyright law. Yet the global reach of Google Books made it a European concern
nevertheless. Both German and French representatives emphasized the rift
between copyright legislation in the US and in EU member states. The German
government proposed that the EC examine whether Google Books conformed to
Europe’s copyright laws. In France, President Nicolas Sarkozy stated in more
flamboyant terms that he would not permit France to be “stripped of our
heritage to the benefit of a big company, no matter how friendly, big, or
American it is.”20 Both countries moreover submitted _amicus curiae_ briefs21
to judge Denny Chin (who was in charge of the ongoing Google Books settlement
lawsuit in the US22), in which they argued against the inclusion of foreign
authors in the lawsuit.23 They further brought separate suits against Google
Books for their scanning activities and sought to exercise diplomatic pressure
against the advancement of Google Books.24

On an EU level, however, the territorial concerns were sidestepped in favor of
another matrix of concern: the question of public-private governance. Thus,
despite pressure from some member states, the EC decided not to write a
similar “amicus brief” on behalf of the EU.25 Instead, EC Commissioners
McCreevy and Reding emphasized the need for more infrastructures connecting
the public and private sectors in the field of mass digitization.26 Such PPPs
could range from relatively conservative forms of cooperation (e.g., private
sponsoring, or payments from the private sector for links provided by
Europeana) to more far-reaching involvement, such as turning the management of
Europeana over to the private sector.27 In a similar vein, a report authored
by a high-level reflection group (Comité des Sages) set up by the European
Commission opened the door for public-private partnerships and also set a time
frame for commercial exploitation.28 It was even suggested that Google could
play a role in the construction of Europeana. These considerations thus stood
in contrast both to the French resistance against Google and to previous
statements made by the EC, which had been concerned with preserving the public
sector in the administration of Europeana.

Did the European Commission’s networked politics signal a post-sovereign
future for Europeana? This chapter suggests no: despite the EC’s strategies,
it would be wrong to label the infrapolitics of Europeana as post-sovereign.
Rather, Europeana draws up a _late-sovereign_ 29 mass digitization landscape,
where claims to national sovereignty exist alongside networked
infrastructures.30 Why not post-sovereign? Because, as legal scholar Neil
Walker noted in 2003,31 the logic of sovereignty never waned even in the face
of globalized capitalism and legal pluralism. Instead, it fused with these
more globalized infrastructures to produce a form of politics that displayed
considerable continuity with the old sovereign order, yet also had distinctive
features such as globalized trade networks and constitutional pluralisms. In
this new system, seemingly traditional claims to sovereignty are carried out
irrespective of political practices, showing that globally networked
infrastructures and sovereign imaginaries are not necessarily mutually
exclusive; rather, territory and nation remain powerful emotive
forces. Since Neil Walker’s theoretical corrective to theories on post-
sovereignty, the notion of late sovereignty seems to have only gained in
relevance as nationalist imaginaries increase in strength and power through
increasingly globalized networks.

As the following section shows, Europeana is a product of political processes
that are concerned with both the construction of bounded spheres and canons
_and_ networked infrastructures of connectivity, competition, and potentiality
operating beyond, below, and between national societal structures. Europeana’s
late-sovereign framework produces an infrapolitics in which the discursive
political juxtaposition between Europeana and Google Books exists alongside
increased cooperation between the two projects. It therefore becomes necessary
to qualify the comparative distinctions between mass digitization projects on a
much more detailed level than that of mere territorial delineation, without,
however, disposing of the notion of sovereignty. The simultaneous
contestations and connections between Europeana and Google Books thus make
visible the complex economic, intellectual, and technological infrastructures
at play in mass digitization.

What form did these infrastructures take? In a sense, the complex
infrastructural set-up of Europeana as it played out in the EU’s framework
ended up extending along two different axes: a vertical axis of national and
supranational sovereignty, where the tectonic territorial plates of nation-
states and continents move relative to each other by converging, diverging,
and transforming; and a horizontal axis of deterritorializing flows that
stream within, between, and throughout sovereign territories consisting both
of capital interests (in the form of transnational lobby organizations working
to protect, promote, and advance the interests of multinational companies or
nongovernmental organizations) and the affective relations of users.

## Harmonizing Europe: From Canon to Copyright

Even if the EU is less concerned with upholding the regulatory boundaries of
the nation-state in mass digitization, bordering effects are still found in
mass digitized collections—this time in the form of copyright regulation. As
in the case of Google Books, mass digitization also raised questions in Europe
about the future role of copyright in the digital sphere. On the one hand,
cultural industries were concerned about the implications of mass digitization
for their production and copyrights32; on the other hand, educational
institutions and digital industries were interested in “unlocking” the
cognitive and cultural potentials that resided within the copyrighted
collections in cultural heritage institutions. Indeed, copyright was such a
crucial concern that the EC repeatedly stated the necessity of reforming and
harmonizing European copyright regulation across borders.

Why is copyright a concern for Europeana? Alongside economic challenges, the
current copyright legislation is _the_ greatest obstacle against mass
digitization. Copyright effectively prohibits mass digitization of any kind of
material that is still within copyright, creating large gaps in digitized
collections that are often referred to as “the twentieth-century black hole.”
These black holes appear as a result of the way European “copyright interacts
with the digitization of cultural heritage collections” and manifest
themselves as a “marked lack of online availability of twentieth-century
collections.”33 The lack of a common copyright mechanism not only hinders
online availability, but also challenges European cross-border digitization
projects as well as the possibilities for data-mining collections à la Google
because of the difficulties connected to ascertaining the relevant
public domain and hence definitively flagging the public domain status of an
object.34
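
The mechanics of this flagging problem can be made concrete with a small sketch. The following is a deliberately minimal, hypothetical illustration rather than Europeana’s actual rights infrastructure: it assumes a record carrying only an author death year and applies the general EU term of protection of seventy years after the author’s death. The point is that any record lacking rights information falls into an “undetermined” category, which is precisely what produces the twentieth-century black hole.

```python
from datetime import date
from typing import Optional

# General EU term of protection: 70 years after the author's death.
LIFE_PLUS = 70

def public_domain_status(death_year: Optional[int],
                         today: Optional[date] = None) -> str:
    """Return 'public-domain', 'in-copyright', or 'undetermined'."""
    today = today or date.today()
    if death_year is None:
        # Unknown rights holder: the object cannot be definitively flagged,
        # so a diligent search is required before it can be digitized and shown.
        return "undetermined"
    # Terms run to the end of the calendar year, hence the strict inequality.
    if today.year > death_year + LIFE_PLUS:
        return "public-domain"
    return "in-copyright"

print(public_domain_status(1924))  # author died 1924: 'public-domain'
print(public_domain_status(None))  # no rights information: 'undetermined'
```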

While Europeana’s twentieth-century black hole poses a problem, Europe would
not, as one worker in the EC’s Directorate-General (DG) Copyright unit noted,
follow Google’s opt-out mass digitization strategy because “the European
solution is not the Google solution. We do a diligent search for the rights
holder before digitizing the material. We follow the law.”35 By positioning
herself as on the right side of the law, the DG employee implicitly also
placed Google on the wrong side of the law. Yet, as another DG employee
explained with frustration, the right side of the law was looking increasingly
untenable in an age of mass digitization. Indeed, as she noted, the demand
for diligent search was making her work nearly impossible, not least due to the
different legal regimes in the US and the EU:

> Today if one wants to digitize a work, one has to go and ask the rights
holder individually. The problem is often that you can’t find the rights
holder. And sometimes it takes so much time. So there is a rights holder, you
know that he would agree, but it takes so much time to go and find out. And
not all countries have collective management … you have to go company by
company. In Europe we have producing companies that disappear after the film
has been made, because they are created only to make that film. So who are you
going to ask? While in the States the situation is different. You have the
majors, they have the rights, you know who to ask because they are very
stable. But in Europe we have this situation, which makes it very difficult,
the cultural access to cultural heritage. Of course we dream of changing
this.36

The dream is far from realized, however. Since the EU has no direct
legislative competence in the area of copyright, Europeana is the center of a
natural tension between three diverging, but sometimes overlapping, interests:
the exclusivity of national intellectual property laws, the economic interests
toward a common market, and the cultural interests in the free movement of
information and knowledge production—a tension that is further amplified by
the coexistence of different legal traditions across member states.37 Seeking
to resolve this tension, the European Parliament and certain units in the
European Commission have strategically used Europeana as a rhetorical lever to
increase harmonization of copyright legislation and thus make it easier for
institutions to make their collections available online.38 “Harmonization” has
thus become a key concept in the rights regime of mass digitization,
essentially signaling interoperability rather than standardization of national
copyright regimes. Yet stakeholders differ in their opinions concerning who
should hold what rights over what content, over what period of time, at what
price, and how things should be made available. Within the process of
harmonization thus lies a process that is less than harmonious, namely that of
bringing stakeholders to the table and committing them. As the EC interviewee
confirmed, harmonization requires not only technical but also political
cooperation.

The question of harmonization illustrates the infrapolitical dimensions of
Europeana’s copyright systems, showing that they are not just technical
standards or “direct mirrors of reality” but also “co-produced responses to
technoscientific and political uncertainty.”39 The European attempts to
harmonize copyright standards across national borders therefore pit not only
one technical standard against the other, but also “alternative political
cultures and their systems of public reasoning against one another.”40
Harmonization thus compresses, rather than eliminates,
national varieties within Europe.41 Hence, Barroso’s vision of Europeana as a
collective _European_ cultural memory is faced with the fragmented patterns of
national copyright regimes, producing if not overtly political borders in the
collections, then certainly infrapolitical manifestations of the cultural
barriers that still exist between European countries.

## The Infrapolitics of Interoperability

Copyright is not the only infrastructural regime that upholds borders in
Europeana’s collections; technical standards also pose great challenges for
the dream of a European connective cultural memory.42 The notion of
_interoperability_ 43 has therefore become a key concern for mass
digitization, as interoperability is what allows digitized cultural memory
institutions to exchange and share documents, queries, and services.44

The rise of interoperability as a key concept in mass digitization is a side-
effect of the increasing complexity of economic, political, and technological
networks. In the twentieth century, most European cultural memory institutions
existed primarily as small “sovereign” institutions, closed spheres governed
by internal logics and with little impetus to open up their internal machinery
to other institutions and cooperate. The early 2000s signaled a shift in the
institutional infrastructural layout of cultural memory institutions, however.
One early significant articulation of this shift was a 324-page European
Commission report entitled _Technological Landscapes for Tomorrow’s Cultural
Economy: Unlocking the Value of Cultural Heritage_ (or the DigiCULT study), a
“roadmap” that outlined the political, organizational, and technological
challenges faced by European museums, libraries, and archives in the period
2002–2006. A central passage noted that the “conditions for success of the
cultural and memory institutions in the Information Society is (sic) the
‘network logic,’ a logic that is of course directly related to the necessity
of being interoperable.”45 The network logic and resulting demand for
interoperability was not merely a question of digital connections, the report
suggested, but a more pervasive logic of contemporary society. The report thus
conceived interoperability as a question that ran deeper than technological
logic.46 The more complex cultural memory infrastructures become, the more
interoperability is needed if one wants the infrastructures to connect and
communicate with each other.47 As information scholar Christine Borgman notes,
interoperability has therefore long been “the holy grail of digital
libraries”—a statement echoed by Commissioner Reding on Europeana in 2005 when
she stated that “I am not suggesting that the Commission creates a single
library. I envisage a network of many digital libraries—in different
institutions, across Europe.”48 Reding’s statement shows that even at the
height of the French exceptionalist discourse on European mass digitization,
other political forces worked instead to reformat the sovereign sphere into a
network. The unraveling of the bounded spheres of cultural memory
institutions into networked infrastructures is therefore both an effect of,
and the further mobilization of, increased interoperability.

Interoperability is not only a concern for mass digitization projects,
however; rather, calls for interoperability take place on a much more
fundamental level. A European Council Conclusion on Europeana identifies
interoperability as a key challenge for the future construction of Europeana,
but also embeds this concern within the overarching European interoperability
strategy, the _European Interoperability Framework for pan-European eGovernment
services_.49 Today, then, interoperability appears to be turning into a
social theory. The extension of the concept of interoperability into the
social sphere naturally follows the socialization of another technical term:
infrastructure. In the past decades, Susan Leigh Star, Geoffrey Bowker, and
others have successfully managed to frame infrastructure “not only in terms of
human versus technological components but in terms of a set of interrelated
social, organizational, and technical components or systems (whether the data
will be shared, systems interoperable, standards proprietary, or maintenance
and redesign factored in).”50 It follows, then, as Christine Borgman notes,
that even if interoperability in technical terms is a “feature of products and
services that allows the connection of people, data, and diverse systems,”51
policy practice, standards and business models, and vested interest are often
greater determinants of interoperability than is technology.52 In similar
terms, information science scholar Jerome McDonough notes that “we need to
cease viewing [interoperability] purely as a technical problem, and
acknowledge that it is the result of the interplay of technical and social
factors.”53 Pushing the concept of interoperability even further, legal
scholars Urs Gasser and John Palfrey have even argued for viewing the world
through a theory of interoperability, naming their project “interop theory,”54
while Internet governance scholar Laura DeNardis proposes a political theory
of interoperability.55
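
What interoperability means at the level of data can be illustrated with a deliberately simple sketch. The following is hypothetical, and the field names are illustrative rather than drawn from Europeana’s actual Data Model (EDM): each institution publishes a “crosswalk” from its local field names to a shared schema, and once mapped, records from otherwise closed silos can be aggregated and queried together.

```python
# A hypothetical sketch of metadata crosswalking between institutional silos.
def to_common_schema(record: dict, crosswalk: dict) -> dict:
    """Map a local record into a shared schema via an institution's crosswalk."""
    return {common: record.get(local) for common, local in crosswalk.items()}

# Each institution declares how its local fields map onto the shared schema.
library_crosswalk = {"title": "hoofdtitel", "creator": "auteur", "year": "jaar"}
museum_crosswalk = {"title": "object_name", "creator": "maker", "year": "date_made"}

library_record = {"hoofdtitel": "Max Havelaar", "auteur": "Multatuli", "jaar": 1860}
museum_record = {"object_name": "Tulip vase", "maker": "Delft workshop", "date_made": 1690}

# Once mapped, records from different silos can be searched together.
aggregated = [
    to_common_schema(library_record, library_crosswalk),
    to_common_schema(museum_record, museum_crosswalk),
]
print(aggregated)
```

The design point of such crosswalks is that no institution has to abandon its internal conventions; interoperability here signals exactly the kind of mutual translatability, rather than standardization, that the harmonization discourse above describes.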

More than denoting a technical fact, then, interoperability emerges today as
an infrastructural logic, one that promotes openness, modularity, and
connectivity. Within the field of mass digitization, the notion of
interoperability is in particular promoted by the infrastructural workers of
cultural memory (e.g., archivists, librarians, software developers, digital
humanists, etc.) who dream of opening up the silos they work on to enrich them
with new meanings.56 As noted in chapter 1, European cultural memory
institutions had begun to address unconnected institutions as closed “silos.”
Mass digitization offered a way of thinking of these institutions anew—not as
frigid closed containers, but rather as vital connective infrastructures.
Interoperability thus gives rise to a new infrastructural form of cultural
memory: the traditional delineated sovereign spheres of expertise of analog
cultural memory institutions are pried open and reformatted as networked
ecosystems that consist not only of the traditional national public providers,
but also of additional components that have hitherto been alien in the
cultural memory industry, such as private individual users and commercial
industries.57

The logic of interoperability is also born of a specific kind of
infrapolitics: the politics of modular openness. Interoperability is motivated
by the “open” data movements that seek to break down proprietary and
disciplinary boundaries and create new cultural memory infrastructures and
ways of working with their collections. Such visions are often fueled by
Lawrence Lessig’s conviction that “the most important thing that the Internet
has given us is a platform upon which experience is interoperable.”58 And they
have given rise to the plethora of cultural concepts we find on the Internet
in the age of digital capitalism, such as “prosumers,” “produsers,” and so on.
These concepts are becoming more and more pervasive in the digital environment
where “any format of sound can be mixed with any format of video, and then
supplemented with any format of text or images.”59 According to Lessig, the
challenge to this “open” vision comes from those “who don’t play in this
interoperability game,” and the contestation between the “open” and the
“closed” takes place in “the network,” which produces “a world where
anyone can clip and combine just about anything to make something new.”60

Despite its centrality in the mass digitization rhetoric, the concept of
interoperability and the politics it produces is rarely discussed in critical
terms. Yet, as Gasser and Palfrey readily conceded in 2007, interoperability
is not necessarily in itself an “unalloyed good.” Indeed, in “certain
instances,” Palfrey and Gasser noted, interoperability brings with it possible
drawbacks such as increased homogeneity, lack of security, and lack of
reliability.61 Today, ten years on, Gasser and Palfrey’s admissions of the
drawbacks of interoperability appear too modest: while their theoretical
apparatus was able to identify the centrality of interoperability in a digital
world, their social theory missed its larger political implications.

When scanning the literature and recommendations on interoperability, certain
words emerge again and again: innovation, choice, diversity, efficiency,
seamlessness, flexibility, and access. As Tara McPherson notes in her related
analysis of the politics of modularity, it is not much of a stretch to “layer
these traits over the core tenets of post-Fordism” and note their effect on
society: “time-space compression, transformability, customization, a
public/private blur, etc.”62 The result, she suggests, is a remaking of the
Fordist standardization processes into a “neoliberal rule of modularity.”
Extending McPherson’s critique into the temporal terrain, Franco Bifo Berardi
emphasizes the semantic politics of speed that is also inherent in
connectivity and interoperability: “Connection implies smooth surfaces with no
margins of ambiguity … connections are optimized in terms of speed and have
the potential to accelerate with technological developments.”63 The
connectivity enabled by interoperability thus implies modularity with
components necessarily “open to interfacing and interoperability.”
Interoperability, then, is not only a question of openness, but also a way of
harnessing network effects by means of speed and resilience.

While interoperability may be an inherent infrastructural tenet of neoliberal
systems, increased interoperability does not automatically make mass
digitization projects neoliberal. Yet, interoperability does allow for
increased connectivity between individual cultural memory objects and a
neoliberal economy. And while the neoliberal economy may emulate critical
discourses on freedom and creativity, its main concern is profit. The same
systems that allow users to create and navigate collections more freely are
made interoperable with neoliberal systems of control.64

## The “Work” in Networking

What are the effects of interoperability for the user? The culture of
connectivity and interoperability has not only allowed Europeana’s collections
to become more visible to a wider public; it has also enabled these publics to
become intentionally or unintentionally involved in the act of describing and
ordering these same collections, for instance by inviting users to influence
existing collections as well as to generate their own. The increased
interaction with works also transforms them from stable to mobile
objects.65 Mass digitization has thus transformed curatorial practice,
expanding it beyond the closed spheres of cultural memory institutions into
much broader ecosystems and extending the focus of curatorial attention from
fixed objects to dynamic network systems. As a result, “curatorial work has
become more widely distributed between multiple agents including technological
networks and software.”66 Once the central figure of curatorial practice, the
curator is now only one part of this entire system, and increasingly not
central to it. Sharing the curator’s place are users, algorithms, software
engineers, and a multitude of other factors.

At the same time, the information deluge generated by digitization has
enhanced the necessity of curation, both within and outside institutions. Once
considered professional caretaking for collections, the curatorial concept
has now been modulated to encompass a whole host of activities and agents,
just as curatorial practices are now ever more engaged in epistemic meaning
making, selecting and organizing materials in an interpretive framework
through the aggregation of global connection.67 And as the already monumental
and ever accelerating digital collections exceed human curatorial capacity,
the computing power of machines and the cognitive capabilities of ordinary
citizens are increasingly needed to penetrate and make meaning of the data
accumulations.

What role is Europeana’s user given in this new environment? With the
increased modulation of public-private boundaries, which allow different
modules to take on different tasks and on different levels, the strict
separation between institution and environment is blurring in Europeana. So is
the separation between user, curator, consumer, and producer. New characters
have thus arisen in the wake of these transformations, among them the
“amateur” and the “citizen scientist.”

In contrast to much of the microlabor that takes place in the digital sphere,
Europeana’s participatory structures often consist in cognitive tasks that are
directly related to the field of cultural memory. This aligns with the
aspirations of the Citizen Science Alliance, which requires that all its
crowdsourcing projects answer “a real scientific research question” and “must
never waste the ‘clicks,’ or time, of volunteers.”68 Citizen science is an
emergent form of research practice in which citizens participate in research
projects on different levels and in different constellations with established
research communities. The participatory structures of citizen science range
from highly complex processes to simpler tasks, such as identifying colors,
themes, and patterns that challenge machinic analysis. There are different
ways of classifying these participatory structures, but the most prevalent
ones in Europeana include:

1. Contribution, where visitors are solicited to provide limited and specified objects, actions, or ideas to an institutionally controlled process, for example, Europeana’s _1914–1918_ exhibition, which allowed (and still allows) users to contribute photos, letters, and other memorabilia from that period.
2. Correction and transcription, where users correct faulty OCR scans of books, newspapers, etc. (see the sketch after this list).
3. Contextualization, that is, the practice of placing or studying objects in a meaningful context.
4. Augmenting collections, that is, enriching collections with additional dimensions. One example is the recently launched Europeana Sound Connections, which encourages and enables visitors to “actively enrich geo-pinned sounds from two data providers with supplementary media from various sources. This includes using freely reusable content from Europeana, Flickr, Wikimedia Commons, or even individuals’ own collections.”69
5. And finally, Europeana also offers participation through classification, that is, a social tagging system in which users contribute classifications.
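
As a minimal illustration of the second structure above, correction and transcription can be imagined as a simple consensus mechanism. The sketch below is hypothetical rather than Europeana’s actual workflow, and the names are illustrative: volunteers submit corrected readings of an OCR line, and the institution accepts a reading only when a majority of contributors agree on it.

```python
from collections import Counter
from typing import List

def consensus_transcription(ocr_text: str, corrections: List[str],
                            quorum: int = 2) -> str:
    """Return the majority correction, falling back to the raw OCR output."""
    if not corrections:
        return ocr_text
    reading, votes = Counter(corrections).most_common(1)[0]
    return reading if votes >= quorum else ocr_text

# Three volunteers correct a faulty OCR line; two agree on the same reading.
ocr_line = "Tke Emperor's new clothes"
submitted = [
    "The Emperor's new clothes",
    "The Emperor's new clothes",
    "The Emperors new clothes",
]
print(consensus_transcription(ocr_line, submitted))  # "The Emperor's new clothes"
```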

All these participatory structures fall within the general rubric of
crowdsourcing, and they are often framed in social terms and held up as an
altruistic alternative to the capitalist exploitation of other crowdsourcing
projects, because, as new media theorist Mia Ridge argues, “unlike commercial
crowdsourcing, participation in cultural memory crowdsourcing is driven by
pleasure, not profit. Rather than monetary recompense, GLAM (Galleries,
Libraries, Archives, and Museums) projects provide an opportunity for
altruistic acts, activated by intrinsic motivations, applied to inherently
engaging tasks, encouraged by a personal interest in the subject or task.”70
In addition—and based on this notion of altruism—these forms of crowdsourcing
are also subversive successors of, or correctives to, consumerism.

The idea of pitting the activities of citizen science against more simple
consumer logics has been at the heart of Europeana since its inception,
particularly influenced by the French philosopher Bernard Stiegler, who has
been instrumental not only in thinking about, but also in building, Europeana’s
software infrastructures around the character of the “amateur.” Stiegler’s
thesis was that the amateur could subvert the industrial ethos of production
because he/she is not driven by a desire to consume as much as a desire to
love, and thus is able to imbue the archive with a logic different from pure
production71 without withdrawing from participation (the word “amateur” comes
from the French word _aimer_).72 Yet it appears to me that the convergence of
cultural memory ecosystems leaves little room for the philosophical idea of
mobilizing amateurism as a form of resistance against capitalist logics.73 The
blurring of production boundaries in the new cultural memory ecosystems raises
urgent questions for cultural memory institutions about how they can protect
the ethos of the amateur in citizen archives,74 while also aligning them with
institutional strategies of harvesting the “cognitive surplus” of users75 in
environments where play is increasingly taking on aspects of labor and vice
versa. As cultural theorist Angela Mitropoulos has noted, “networking is also
net-working.”76 Thus, while many of the participatory structures we find in
Europeana are participatory projects proper and not just what we might call
participation-lite—or minimal participation77—models, the new interoperable
infrastructures of cultural memory ecosystems make it increasingly difficult
to uphold clear-cut distinctions between civic practice and exploitation in
crowdsourcing projects.

## Collecting Europe

If Europeana is a late-sovereign mass digitization project that maintains
discursive ties to the national imaginary at the same time that it undercuts
this imaginary by means of networked infrastructures through increased
interoperability, the final question is: what does this late-sovereign
assemblage produce in cultural terms? As outlined above, it was an aspiration
of Europeana to produce and distribute European cultural memory by means of
mass digitization. Today, its collection gathers more than 50 million cultural
works in differing formats—from sound bites to photographs, textiles, films,
files, and books. As the previous sections show, however, the processes of
gathering the cultural artifacts have generated a lot of friction, producing a
political reality that in some respects reproduces and accentuates the
existing politics of cultural memory institutions in terms of representation
and ownership, and in other respects gives rise to new forms of cultural
memory politics that part ways with the political regimes of traditional
curatorial apparatuses.

The story of how Europeana’s initial collection was published and later
revised offers a good opportunity to examine its late-sovereign political
dynamics. Europeana launched in 2008, giving access to some 4.5 million
digital objects from more than 1,000 institutions. Shortly after its launch,
however, the site crashed for several hours. The reason given by EU officials
was that Europeana was a victim of its own success: “On the first day of its
launch, Europe’s digital library Europeana was overwhelmed by the interest
shown by millions of users in this new project … thousands of users searching
in the very same second for famous cultural works like the _Mona Lisa_ or
books from Kafka, Cervantes, or James Joyce. … The site was down because of
massive interest, which shows the enormous potential of Europeana for bringing
cultural treasures from Europe’s cultural institutions to the wide public.”78
The truth, however, lay elsewhere. As a Europeana employee explained, the site
didn’t buckle under the enormous interest shown in it, but rather because
“people were hitting the same things everywhere.” The problem wasn’t so much
the way users were hitting on material, but _what_ they were hitting; the
Europeana employee explained that people’s search terms took the Commission by
surprise, “even hitting things the Commission didn’t want to show. Because
people always search for wrong things. People tend to look at pornographic and
forbidden material such as _Mein Kampf_, etc.”79 Europeana’s reaction was to
take down and redesign its search interface. The prolonged outage, then, was
caused not by user popularity but by a decision by the Commission and
Europeana staff to rework Europeana’s technical features so that the most
popular searches would no longer be public, and to remove potentially
politically contentious material such as _Mein Kampf_ and nude works by Peter
Paul Rubens and Abraham Bloemaert, among others. Another Europeana employee
explained that the launch of Europeana had been rushed to coincide with a
meeting of Europe’s cultural ministers, so that only a prototype could be
displayed. This beta version was coded to reveal the most popular searches,
producing a “carousel” of the same content because, as the previous quote
explains, people would search for the same things, in particular “porn” and
“_Mein Kampf_,” allegedly leading the US press to call Europeana a collection
of fascist and porn material.

On a small scale, Europeana’s early glitch highlighted the challenge of how to
police the incoming digital flows from national cultural heritage institutions
for in-copyright works. With hundreds of different institutions feeding
hundreds of thousands of texts, images, and sounds into the portal, scanning
the content for illegal material was an impossible task for Europeana
employees. Many in-copyright works began flooding the portal. One in-copyright
work that appeared in the portal stood out in particular: Hitler’s _Mein
Kampf_. A common conception has been that _Mein Kampf_ was banned after WWII.
The truth was more complicated and involved a complex copyright case. When
Hitler died, his belongings were given to the state of Bavaria, including his
intellectual property rights to _Mein Kampf_. Since Hitler’s copyright was
transferred as part of the Allies’ de-Nazification program, the Bavarian state
allowed no one to republish the book.80 Reissues of _Mein Kampf_ therefore
only reemerged once the copyright expired at the end of 2015. The premature
digital distribution of _Mein Kampf_ in Europeana was thus, according to
copyright legislation, illegal. While the _Mein Kampf_ case was extraordinary,
it flagged a more fundamental problem of how to police and analyze all the
incoming data from individual cultural heritage institutions.

On a more fundamental level, however, _Mein Kampf_ indicated not only a legal,
but also a political, issue for Europeana: how to deal with the expressions
that Europeana’s feedback mechanisms facilitated. Mass digitization promoted a
new kind of cultural memory logic, namely that of feedback. Feedback mechanisms
are central to data-driven companies like Google because they offer traces of
the inner worlds of people that would otherwise never appear in empirical
terms, but that can be catered to in commercial terms.81 Yet, while such
traces might interest the corporation (or sociologist) on the hunt for
people’s hidden thoughts, a prestige project such as Europeana found them
untenable. What Europeana wanted was to present Europe’s cultural memory; what
it ended up showing was Europeans’ intense fascination with fascism and
porn. And this was problematic because Europeana was a political project of
representation, not a commercial project of capture.82

Since its glitchy launch, Europeana has refined its interface techniques,
become more attuned to network analytics, and grown exponentially in both
institutional and material scope. There are, at the time of this writing, more
than 50 million items in Europeana, and while its numbers are smaller than
those of Google Books, its scope is much larger, including images, texts,
sounds, videos, and 3-D objects. The platform features carefully curated
exhibitions highlighting European themes, from generalized exhibitions about
World War I and European artworks to much more specialized exhibitions on, for
instance, European cake culture.

But how is Europe represented in statistical terms? Since Europeana’s
inception, there has been huge variance in how much each nation-state
contributes to Europeana.83 So while Europeana in principle represents
Europe’s collective cultural memory, in reality it presents a highly
fragmented image of Europe, with many European countries not even appearing
in the databases. Moreover, even these numbers are potentially misleading, as
one information scholar formerly working with Europeana notes: to pump up
their statistical representation, many institutions strategically invented
counting systems that would make their contribution seem bigger than it
really is, for example, by declaring each scanned page of a medieval
manuscript an object in its own right rather than counting the manuscript as a
single work.84 These strategic acts of volume inflation are interesting mass
digitization phenomena for many reasons, not least because they reveal the
ultimately volume-based approach of mass digitization. According to the
scholar, this volume-based approach finds political support in the EC system,
for whom “the object will always be quantitative,” since volume is “the only
thing the commission can measure in terms of funding and result.”85 In a way,
then, the statistics tell more than one story: in political terms, they
recount not only the classic tale of a fragmented Europe but also how Europe
is increasingly perceived, represented, and managed by calculative
technologies. In technical terms, they reveal the gray areas involved in
delineating and calculating data: what makes a data object? And in cultural
policy terms, they reflect the highly divergent prioritization of mass
digitization across European countries.

The final question is, then: how is this fragmented European collection
distributed? This is the point where Europeana’s territorial matrix reveals
its ultimately networked infrastructure. Europeana may be entered through
Google, Facebook, Twitter, and Pinterest, and vice versa. Thus a click on
the aforementioned cake exhibition, for example, takes one straight to Google
Arts and Culture. The transition from the Europeana platform to Google
happens smoothly, without any friction or notice, and if one didn’t look at
the change in URL, one would hardly notice the change at all, since the
interfaces appear almost identical. Yet, what are the implications of this
networked nature? An obvious consequence is that Europeana is structurally
dependent on social media and search engine companies. According to one
Europeana report, Google is the biggest source of traffic to the Europeana
portal, accounting for more than 50 percent of visits. Any changes in Google’s
algorithm and ranking index therefore significantly impact traffic patterns on
the Europeana portal, which in turn affects the number of Europeana pages
indexed by Google, which then directly impacts the number of overall visits
to the Europeana portal.86 The same holds true for Facebook, Pinterest,
Google+, etc.

Taken together, the feedback mechanisms, the statistical variance, and the
networked infrastructures of Europeana show just how difficult it is to
collect Europe in the digital sphere. This is not to say that territorial
sentiments don’t have power, however—far from it. Within the digital sphere we
are already seeing territorial statements circulated in Europe on both
national and supranational scales, with potentially far-reaching implications
for both. Yet, there is little to suggest that these territorial sentiments
will reproduce sovereign spheres in practice. To the extent that
reterritorializing sentiments are circulated in globalizing networks, this
chapter has sought to counter ideas both of post-sovereignty and of pure
nationalization, viewing mass digitization instead through the lens of late
sovereignty. As this chapter shows, the notion of late sovereignty allows us
to conceptualize mass digitization programs, such as Europeana, as globalized
phenomena couched within the language of (supra)national sovereignty. In an
age where rampant nationalist movements sweep through globalized communication
networks, this approach feels all the more urgent and applicable, not only to
mass digitization programs but also to reterritorializing communication
phenomena more broadly. Only if we take into account the ways in which the
nationalist imaginary works within the infrastructural reality of late
capitalism can we begin to account for the infrapolitics of the highly
mediated new territorial imaginaries.

## Notes

1. Lefler 2007; Henry W., “Europe’s Digital Library versus Google,” Café Babel, September 22, 2008, /europes-digital-library-versus-google.html; Chrisafis 2008.
2. While digitization did not stand apart from the political and economic developments in the rapidly globalizing world, digital theorists and activists soon took up the Internet as a metaphor for this integrative development, a sign of the inevitability of an ultimately borderless world where, as Negroponte notes, time zones would “probably play a bigger role in our digital future than trade zones” (Negroponte 1995, 228).
3. Goldsmith and Wu 2006.
4. Rogers 2012.
5. Anderson 1991.
6. “Jacques Chirac donne l’impulsion à la création d’une bibliothèque numérique,” _Le Monde_, March 16, 2005, donne-l-impulsion-a-la-creation-d-une-bibliotheque-numerique_401857_3246.html.
7. Meunier 2007.
8. As Sophie Meunier reminds us, the _Ursprung_ of the competing universalisms can be located in the two contemporary revolutions that lent legitimacy to the universalist claims of both the United States and France. In the wake of the revolutions, a perceived competition arose between these two universalisms, resulting in French intellectuals crafting anti-American arguments, not least when French imperialism “was on the wane and American imperialism on the rise.” See Meunier 2007, 141. Indeed, Meunier suggests, anti-Americanism is “as much a statement about France as it is about America—a resentful longing for a power that France no longer has” (ibid.).
9. Jeanneney 2007, 3.
10. Emile Chabal thus notes how the term is “employed by prominent politicians, serious academics, political commentators, and in everyday conversation” to “cover a wide range of stereotypes, pre-conceptions, and judgments about the Anglo-American world” (Chabal 2013, 24).
11. Chabal 2013, 24–25.
12. Jeanneney 2007.
13. While Jeanneney framed this French cultural-political endeavor as a European “contre-attaque” against Google Books, he also emphasized that his polemic was not at all to be read as a form of aggression. In particular he pointed to the difficulties of translating the word _défie_, which featured in the title of the piece: “Someone rightly pointed out that the English word ‘defy,’ with which American reporters immediately rendered _défie_, connotes a kind of violence or aggressiveness that isn’t implied by the French word. The right word in English is ‘challenge,’ which has a different implication, more sporting, more positive, more rewarding for both sides” (Jeanneney 2007, 85).
14. See pages 12, 22, and 24 in Jeanneney 2007 for a few examples.
15. On the issue of the common currency, see, for instance, Martin and Ross 2004. The idea of France as an appropriate spokesperson for Europe was familiar already in the eighteenth century, when Voltaire declared French “la Langue de l’Europe”; see Bivort 2013.
16. The official thus first noted that “Everybody is working on digitization projects … cooperation between Google and the European project could therefore well occur,” and later added that “The worst scenario we could achieve would be that we had two big digital libraries that don’t communicate. … The idea is not to do the same thing, so maybe we could cooperate, I don’t know. Frankly, I’m not sure they would be interested in digitizing our patrimony. The idea is to bring something that is complementary, to bring diversity. But this doesn’t mean that Google is an enemy of diversity.” See Labi 2005.
17. Letter from Manuel Barroso to Jacques Chirac, July 7, 2005, http://www.peps.cfwb.be/index.php?eID=tx_nawsecuredl&u=0&file=fileadmin/sites/numpat/upload/numpat_super_editor/numpat_editor/documents/Europe/Bibliotheques_numeriques/2005.07.07reponse_de_la_Commission_europeenne.pdf&hash=fe7d7c5faf2d7befd0894fd998abffdf101eecf1.
18. As one EC communication noted, a digitization project on the scale of Europeana could sharpen Europe’s competitive edge in digitization processes compared to those in the US as well as in India and China; see European Commission, “i2010: Digital Libraries,” COM(2005) 465 final, September 30, 2005, http://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52005DC0465&from=EN.
19. “Google Books raises concerns in some member states,” as an anonymous Czech diplomatic source put it; see Paul Meller, “EU to Investigate Google Books’ Copyright Policies,” _PCWorld_, May 28, 2009.
20. Pfanner 2011; Doward 2009; Samuel 2009.
21. _Amicus_ brief is a legal term that in Latin means “friend of the court.” Frequently, a person or group who is not a party to a lawsuit, but has a strong interest in the matter, will petition the court for permission to submit a brief in the action with the intent of influencing the court’s decision.
22. See chapter 4 in this volume.
23. de la Durantaye 2011.
24. Kevin J. O’Brien and Eric Pfanner, “Europe Divided on Google Book Deal,” _New York Times_, August 23, 2009; see also Courant 2009; Darnton 2009.
25. de la Durantaye 2011.
26. Viviane Reding and Charlie McCreevy, “It Is Time for Europe to Turn over a New E-Leaf on Digital Books and Copyright,” MEMO/09/376, September 7, 2009, http://europa.eu/rapid/press-release_MEMO-09-376_en.htm?locale=en.
27. European Commission, “Europeana—Next Steps,” COM(2009) 440 final, August 28, 2009, http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2009:0440:FIN:en:PDF.
28. “It is logical that the private partner seeks a period of preferential use or commercial exploitation of the digitized assets in order to avoid free-rider behaviour of competitors. This period should allow the private partner to recoup its investment, but at the same time be limited in time in order to avoid creating a one-market player situation. For these reasons, the Comité set the maximum time of preferential use of material digitised in public-private partnerships at maximum 7 years” (Niggemann 2011).
29. Walker 2003.
30. Within this complex environment it is not even possible to draw boundaries between the networked politics of the EU and the sovereign politics of member states. Instead, member states engage in double-talk. As political scientist Sophie Meunier reminds us, even member states such as France engage in double-talk on globalization, with France on the one hand becoming the “worldwide champion of anti-globalization,” and on the other hand “a country whose economy and society have quietly adapted to this much-criticized globalization” (Meunier 2003). On political two-level games, see also Putnam 1988.
31. Walker 2003.
32. “Google Books Project to Remove European Titles,” _Telegraph_, September 7, 2009, remove-European-titles.html.
33. “Europeana Factsheet,” Europeana, September 28, 2015, /copy-of-europeana-policy-illustrating-the-20th-century-black-hole-in-the-europeana-dataset.pdf.
34. C. Handke, L. Guibault, and J. J. Vallbé, “Is Europe Falling Behind in Data Mining? Copyright’s Impact on Data Mining in Academic Research,” 2015, id-12015-15-handke-elpub2015-paper-23.
35. Interview with employee, DG Copyright, EC Commission, 2010.
36. Interview with employee, DG Information and Society, EC Commission, 2010.
37. Montagnani and Borghi 2008.
38. Julia Fallon and Paul Keller, “European Parliament Demands Copyright Rules that Allow Cultural Heritage Institutions to Share Collections Online,” Europeana Pro, rules-better-fit-for-a-digital-age.
39. Jasanoff 2013, 133.
40. Ibid.
41. Tate 2001.
42. It would be tempting to suggest that the discussion on harmonization above would apply to interoperability as well. But while the concepts of harmonization and interoperability—along with the neighboring term standardization—are often used interchangeably and appear similar at first glance, they nevertheless have precise cultural-legal meanings and implicate different infrastructural set-ups. As noted above, the notion of harmonization is increasingly used in the legal context of harmonizing regulatory apparatuses—in the case of mass digitization, especially copyright laws. But the word has a richer semantic meaning, suggesting a search for commonalities, literally by means of fitting together or arranging units into a whole. As such, the notion of harmony suggests something that is both pleasing and presupposes a cohesive unit(y), for example, a door hinged to a frame, an arm hinged to a body. While used in similar terms, the notion of interoperability expresses a very different infrastructural modality. If harmonization suggests unity, interoperability rather alludes to modularity. For more on the concepts of standardization and harmonization in regulatory contexts, see Tay and Parker 1990.
43. The notion of interoperability is often used to express a system’s ability to transfer, render, and connect to useful information across systems, and calls for interoperability have increased as systems have become increasingly complex.
44. There are “myriad technical and engineering issues associated with connecting together networks, databases, and other computer-based systems”; digitized cultural memory institutions have the option of providing “a greater array of services” than traditional libraries and archives, from sophisticated search engines to document reformatting and rights negotiations; digitized cultural memory materials are often more varied than the material held in traditional libraries; and finally and most importantly, mass digitization institutions are increasingly becoming platforms that connect “a large number of loosely connected components,” because no “single corporation, professional organization, or government” would be able to provide all that is necessary for a project such as Europeana, not least on an international scale. EU-NSF Digital Library Working Group on Interoperability between Digital Libraries Position Paper, 1998.
45. _The Digicult Report: Technological Landscapes for Tomorrow’s Cultural Economy: Unlocking the Value of Cultural Heritage: Executive Summary_ (Luxembourg: Office for Official Publications of the European Communities, 2002), 80.
46. “… interoperability in organisational terms is not foremost dependent on technologies,” ibid.
47. As such they align with what Internet governance scholar Laura DeNardis calls the Internet’s “underlying principle” (see DeNardis 2014).
48. The results of the EC Working Group on Digital Library Interoperability are reported in the briefing paper by Stephan Gradmann entitled “Interoperability: A Key Concept for Large Scale, Persistent Digital Libraries” (Gradmann 2009).
49. “Semantic operability ensures that programmes can exchange information, combine it with other information resources and subsequently process it in a meaningful manner”: _European Interoperability Framework for pan-European eGovernment services_, 2004. In the case of Europeana, this could consist of the development of tools and technologies to improve the automatic ingestion and interpretation of the metadata provided by cultural institutions, for example, by mapping the names of artists so that an artist known under several names is recognised as the same person (“Council Conclusions on the Role of Europeana for the Digital Access, Visibility and Use of European Cultural Heritage,” European Council Conclusion, June 1, 2016).
50. Bowker, Baker, Millerand, and Ribes 2010.
51. Tsilas 2011, 103.
52. Borgman 2015, 46.
53. McDonough 2009.
54. Palfrey and Gasser 2012.
55. DeNardis 2011.
56. Matthew Kirschenbaum, “The .txtual Condition: Digital Humanities, Born-Digital Archives, and the Future Literary”; Palfrey and Gasser 2012; Kirschenbaum, “Distant Mirrors and the Lamp,” talk at the 2013 MLA Presidential Forum Avenues of Access session on “Digital Humanities and the Future of Scholarly Communication.”
57. Ping-Huang 2016.
58. Lessig 2005.
59. Ibid.
60. Ibid.
61. Palfrey and Gasser 2012.
62. McPherson 2012, 29.
63. Berardi, Genosko, and Thoburn 2011, 29–31.
64. For more on the nexus of freedom and control, see Chun 2006.
65. The mere act of digitization of course inflicts mobility on an object, as digital objects are kept in a constant state of migration.
66. Krysa 2006.
67. See the wealth of literature currently generated on the “curatorial turn,” for example, O’Neill and Wilson 2010; and O’Neill and Andreasen 2011.
68. Romeo and Blaser 2011.
69. Europeana Sound Connections, collections-on-a-social-networking-platform.html.
70. Ridge 2013.
71. Carolyn Dinshaw has argued for the amateur’s ability in similar terms, focusing on her potential to queer the archive (see Dinshaw 2012).
72. Stiegler 2003; Stiegler n.d. The idea of the amateur as a subversive character precedes digitization, of course. Consider Roland Barthes’s idea of the amateur as a truly subversive character that could lead to a break with existing ideologies in disciplinary societies; see, for instance, Barthes’s celebration of the amateur as an anti-bourgeois character (Barthes 1977 and Barthes 1981).
73. Not least in light of recent writings on the experience of even love itself as a form of labor (see Weigel 2016). The constellation of love as a form of labor has a long history (see Lewis 1987).
74. Raddick et al. 2009; Proctor 2013.
75. “Many companies and institutions that are successful online are good at supporting and harnessing people’s cognitive surplus. … Users get the opportunity to contribute something useful and valuable while having fun” (Sanderhoff, 33 and 36).
76. Mitropoulos 2012, 165.
77. Carpentier 2011.
78. EC Commission, “Europeana Website Overwhelmed on Its First Day by Interest of Millions of Users,” MEMO/08/733, November 21, 2008. See also Stephen Castle, “Europeana Goes Online and Is Then Overwhelmed,” _New York Times_, November 21, 2008, nytimes.com/2008/11/22/technology/Internet/22digital.html.
79. Information scholar affiliated with Europeana, interviewed by Nanna Bonde Thylstrup, Brussels, Belgium, 2011.
80. See, for instance, Martina Powell, “Bayern will mit ‘Mein Kampf’ nichts mehr zu tun haben,” _Die Zeit_, December 13, 2013, soll-erscheinen. Bavaria’s restrictive publishing policy on _Mein Kampf_ should most likely be interpreted as a case of preventive precaution on behalf of the Bavarian state’s diplomatic reputation. Yet by transferring Hitler’s author’s rights to the Bavarian Ministry, the Allies allocated _Mein Kampf_ to an existence in a gray area between private and public law. Since then, the book has been the center of attention in a rift between, on the one hand, the Ministry of Finance, which has rigorously defended its position as the formal rights holder, and, on the other hand, historians and intellectuals who, supported by the Bavarian science minister Wolfgang Heubisch, have argued that an academically annotated version of _Mein Kampf_ should be made publicly accessible in the name of Enlightenment.
81. Latour 2007.
82. Europeana’s more traditional curatorial approach to mass digitization was criticized not only by the media, but also by others involved in mass digitization projects, who claimed that Europeana had fundamentally misunderstood the point of mass digitization. One engineer working on mass digitization projects at the influential cultural software developer organization IRI argued that Europeana’s production pattern was comparable to “launching satellites” without thinking of the messages that are returned by the satellites. Google, he argued, was differently attuned to the importance of feedback, because “feedback is their business.”
83. In the most recent published report, Germany contributes about 15 percent and France around 16 percent of the total amount of available works. At the same time, Belgium and Slovenia each count for only around 1 percent, and Denmark, along with Greece, Luxembourg, Portugal, and a slew of other countries, does not even achieve representation in the pie chart; see “Europeana Content Report,” August 6, 2015, /europeana-dsi-ms7-content-report-august.pdf.
84. Europeana information scholar interview, 2011.
85. Ibid.
86. Wiebe de Jager, “MS15: Annual Traffic Report and Analysis,” Europeana, May 31, 2014.

# 4
The Licit and Illicit Nature of Mass Digitization

## Introduction: Lurking in the Shadows

A friend has just recommended an academic book to you, and now you are dying
to read it. But you know that it is both expensive and hard to get hold of.
You head down to your library to request the book, but you soon realize
that the wait list is enormous and that you will not get your hands on it for
a couple of weeks. Desperate, you turn to your friend for help. She asks,
“Why don’t you just go to a pirate library?” and provides you with a link. A
new world opens up. Twenty minutes later you have downloaded 30 books that you
feel are indispensable to your bookshelf. You didn’t pay a thing. You know
what you did was illegal. Yet you also feel strangely justified in your
actions, not least spurred on by the enthusiastic words on the shadow
library’s front page, which set forth a comforting moral compass. You begin
thinking to yourself: “Why are pirate libraries deemed more illegal than
Google’s controversial scanning project?” and “What are the moral implications
of my actions vis-à-vis the colonial framework that currently dictates
Europeana’s copyright policies?”

The existence of what this book terms shadow libraries raises difficult
questions, not only for your own moral compass but also for the field of mass
digitization. Political and popular discourses often reduce the complexity of
these questions to “right” and “wrong” and to Hollywood narratives of pirates
and avengers. Yet this chapter explores the deeper infrapolitical
implications of shadow libraries, setting out the argument that shadow
libraries offer us a productive framework for examining the highly complex
legal landscape of mass digitization. Rather than either supporting or
countering shadow libraries, the chapter seeks to chart the complexity of the
phenomenon and tease out its relevance for mass digitization by framing it
within what we might call an infrapolitics of parasitism.

In _The Parasite_, a strange and fabulating book that brings together
information theory and cybernetics, physics, philosophy, economy, biology,
politics, and folk tales, French philosopher Michel Serres constructs an
argument about the conceptual figure of the parasite to explore the parasitic
nature of social relations. In a dizzying array of images and thought-
constructs, Serres argues against the idea of a balanced exchange of energy,
suggesting instead that our world is characterized by one parasite stealing
energy by feeding on another organism. For this purpose he reminds us that in
French the term parasite has three distinct, but related, meanings. The first
relates to one organism feeding off another and giving nothing in return. The
second refers to the social concept of the freeloader, who lives off society
without giving anything in return. Both of these meanings are fairly familiar
to most, and they lay the groundwork for our annoyance with both bugs and
spongers. The third meaning, however, is little known outside French: here the
parasite is static noise or interference in a channel, interrupting the
seemingly balanced flow of things, mediating and thus transforming relations.
Indeed, for Serres, the parasite is itself a disruptive relation (rather than
an entity). The parasite can also shift between the positions of sender,
receiver, and noise, making it exceedingly difficult to discern parasite from
nonparasite; indeed, to such an extent that Serres himself exclaims, “I no
longer really know how to say it: the parasite parasites the parasites.”1
Serres thus uses
his parasitic model to make a claim about the nature of cybernetic
technologies and the flow of information, arguing that “cybernetics gets more
and more complicated, makes a chain, then a network. Yet it is founded on the
theft of information, quite a simple thing.”2 The logic of the parasite,
Serres argues, is the logic of the interrupter, the “excluded third” or
“uninvited guest” who intercepts and confuses relations in a process of theft
that has a value both of destruction and a value of construction. The parasite
is thus a generative force, inventing, affecting, and transforming relations.
Hence, parasitism refers not only to an act of interference but also to an
interruption that “invents something new.”3

Michel Serres’s then-radical philosophy of the parasite is today echoed by a
broader recognition of the parasite as not only a dangerous entity, but also a
necessary mediator. Indeed, as Jeanette Samyn notes, we are today witnessing a
“pro-parasitic” movement in science in which “scientists have begun to
consider parasites and other pathogens not simply as problems but as integral
components of ecosystems.”4 In this new view, “… the parasite takes from its
host without ever taking its place; it creates new room, feeding off excess,
sometimes killing, but often strengthening its milieu.” In the following
sections, the lens of the parasite will help us explore the murky waters of
shadow libraries, not (only) as entities, but also as relational phenomena.
The point is to show how shadow libraries belong to the same infrapolitical
ecosystem as Google Books and Europeana, sometimes threatening them, but often
also strengthening them. Moreover, it seeks to show how visitors’ interactions
with shadow libraries are also marked by parasitical relations with Google,
which often mediates literature searches, thus entangling Google and shadow
libraries in a parasitical relationship where one feeds off the other and vice
versa.

Despite these entangled relations, the mass digitization strategies of shadow
libraries, Europeana, and Google Books differ significantly. Basically, we
might say that Google Books and Europeana each represent different strategies
for making material available on an industrial scale while maintaining claims
to legality. The sprawling and rapidly growing group of mass digitization
projects interchangeably termed shadow libraries represents a third set of
strategies. Shadow libraries5 share affinities with Europeana and Google Books
in the sense that they offer many of the same services: instant access to a
wealth of cultural works spanning journal articles, monographs, and textbooks
among others. Yet, while Google Books and Europeana promote visibility to
increase traffic, embed themselves in formal systems of communication, and
operate within the legal frameworks of public funding and private contracting,
shadow libraries in contrast operate in the shadows of formal visibility and
regulatory systems. Hence, while formal mass digitization projects such as
Google Books and Europeana publicly proclaim their desire to digitize the
world’s cultural memory, another layer of people, scattered across the globe
and belonging to very diverse environments, harbor the same aspirations, but
in much more subtle terms. Most of these people express an interest in the
written word, a moral conviction of free access, and a political view on
existing copyright regulations as unjust and/or untimely. Some also express
their fascination with the new wonders of technology and their new
infrastructural possibilities. Others merely wish to practice forms of access
that their finances, political regime, or geography would otherwise deny
them. And all of them are important nodes in a new shadowy
infrastructural system that provides free access worldwide to books and
articles on a scale that collectively far surpasses both Google and Europeana.

Because of their illicit nature, most analyses of shadow libraries have
centered on their legal transgressions. Yet their cultural trajectories
contain nuances that far exceed legal binaries. Approaching shadow libraries
through the lens of infrapolitics is helpful for bringing forth these much
more complex cultural mass digitization systems. This chapter explores three
examples of shadow libraries, focusing in particular on their stories of
origin, their cultural economies, and their sociotechnical infrastructures.
Not all shadow libraries fit perfectly into the category of mass digitization.
Some of them are smaller in size, more selective, and less industrial.
Nevertheless, I include them because their open access strategies allow for
unlimited downloads. Thus shadow libraries, while perhaps limited in size
themselves, offer the opportunity to reproduce works at a massive and
distributed scale. As such, they are the perfect example of a mass
digitization assemblage.

The first case centers on lib.ru, an early Russia-based file-sharing platform
for exchanging books that today has grown into a massive and distributed file-
sharing project. It is primarily run by individuals, but it has also received
public funding, which shows that what at first glance appears as a simple case
of piracy simultaneously serves as a much more complex infrapolitical
structure. The second case, Monoskop, distinguishes itself by its boutique
approach to digitization. Monoskop too is characterized by its territorial
trajectory, rooted in Bratislava’s digital scene as an attempt to establish an
intellectual platform for the study of avant-garde (digital) cultures that
could connect its Bratislava-based creators to a global scene. Finally, the
chapter looks at UbuWeb, a shadow library dedicated to avant-garde cultural
works ranging from text and audio to images and film. Founded in 1996 as a US-
based noncommercial file-sharing site by poet Kenneth Goldsmith in response to
the marginal distribution of crucial avant-garde material, UbuWeb today offers
a wealth of avant-garde sound art, video, and textual works.

As the case studies show, shadow libraries have become significant mass
digitization infrastructures that offer the user free access to academic
articles and books, often by means of illegal file-sharing. They are informal
and unstable networks that rely on active user participation across a wide
spectrum, from deeply embedded people who have established file-sharing sites
to the everyday user occasionally sending the odd book or article to a friend
or colleague. As Lars Eckstein notes, most shadow libraries are characterized
not only by their informal character, but also by the speed with which they
operate, providing “a velocity of media content” which challenges legal
attacks and other forms of countermeasures.6 Moreover, shadow libraries also
often operate in a much more widely distributed fashion than both Europeana
and Google, distributing and mirroring content across multiple servers, and
distributing labor and responsibility in a system that is on the one hand more
robust, more redundant, and more resistant to any single point of failure or
control, and on the other hand more ephemeral, without a central point of
back-up. Indeed, some forms of shadow libraries exist entirely without a
center, instead operating infrastructurally along communication channels in
social media; for example, the use of the Twitter hashtag #ICanHazPDF to help
pirate scientific papers.
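
The distributed, mirrored architecture described above can be pictured in a few lines of code. The following is a minimal illustrative sketch, not the code of any actual shadow library: the mirror URLs, the `fetch_from_mirrors` function, and the item path are hypothetical placeholders, assuming only that the same file is replicated across several independent hosts.

```python
# Minimal sketch of mirror-based redundancy (hypothetical hosts and paths).
import urllib.request

MIRRORS = [
    "https://mirror-one.example.org",
    "https://mirror-two.example.net",
    "https://mirror-three.example.com",
]

def fetch_from_mirrors(path: str) -> bytes:
    """Try each mirror in turn; access survives any single host failing."""
    errors = []
    for base in MIRRORS:
        try:
            with urllib.request.urlopen(f"{base}/{path}", timeout=10) as resp:
                return resp.read()
        except OSError as err:  # covers URLError, timeouts, connection resets
            errors.append((base, err))
    raise RuntimeError(f"all mirrors failed: {errors}")
```

The design point is the redundancy the paragraph above describes: any single mirror can disappear, and the loop simply moves on to the next host, while the absence of a central back-up means the collection only persists as long as some mirror does.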

Today, shadow libraries exist as timely reminders of the infrapolitical nature
of mass digitization. They appear as hypertrophied versions of the access
provided by Google Books and Europeana. More fundamentally, they also exist as
political symptoms of the ideologies of the digital, characterized by ideals
of velocity and connectivity. As such, we might say that although shadow
libraries often position themselves as subversives, in many ways they also
belong to the same storyline as other mass digitization projects such as
Google Books and Europeana. Significantly, then, shadow libraries are
infrapolitical in two senses: first, they have become central infrastructural
elements in what James C. Scott calls the “infrapolitics of subordinate
groups,” providing everyday resistance by creating entrance points to
hitherto-excluded knowledge zones.7 Second, they represent and produce the
infrapolitics of the digital _tout court_ with their ideals of real-time,
globalized, and unhindered access.

## Lib.ru

Lib.ru is one of the earliest known digital shadow libraries. It was
established by the Russian computer science professor Maxim Moshkov, who
complemented his academic practice of programming with a personal hobby of
file-sharing on the so-called RuNet, the Russian-language segment of the
Internet.8 Moshkov’s collection had begun as an e-book swapping practice in
1990, but in 1994 he uploaded the material to his institute’s web server where
he then divided the site into several sections such as “my hobbies,” “my work,”
and “my library.”9 If lib.ru began as a private project, however, the role of
Moshkov’s library soon changed as it quickly became Russia’s preferred shadow
library, with users playing an active role in its expansion by constantly
adding new digitized books. Users would continually scan and submit new texts,
while Moshkov, in his own words, worked as a “receptionist” receiving and
handling the material.10

Shadow libraries such as Moshkov’s were most likely born not only out of a
love of books, but also out of frustration with Russia’s lack of access to up-
to-date and affordable Western works.11 As they continued to grow and gain in
popularity, shadow libraries thus became not only points of access, but also
signs of infrastructural failure in the formal library system.12 After lib.ru
outgrew its initial server storage at Moshkov’s institute, Moshkov divided it
into smaller segments that were then distributed, leaving only the Russian
literary classics on the original site.13 Neighboring sites hosted other
genres, ranging from user-generated texts and fan fiction on a shadow site
called [samizdat.lib.ru](http://samizdat.lib.ru) to academic books in a shadow
library titled Kolkhoz, named after the commons-based agricultural cooperative
of the early Soviet era and curated and managed by “amateur librarians.”14 The
steadily accumulating numbers of added works, digital distributors, and online
access points expanded not only the range of the shadow collections, but also
their networked affordances. Lib.ru and its offshoots thus grew into an
influential node in the global mass digitization landscape, attracting both
political and legal attention.

### Lib.ru and the Law

Until 2004, lib.ru handled copyright complaints simply by removing works at
the first request from their authors.15 But in 2004 the library received its
first significant copyright claim from the big Russian publisher Kirill i
Mefody (KM). KM requested that Moshkov remove access to a long list of books,
claiming exclusive Internet rights to them, including works that were
considered public domain. Moshkov refused to honor the request, and a lawsuit
ensued. The Ostankino Court of Moscow initially dismissed the lawsuit because
the contracts for exclusive Internet rights were considered invalid. This did
not deter KM, however, which then approached the case from a different
perspective, filing applications on behalf of well-known Russian authors,
including the crime author Alexandra Marinina and the science fiction writer
Eduard Gevorkyan. In the end, only Gevorkyan maintained his claim, which came
to the considerable sum of one million rubles.16

During the trial, Moshkov’s library received widespread support from both
technologists and users of lib.ru, expressed, for example, in a manifesto
signed by the International Union of Internet Professionals, which among other
things touched upon the importance of online access not only to cultural works
but also to the Russian language and culture:

> Online libraries are an exceptionally large intellectual fund. They lessen
the effect of so-called “brain drain,” permitting people to stay in the orbit
of Russian language and culture. Without online libraries, the useful effect
of the Internet and computers in Russian education system is sharply lowered.
A huge, openly available mass of Russian literary texts is a foundation
permitting further development of Russian-language culture, worldwide.17

Emphasizing that Moshkov often had an agreement with the authors he put
online, the manifesto also called for a more stable model of online public
libraries, noting that “A wide list of authors who explicitly permitted
placing their works in the lib.ru library speaks volumes about the
practicality of the scheme used by Maxim Moshkov. However, the litigation
underway shows its incompleteness and weak spots.”18 Significantly, Moshkov’s
shadow library also received both moral and financial support from the state,
more specifically in the form of funding of one million rubles granted by the
Federal Agency for the Press and Mass Media. The funding came with the
following statement from the Agency’s chairman, Mikhail Seslavinsky:
“Following the lively discussion on how copyright could be protected in
electronic libraries, we have decided not to wait for a final decision and to
support the central library of RuNet—Maxim Moshkov’s site.”19 Seslavinsky’s
support not only reflected the public’s support of the digital library, but
also his own deep-seated interests as a self-confessed bibliophile, council
chair of the Russian organization National Union of Bibliophiles since 2011,
and author of numerous books on bibliology and bibliophilia. Additionally, the
support also reflected the issues at stake for the Russian legislative
framework on copyright: a revised law “On Copyright and Related Rights” had
just passed its second reading in the Russian parliament on April 21, 2004,
extending copyright from 50 years after an author’s death to 70 years, in
accordance with international law and as a condition of Russia’s entry into
the World Trade Organization.20

The public funding, Moshkov stated, was spent on modernizing the technical
equipment for the shadow library, including upgrading servers and performing
OCR scanning on select texts.21 Yet, despite the widespread support, Moshkov
lost the copyright case to KM on May 31, 2005. The defeat was limited,
however. Indeed, one might even read the verdict as a symbolic victory for
Moshkov, as the court fined him only 30,000 rubles, a fraction of what KM
had originally sued for. The verdict did have significant consequences for how
Moshkov manages lib.ru, however. After the trial, Moshkov began extending his
classical literature section and stopped uploading books sent by readers into
his collection, unless they came from authors who submitted them because they
wished to publish in digital form.

What can we glean from the story of lib.ru about the infrapolitics of mass
digitization? First, the story of lib.ru illustrates the complex and
contingent historical trajectory of shadow libraries. Second, as the next
section shows, it offers us the possibility of approaching shadow libraries
from an infrastructural perspective, and exploring the infrapolitical
dimensions of shadow libraries in the area of tension between resistance and
standardization.

### The Infrapolitics of Lib.ru: Infrastructures of Culture and Dissent

While global in reach, lib.ru is first and foremost a profoundly
territorialized project. It was born out of a set of political, economic, and
aesthetic conditions specific to Russia and carries the characteristics of its
cultural trajectory. First, the private governance of lib.ru, initially
embodied by Moshkov, echoes the general development of the Internet in Russia
from 1991 to 1998, which was constructed mainly by private economic and
cultural initiatives at a time when the state was in a period of heavy
transition. Lib.ru’s minimalist programming style also made it a cultural
symbol of the early RuNet, acting as a marker of cultural identity for Russian
Internet users at home and abroad.22

The infrapolitics of lib.ru also carry the traits of the media politics of
Russia, which has historically been split into two: a political and visible
level of access to cultural works (through propaganda), and an infrapolitical
invisible level of contestation and resistance, enabling Russian media
consumers to act independently from official institutionalized media channels.
Indeed, some scholars tie the practice of shadow libraries to the Soviet
Union’s analog shadow activities, often termed _samizdat_: illegal cultural
distribution, including illegally listening to Western radio, trafficking
Western music, and watching Western films.23 Despite often circulating Western
pop culture, late-Soviet-era samizdat practices were often framed as
noncapitalist practices of dissent without profit motives.24
expressed. Lacking the defining fervor of a clear political ideology, and
offering no initiatives to overthrow the Soviet regime, samizdat was rather a
mode of dissent that evaded centralized ideological control. Indeed, as
Aleksei Yurchak notes, samizdat practices could even be read as a mode of
“suspending the political,” thus “avoiding the political concerns that had a
binary logic determined by the sovereign state” to demonstrate “to themselves
and to others that there were subjects, collectivities, forms of life, and
physical and symbolic spaces in the Soviet context that, without being overtly
oppositional or even political, exceeded that state’s abilities to define,
control, and understand them.”25 Yurchak thus reminds us that even though
samizdat was practiced as a form of nonpolitical practice, it nevertheless
inherently had significant political implications.

The infrapolitics of samizdat not only referred to a specific social practice
but were also, as Ann Komaromi reminds us, a particular discourse network
rooted in the technology of the typewriter: “Because so many people had their
own typewriters, the production of samizdat was more individual and typically
less linked to ideology and organized political structures. … The circulation
of Samizdat was more rhizomatic and spontaneous than the underground
press—samizdat was like mushroom ‘spores.’”26 The technopolitical
infrastructure of samizdat changed, however, with the fall of the Berlin Wall
in 1989, the further decentralization of the Russian media landscape, and the
emergence of digitization. Now, new nodes emerged in the Russian information
landscape, and there was no centralized authority to regulate them. Moreover,
the transition to the Western capitalist system gave rise to new types of
shadow activity that produced items instead of just sharing them, adding a
new consumerist dimension to shadow libraries. Indeed, as Kuznetsov notes, the
late-Soviet samizdat created a dynamic textual space that aligned with more
general tendencies in mass digitization where users were “both readers and
librarians, in contrast to a traditional library with its order, selection,
and strict catalogisation.”27

If many of the new shadow libraries that emerged in the 1990s and 2000s were
inspired by the infrapolitics of samizdat, then, they also became embedded in
an infrastructural apparatus that was deeply nested within a market economy.
Indeed, new digital libraries emerged under such names as Aldebaran,
Fictionbook, Litportal, Bookz.ru, and Fanzin, which developed new platforms
for the distribution of electronic books under the label “Liters,” offering
texts to be read free of charge on a computer screen or downloaded at a
cost.28 In both cases, the authors receive a fee, either from the price of the
book or from the site’s advertising income. Accompanying these new commercial
initiatives, a countermovement gathered around Librusek,
a platform hosted on a server in Ecuador that offered its users the
possibility of uploading works on a distributed basis.29 In contrast to
Moshkov’s centralized control, then, the library’s operator Ilya Larin adhered
to the international piracy movement, calling his site a pirate library and
gracing Librusek’s website with a small animated pirate, complete with sabre
and parrot.

The integration and proliferation of samizdat practices into a complex
capitalist framework produced new global readings of the infrapolitics of
shadow libraries. Rather than reading shadow libraries as examples of late-
socialist infrapolitics, scholars also framed them as capitalist symptoms of
“market failure,” that is, the failure of the market to meet consumer
demands.30 One prominent example of such a reading was the influential Social
Science Research Council report edited by Joe Karaganis in 2011, titled “Media
Piracy in Emerging Economies,” which noted that cultural piracy appears most
notably as “a failure to provide affordable access to media in legal markets”
and concluded that within the context of developing countries “the pirate
market cannot be said to compete with legal sales or generate losses for
industry. At the low end of the socioeconomic ladder where such distribution
gaps are common, piracy often simply is the market.”31

In the Western world, Karaganis’s reading was a progressive response to the
otherwise traditional approach to media piracy as a legal failure, which
argued that tougher laws and increased enforcement are needed to stem
infringing activity. Yet, this book argues that Karaganis’s report, and the
approach it represents, also frames the infrapolitics of shadow libraries
within a consumerist framework that excises the noncommercial infrapolitics of
samizdat from the picture. The increasing integration of Russian media
infrapolitics into Western apparatuses, and the reframing of shadow libraries
from samizdat practices of political dissent to market failure, situates the
infrapolitics of shadow libraries within a consumerist dispositive and the
individual participants as consumers. As some critical voices suggest, this
has an impact on the political potential of shadow libraries because they—in
contrast to samizdat—actually correspond “perfectly to the industrial
production proper to the legal cultural market production.”32 Yet, as the
final section in this chapter shows, one also risks missing the rich nuances
of infrapolitics by conflating consumerist infrastructures with consumerist
practice.33

The political stakes of shadow libraries such as lib.ru illustrate the
difficulties in labeling shadow libraries in political terms, since they are
driven neither by pure globalized dissent nor by pure globalized and
commodified infrastructures. Rather, they straddle these binaries as
infrapolitical entities, the political dynamics of which align both with
standardization and dissent. Revisiting once more the theoretical debate, the
case of lib.ru shows that shadow libraries may certainly be global phenomena,
yet one should be careful not to disregard the specific cultural-political
trajectories that shape each individual shadow library. Lib.ru demonstrates
how the infrapolitics of shadow libraries emerge as infrastructural
expressions of the convergence between historical sovereign trajectories,
global information infrastructures, and public-private governance structures.
Shadow libraries are not just globalized projects that exist in parallel to
sovereign state structures and global economic flows. Instead, they are
entangled in territorial public-private governance practices that produce
their own late-sovereign infrapolitics, which, paradoxically, are embedded in
larger mass digitization problematics, both on their own territory and on the
global scene.

## Monoskop

In contrast to the broad and distributed infrastructure of lib.ru, other
shadow libraries have emerged as specialized platforms that cater to a
specific community and encourage a specific practice. Monoskop is one such
shadow library. Like lib.ru, Monoskop started as a one-man project and in many
respects still reflects its creator, Dušan Barok, who is an artist, writer,
and cultural activist involved in critical practices in the fields of
software, art, and theory. Prior to Monoskop, his activities were mainly
focused on the Bratislava cultural media scene, and Monoskop was among other
things set up as an infrastructural project, one that would not only offer
content but also function as a form of connectivity that could expand the
networked powers of the practices of which Barok was a part.34 In particular,
Barok was interested in researching the history of media art so that he could
frame the avant-garde media practices in which he engaged in Bratislava within
a wider historical context and thus lend them legitimacy.

### The Shadow Library as a Legal Stratagem

Monoskop was partly motivated by Barok’s own experiences of being barred from
works he deemed of significance to the field in which he was interested. As he
notes, the main impetus to start a blog “came from a friend who had access to
PDFs of books I wanted to read but could not afford to buy as they were not
available in public libraries.”35 Barok thus began to work on Monoskop with a
group of friends in Bratislava, initially hiding it from search engine bots to
create a form of invisibility that obfuscated its existence without, however,
preventing people from finding the Log and uploading new works. Information
about the Log was distributed through mailing lists on Internet culture, as
well as through posts on e-book torrent trackers, DC++ networks, extensive
repositories such as LibGen and Aaaaarg, cloud directories, document-sharing
platforms such as Issuu and Scribd, and digital libraries such as the Internet
Archive and Project Gutenberg.36 The shadow library of Monoskop thus slowly
began to emerge, partly through Barok’s own efforts at navigating email lists
and downloading material, and partly through people approaching Monoskop
directly, sending it links to online or scanned material and even offering it
entire e-book libraries. Rather than posting these “donated” libraries in
their entirety, however, Barok and his colleagues edited the received
collection and materials so that they would fit Monoskop’s scope, and they
also kept scanning material themselves.

Today Monoskop hosts thematically curated collections of downloadable books on
art, culture, media studies, and other topics, partly in order to stimulate
“collaborative studies of the arts, media, and humanities.”37 Indeed, Monoskop
operates with a _boutique_ approach, offering relatively small collections of
personally selected publications to a steady following of loyal patrons who
regularly return to the site to explore new works. Its focal points are
summarized by its contents list, which is divided into three main categories:
“Avant-garde, modernism and after,” “Media culture,” and “Media, theory and
the humanities.” Within these three broad focal points, hundreds of links
direct the user to avant-garde magazines, art exhibitions and events, art and
design schools, artistic and cultural themes, and cultural theorists.
Importantly, shadow libraries such as Monoskop do not just host works
unbeknownst to the authors—authors also leak their own works. Thus, some
authors publishing with brand-name, for-profit, all-rights-reserving, print-
on-paper-only publishing houses will also circulate a copy of their work on a
free text-sharing network such as Monoskop.38

How might we understand Monoskop’s legal situation and maneuverings in
infrapolitical terms? Shadow libraries such as Monoskop draw their
infrapolitical strength not only from the content they offer but also from
their mode of engagement with the gray zones of new information
infrastructures. Indeed, the infrapolitics of shadow libraries such as
Monoskop can perhaps best be characterized as a stratagematic form of
infrapolitics. Monoskop neither inhabits the passive perspective of the
digital spectator nor deploys a form of tactics that aims to be failure free.
Rather, it exists as a body of informal practices and knowledges, as cunning
and dexterous networks that actively embed themselves in today’s
sociotechnical infrastructures. It operates with high sociotechnical
sensibilities, living off of the social relations that bring it into being and
stabilize it. Most significantly, Monoskop skillfully exploits the cracks in
the infrastructures it inhabits, interchangeably operating, evading, and
accompanying them. As Matthew Fuller and Andrew Goffey point out in their
meditation on stratagems in digital media, they do “not cohere into a system”
but rather operate as “extensive, open-ended listing[s]” that “display a
certain undecidability because inevitably a stratagem does not describe or
prescribe an action that is certain in its outcome.”39 Significantly, then,
failures and errors not only represent negative occurrences in stratagematic
approaches but also appeal to willful dissidents as potentially beneficial
tools. Dušan Barok’s response to a question about the legal challenges against
Monoskop evidences this stratagematic approach, as he replies that shadow
libraries such as Monoskop operate in the “gray zone,” which to him is also
the zone of fair use.40 Barok thus highlights the ways in which Monoskop
engages with established media infrastructures, not only on the level of
discursive conventions but also through their formal logics, technical
protocols, and social proprieties.

Thus, whereas Google lights up gray zones through spectacle and legal power
plays, and Europeana shuns gray zones in favor of the law, Monoskop literally
embraces its shadowy existence in the gray zones of the law. By working in the
shadows, Monoskop and likeminded operations highlight the ways in which the
objects they circulate (including the digital artifacts, their knowledge
management, and their software) can be manipulated and experimented upon to
produce new forms of power dynamics.41 Their ethics lie more in the ways in
which they operate as shadowy infrastructures than in intellectual reflections
upon the infrastructures they counter, without, however, creating an
opposition between thinking and doing. Indeed, as its history shows, Monoskop
grew out of a desire to create a space for critical reflection. The
infrapolitics of Monoskop is thus an infrapolitics of grayness that marks the
breakdown of clearly defined contrasts between legal and illegal, licit and
illicit, desire and control, instead providing a space for activities that are
ethically ambiguous and in which “everyone is sullied.”42

### Monoskop as a Territorializing Assemblage

While Monoskop’s stratagems play on the infrapolitics of the gray zones of
globalized digital networks, the shadow library also emerges as a late-
sovereign infrastructure. As already noted, Monoskop was from the outset
focused on surfacing and connecting art and media objects and theory from
Central and Eastern Europe. Often, this territorial dimension recedes into the
background, with discussions centering more on the site’s specialized catalog
and legal maneuvers. Yet Monoskop was initially launched partly as a response
to criticisms of the new media scenes in the Slovak and Czech Republics as
“incomprehensible avant-garde.”43 It began as a simple invite-only wiki in
August 2004, urging participants to collaboratively research the
history of media art. It was from the beginning conceived more as a
collaborative social practice and less as a material collection, and it
targeted noninstitutionalized researchers such as Barok himself.

As the nodes in Monoskop grew, its initial aim to research media art history
also expanded into looking at wider cultural practices. By 2010, it had grown
into a 100-gigabyte collection, which was organized as a snowball research
collection, focusing in particular on “the white spots in history of art and
culture in East-Central Europe,” spanning “dozens of CDs, DVDs, publications,
as well as recordings of long interviews [Barok] did”44 with various people he
considered forerunners in the field of media arts. Indeed, Barok at first had
no plans to publish the collection of materials he had gathered over time. But
during his research stay in Rotterdam at the influential Piet Zwart Institute,
he met the digital scholars Aymeric Mansoux and Marcell Mars, who were both
active in avant-garde media practices, and they convinced him to upload the
collection.45 Due to the fragmentary character of his collection, Barok found
that Monoskop corresponded well with the pre-existing wiki, to which he began
connecting and embedding videos, audio clips, image files, and works. An
important motivating factor was the publication of material that was otherwise
unavailable online. In 2009, Barok launched Monoskop Log, together with his
colleague Tomáš Kovács. This site was envisioned as an affiliated online
repository of publications for Monoskop, or, as Barok terms it, “a free access
living archive of writings on art, culture, and media technologies.”46

Seeking to create situated spaces of reflection and to shed light on the
practices of media artists in Eastern and Central Europe, Monoskop thus
launched several projects devoted to excavating media art from a situated
perspective that takes its local history into account. Today, Monoskop remains
a rich source of information about artistic practices in Central and Eastern
Europe, including Poland, Hungary, Slovakia, and the Czech Republic, relating it not
only to the art histories of the region, but also to its history of
cybernetics and computing.

Another early motivation for Monoskop was to provide a situated nodal point
within globalized information infrastructures, one that emphasized the
geographical trajectories that had given rise to it. As Dušan Barok notes in an interview,
“For a Central European it is mind-boggling to realize that when meeting a
person from a neighboring country, what tends to connect us is not only
talking in English, but also referring to things in the far West. Not that the
West should feel foreign, but it is against intuition that an East-East
geographical proximity does not translate into a cultural one.”47 From this
perspective, Monoskop appears not only as an infrapolitical project of global
knowledge, but also one of situated sovereignty. Yet, even this territorial
focus holds a strategic dimension. As Barok notes, Monoskop’s ambition was not
only to gain new knowledge about media art in the region, but also to cash in
on the cultural capital into which this knowledge could potentially be
converted. Thus, its territorial matrix first and foremost translates into
Foucault’s famous dictum that “knowledge is power.” But it is nevertheless
also testament to the importance of including more complex spatial dynamics in
one’s analytical matrix of shadow libraries, if one wishes to understand them
as more than globalized breakers of code and arbiters of what Manuel Castells
once called the “space of flows.”48

## UbuWeb

If Monoskop is one of the most comprehensive shadow libraries to emerge from
critical-artistic practice, UbuWeb is one of the earliest ones and has served
as an inspirational example for Monoskop. UbuWeb is a website that offers an
encyclopedic scope of downloadable audio, video, and plain-text versions of
avant-garde art recordings, films, and books. Most of the books fall in the
category of small-edition artists’ books and are presented on the site with
permission from the artists in question, who are not so concerned with
potential loss of revenue since most of the works are officially out of print
and never made any money even when they were commercially available. At first
glance, UbuWeb’s aesthetics appear almost demonstratively spare. Still
formatted in HTML, it upholds a certain 1990s net aesthetics that has resisted
the revamps offered by the new century’s more dynamic infrastructures. Yet, a
closer look reveals that UbuWeb offers a wealth of content, ranging from high
art collections to much more rudimentary objects. Moreover, and more
fundamentally, its critical archival practice raises broader infrapolitical
questions of cultural hierarchies, infrastructures, and domination.

### Shadow Libraries between Gift Economies and Marginalized Forms of
Distribution

UbuWeb was founded by poet Kenneth Goldsmith in response to the marginal
distribution of crucial avant-garde material. It provides open access both to
out-of-print works that find a second life through digital art reprint and to
the work of contemporary artists. Upon its opening in 2001, Kenneth Goldsmith
termed UbuWeb’s economic infrastructure a “gift economy” and framed it as a
political statement that highlighted certain problems in the distribution of
and access to intellectual materials:

> Essentially a gift economy, poetry is the perfect space to practice utopian
politics. Freed from profit-making constraints or cumbersome fabrication
considerations, information can literally “be free”: on UbuWeb, we give it
away. … Totally independent from institutional support, UbuWeb is free from
academic bureaucracy and its attendant infighting, which often results in
compromised solutions; we have no one to please but ourselves. … UbuWeb posts
much of its content without permission; we rip full-length CDs into sound
files; we scan as many books as we can get our hands on; we post essays as
fast as we can OCR them. And not once have we been issued a cease and desist
order. Instead, we receive glowing emails from artists, publishers, and record
labels finding their work on UbuWeb, thanking us for taking an interest in
what they do; in fact, most times they offer UbuWeb additional materials. We
happily acquiesce and tell them that UbuWeb is an unlimited resource with
unlimited space for them to fill. It is in this way that the site has grown to
encompass hundreds of artists, thousands of files, and several gigabytes of
poetry.49

At the time of its launch, UbuWeb garnered extraordinary attention and divided
communities along lines of access and rights to historical and contemporary
artists’ media. It was in this range of responses to UbuWeb that one could
discern the formations of new infrastructural positions on digital archives,
how they should be made available, and to whom. Yet again, these legal
positions were accompanied by a territorial dynamic, including the impact of
regional differences in cultural policy on UbuWeb. Thus, as artist Jason Simon
notes, there were significant differences between the ways in which European
and North American distributors related to UbuWeb. These differences, Simon
points out, were rooted in “medium-specific questions about infrastructure,”
which differ “from the more interpretive discussion that accompanied video’s
wholesale migration into fine art exhibition venues.”50 European pre-recession
public money thus permitted nonprofit distributors to embrace infrastructures
such as UbuWeb, while American distributors were much more hesitant toward
UbuWeb’s free-access model. When recession hit Europe in the late 2000s,
however, the European links to UbuWeb’s infrastructures crumbled while “the
legacy American distributors … have been steadily adapting.”51 The territorial
modulations in UbuWeb’s infrastructural set-up testify not only to how shadow
libraries such as UbuWeb are inherently always linked up to larger political
events in complex ways, but also to the latent ephemerality of the entire project.

Goldsmith has more than once asserted that UbuWeb’s insistence on
“independent” infrastructures also means a volatile existence: “… by the time
you read this, UbuWeb may be gone. Cobbled together, operating on no money and
an all-volunteer staff, UbuWeb has become the unlikely definitive source for
all things avant-garde on the internet. Never meant to be a permanent archive,
Ubu could vanish for any number of reasons: our ISP pulls the plug, our
university support dries up, or we simply grow tired of it.” Goldsmith’s
emphasis on the ephemerality of UbuWeb is a shared condition of most shadow
libraries, most of which exist only as ghostly reminders with nonfunctional
download links or simply as 404 pages, once they pull the plug. Rather than
lamenting this volatile existence, however, Goldsmith embraces it as an
infrapolitical stance. As Cornelia Sollfrank points out, UbuWeb was—and still
is—as much an “archival critical practice that highlights the legal and social
ramifications of its self-created distribution and archiving system as it is
about the content hosted on the site.”52 UbuWeb is thus not so much about
authenticity as it is about archival defiance, appropriation, and self-
reflection. Such broader and deeper understandings of archival theory and
practice allow us to conceive of it as the kind of infrapolitics that,
according to James C. Scott, “provides much of the cultural and structural
underpinning of the more visible political action on which our attention
has generally been focused.”53 The infrapolitics of UbuWeb is devoted to
hatching new forms of organization, creating new enclaves of freedom in the
midst of orthodox ways of life, and inventing new structures of production and
dissemination that reveal not only the content of their material but also
their marginalized infrastructural conditions and the constellation of social
forces that lead to their online circulation.54

The infrapolitics of UbuWeb is testament not only to avant-garde cultures, but
also to what Hito Steyerl in her essay “In Defense of the Poor Image” refers
to as the “neoliberal radicalization of the culture as commodity” and the
“restructuring of global media industries.”55 These materials “circulate
partly in the void left by state organizations” that find it too difficult to
maintain digital distribution infrastructures, and partly in the art world’s
commercial ecosystems, which offer the cultural materials hosted on UbuWeb
only a liminal existence. Thus,
while UbuWeb on the one hand “reveals the decline and marginalization of
certain cultural materials” whose production was often “considered a task of
the state,”56 on the other hand it shows how intellectual content is
increasingly privatized, not only in corporate terms but also through
individuals, which in UbuWeb’s case is embodied by Kenneth Goldsmith, who
acts as the sole archival gatekeeper.57

## The Infrapolitics of Shadow Libraries

If the complexity of shadow libraries cannot be reduced to the contrastive
codes of “right” and “wrong” and global-local binaries, the question remains
how to theorize their cultural politics. This final section
outlines three central infrapolitical aspects of shadow libraries: access,
speed, and gift.

Mass digitization poses two important questions to knowledge infrastructures:
a logistical question of access and a strategic question of to whom to
allocate that access. Copyright poses a significant logistical barrier between
users and works as a point of control in the ideal free flow of information.
In mass digitization, the drive is toward increased access to information,
whereas in publishing industries with monopoly possibilities, the drive is
toward restriction and control. The uneasy fit between copyright regulations
and mass digitization projects has, as already shown, given rise to several
conflicts, either as legal battles or as copyright reform initiatives arguing
that current copyright frameworks cast doubt upon the political ideal of total
access. As with Europeana and Google Books, the question of _access_ often
stands at the core of the infrapolitics of shadow libraries. Yet, the
strategic responses to the problem of copyright vary significantly: if
Europeana moves within the established realm of legality to reform copyright
regulations and Google Books produces claims to new cultural-legal categories
such as “nonconsumptive reading,” shadow libraries offer a third
infrastructural maneuver—bypassing copyright infrastructures altogether
through practices of illicit file distribution.

Shadow libraries elicit a range of responses and discourses that fall on a
spectrum between condemnation and celebration. The most
straightforward response comes, unsurprisingly, from the publishing industry,
highlighting the fundamentally violent breaches of the legal order that
underpins the media industry. Such responses include legal action, policy
initiatives, and public campaigns against piracy, often staging—in more or
less explicit terms—the “pirate” as a common enemy of mankind, beyond legal
protection and to be fought by whatever means necessary.

The second response comes from the open source movement, represented among
others by the pro-reform copyright movement Creative Commons (CC), whose
flexible copyright framework has been adopted by both Europeana and Google
Books.58 While the open source movement has become a voice on behalf of the
telos of the Internet and its possibilities of offering free and unhindered
access, its response to shadow libraries has revealed the complex
infrapolitics of access as a postcolonial problematic. As Kavita Philip
argues, CC’s founder Lawrence Lessig maintains the image of the “good” Western
creative vis-à-vis the “bad” Asian pirate, citing for instance his statement
in his influential book _Free Culture_ that “All across the world, but
especially in Asia and Eastern Europe, there are businesses that do nothing
but take other people’s copyrighted content, copy it, and sell it. … This is
piracy plain and simple, … This piracy is wrong.”59 Such statements, Philip
argues, frame the Asian pirate as external to order, whether it be the
order of Western law or neoliberalism.60

The postcolonial critique of CC’s Western normative discourse has instead
sought to conceptualize piracy, not as deviant behavior in information
economies, but rather as an integral infrastructure endemic to globalized
information economies.61 This theoretical development offers valuable insights
for understanding the infrapolitics of shadow libraries. First of all, it
allows us to go beyond moral discussions of shadow libraries, and to pay
attention instead to the ways in which their infrastructures are built, how
they operate, and how they connect to other infrastructures. As Lawrence Liang
points out, if infrastructures traditionally belong to the domain of the
state, often in cooperation with private business, pirate infrastructures
operate in the gray zones of this set-up, in much the same way as slums exist
as shadow cities and copies are regarded as shadows of the original.62
Moreover, and relatedly, it reminds us of the inherently unstable form of
shadow libraries as a cultural construct, and the ways in which what gets
termed piracy differs across cultures. As Brian Larkin notes, piracy is best
seen as emerging from specific domains: dynamic localities with particular
legal, aesthetic, and social assemblages.63 In a final twist, research shows
that the usage of shadow libraries is distributed
globally. Multiple sources attest to the fact that most Sci-Hub usage occurs
outside the Anglosphere. According to Alexa Internet analytics, the top five
country sources of traffic to Sci-Hub were China, Iran, India, Brazil, and
Japan, which accounted for 56.4 percent of recent traffic. As of early 2016,
data released by Sci-Hub’s founder Alexandra Elbakyan also shows high usage in
developed countries, with a large proportion of the downloads coming from the
US and countries within the European Union.64 The same tendency is evident in
the #ICanHazPDF Twitter phenomenon, which, while framed as “civil disobedience”
to aid users in the Global South,65 nevertheless has higher numbers of posts
from the US and Great Britain.66

This brings us to the second infrapolitical aspect, namely the
question of distribution and speed. In their article “Book Piracy as Peer Preservation,”
Dennis Tenen and Maxwell Henry Foxman note that rather than condemning book
piracy _tout court_, established libraries could in fact learn from the
infrastructural set-ups of shadow libraries in relation to participatory
governance, technological innovation, and economic sustainability.67 Shadow
libraries are often premised upon an infrastructure that includes user
participation without, however, operating in an enclosed sphere. Often, shadow
libraries coordinate their actions through social media platforms and online
forums, including Twitter, Reddit, and Facebook, while the primary websites used
to host the shared files are AvaxHome, LibGen, and Sci-Hub. Commercial online
cloud storage accounts (such as Dropbox and Google Drive) and email are also
used to share content in informal ways. Users interested in obtaining an
article or book chapter will disseminate their request over one or more of the
platforms mentioned above. Other users of those platforms try to get the
requested content via their library accounts or employer-provided access, and
the actual files being exchanged are often hosted on other websites or emailed
to the requesting users. Through these networks, shadow libraries offer
convenient and speedy access to books and articles. Little empirical evidence
is available, but one study does indicate that a large number of shadow
library downloads are made because obtaining a PDF from a shadow library is
easier than using the legal access channels offered by universities and
formalized research libraries.68
Other studies indicate, however, that many downloads occur because the users
have a (perceived) lack of full-text access to the desired texts.69
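
To make the mechanics of the request-and-fulfillment workflow described above concrete, the sketch below models it as a minimal Python toy. It is purely illustrative: the steps follow the description in this chapter, but every class, name, and matching rule is a hypothetical simplification, not the software of any actual platform or shadow library.

```python
from dataclasses import dataclass

# Hypothetical sketch (not any real system's code): a requester broadcasts
# a citation on a public platform, a peer with institutional access
# retrieves the full text, and the file travels back informally.

@dataclass
class Request:
    citation: str        # e.g., a DOI or free-text reference
    platform: str        # where the request was posted, e.g., "Twitter"
    fulfilled: bool = False

@dataclass
class Peer:
    name: str
    has_institutional_access: bool
    delivery_channel: str  # e.g., "email" or "cloud storage link"

def fulfill(requests, peers):
    """Match each open request with the first peer able to retrieve it."""
    log = []
    for request in requests:
        for peer in peers:
            if peer.has_institutional_access and not request.fulfilled:
                request.fulfilled = True
                log.append(
                    f"{peer.name} retrieves '{request.citation}' via "
                    f"institutional access and delivers it by "
                    f"{peer.delivery_channel}"
                )
                break
    return log

if __name__ == "__main__":
    requests = [Request("doi:10.1000/example-123", "Twitter")]
    peers = [Peer("anon_librarian", True, "email")]
    for entry in fulfill(requests, peers):
        print(entry)
```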

Finally, as indicated in the introduction to this chapter, shadow libraries
produce what we might call a cultural politics of parasitism. In the normative
model of shadow libraries, discourse often centers upon piracy as a theft
economy. Other discourses, drawing upon anthropological sources, have pointed
out that peer-to-peer file-sharing sites in reality organize around a gift
economy, that is, “a system of social solidarity based on a structured set of
gift exchange and social relationships among consumers.”70 This chapter,
however, ends with a third proposal: that shadow libraries produce a
parasitical form of infrapolitics. In _The Parasite_, philosopher Michel
Serres develops a way of thinking about relations of transfer—in social,
biological, and informational contexts—as fundamentally parasitic, that is, a
subtractive form of “taking without giving.” Serres contrasts the parasitic
model with established models of society based on notions such as exchange and
gift giving.71 Shadow libraries produce an infrapolitics that denies the
distinction between producers and subtractors of value, allowing us instead to
focus on the social roles infrastructural agents perform. Restoring a sense of
the wider context of parasitism to shadow libraries does not provide a clear-
cut solution as to when and where shadow libraries should be condemned and
when and where they should be tolerated. But it does help us ask questions in
a different way. And it certainly prevents us from regarding shadow libraries
as the “other” in the landscape of mass digitization. Shadow libraries
instigate new creative relations, the dynamics of which are infrastructurally
premised upon the medium they use. Just as typewriters were an important
component of samizdat practices in the Soviet Union, digital infrastructures
are central components of shadow libraries, and in many respects shadow
libraries bring to the fore the same cultural-political questions as other
forms of mass digitization: questions of territorial imaginaries,
infrastructures, regulation, speed, and ethics.

## Notes

1. Serres 1982, 55.
2. Serres 1982, 36.
3. Serres 1982, 36.
4. Samyn 2012.
5. I stick with “shadow library,” a term that I first found in Lawrence Liang’s (2012) writings on copyright and have since seen meaningfully unfolded in a variety of contexts. Part of its strength is its sidestepping of the question of the pirate and that term’s colonial connotations.
6. Eckstein and Schwarz 2014.
7. Scott 2009, 185–201.
8. See also Maxim Moshkov’s own website hosted on lib.ru.
9. Carey 2015.
10. Schmidt 2009.
11. Bodó 2016, “Libraries in the post-scarcity era.” As Balazs Bodó notes, the first mass-digitized shadow archives in Russia were run by professors from the hard sciences, but the popularization of computers soon gave rise to a much more varied and widespread shadow library terrain, fueled by “enthusiastic readers, book fans, and often authors, who spared no effort to make their favorite books available on FIDOnet, a popular BBS system in Russia.”
12. Stelmakh 2008, 4.
13. Bodó 2016.
14. Bodó 2016.
15. Vul 2003.
16. “In Defense of Maxim Moshkov’s Library,” n.d., The International Union of Internet Professionals.
17. Ibid.
18. Ibid.
19. Schmidt 2009, 7.
20. Ibid.
21. Carey 2015.
22. Mjør 2009, 84.
23. Bodó 2015.
24. Kiriya 2012.
25. Yurchak 2008, 732.
26. Komaromi, 74.
27. Mjør, 85.
28. Litres.ru.
29. Library Genesis.
30. Kiriya 2012.
31. Karaganis 2011, 65, 426.
32. Kiriya 2012, 458.
33. For a great analysis of the late-Soviet youth’s relationship with consumerist products, read Yurchak’s careful study in _Everything Was Forever, Until It Was No More: The Last Soviet Generation_ (2006).
34. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
35. Ibid.
36. Ibid.
37. “Monoskop,” last modified March 28, 2018, Monoskop.
38. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
39. Fuller and Goffey 2012, 21.
40. “Dušan Barok: Interview,” _Neural_ 44 (2010), 11.
41. In an interview, Dušan Barok mentions his inspirations, including early examples such as textz.com, a shadow library created by the Berlin-based artist Sebastian Lütgert. Textz.com was one of the first websites to facilitate free access to books on culture, politics, and media theory in the form of text files. Often the format would itself toy with legal limits. Thus, during a legal debacle with Suhrkamp Verlag, Lütgert declared in a mischievous manner that the website would offer the text in various formats: “Today, we are proud to announce the release of walser.php (), a 10,000-line php script that is able to generate the plain ascii version of ‘Death of a Critic.’ The script can be redistributed and modified (and, of course, linked to) under the terms of the GNU General Public License, but may not be run without written permission by Suhrkamp Verlag. Of course, reverse-engineering the writings of senile German revisionists is not the core business of textz.com, so walser.php includes makewalser.php, a utility that can produce an unlimited number of similar (both free as in speech and free as in copy) php scripts for any digital text”; see “Suhrkamp recalls walser.pdf, textz.com releases walser.php,” Rolux.org.
42. Fuller and Goffey 2012, 11.
43. “MONOSKOP Project Finished,” COL-ME Co-located Media Expedition, [www.col-me.info/node/841](http://www.col-me.info/node/841).
44. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
45. Aymeric Mansoux is a senior lecturer at the Piet Zwart Institute whose research deals with the defining, constraining, and confining of cultural freedom in the context of network-based practices. Marcell Mars is an advocate of free software and a researcher who is also active in a shadow library named _Public Library_ (also interchangeably known as Memory of the World).
46. “Dušan Barok,” Memory of the World.
47. “Dušan Barok: Interview,” _Neural_ 44 (2010), 10.
48. Castells 1996.
49. Kenneth Goldsmith, “UbuWeb Wants to Be Free,” last modified July 18, 2007.
50. Jacob King and Jason Simon, “Before and After UbuWeb: A Conversation about Artists’ Film and Video Distribution,” _Rhizome_, February 20, 2014.
51. King and Simon 2014.
52. Sollfrank 2015.
53. Scott 1990, 184.
54. For this, I am indebted to Hito Steyerl’s essay “In Defense of the Poor Image,” in her book _The Wretched of the Screen_, 31–59.
55. Steyerl 2012, 36.
56. Steyerl 2012, 39.
57. Sollfrank 2015.
58. Other significant open source movements include the Free Software Foundation, the Wikimedia Foundation, and several open access initiatives in science.
59. Lessig 2005, 57.
60. Philip 2005, 212.
61. See, for instance, Larkin 2008; Castells and Cardoso 2012; Fredriksson and Arvanitakis 2014; Burkart 2014; and Eckstein and Schwarz 2014.
62. Liang 2009.
63. Larkin 2008.
64. John Bohannon, “Who’s Downloading Pirated Papers? Everyone,” _Science Magazine_, April 28, 2016.
65. “The Scientists Encouraging Online Piracy with a Secret Codeword,” _BBC Trending_, October 21, 2015.
66. Liu 2013.
67. Tenen and Foxman 2014.
68. See Kramer 2016.
69. Gardner and Gardner 2017.
70. Giesler 2006, 283.
71. Serres 2013, 8.

# III
Diagnosing Mass Digitization

# 5
Lost in Mass Digitization

## The Desire and Despair of Large-Scale Collections

In 1995, founding editor of _Wired_ magazine Kevin Kelly mused upon how a
digital library would look:

> Two decades ago nonlibrarians discovered Borges’s Library in silicon
circuits of human manufacture. The poetic can imagine the countless rows of
hexagons and hallways stacked up in the Library corresponding to the
incomprehensible micro labyrinth of crystalline wires and gates stamped into a
silicon computer chip. A computer chip, blessed by the proper incantation of
software, creates Borges’s Library on command. … Pages from the books appear
on the screen one after another without delay. To search Borges’s Library of
all possible books, past, present, and future, one needs only to sit down (the
modern solution) and click the mouse.1

At the time of Kelly’s writing, book digitization on a massive scale had not
yet taken place. Building his chimerical dream around Jorge Luis Borges’s
famous piece of magical speculation regarding the Library of Babel, Kelly not
only dreamed up a fantasy of what a digital library might be in an imaginary
dialogue with Borges; he also argued that Borges’s vision had
already been realized, by grace of nonlibrarians, or—more
specifically—programmers. Kelly mentions Karl Sims, a computer
scientist working on a supercomputer called Connection Machine 5 (you may
remember it from the set of _Jurassic Park_ ), who had created a simulated
version of Borges’s library.2

Twenty years after Kelly’s vision, a whole host of mass digitization projects
have sought more or less explicitly to fulfill it. Incidentally,
Brewster Kahle, one of the lead engineers of the aforementioned Connection
Machine, has become a key figure in the field. Kahle has long dreamed of
creating a universal digital library, and has worked to fulfill it in
practical terms through the nonprofit Internet Archive project, which he
founded in 1996 with the stated mission of creating “universal access to all
knowledge.” In an op-ed in 2017, Kahle lamented the recent lack of progress in
mass digitization and argued for the need to create a new vision for mass
digitization, stating, “The Internet Archive, working with library partners,
proposes bringing millions of books online, through purchase or digitization,
starting with the books most widely held and used in libraries and
classrooms.”3 Reminding us that three major entities have “already digitized
modern materials at scale: Google, Amazon, and the Internet Archive, probably
in that order of magnitude,”4 Kahle nevertheless notes that “bringing
universal access to books” has not yet been achieved because of a fractured
field that diverges on questions of money, technology, and legal clarity. Yet,
outlining his new vision for how a sustainable mass digitization project could
be achieved, Kahle remains convinced that mass digitization is both a
necessity and a possibility.

While Brewster Kahle, Kevin Kelly, Google, Amazon, Europeana’s member
institutions, and others disagree on how to achieve mass digitization, for
whom, and in what form, they are all united in their quest for digitization on
a massive scale. Many shadow libraries operate with the same quantitative
logic, proudly asserting the size of their massive holdings on their
front pages.

Given the fractured field of mass digitization, and the lack of economic
models for how to actually make it sustainable, why does the
common dream of mass digitization persist? As this chapter shows, the desire
for quantity, which drives mass digitization, is—much like the Borges stories
to which Kelly also refers—laced with ambivalence. On the one hand, the
quantitative aspirations are driven by the basic assumption that “more
is more”: more data and more cultural memory equal better industrial and
intellectual progress. On the other hand, the sheer scale of ambition also
causes frustration, anxiety, and failed plans.

The sense that sheer size and big numbers hold the promise of progress and
greatness is nothing new, of course. And mass digitization brings together
three fields that have each historically grown out of scalar ambitions:
collecting practices, statistics, and industrialization processes.
Historically, as cultural theorist Couze Venn reminds us, most large
collections bear the imprint of processes of (cultural) colonization, human
desires, and dynamics of domination and superiority. We therefore find in
large collections the “impulses and yearnings that have conditioned the
assembling of most of the collections that today establish a monument to past
efforts to gather together knowledge of the world and its treasury of objects
and deeds.”5 The field of statistics, moreover, so vital to the evolution of
modern governance models, is also premised upon the accumulation of ever-more
information.6 And finally, we all recognize the signs of modern
industrialization processes as they appear in the form of globalization,
standardization, and acceleration. Indeed, as French sociologist Henri
Lefebvre once argued (with a nod to Marx), the history of modern society could
plainly and simply be seen as the history of accumulation: of space, of
capital, of property.7

In mass digitization, we hear the political echoes of these histories. From
Jeanneney’s war cry to defend European patrimonies in the face of Google’s
cultural colonization to Google’s megalomaniac numbers game and Europeana’s
territorial maneuverings, scale is used as a point of reference not only to
describe the space of cultural objects in themselves but also to outline a
realm of cultural command.

A central feature in the history of accumulation and scale is the development
of digital technology and the accompanying new modes of information
organization. But even before then, the invention of new technologies offered
not only new modes of producing and gathering information and new
possibilities of organizing information assemblages, but also new questions
about the implications of these leaps in information production. As historians
Ann Blair and Peter Stallybrass show, “infolust,” that is, the cultural
attitude that values expansive collections for long-term storage, emerged in
the early Renaissance period.8 In that period, new print technology gave rise
to a new culture of accumulating and stockpiling notes and papers, even
without having a specific compositional purpose in mind. Within this scholarly
paradigm, new teleologies were formed that emphasized the latent value of any
piece of information, expressed for instance by Joachim Jungius’s exclamation
that “no field was too remote, no author too obscure that it would not yield
some knowledge or other” and Gabriel Naudé’s observation that there is “no
book, however bad or decried, which will not be sought after by someone over
time.”9 The idea that any piece of information was latently valuable was later
remarked upon by Melvil Dewey, who noted at the beginning of the twentieth
century that a “normal librarian’s instinct is to keep every book and
pamphlet. He knows that possibly some day, somebody wants it.”10

Today, mass digitization repeats similar concerns. It reworks the old dream of
an all-encompassing and universal library and foregrounds once again
questions about what to save and what to let go. What, one might ask, would
belong in such a library? One important field of interest is the question of
whether, and how, to preserve metadata—today’s marginalia. Is it sufficient to
digitize cultural works, or should all accompanying information about the
provenance of the work also be included? And how can we agree upon what
marginalia actually is across different disciplines? Mass digitization
projects in natural history rarely digitize marginalia such as logs and
written accounts, focusing only on what that discipline considers the main object
at hand, for example, a piece of rock, a fly specimen, or a pressed plant. Yet,
in the history of science, logs are an invaluable source of information about
how the collected object ended up in the collection, the meaning it had to the
collector, and the place it takes in the collection.11 In this way, new
questions with old trajectories arise: What is important for understanding a
collection and its life? What should be included and excluded? And how will we
know what will turn out to be important in the future?

In the era of big data, the imperative is often to digitize and “save all.”
Prestige mass digitization projects such as Google Books and Europeana have
thus often contextualized their importance in terms of scale. Indeed, as we
saw in the previous chapters, the question of scale has been a central point
of political contestation used to signal infrastructural power. Thus the hype
around Google Books, as well as the political ire it drew, centered on the
scale of the project, just as quantitative goals are used in Europeana to
signal progress and significance. Inherent in these quantitative claims are
not only ideas about political power, but also the widespread belief in
digital circles—and the political regimes that take inspiration from them—that
the more information the user is able to access, the more empowered the user
is to navigate and make meaning on their own. In recent years, the imaginaries
of freedom of navigation have also been adjoined by fantasies of freedom of
infrastructural construction through the image of the platform. Mass
digitization projects should therefore offer the user not only the potential
to navigate collections freely, but also the ability to build new products and services on
top of them.12 Yet, as this chapter argues, the ethos of potentially unlimited
expansion also prompts a new set of infrapolitical questions about agency and
control. While these questions are inherently related to the larger questions
of territory and power explored in the previous chapters, they occur on a
different register, closer to the individual user and within the spatialized
imaginaries of digital information.

As many critics have noted, the logic of expansion and scale, and the
accompanying fantasies of the empowered user, often build on neoliberal
subjectification processes. While highly seductive, these fantasies often fail to take
into account the reality of social complexity. Therefore, as Lisa Nakamura
notes, the discourse of complete freedom of navigation through technological
liberation—expressed aptly in Microsoft’s famous slogan “Where do you want to
go today?”—assumes, wrongly, that everyone is at liberty to move about
unhindered.13 And the fantasy of empowerment through platforming is often also
shot through with neoliberal ideals that not only fail to take into account
the complex infrapolitical realities of social interaction, but also rely on
an entrepreneurial epistemology that evokes “a flat, two-dimensional stage on
which resources are laid out for users to do stuff with” and which we are not
“inclined to look underneath or behind it, or to question its structure.”14

This chapter unfolds these central infrapolitical problematics of the spatial
imaginaries of knowledge in relation to a set of prevalent cultural spatial
tropes that have gained new life in digital theory and that have informed the
construction and development of mass digitization projects: the flaneur, the
labyrinth, and the platform. Cultural reports, policy papers, and digital
design strategies often use these three tropes to elicit images of pleasure
and playfulness in mass digitization projects; yet, as the following sections
show, they also raise significant questions of control and agency, not least
against the backdrop of ever-increasing scales of information production.

## Too Much—Never Enough

The question of scale in mass digitization is often posed as a rational quest
for knowledge accumulation and interoperability. Yet this section argues that
digitized collections are more than just rational projects; they strike deep
affective chords of desire, domination, and anxiety. As Couze Venn reminds us,
collections harbor an intimate connection between cognition and affective
economy. In this connection, the rationalized drive to collect is often
accompanied by a slippage, from a rational urge to a pathological drive
ultimately associated with desire, power, domination, anxiety, nostalgia,
excess, and—sometimes even—compulsion and repetition.15 The practice of
collecting objects thus not only signals a rational need but often also
springs from desire, and as psychoanalysis has taught us, a sense of lack is
the reflection of desire. As Slavoj Žižek puts it, “desire’s _raison d’être_
is not to realize its goal, to find full satisfaction, but to reproduce itself
as desire.” 16 Therefore, no matter how much we collect, the collector will
rarely experience their collection as complete and will often be haunted by
the desire to collect more.

In addition to the frightening (yet titillating) aspect of never having our
desires satisfied, large collections also give rise to a set of information
pathologies that, while different in kind, share an understanding of
information as intimidation. The experience is generally induced by two
inherently linked factors. First, the size of the cultural collection has
historically often implied a powerful collector with the means to gather
expensive materials from all over the world, and a large collection has thus
had the basic function of impressing and, if need be, intimidating people.
Second, large collections give rise to the sheer subjective experience of
being overwhelmed by information and a mental incapacity to take it all in.
Both factors point to questions of potency and importance. And both work to
instill a fear in the visitor. As Voltaire once noted, “a great library has
the quality of frightening those who look upon it.”17

The intimidating nature of large collections has been a favored trope in
cultural representations. The most famous example of a gargantuan, even
insanity-inducing, library is of course Jorge Luis Borges’s tale of the
Library of Babel, the universal totality of which becomes both a monstrosity
in the characters’ lives and a source of hope, depending on their willingness
to make peace and submit themselves to the library’s infinite scale and
Kafkaesque organization.18 But Borges’s nonfiction piece from 1939, _The Total
Library_, also serves as an elegant tale of an informational nightmare. _The
Total Library_ begins by noting that the dream of the utopia of the total
library “has certain characteristics that are easily confused with virtues”
and ends with a more somber caution: “One of the habits of the mind is the
invention of horrible imaginings. … I have tried to rescue from oblivion a
subaltern horror: the vast, contradictory Library, whose vertical wildernesses
of books run the incessant risk of changing into others that affirm, deny, and
confuse everything like a delirious god.”19

Few escape the intimidating nature of large collections. But while attention
has often been given to the citizen subjected to the disciplining force of the
sovereign state in the form of its institutions, less attention has been given
to those who have had to structure and make sense of these intimidating
collections. Until recently, cultural collections were usually oriented toward
the figure of the patron or, in more abstract geographical terms, (God-given)
patrimony. Renaissance cabinets of curiosities were meant to astonish and
dazzle; the ostentatious wealth of the Baroque museums of the seventeenth and
eighteenth centuries served as demonstrations of Godly power; and bourgeois
museums of the nineteenth century positioned themselves as national
institutions of _Bildung_. But while cultural memory institutions have worked
first and foremost to mirror to an external audience the power and the psyche
of their owners in individual, religious, and/or geographical terms, they have
also consistently had to grapple internally with the problem of how to best
organize and display these collections.

One of the key generators of anxiety in vast libraries has been the question
of infrastructure. Each new information paradigm and each new technology has
induced new anxieties about how best to organize information. The fear of
disorder haunted both institutions and individuals. In his illustrious account
of Ephraim Chambers’s _Cyclopaedia_ (the forerunner of Denis Diderot’s and Jean
le Rond d’Alembert’s famous Enlightenment project, the _Encyclopédie_ ),
Richard Yeo thus recounts how Gottfried Leibniz complained in 1680 about “that
horrible mass of books which keeps on growing” so that eventually “the
disorder will become nearly insurmountable.”20 Five years on, the French
scholar and critic Adrien Baillet warned his readers, “We have reason to fear
that the multitude of books which grows every day in a prodigious fashion will
make the following centuries fall into a state as barbarous as that of the
centuries that followed the fall of the Roman Empire.”21 And centuries later,
in the wake of the typewriter, the annual report of the Secretary of the
Smithsonian Institution in Washington, DC, drew attention to the
infrastructural problem of organizing the information that was now made
available through the typewriter, noting that “about twenty thousand volumes …
purporting to be additions to the sum of human knowledge, are published
annually; and unless this mass be properly arranged, and the means furnished
by which its contents may be ascertained, literature and science will be
overwhelmed by their own unwieldy bulk.”22 The experience of feeling
overwhelmed by information and lacking the right tools to handle it is no
joke. Indeed, a number of German librarians actually went documentably insane
between 1803 and 1825 in the wake of the information glut that followed the
secularization of ecclesiastical libraries.23 The desire for grand collections
has thus always also been followed by an accompanying anxiety relating to
questions of infrastructure.

As the history of collecting pathologies shows, reducing mass digitization
projects to rational and technical information projects would deprive them of
their rich psychological dimensions. Instead of discounting these pathologies,
we should acknowledge them, and examine not only their nature, but also their
implications for the organization of mass digitization projects. As the
following section shows, the pathologies not only exist as psychological
forces, but also as infrastructural imaginaries that directly impact theories
on how best to organize information in mass digitization. If the scale of mass
digitization projects is potentially limitless, how should they be organized?
And how will we feel when moving about in their gargantuan archives?

## The Ambivalent Flaneur

In an article on cultures of archiving, sociologist Mike Featherstone asked
whether “the expansion of culture available at our fingertips” could be
“subjected to a meaningful ordering,” or whether the very “desire to remedy
fragmentation” should be “seen as clinging to a form of humanism with its
emphasis upon cultivation of the persona and unity which are now regarded as
merely nostalgic.”24 Featherstone raised the question in response to the
popularization of the Internet at the turn of the millennium. Yet, as the
previous section has shown, his question is probably as old as the collecting
practices themselves. Such questions have become no less significant with mass
digitization. How are organizational practices conceived of as meaningful
today? As we shall see, this question not only relates to technical
characteristics but is also informed by a strong spatial imaginary that often
takes the shape of labyrinthine infrastructures and orients itself
toward the figure of the user. Indeed, the role of the organizer of knowledge,
and therefore the accompanying responsibility of making sense of collections,
has been transferred from knowledge professionals to individuals.

Today, as seen in all the examples of mass digitization we have explored in
the previous chapters, cultural memory institutions face a different paradigm
than that of the eighteenth- and nineteenth-century disciplining cultural
memory institution. In an age that encourages individualism, democratic
ideals, and cultural participation, the orientations of the cultural memory
institutions have shifted in discourse, practice, or both, toward an emphasis
on the importance of the subjective experience and active participation of the
individual visitor. As part of this shift, and as a result of the increasing
integration of the digital imaginary and production apparatus into the field
of cultural memory, the visitor has thus metamorphosed from a disciplinary
subject to a prosumer, produser, participant, and/or user.

The organizational shift in the cultural memory ecosystem means that
visionaries and builders of mass digitization infrastructures now pay
attention not only to how collections may reflect upon the institution that
holds the collection, but also to how the user experiences the informational
navigation of collections. This is not to say that making an impression, or
even disciplining the user, is not a concern for many mass digitization
projects. Mass digitization’s constant public claims to literal greatness
through numbers evidence this. Yet, today’s projects also have to contend with
the opinion of the public and must make their projects palatable and
consumable rather than elitist and intimidating. The concern of the builders
of mass digitization infrastructure is therefore not only to create an
internal logic to their collections, but also to maximize the user’s
experience of being offered a wealth of information, while mitigating the
danger of giving the visitor a sense of losing themselves, or even drowning, in
information. An important question for builders of mass digitization projects
has therefore been how to build visual and semantic infrastructures that offer
the user a sense of meaningful direction as well as a desire to keep browsing.

While digital collections are in principle no longer tethered to their
physical origins, we still encounter ideas about them in
spatialized terms, often using notions such as trails, paths, and alleyways to
visualize the spaces of digital collections.25 This form of spatialized logic
did not emerge with the mass digitization of cultural heritage collections,
however, but also resides at the heart of some of the most influential early
theories of the digital realm.26 These theorized and conceptualized
the web as a new form of architectural infrastructure, not only in material
terms (such as cables and servers) but also as a new experiential space.27 And
in this spatialized logic, the figure of the flaneur became a central
character. Thus, we saw in the 1990s the rise of a digital interpretation of
the flaneur, originally an emblematic figure of modern urban culture at the
turn of the twentieth century, in the form of the virtual flaneur or the
cyberflaneur. In 1994, German net artists Heiko Idensen and Matthias Krohn
paid homage to the urban figure, noting in a text that “the screen winks at
the flaneur” and locating the central tenets of computer culture in the
“intoxication of the flânerie. Screens as streets and homes … of the crowd?”28
Later, artist Steven Goldate provided a simple equation between online and
offline spaces, noting among other things that “What the city and the street
was to the flaneur, the Internet and the Superhighway have become to the
Cyberflaneur.”29

Scholars, too, explored the potentials and limits of thinking about the user
of the Internet in flaneurian terms. Thus, Mike Featherstone drew parallels
between the nineteenth-century flaneur and the virtual flaneur, exploring the
similarities and differences between navigational strategies, affects, and
agencies in the early urban metropolis and the emergent digital realm of the
1990s.30

Although the discourse on the digital flaneur was most prevalent in the 1990s,
it still lingers on in contemporary writings about digitized cultural heritage
collections and their design. A much-cited article by computer scientists
Marian Dörk, Sheelagh Carpendale, and Carey Williamson, for instance, notes
the striking similarity between the “growing cities of the 19th century and
today’s information spaces” and the relationship between “the individual and
the whole.”31 Dörk, Carpendale, and Williamson use the figure of the flaneur
to emphasize the importance of supporting not only utilitarian information
needs through grand systems but also leisurely information surfing behaviors
on an individual level. Their reflections relate
to the experience of moving about in a mass of information and ways of making
sense of this information. What does it mean to make sense of mass
digitization? How can we say or know that the past two hours we spent
rummaging about in the archives of Google Books, digging deeper in Europeana,
or following hyperlinks in Monoskop made sense, and by whose standards? And
what are the cultural implications of using the flaneur as a cultural
reference point for these ideals? We find few answers to these questions in
Dörk, Carpendale, and Williamson’s article, or in related articles that invoke
the flaneur as a figure of inspiration for new search strategies. Thus, the
figure of the flaneur is predominantly used to express the pleasurable and
productive aspect of archival navigation. But in its emphasis on pleasure and
leisure, the figure neglects the much more ambivalent atmosphere that
enshrouds the flaneur as he navigates the modern metropolis. Nor does it
problematize the privileged viewpoint of the flaneur.

The character of the flaneur, both in its original instantiations in French
literature and in Walter Benjamin’s early twentieth-century writings, was
certainly driven by pleasure; yet, on a more fundamental level, his existence
was also, as Elizabeth Wilson points out in her feminist reading of the
flaneur, “a sorrowful engagement with the melancholy of cities,” which arose
“partly from the enormous, unfulfilled promise of the urban spectacle, the
consumption, the lure of pleasure and joy which somehow seem destined to be
disappointed.”32 Far from an optimistic and unproblematic engagement with
information, then, the figure of the flaneur also evokes deeper anxieties
arising from commodification processes and the accompanying melancholic
realization that no matter how much one strolls and scrolls, nothing one
encounters can ever fully satisfy one’s desires. Benjamin even strikingly
spatializes (and sexualizes) this mental state in an infrastructural
imaginary: the labyrinth. The labyrinth is thus, Benjamin suggests, “the home
of the hesitant. The path of someone shy of arrival at a goal easily takes the
form of a labyrinth. This is the way of the (sexual) drive in those episodes
which precede its satisfaction.”33

Benjamin’s hesitant flaneur caught in an unending maze of desire stands in
contrast to the uncomplicated flaneur invoked in celebratory theories on the
digital flaneur. Yet, recent literature on the design of digital realms
suggests that the hesitant man caught in a drive for more information is a
much more accurate image of the digital flaneur than the man-in-the-know.34
Perhaps, then, the allegorical figure of the flaneur in digital design should
be used less to address pleasurable wandering and more to invoke “the most
characteristic response of all to the wholly new forms of life that seemed to
be developing: ambivalence.”35 Caught up in the commodified labyrinth of the
modern digitized archive, the digital flaneur of mass digitization might just
as easily get stuck in a repetitive, monotonous routine of scrolling and
downloading new things, forever suspended in a state of unfulfilled desire,
as move about in meaningful and pleasurable ways.36

Moreover, and just as importantly, the figure of the flaneur is also entangled
in a cultural matrix of assumptions about gender, ability, and colonialism. In
short: the flaneur is a white, able-bodied male. As feminist theory attests,
the concept of the flaneur is male by definition. Some feminists, such as
Griselda Pollock and Janet Wolff, have denied the possibility of a female
variant altogether, because of women’s status as (often absent) objects rather
than subjects in the nineteenth-century urban environment.37 Others, such as
Elizabeth Wilson, Deborah Epstein Nord, and Mica Nava, have complicated the
issue by weighing the opportunities and limitations of thinking about a female
variant of the flaneur, for instance a flâneuse.38
These discussions have also reverberated in the digital sphere in new
variations.39 Whatever position one assumes, it is clear that the concept of
the flaneur, even in its female variant, is a complicated figure that has
problematic allusions to a universal privileged figure.

In similar terms, the flaneur also has problematic colonial and racial
connotations. As James Smalls points out in his essay “Race As Spectacle in
Late-Nineteenth-Century French Art and Popular Culture,” the racial dimension
of the flaneur is “conspicuously absent” from most critical engagements with
the concept.40 Yet, as Smalls notes, the question of race is crucial, since
“the black man … is not privileged to lose himself in the Parisian crowd, for
he is constantly reminded of his epidermalized existence, reflected back at
him not only by what he sees, but by what we see as the assumed ‘normal’
white, universal spectator.”41 This othering is, moreover, not limited to the
historical scene of nineteenth-century Paris but remains relevant today. Thus,
as Garnette Cadogan notes in his essay “Walking While Black,”
non-white people are offered none of the freedoms of blending into the crowd
that Baudelaire’s and Benjamin’s flaneurs enjoyed. “Walking while black
restricts the experience of walking, renders inaccessible the classic Romantic
experience of walking alone. It forces me to be in constant relationship with
others, unable to join the New York flaneurs I had read about and hoped to
join.”42

Lastly, the classic figure of the flaneur also assumes a body with no
disabilities. As Marian Ryan notes in an essay in the _New York Times_, “The
art of flânerie entails blending into the crowd. The disabled flaneur can’t
achieve that kind of invisibility.”43 What might we take from these critical
interventions into the uncomplicated discourse of the flaneur? Importantly,
they counterbalance the dominant seductive image of the empowered user, and
remind us of the colonial male gaze inherent in any invocation of the metaphor
of the flaneur, which for the majority of users is a subject position that is
simply not available (nor perhaps desirable).

The limitations of the figure of the flaneur raise questions not only about
the metaphor itself, but also about the topography of knowledge production it
invokes. As already noted, Walter Benjamin placed the flaneur within a larger
labyrinthine topography of knowledge production, where the flaneur could read
the spectacle in front of him without being read himself. Benjamin himself put
the flaneur to rest with a reading of an Edgar Allan Poe story, in which he
traced the demise of the flaneur in an increasingly capitalist topography,
noting in melancholy terms that “The bazaar is the last hangout
of the flaneur. If in the beginning the street had become an interieur for
him, now this interieur turned into a street, and he roamed through the
labyrinth of merchandise as he had once roamed through the labyrinth of the
city. It is a magnificent touch in Poe’s story that it includes along with the
earliest description of the flaneur the figuration of his end.”44 In 2012,
Evgeny Morozov in similar terms declared the death of the cyberflaneur.
Linking the commodification of urban spaces in nineteenth-century Paris to the
commodification of the Internet, Morozov noted that “it’s no longer a place
for strolling—it’s a place for getting things done” and that “Everything that
makes cyberflânerie possible—solitude and individuality, anonymity and
opacity, mystery and ambivalence, curiosity and risk-taking—is under
assault.”45 These two death sentences, separated by a century, link the
environment of the flaneur to significant questions about the commodification
of space and its infrapolitical implications.

Exploring the implications of this topography, the following section suggests,
will help us understand the infrapolitics of the spatial imaginaries of mass
digitization, not only in relation to questions of globalization and late
sovereignty, but also to cultural imaginaries of knowledge infrastructures.
Indeed, these two dimensions are far from mutually exclusive, but rather
belong to the same overarching tale of the politics of mass digitization.
Thus, while the material spatial infrastructures of mass digitization projects
may help us appreciate certain important political dynamics of Europeana,
Google Books, and shadow libraries (such as their territorializing features or
copyright contestations in relation to knowledge production), only an
inclusion of the infrastructural imaginaries of knowledge production will help
us understand the complex politics of mass digitization as it metamorphoses
from analog buildings, shelves, and cabinets to the circulatory networks of
digital platforms.

## Labyrinthine Imaginaries: Infrastructural Perspectives of Power and Knowledge Production

If the flaneur is a central early figure in the cultural imaginary of the
observer of cultural texts, the labyrinth has long served as a cultural
imaginary of the library and, in larger terms, of the spatialized
infrastructural conditions of knowledge and power. Thus, literature is rife
with works that draw on libraries and labyrinths to convey stories about
knowledge production and the power struggles thereof. Think only of the elderly
monk-librarian in Umberto Eco’s classic _The Name of the Rose_, who notes
that “the library is a great labyrinth, sign of the labyrinth of the world.
You enter and you do not know whether you will come out”;46 or consider the
haunting images of being lost in Jorge Luis Borges’s tales about labyrinthine
libraries.47 This section therefore turns to the infrastructural space of the
labyrinth, to show that this spatial imaginary, much like the flaneur, is
loaded with cultural ambivalence, and to explore the ways in which the
labyrinthine infrastructural imaginary emphasizes and crystallizes the
infrapolitical tension in mass digitization projects between power and
perspective, agency and environment, playful innovation and digital labor.

The labyrinth is a prevalent literary trope, found in authors from Ovid,
Virgil, and Dante to Dickens and Nietzsche, and it has been used particularly
in relation to issues of knowledge and agency, and in haunting and nightmarish
terms in modern literature.48 As the previous section indicates, the labyrinth
also provides a significant image for understanding our relationship to mass
digitization projects as sites of both knowledge production and experience.
Indeed, one shadow library is even named _Aleph_, which refers to the ancient
Hebrew letter and likely also nods at Jorge Luis Borges’s short story _The
Aleph_, about infinite labyrinthine architectures. Yet, what kind of
infrastructure is a labyrinth, and how does it relate to the potentials and
perils of mass digitization?

In her rich historical study of labyrinths, Penelope Doob argues that the
labyrinth possesses a dual potentiality: on the one hand, if experienced from
within, the labyrinth is a sign of confusion; on the other, when viewed from
above, it is a sign of complex order.49 As Harold Bloom notes, “all of us have
had the experience of admiring a structure when outside it, but becoming
unhappy within it.”50 Envisioning the labyrinth from within links to a
claustrophobic sense of ignorance, while also implying the possibility of
progress if you just turn the next corner. What better way to describe one’s
experience in the labyrinthine infrastructures of mass digitization projects
such as Google Books, with their conditions and contexts of experience and
agency? On the one hand, Google Books appears to provide the
view from above, lending itself as a logistical aid in its information-rich
environment. On the other hand, Google Books also produces an alienating
effect of impenetrability on two levels. First, although Google presents
itself as a compass, its seemingly infinite and constantly rearranging
universe nevertheless creates a sense of vertigo, only reinforced by the
almost existential question “Do you feel lucky?” Second, Google Books also
feels impenetrable on a deeper level, with its black-boxed governing and
ordering principles, hidden behind complex layers of code, corporate cultures,
and nondisclosure agreements.51 But even less-commercial mass digitization
projects such as Europeana and Monoskop can produce a sense of claustrophobia
and alienation in the user. Think only of the frustration encountered when
reaching dead ends in the form of broken links, or lack of access imposed by
European copyright regulations. Or even the alienation and dissatisfaction
that can well up when, as in Monoskop, there are seemingly no limits to
knowledge other than one’s own cognitive shortcomings.

The figure of the labyrinth also serves as a reminder that informational
strolling is not only a leisurely experience, but also a laborious process.
Penelope Doob thus points out the common medieval spelling of labyrinth as
_laborintus_, which foregrounds the concept of labor and “difficult process,”
whether frustrating, useful, or both.52 In an age in which “labor itself is
now play, just as play becomes more and more laborious,”53 Doob’s etymological
excursion serves to highlight the fact that in many mass digitization projects
it is indeed the user’s leisurely information scrolling that in the end
generates profit, cultural value, and budgetary justification for mass
digitization platforms. José van Dijck’s analysis of the valuation of traffic
in a digital environment is a timely reminder of how traffic is valued in a
cultural memory environment that increasingly orients itself toward social
media: “Even though communicative traffic on social media platforms seems
determined by social values such as popularity, attention, and connectivity,
they are impalpably translated into monetary values and redressed in business
models made possible by digital technology.”54 This is visible, for instance,
in Europeana’s usage statistics reports, which link the notions of _traffic_
and _performance_ together in an ontological equation (in this equation, poor
performance is inevitably a mark of death).55 In a blogpost marking the
launch of the _Europeana Statistics Dashboard_, we are told that information
about mass digitization traffic is “vital information for a modern cultural
institution for both reporting and planning purposes and for public
accountability.”56 Thus, although visitors may feel solitary in their digital
wanderings, their digital footsteps are in fact obsessively traced and tracked
by mass digitization platforms and often also by numerous third parties.

Today, then, the user is indeed at work as she makes her way in the
labyrinthine infrastructures of mass digitization by scrolling, clicking,
downloading, connecting, and clearing and creating new paths. And while
“search” has become a keyword in digital knowledge environments, digital
infrastructures in mass digitization projects in fact distract as much as they
orient. This new economy of cultural memory raises the question: if mass
digitization projects, as labyrinthine infrastructures, invariably disorient
the wanderer as much as they aid her, how might we understand their
infrapolitics? After all, as the previous chapters have shown, mass
digitization projects often present a wide array of motivations for why
digitization should happen on a massive scale, with knowledge production and
cultural enlightenment usually featuring as the strongest arguments. But as
the spatialized heuristics of the flaneur and the labyrinth show, knowledge
production and navigation are anything but simple concepts. Rather, the
political dimensions of mass digitization discussed in previous chapters—such
as standardization, late sovereignty, and network power—are tied up with the
spatial imaginaries of what knowledge production and cultural memory are and
how they should and could be organized and navigated.

The question of the spatial imaginaries of knowledge production and
imagination has a long philosophic history. As historian David Bates notes,
knowledge in the Enlightenment era was often imagined as a labyrinthine
journey. A classic illustration of how this journey was imagined is provided
by Enlightenment philosopher Jean-Louis Castilhon, whose frustration is
palpable in this exclamation: “How cruel and painful is the situation of a
Traveller who has imprudently wandered into a forest where he knows neither
the winding paths, nor the detours, nor the exits!”57 These Enlightenment
journeys were premised upon an infrastructural framework that linked error and
knowledge, but also upon an experience of knowledge quests riddled by loss of
oversight and lack of a compass. As the previous sections show, the labyrinth
as a form of knowledge production in relation to truth and error persists as
an infrastructural trope in the digital. Yet, it has also metamorphosed
significantly since Castilhon. The labyrinthine infrastructural imaginaries we
find in digital environments thus differ significantly from more classical
images, not least under the influence of the rhizomatic metaphors of
labyrinths developed by Deleuze and Guattari and Eco. If the labyrinth of the
Renaissance had an endpoint and a truth, these new labyrinthine
infrastructures, as Kristin Veel points out, have a much more complex
relationship to the spatial organization of truth. Eco and Deleuze and
Guattari thus conceived of their labyrinths as networks “in which all points
can be connected with one another” with “no center” but “an almost unlimited
multiplicity of alternative paths,” which makes it “impossible to rise above
the structure and observe it from the outside, because it transcends the
graphic two-dimensionality of the two earlier forms of labyrinths.”58 Deleuze
expressed the senselessness of these contemporary labyrinths as a “theater
where nothing is fixed, a labyrinth without a thread (Ariadne has hung
herself).”59

In mass digitization, this new infrastructural imaginary feeds a looming
concern over how best to curate and infrastructurate cultural collections. It
is this concern that we see at play in the aforementioned institutional
deliberations over how best to create meaningful paths through cultural collections.
The main question that resounds is: where should the paths lead if there is no
longer one truth, that is, if the labyrinth has no center? Some mass
digitization projects seem to revel in this new reality. As we have seen,
shadow libraries such as Monoskop and UbuWeb use the affordances of the
digital to create new cultural connections outside of the formal hierarchies
of cultural memory institutions. Yet, while embraced by some, the new
distribution of authority predictably generates anxiety in the cultural memory
circles that had hitherto been able to lay claim to expertise in knowledge
organization.
This is the dizzying perspective that haunts the cultural memory professionals
faced with Europeana’s data governance model. Thus, as one Europeana
professional explained to me in 2010, “Europeana aims at an open-linked-data
model with a number of implications. One implication is that there will be no
control of data usage, which makes it possible, for instance, to link classics
with porn. Libraries do not agree to this loss of control which was at the
base of their self-understanding.”60 The Europeana professional then proceeded
to recount the profound anxiety experienced and expressed by knowledge
professionals as they increasingly came face-to-face with a curatorial reality
that is radically changing what counts as knowledge and context, where a
search for Courbet could, in theory, not only lead the user to other French
masters of painting but also to a copy of a porn magazine (provided it is out
of copyright). The anxiety experienced by knowledge professionals in the new
cultural memory ecosystem can of course be explained by a rationalized fear of
job insecurity and territorial concerns. Yet, the fear of knowledge
infrastructures without a center may also run deeper. As Penelope Doob reminds
us, the center of the labyrinth historically played a central moral and
epistemological role in the labyrinthine topos, as the site that held the
epiphanous key to unravel whatever evils or secrets the labyrinth contained.
With no center, there is no key, no epiphany.61 From this perspective, then,
it is not only a job that is lost. It is also the meaning of knowledge
itself.62

What, then, can we take from these labyrinthine wanderings as we pursue a
greater understanding of the infrapolitics of mass digitization? Certainly, as
this section shows, the politics of mass digitization is entangled in
spatialized imaginaries that have a long and complex cultural and affective
trajectory interlinked with ontological and epistemological questions about
the very nature of knowledge. Cladding the walls of these trajectories are, of
course, the ever-present political questions of authority and territory, but
also deeper cultural and affective questions about the nature and meaning of
knowledge as it is bandied about in our cultural imaginaries, between discoveries
and dead-ends, between freedom and control.

As the next section will show, one concept has in particular come to
encapsulate these concerns: the notion of serendipity. While the notion of
serendipity has a long history, it has gained new relevance with mass
digitization, where it is used to express the realm of possibilities opened up
by the new digital infrastructures of knowledge production. As such, it has
come to play a role, not only as a playful cultural imaginary, but also as an
architectural ideal in software development for mass digitization. In the
following section, we will look at a few examples of these architectures, as
well as the knowledge politics they are entangled in.

## The Architecture of Serendipitous Platforms

Serendipity has long been a cherished word in archival studies, used to
describe a magical moment of “Eureka!” A fickle and fabulating concept, it
belongs to the world of discovery, capturing the moment when a meandering
soul, a flaneur, accidentally stumbles upon a valuable find. As such, the
moment of serendipity is almost always a happy circumstance of chance, and
never an unfortunate moment of risk. The word’s own origins embody this spirit
of chance discovery. This section outlines those origins and situates the
word’s reemergence in theories on libraries and on digital realms of knowledge
production.

The English aristocrat Horace Walpole coined the word serendipity in a letter
to Horace Mann in 1754, in which he explained his fascination with a Persian
fairy tale about three princes from the _Isle of Serendip_63 who possess
superpowers of observation. In his letter, Walpole linked the contents of the
fantastical story to his view of how new discoveries are made: “As their
highnesses travelled, they were always making discoveries, by ‘accidental
sagacity,’ of things which they were not in quest of.”64 And he proposed a
new word—“serendipity”—to describe this sublime talent for discovery.

Walpole’s conceptual invention did not immediately catch fire in common
parlance.65 But nearly two centuries after its invention, it suddenly took hold.
Who awakened the notion from its dormant state, and why? Sociologists Robert
K. Merton and Elinor Barber provided one influential answer in their own
enjoyable exploration of the word. As they note, serendipity had a particular
playful tone to it, expressing a sense that knowledge comes about not only
through sheer willpower and discipline, but also via pleasurable chance. This
almost hedonistic dimension made it incompatible with the serious ethos of the
nineteenth century. As Merton and Barber note, “The serious early Victorians
were not likely to pick up serendipity, except perhaps to point to it as a
piece of frivolous whimsy. … Although the Victorians, and especially Victorian
scientists, were familiar with the part played by accident in the process of
discovery, they were likely neither to highlight that factor nor to clothe the
phenomenon of accidental discovery in so lighthearted a word as
serendipity.”66 But in the 1940s and 1950s something happened—the word began
to catch on. Merton and Barber link this turn of linguistic events not only to
pure chance, but also to a change in scientific networks and paradigms. Traveling
from the world of letters, as they recount, the word began making its way into
scientific circles, where attention was increasingly turned to “splashy
discoveries in lab and field.”67 But as Lorraine Daston notes, “discoveries,
especially those made by serendipity, depend partly on luck, and scientists
schooled in probability theory are loathe to ascribe personal merit to the
merely lucky,” and scientists therefore increasingly began to “domesticate
serendipity.”68 Daston remarks that while scientists schooled in probability
were reluctant to ascribe their discoveries to pure chance, the “historians
and literary scholars who struck serendipitous gold in the archives did not
seem so eager to make a science out of their good fortune.”69 One tale of how
literary and historical scholars struck serendipitous gold in the archive is
provided by Mike Featherstone:

> Once in the archive, finding the right material which can be made to speak
may itself be subject to a high degree of contingency—the process not of
deliberate rational searching, but serendipity. In this context it is
interesting to note the methods of innovatory historians such as Norbert Elias
and Michel Foucault, who used the British and French national libraries in
highly unorthodox ways by reading seemingly haphazardly “on the diagonal,”
across the whole range of arts and sciences, centuries and civilizations, so
that the unusual juxtapositions they arrived at summoned up new lines of
thought and possibilities to radically re-think and reclassify received
wisdom. Here we think of the flaneur who wanders the archival textual city in
a half-dreamlike state in order to be open to the half-formed possibilities of
the material and sensitive to unusual juxtapositions and novel perceptions.70

English scholar Nancy Schultz in similar terms notes that the archive “in the
humanities” represents a “prime site for serendipitous discovery.”71 In most
of these cases, serendipity is taken to mean some form of archival insight,
and often even a critical intellectual process. Deb Verhoeven, Associate Dean
of Engagement and Innovation at the University of Technology Sydney, reminds
us in relation to feminist archival work that “stories of accidental
discovery” can even take on dimensions of feminist solace, consoling “the
researcher, and us, with the idea that no system, whatever its claims to
discipline, comprehensiveness, and structure, is exempt from randomness, flux,
overflow, and therefore potential collapse.”72

But with mass digitization processes, their fusion of probability theories and
archives, and their ideals of combined fun and fact-finding, the questions
raised in the hard sciences about serendipity, its connotations of freedom and
chance, engineering and control, now also haunt the archives of historians and
literary scholars. Serendipity has now often come to be used as a motivating
factor for digitization in the first place, based on arguments that mass
digitized archives allow not only for dedicated and target-oriented research,
but also for new modes of search, of reading haphazardly “on the diagonal”
across genres and disciplines, as well as across institutional and national
borders that hitherto kept works and insights apart. As one spokesperson from
a prominent mass digitization company states, “digital collections have been
designed both to assist researchers in accessing original primary source
materials and to enable them to make serendipitous discoveries and unexpected
connections between sources.”73 And indeed, this sentiment reverberates in all
mass digitization projects from Europeana and Google Books to smaller shadow
libraries such as UbuWeb and Monoskop. Some scholars even argue that
serendipity takes on new forms due to digitization.74

It seems only natural, then, that mass digitization projects, and their
actors, have actively adopted the discourse of serendipity, both as a selling
point and a strategic claim. Talking about Google’s digitization program, Dr.
Sarah Thomas, Bodley’s Librarian and Director of Oxford University Library
Services, notes: “Library users have always loved browsing books for the
serendipitous discoveries they provide. Digital books offer a similar thrill,
but on multiple levels—deep entry into the texts or the ability to browse the
virtual shelf of books assembled from the world's great libraries.”75 But it
has also raised questions for those in charge, not only of holding serendipity
forth as an ideal, but also of building the architecture to facilitate it. Dan
Cohen, speaking on behalf of the DPLA, thus noted the
centrality of the concept, but also the challenges that mass digitization
raised in practical terms: “At DPLA, we’ve been thinking a lot about what’s
involved with serendipitous discovery. Since we started from scratch and
didn’t need to create a standard online library catalog experience, we were
free to experiment and provide novel ways into our collection of over five
million items. How to arrange a collection of that scale so that different
users can bump into items of unexpected interest to them?” While adopting the
language of serendipity is easy, its infrastructural construction is much
harder to envision. This challenge clearly troubles the strategic team
developing Europeana’s infrastructure, as it notes in a programmatic tone that
stands hilariously at odds with the curiosity it must cater to:

> Reviewing the personas developed for the D6.2 Requirements for Europeana.eu8
deliverable—and in particular those of the “culture vultures”—one finds two
somewhat-opposed requirements. On the one hand, they need to be able to find
what they are looking for, and navigate through clear and well-structured
data. On the other hand, they also come to Europeana looking for
“inspiration”—that is to say, for something new and unexpected that points
them towards possibilities they had previously been unaware of; what, in the
formal literature of user experience and search design, is sometimes referred
to as “serendipity search.” Europeana’s users need the platform to be
structured and predictable—but not entirely so.76
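
Read as a design requirement, the passage above describes a weighting problem: return mostly deterministic, relevance-ranked matches, salted with a small and controlled dose of randomness. The following sketch is purely illustrative (the function names, the toy ranking, and the blend ratio are assumptions, not Europeana’s implementation), but it shows how simply such a “serendipity search” could be composed:

```python
import random

def serendipity_search(query, records, n_results=10, surprise_ratio=0.2):
    """Return mostly relevant records, salted with a few random ones."""
    # Naive relevance: rank records by terms shared between query and title.
    terms = set(query.lower().split())
    ranked = sorted(
        records,
        key=lambda r: len(terms & set(r["title"].lower().split())),
        reverse=True,
    )
    # Reserve a slice of the result list for the unexpected.
    n_random = max(1, int(n_results * surprise_ratio))
    head = ranked[: n_results - n_random]   # the predictable part
    tail = ranked[n_results - n_random :]   # everything else
    surprises = random.sample(tail, min(n_random, len(tail)))
    results = head + surprises
    random.shuffle(results)  # hide the seam between order and chance
    return results

if __name__ == "__main__":
    records = [{"title": t} for t in [
        "Courbet and the Realist landscape",
        "Dutch still life painting",
        "Postcards from Paris",
        "A field guide to mosses",
        "Nineteenth-century urban photography",
    ]]
    for r in serendipity_search("Courbet painting", records, n_results=3):
        print(r["title"])
```

The design choice worth noting is the final shuffle: hiding the seam between the ranked and the random results is what makes the output feel structured and predictable, but not entirely so.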

To achieve serendipity, mass digitization projects have often sought to take
advantage of the labyrinthine infrastructures of digitization, relying not
only on their own virtual bookshelves, but also on the algorithmic highways
and back alleys of social media. Twitter, in particular, before it adopted
personalization methods, became a preferred infrastructure for mass
digitization projects, which took advantage of Twitter’s lack of personalized
search to create whimsical bots that injected randomness into the user’s feed.
One example was the Digital Public Library of America’s DPLA Bot, which
grabbed a random noun and used the DPLA API to share the first result it
found. The DPLA Bot aimed to “infuse what we all love about
libraries—serendipitous discovery—into the DPLA” and thus sought to provide a
“kind of ‘Surprise me!’ search function for DPLA.”77 It did not take the
programmer Peter Meyr much
time to develop a similar bot for Europeana. In an interview with
EuropeanaPro, Peter Meyr directly related the EuropeanaBot to the
serendipitous affordances of Twitter and its rewards for mass digitization
projects, noting that:

> The presentation of digital resources is difficult for libraries. It is no
longer possible to just explore, browse the stacks and make serendipitous
findings. With Europeana, you don't even have a physical library to go to. So
I was interested in bringing a little bit of serendipity back by using a
Twitter bot. … If I just wanted to present (semi)random Europeana findings, I
wouldn’t have needed Twitter—an RSS-Feed or a web page would be enough.
However, I wanted to infuse EuropeanaBot with a little bit of “Twitter
culture” and give it a personality.78
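
The mechanism that DPLA Bot and EuropeanaBot share is, at bottom, a short loop: random term in, first result out, posted on a schedule. The sketch below approximates that loop against DPLA’s public items API; the noun list and the key are placeholders, the metadata handling is simplified, and the actual bots’ code no doubt differs:

```python
import random
import requests

NOUNS = ["lighthouse", "locomotive", "quilt", "comet", "harbor"]  # placeholder vocabulary
DPLA_API = "https://api.dp.la/v2/items"
API_KEY = "YOUR_DPLA_API_KEY"  # hypothetical placeholder; DPLA issues keys on request

def surprise_me():
    """Pick a random noun and return the first item the collection yields."""
    noun = random.choice(NOUNS)
    resp = requests.get(DPLA_API, params={"q": noun, "api_key": API_KEY})
    resp.raise_for_status()
    docs = resp.json().get("docs", [])
    if not docs:
        return f"No luck with '{noun}' today."
    # Titles in collection metadata may be strings or lists; take a simple view here.
    title = docs[0].get("sourceResource", {}).get("title", "Untitled")
    # A real bot would now post this via the Twitter API; returning text stands in.
    return f"Serendipity: {title!r} (found by searching for '{noun}')"

if __name__ == "__main__":
    print(surprise_me())  # needs a valid API key to return live results
```

What the sketch makes visible is how thin the engineering layer is: the serendipity resides almost entirely in the random draw at the top of the loop.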

The British Library also developed a Twitter bot titled the Mechanical
Curator, which posts random resources with no customization except a special
focus on images in the library’s seventeenth- to nineteenth-century
collections.79 But there were also many projects that existed outside social
media platforms and operated across mass digitization projects. One example
was the “serendipity engine,” Serendip-o-matic, which first examined the
user’s research interests and then, based on this data, identified “related
content in locations such as the Digital Public Library of America (DPLA),
Europeana, and Flickr Commons.”80 While this initiative was not endorsed by
any of these mass digitization projects, they nevertheless featured it on
their blogs, integrating it into the mass digitization ecosystem.
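
Serendip-o-matic’s own description suggests a two-step pipeline: distill “interests” from a text the user supplies, then scatter those terms across several collection APIs and pool the returns. A toy version of that pipeline, with stub searchers standing in for the real DPLA, Europeana, and Flickr Commons APIs and a far cruder term extractor than the original’s, might look like this:

```python
import collections
import re

STOPWORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "on", "for"}

def key_terms(text, n=3):
    """Pick the most frequent non-stopword tokens as stand-in 'interests'."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = collections.Counter(t for t in tokens if t not in STOPWORDS)
    return [term for term, _ in counts.most_common(n)]

def fan_out(terms, sources):
    """Query every source with every term and pool whatever comes back."""
    finds = []
    for term in terms:
        for name, search in sources.items():
            finds.extend((name, hit) for hit in search(term))
    return finds

if __name__ == "__main__":
    # Stub searchers stand in for the DPLA, Europeana, and Flickr Commons APIs.
    sources = {
        "DPLA": lambda t: [f"DPLA item about {t}"],
        "Europeana": lambda t: [f"Europeana record about {t}"],
        "Flickr Commons": lambda t: [f"photograph tagged {t}"],
    }
    essay = "The flaneur wandered the arcades of Paris, reading the crowd."
    for source, hit in fan_out(key_terms(essay), sources):
        print(f"{source}: {hit}")
```

The fan-out step is where the “unexpected connections” come from: a term mined from the user’s own prose is allowed to resonate across collections that would never be shelved together.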

Yet, while mass digitization for some represents the opportunity to amplify
the chance of chance, other scholars increasingly wonder whether the
engineering processes of mass digitization take serendipity out of the
archive. Indeed, to them, the digital is antithetical to chance. One such
viewpoint is uttered by historian Tristram Hunt in an op-ed railing against
Google’s British digitization program under the title “Online is fine, but
history is best hands on.” In it, Hunt argues that the digital, rather than
providing a new means of chance finding, impedes historical discovery, and
that only the analog archival environment can foster real historical
discoveries, since it is “… only with MS in hand that the real meaning of the
text becomes apparent: its rhythms and cadences, the relationship of image to
word, the passion of the argument or cold logic of the case. Then there is the
serendipity, the scholar’s eternal hope that something will catch his eye.”81
In similar terms, Graeme Davison describes the lack of serendipitous errings
in digital archives, likening digital search engines to driving “a high-powered
car down a freeway, compared with walking or cycling. It gets us there more
quickly but we skirt the towns and miss a lot of interesting scenery on the
way.”82 William McKeen also links the loss of serendipity to
the acceleration of method in the digital:

> Think about the library. Do people browse anymore? We have become such a
directed people. We can target what we want, thanks to the Internet. Put a
couple of key words into a search engine and you find—with an irritating hit
or miss here and there—exactly what you’re looking for. It’s efficient, but
dull. You miss the time-consuming but enriching act of looking through
shelves, of pulling down a book because the title interests you, or the
binding. Inside, the book might be a loser, a waste of the effort and calories
it took to remove it from its place and then return. Or it might be a dark
chest of wonders, a life-changing first step into another world, something to
lead your life down a path you didn't know was there.83

Common to all these statements is the sentiment that the engineering of
serendipity removes the very chance of serendipity. As Nicholas Carr notes,
“Once you create an engine—a machine—to produce serendipity, you destroy the
essence of serendipity. It becomes something expected rather than
unexpected.”84 It appears, then, that computational methods have introduced
historians and literary scholars to the same “beaverish efforts”85 to
domesticate serendipity as the hard sciences had to face at the beginning of
the twentieth century.

To my knowledge, few systematic studies exist about whether mass digitization
projects such as Europeana and Google Books hamper or foster creative and
original research in empirical terms. How one would go about such a study is
also an open question. The dichotomy between digital and analog does seem a
bit contrived, however. As Dan Cohen notes in a blogpost for DPLA, “bookstores
and libraries have their own forms of ‘serendipity engineering,’ from
storefront staff picks to behind-the-scenes cataloguing and shelving methods
that make for happy accidents.”86 Yet there is no doubt that the discourse of
serendipity has been infused with new life that sometimes veers toward a
“spectacle of serendipity.”87

Over the past decade, the digital infrastructures that organize our cultural
memory have become increasingly integrated into a digital economy that values
“experience” as a cultural currency that can be exchanged for profit, and our
affective meanderings as a form of industrial production. This digital economy
affects the architecture and infrastructure of digital archives. The archival
discourse on digital serendipity is thus now embroiled in a more deep-seated
infrapolitics of workspace architecture, influenced by Silicon Valley’s
obsession with networks, process, and connectivity.88 Think only of the
increasing importance of Google and Facebook to mass digitization projects:
most of these projects have a Facebook page on which they showcase their
material, just as they take pains to make themselves “algorithmically
recognizable”89 to Google and other search engines in the hope of reaching an
audience beyond the echo chamber of archives and to distribute their archival
material on leisurely tidbit platforms such as Pinterest and Twitter.90 If
serendipity is increasingly thought of as a platform problem, the final
question we might pose is what kind of infrapolitics this platform economy
generates and how it affects mass digitization projects.

## The Infrapolitics of Platform Power

As the previous sections show, mass digitization projects rely upon spatial
metaphors to convey ideas about, and ideals of, cultural memory
infrastructures, their knowledge production, and their serendipitous
potential. Thus, for mass digitization projects, the ideal scenario is that
the labyrinthine errings of the user result in serendipitous finds that in
turn bring about new forms of cultural value. From the point of view of the
user, however, being caught up in the labyrinth might just as easily give rise
to a sense of lost oversight and alienation in the alleyways of commodified
infrastructures. These two
scenarios coexist because of what Penelope Doob (as noted in the section on
labyrinthine imaginaries) refers to as the dual potentiality of the labyrinth,
which when experienced from within can become a sign of confusion, and when
viewed from above becomes a sign of complex order.91

In this final section, I turn to a new spatial metaphor that appears to
resolve this dual potentiality of the spatial perspective of mass digitization
projects: the platform. The platform has recently emerged as a new buzzword in
the digital economy, connoting simultaneously a perspective, a business
strategy, and a political ideology. Ideally, the platform provides a different
perspective from the labyrinth, offering the user the possibility of
simultaneously constructing the labyrinth and viewing it from above. This
final section therefore explores how we might understand the infrapolitics of
the platform, and its role in the digital economy.

In its recent business strategy, Europeana claimed that it was moving from
operating as a “portal” to operating as a “platform.”92 The announcement was
part of a broader infrastructural transition in the field of cultural memory,
undergirded by a process of opening up and connecting the cultural memory
sector to wider knowledge ecosystems.93 Indeed, Europeana’s move is part of a
much larger discursive and material reality of a more fundamental process of
“platformization” of the web.94 The notion of the platform has thus recently
become an important heuristic for understanding the cultural development of
the web and its economy, fusing the computational understanding of the
platform as an environment in which code is executed95 and the political and
social understanding of a platform as a site of politics.96

While the infrapolitics of the platformization of the web has become a central
discussion in software and communication studies, little attention has been
paid to the implications of platforms for the politics of cultural memory.
Yet, Europeana’s business strategy illustrates the significant infrapolitical
role that platforms are given in mass digitization literature. Citing digital
historian Tim Sherratt’s claim that “portals are for visiting, platforms for
building on,”97 Europeana’s strategy argues that if cultural memory sites free
themselves and their content from the “prison of portals” in favor of more
openness and flexibility, this will in turn empower users to create their own
“pathways” through the digital cultural memory, instead of being forced to
follow predetermined “narrative journeys.”98 The business plan’s reliance on
Sherratt’s theory of platforms shows that although the platform has a
technical meaning in computation, Europeana’s discourse goes beyond mere
computational logic. It instead signifies an infrapolitics that carries with
it an assumption about the political dynamics of software, standing in for the
freedom to act in the labyrinthine infrastructures of digital collections.

Yet, what is a platform, and how might we understand its infrapolitics? As
Tarleton Gillespie points out, the oldest definition of platform is
architectural, as a level or near-level surface, often elevated.99 As such,
there is something inherently simple about platforms. As architect Sverre Fehn
notes, “the simplest form of architecture is to cultivate the surface of the
earth, to make a platform.”100 Fehn’s statement conceals a more fundamental
insight about platforms, however: in the establishment of a low horizontal
platform, one also establishes a social infrastructure. Platforms are thus not
only material constructions, they also harbor infrapolitical affordances. The
etymology of the notion of “platform” evidences this infrapolitical dimension.
Originally a spatial concept, the notion of platform appeared in
architectural, figurative, and military formations in the sixteenth century,
soon developing into specialized discourses of party programs and military and
building construction,101 religious congregation,102 and architectural vantage
points.103 Both the architectural and social understandings of the term
connote a process in which sites of common ground are created in
contradistinction to other sites. In geology, for instance, platforms emerge
from abrasive processes that elevate and distinguish one area in relation to
others. In religious and political discourse, platforms emerge as
organizational sites of belonging, often in contradistinction to other forms
of organization. Platforms, then, connote both common ground and demarcated
borders that emerge out of abrasive processes. In the nineteenth century, a
third meaning adjoined the notion of platforms, namely trade-related
cooperation. This introduced a dynamic to the word that is less informed by
abrasive processes and more by the capture processes of what we might call
“connective capitalism.” Yet, despite connectivity taking center stage, even
these platforms were described as territorializing constructs that favor some
organizations and corporations over others.104

In the twentieth and twenty-first centuries, as Gilles Deleuze and Felix
Guattari successfully urged scholars and architects to replace roots with
rhizomes, the notion of platform began taking on yet another meaning. Deleuze
and Guattari began fervently arguing for the nonexistence of rooted
platforms.105 Their vision soon gave rise to a nonfoundational understanding
of the world as a “limitless multiplicity of positions from which it is
possible only to erect provisional constructions.”106 Deleuze and Guattari’s
ontology became widely influential in theorizing the web _in toto_; as Rem
Koolhaas once noted, the “language of architecture—platform, blueprint,
structure—became almost the preferred language for indicating a lot of
phenomenon that we’re facing from Silicon Valley.”107 From the singular
platforms of military and party politics emerged, then, the thousand
platforms of the digital, where “nearly every surge of research and investment
pursued by the digital industry—e-commerce, web services, online advertising,
mobile devices and digital media sales—has seen the term migrate to it.”108

What infrapolitical logic can we glean from Silicon Valley’s adoption of the
vernacular notion of the platform? Firstly, it is an infrapolitics of
temporality. As Tarleton Gillespie points out, the semantic aspects of
platforms “point to a common set of connotations: a ‘raised level surface’
designed to facilitate some activity that will subsequently take place. It is
anticipatory, but not causal.”109 The inscription of platforms into the
material infrastructures of the Internet thus assumes a value-producing
futurity. If serendipity is what is craved, then platforms are the site in
which this is thought to take place.

Despite its inclusion in the entrepreneurial discourse of Silicon Valley, the
notion of the platform is also used to signal an infrapolitics of
collaboration, even subversion. Olga Goriunova, for instance, explores the
subversive dynamics of critical artistic platforms,110 and Trebor Scholz
promotes the term “platform cooperativism” to advance worker-based
cooperatives that would “design their own apps-based platforms, fostering
truly peer-to-peer ways of providing services and things, and speak truth to
the new platform capitalists.”111 Shadow libraries such as Monoskop appear as
perfect examples of such subversive platforms and evidence of Srnicek’s
reminder that not _all_ social interactions are co-opted into systems of
profit generation.112 Yet, as the territorial, legal, and social
infrastructures of mass digitization become increasingly labyrinthine, it
takes a lot of critical consciousness to properly interpret and understand
their infrapolitics. Engage with the shadow library Library Genesis on
Facebook, for instance, and you submit to platform capitalism.

A significant trait of platform-based corporations such as Google and Facebook
is that they more often than not present themselves as apolitical, neutral,
and empowering tools of connectivity, passive until picked up by the user.
Yet, as Lisa Nakamura notes, “reading’s economies, cultures of sharing, and
circuits of travel have never been passive.”113 One of digital platforms’ most
important infrapolitical traits is their dependence on network effects and a
winner-takes-all logic, where the platform owner is not only conferred
enormous power vis-à-vis other less successful platforms but also vis-à-vis
the platform user.114 Within this game, the platform owner determines the
rules of the product and the service on offer. Entering into the discourse of
platforms implies, then, not only constructing a software platform, but also
entering into a parasitical game of relational network effects, where
different platforms challenge and use each other to gain more views and
activity. This gives successful platforms a great advantage in the digital
economy. They not only gain access to data, but they also control the rules of
how the data is to be managed and governed. Therefore, when a user is surfing
Google Books, Google—and not the library—collects the user’s search queries,
including results that appeared in searches and pages the user visited from
the search. The browser, moreover, tracks the user’s activity, including pages
the user has visited and when; user data and possibly user login details via
auto-fill features; user IP address; Internet service provider; device
hardware details; operating system and browser version; cookies; and cached
data from websites. The labyrinthine infrastructure of the mass digitization
ecosystem also means that if you access one platform through another, your
data will be collected in different ways. Thus, if you visit Europeana through
Facebook, it will be Facebook that collects your data, including name and
profile; biographical information such as birthday, hometown, work history,
and interests; username and unique identifier; subscriptions, location,
device, activity date, time and time-zone, activities; and likes, check-ins,
and events.115 As more platforms emerge from which one can access mass
digitized archives, such as social media sites like Facebook, Google+,
Pinterest, and Twitter, as well as mobile devices such as Android, gaining an
overview of who collects one’s data and how becomes more nebulous.

Europeana’s reminder illustrates the assemblatic infrastructural set-up of
mass digitization projects and how they operate with multiple entry points,
each of which may attach its own infrapolitical dynamics. It also illustrates
the labyrinthine infrastructures of privacy settings, of which a mapping is
increasingly difficult to attain because of constant changes and
reconfigurations. It furthermore illustrates the changing legal order, from
the relatively stable sovereign order of human rights obligations to the
modulating landscape of privacy policies.

How then might we characterize the infrapolitics of the spatial imaginaries of
mass digitization? As this chapter has sought to convey, writings about mass
digitization projects are shot through with spatialized metaphors, from the
flaneur to the labyrinth and the platform, either in literal terms or in the
imaginaries they draw on. While this section has analyzed these imaginaries in
a somewhat chronological fashion, with the interactivity of the platform
increasingly replacing the more passive gaze of the spectator, they coexist in
that larger complex of spatial digital thinking. While often used to elicit
uncomplicated visions of empowerment, desire, curiosity, and productivity,
these infrapolitical imaginaries in fact show the complexity of mass
digitization projects in their reinscription of users and cultural memory
institutions in new constellations of power and politics.

## Notes

1. Kelly 1994, p. 263.
2. Connection Machines were developed by the supercomputer manufacturer Thinking Machines, a concept that also appeared in Jorge Luis Borges’s _The Total Library_.
3. Brewster Kahle, “Transforming Our Libraries from Analog to Digital: A 2020 Vision,” _Educause Review_, March 13, 2017, from-analog-to-digital-a-2020-vision>.
4. Ibid.
5. Couze Venn, “The Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 36.
6. Hacking 2010.
7. Lefebvre 2009.
8. Blair and Stallybrass 2010, 139–163.
9. Ibid., 143.
10. Dewey 1926, 311.
11. See, for instance, Lorraine Daston’s wonderful account of the different types of historical consciousness we find in archives across the sciences: Daston 2012.
12. David Weinberger, “Library as Platform,” _Library Journal_, September 4, 2012, /future-of-libraries/by-david-weinberger/#_>.
13. Nakamura 2002, 89.
14. Shannon Mattern, “Library as Infrastructure,” _Places Journal_, June 2014.
15. Couze Venn, “The Collection,” _Theory, Culture & Society_ 23, no. 2–3 (2006), 35–40.
16. Žižek 2009, 39.
17. Voltaire, “Une grande bibliothèque a cela de bon, qu’elle effraye celui qui la regarde” (“A great library has this virtue: it frightens the one who looks at it”), in _Dictionaire Philosophique_, 1786, 265.
18. In his autobiography, Borges asserted that it “was meant as a nightmare version or magnification” of the municipal library he worked in up until 1946. Borges describes his time at this library as “nine years of solid unhappiness,” both because of his co-workers and the “menial” and senseless cataloging work he performed in the small library. Interestingly, then, Borges translated his own experience of being informationally underwhelmed into a tale of informational exhaustion and despair. See “An Autobiographical Essay” in _The Aleph and Other Stories_, 1978, 243.
19. Borges 2001, 216.
20. Yeo 2003, 32.
21. Cited in Blair 2003, 11.
22. Bawden and Robinson 2009, 186.
23. Garrett 1999.
24. Featherstone 2000, 166.
25. Thus, for instance, one Europeana-related project with the apt acronym PATHS argues for the need to “make use of current knowledge of personalization to develop a system for navigating cultural heritage collections that is based around the metaphor of paths and trails through them” (Hall et al. 2012). See also Walker 2006.
26. For inspiring texts on (early) spatial thinking of the Internet, see Hayles 1993; Nakamura 2002; Chun 2006.
27. Much has been written about whether or not it makes sense to frame digital realms and infrastructures in spatial terms, and Wendy Chun has written an excellent account of the stakes of these arguments, adding her own insightful comments to them; see chapter 1, “Why Cyberspace?,” in Chun 2013.
28. Cited in Hartmann 2004, 123–124.
29. Goldate 1996.
30. Featherstone 1998.
31. Dörk, Carpendale, and Williamson 2011, 1216.
32. Wilson 1992, 108.
33. Benjamin 1985a, 40.
34. See, for instance, Natasha Dow Schüll’s fascinating study of the addictive design of computational culture: Schüll 2014. For an industry perspective, see Nir Eyal, _Hooked: How to Build Habit-Forming Products_ (Princeton, NJ: Princeton University Press, 2014).
35. Wilson 1992, 93.
36. Indeed, it would be interesting to explore the link between Susan Buck-Morss’s reinterpretation of Benjamin’s anesthetic shock of phantasmagoria and today’s digital dopamine production, as described by Natasha Dow Schüll in _Addiction by Design_ (2014); see Buck-Morss 2006. See also Bjelić 2016.
37. Wolff 1985; Pollock 1998.
38. Wilson 1992; Nord 1995; Nava and O’Shea 1996, 38–76.
39. Hartmann 1999.
40. Smalls 2003, 356.
41. Ibid., 357.
42. Cadogan 2016.
43. Marian Ryan, “The Disabled Flaneur,” _New York Times_, December 12, 2017, /the-disabled-flaneur.html>.
44. Benjamin 1985b, 54.
45. Evgeny Morozov, “The Death of the Cyberflaneur,” _New York Times_, February 4, 2012.
46. Eco 2014, 169.
47. See also Koevoets 2013.
48. In colloquial English, “labyrinth” is generally synonymous with “maze,” but some people observe a distinction, using maze to refer to a complex branching (multicursal) puzzle with choices of path and direction, and using labyrinth for a single, non-branching (unicursal) path, which leads to a center. This book, however, uses the concept of the labyrinth to describe all labyrinthine infrastructures.
49. Doob 1994.
50. Bloom 2009, xvii.
51. Might this be the labyrinthine logic detected by Foucault, which unfolds only “within a hidden landscape,” revealing “nothing that can be seen” and partaking in the “order of the enigma”? See Foucault 2004, 98.
52. Doob 1994, 97. Doob also finds this perspective in the fourteenth century in Chaucer’s _House of Fame_, in which the labyrinth “becomes an emblem of the limitations of knowledge in this world, where all we can finally do is meditate on _labor intus_” (ibid., 313). Lady Mary Wroth’s work _Pamphilia to Amphilanthus_ provides the same imagery, telling the story of the female heroine, Pamphilia, who fails to escape a maze but nevertheless engages her experience within it as a source of knowledge.
53. Galloway 2013a, 29.
54. van Dijck 2012.
55. “Usage Stats for Europeana Collections,” _EuropeanaPro_, usage-statistics>.
56. Joris Pekel, “The Europeana Statistics Dashboard Is Here,” _EuropeanaPro_, April 6, 2016, /introducing-the-europeana-statistics-dashboard>.
57. Bates 2002, 32.
58. Veel 2003, 154.
59. Deleuze 2013, 56.
60. Interview with professor of library and information science working with Europeana, Berlin, Germany, 2011.
61. Borges mused upon the possible horrendous implications of such a lack, recounting two labyrinthine scenarios he once imagined: “In the first, a man is supposed to be making his way through the dusty and stony corridors, and he hears a distant bellowing in the night. And then he makes out footprints in the sand and he knows that they belong to the Minotaur, that the minotaur is after him, and, in a sense, he, too, is after the minotaur. The Minotaur, of course, wants to devour him, and since his only aim in life is to go on wandering and wandering, he also longs for the moment. In the second sonnet, I had a still more gruesome idea—the idea that there was no minotaur—that the man would go on endlessly wandering. That may have been suggested by a phrase in one of Chesterton’s Father Brown books. Chesterton said, ‘What a man is really afraid of is a maze without a center.’ I suppose he was thinking of a godless universe, but I was thinking of the labyrinth without a minotaur. I mean, if anything is terrible, it is terrible because it is meaningless.” Borges and Dembo 1970, 319.
62. Borges actually found a certain pleasure in the lack of order, however, noting that “I not only feel the terror … but also, well, the pleasure you get, let’s say, from a chess puzzle or from a good detective novel.” Ibid.
63. Serendib, also spelled Serendip (Arabic Sarandīb), was the Persian/Arabic word for the island of Sri Lanka, recorded in use as early as AD 361.
64. Letter to Horace Mann, 28 January 1754, in _Walpole’s Correspondence_, vol. 20, 407–411.
65. As Robert Merton and Elinor Barber note, it first made it into the OED in 1912 (Merton and Barber 2004, 72).
66. Merton and Barber 2004, 40.
67. Lorraine Daston, “Are You Having Fun Today?,” _London Review of Books_, September 23, 2004.
68. Ibid.
69. Ibid.
70. Featherstone 2000, 594.
71. Nancy Lusignan Schultz, “Serendipity in the Archive,” _Chronicle of Higher Education_, May 15, 2011.
72. Verhoeven 2016, 18.
73. Caley 2017, 248.
74. Bishop 2016.
75. “Oxford-Google Digitization Project Reaches Milestone,” Bodleian Library and Radcliffe Camera, March 26, 2009.
76. Timothy Hill, David Haskiya, Antoine Isaac, Hugo Manguinhas, and Valentine Charles (eds.), _Europeana Search Strategy_, May 23, 2016.
77. “DPLAbot,” _Digital Public Library of America_.
78. “Q&A with EuropeanaBot Developer,” _EuropeanaPro_, August 20, 2013.
79. There are of course many other examples, some of which offer greater interactivity, such as the TroveNewsBot, which feeds off of the National Library of Australia’s 370 million resources, allowing the user to send the bot any text to get the bot digging through the Trove API for a matching result.
80. Serendip-o-matic, n.d.
81. Tristram Hunt, “Online Is Fine, but History Is Best Hands On,” _Guardian_, July 3, 2011, library-google-history>.
82. Davison 2009.
83. William McKeen, “Serendipity,” _New York Times_, n.d.
84. Carr 2006. We find this argument once again in Aleks Krotoski, who highlights the man-machine dichotomy, noting that the “controlled binary mechanics” of the search engine actually make serendipitous findings “more challenging to find” because “branching pathways of possibility are too difficult to code and don’t scale” (Aleks Krotoski, “Digital Serendipity: Be Careful What You Don’t Wish For,” _Guardian_, August 11, 2011, profiling-aleks-krotoski>).
85. Lorraine Daston, “Are You Having Fun Today?,” _London Review of Books_, September 23, 2004.
86. Dan Cohen, “Planning for Serendipity,” _DPLA_ News and Blog, February 7, 2014.
87. Shannon Mattern, “Sharing Is Tables,” _e-flux_, October 17, 2017, furniture-for-digital-labor/>.
88. Greg Lindsay, “Engineering Serendipity,” _New York Times_, April 5, 2013, serendipity.html>.
89. Gillespie 2017.
90. See, for instance, Milena Popova, “Facebook Awards History App that Will Use Europeana’s Collections,” _EuropeanaPro_, March 7, 2014, awards-history-app-that-will-use-europeanas-collections>.
91. Doob 1994.
92. “Europeana Strategy Impact 2015–2020.”
93. Ping-Huang 2016, 53.
94. Helmond 2015.
95. Ian Bogost and Nick Montfort, “Platform Studies: Frequently Asked Questions,” _Proceedings of the Digital Arts and Culture Conference_, 2009.
96. Srnicek 2017; Helmond 2015; Gillespie 2010.
97. “While a portal can present its aggregated content in a way that invites exploration, the experience is always constrained—pre-determined by a set of design decisions about what is necessary, relevant and useful. Platforms put those design decisions back into the hands of users. Instead of a single interface, there are innumerable ways of interacting with the data.” See Tim Sherratt, “From Portals to Platforms: Building New Frameworks for User Engagement,” National Library of Australia, November 5, 2013, platform>.
98. “Europeana Strategy Impact 2015–2020,”
.
99. Gillespie 2010, 349. 100. Fjeld and Fehn 2009, 108. 101. Gießmann 2015,
126. 102. See, for example, C. S. Lewis’s writings on Calvinism in _English
Literature in the Sixteenth Century Excluding Drama_. Or how about
Presbyterian minster Lyman Beecher, who once noted in a sermon: “in organizing
any body, in philosophy, religion, or politics, you must _have_ a platform;
you must stand somewhere; on some solid ground.” Such a platform could gather
people, so that they could “settle on principles just as … bees settle in
swarms on the branches, fragrant with blossoms and flowers.” See Beecher 2012,
21. 103. “Platform, in architecture, is a row of beams which support the
timber-work of a roof, and lie on top of the wall, where the entablature ought
to be raised. This term is also used for a kind of terrace … from whence a
fair prospect may be taken of the adjacent country.” See Nicholson 1819. 104.
As evangelist Calvin Colton noted in his work on the US’s public economy, “We
find American capital and labor occupying a very different position from that
of the same things in Europe, and that the same treatment applied to both,
would not be beneficial to both. A system which is good for Great Britain may
be ruinous to the United States. … Great Britain is the only nation that is
prepared for Free Trade … on a platform of universal Free Trade, the advanced
position of Great Britain … in her skill, machinery, capital and means of
commerce, would make all the tributary to her; and on the same platform, this
distance between her and other nations … instead of diminishing, would be
forever increasing, till … she would become the focus of the wealth, grandeur,
and power of the world.” 105. Deleuze and Guattari 1987. 106. Solá-Morales
1999, 86. 107. Budds 2016. 108. Gillespie 2010, 351. 109. Gillespie 2010, 350.
Indeed, it might be worth resurrecting the otherwise-extinct notion of
“plotform” to reinscribe agency and planning into the word. See Tawa 2012.
110. As Olga Gurionova points out, platforms have historically played a
significant role in creative processes as a “set of shared resources that
might be material, organizational, or intentional that inscribe certain
practices and approaches in order to develop collaboration, production, and
the capacity to generate change.” Indeed, platforms form integral
infrastructures in the critical art world for alternative systems of
organization and circulation that could be mobilized to “disrupt
institutional, representational, and social powers.” See Olga Goriunova, _Art
Platforms and Cultural Production on the Internet_ (New York: Routledge,
2012), 8. 111. Trebor Scholz, “Platform Cooperativism vs. the Sharing
Economy,” _Medium_ , December 5, 2016, cooperativism-vs-the-sharing-economy-2ea737f1b5ad>. 112. Srnicek 2017, 28–29.
113. Nakamura 2013, 243. 114. John Zysman and Martin Kennedy, “The Next Phase
in the Digital Revolution: Platforms, Automation, Growth, and Employment,”
_ETLA Reports_ 61, October 17, 2016, /ETLA-Raportit-Reports-61.pdf>. 115. Europeana’s privacy page explicitly notes
this, reminding the user that, “this site may contain links to other websites
that are beyond our control. This privacy policy applies solely to the
information you provide while visiting this site. Other websites which you
link to may have privacy policies that are different from this Privacy
Policy.” See “Privacy and Terms,” _Europeana Collections_ ,
.

# 6 Concluding Remarks

I opened this book claiming that the notion of mass digitization has shifted
from a professional concept to a cultural political phenomenon. If the former
denotes a technical way of duplicating analog material in digital form, mass
digitization as a cultural practice is a much more complex apparatus. On the
one hand, it offers the simple promise of heightened public and private access
to—and better preservation of—the past; on the other, it raises significant
political questions about ethics, politics, power, and care in the digital
sphere. I locate the emergence of these questions within the infrastructures
of mass digitization and the ways in which they not only offer new ways of
reading, viewing, and structuring cultural material, but also new models of
value and its extraction, and new infrastructures of control. The political
dynamic of this restructuring, I suggest, may meaningfully be referred to as a
form of infrapolitics, insofar as the political work of mass digitization
often happens at the level of infrastructure, in the form of standardization,
dissent, or both. While mass digitization entwines the cultural politics of
analog artifacts and institutions with the infrapolitical logics of the new
digital economies and technologies, there is no clear-cut distinction between the analog and digital realms in this process. Rather, paraphrasing N.
Katherine Hayles, I suggest that mass digitization, like a Janus-figure,
“looks to past and future, simultaneously reinforcing and undermining both.”1

A persistent challenge in the study of mass digitization is the mutability of
the analytical object. The unstable nature of cultural memory archives is not
a new phenomenon. As Derrida points out, they have always been haunted by an
unintended instability, which he calls “archive fever.” Yet, mass digitization
appears to intensify this instability even further, both in its material and
cultural instantiations. Analog preservation practices that seek to stabilize
objects are in the digital realm replaced with dynamic processes of content
migration and software updates. Cultural memory objects become embedded in
what Wendy Chun has referred to as the enduring ephemerality of the digital as
well as the bleeding edge of obsolescence.2

Indeed, from the moment when the seed for this book was first planted to the
time of its publication, the landscape of mass digitization, and the political
battles waged on its maps, has changed considerably. Google Books—which a
decade ago attracted the attention, admiration, and animosity of all—recently
metamorphosed from a giant flood to a quiet trickle. After a spectacle of
press releases on quantitative milestones, epic legal battles, and public
criticisms, Google apparently lost interest in Google Books. Google’s gradual
abandonment of the project resembled more an act of prolonged public ghosting
than a clear-cut break-up, leaving the public to read between the lines
about where the company was headed: scanning activities dwindled; the Google
Books blog closed along with its Twitter feed; press releases dried up; staff
was laid off; and while scanning activities are still ongoing, they are
limited to works in the public domain, changing the scale considerably.3 One
commentator diagnosed the change of strategy as the demise of “the greatest
humanistic project of our time.”4 Others acknowledged in less dramatic terms
that while Google’s scanning activities may have stopped, its legacy lives on
and is still put to active use.5

In the present context, the important point to make is that a quiet life does
not necessarily equal death. Indeed, this is the lesson we learn from
attending to the subtle workings of infrastructure: the politics of
infrastructure is the politics of what goes on behind the curtains, not only what is launched on the front page. Thus, as one engineer notes when
confronted with the fate of Google Books, “We’re not focused on shiny features
and things that are very visible to users. … It’s more like behind-the-scenes
work and perfecting the technology—acquiring content, processing it properly
so that we can view the entire book online, and adjusting the search
algorithm.”6 This is a timely reminder that any analysis of the infrapolitics
of mass digitization has to tend not only to the visible and loud politics of
construction, but also the quiet and ongoing politics of infrastructure
maintenance. It makes no sense to write an obituary for Google Books if the
infrastructure is still at work. Moreover, the assemblatic nature of mass
digitization also demands that we do not stop at the immediate borders of a
project when making analytical claims about its infrapolitics. Thus, while
Google Books may have stopped in its tracks, other trains of mass digitization
have pulled up instead, carrying the project of mass digitization forward
toward new, divergent, and experimental sites. Google’s different engagements
with cultural digitization show that an analysis of the politics of Google’s
memory work needs to operate with an assemblatic method, rather than a
delineating approach.7 Europeana and DPLA are also mutable analytical objects, in both economic and cultural form. Europeana leads a precarious life from one EU budget framework to the next, and its cultural identity and software instantiations have transformed from a digital library, to a portal, to a platform in the course of only a decade. Last, but not least,
shadow libraries are mediating and multiplying cultural memory objects from
servers and mirror links that sometimes die just as quickly as they emerged.
The question of institutionalization matters greatly in this respect,
outlining what we might call a spectrum of contingency. If a mass digitization
project lives in the margins of institutions, such as in the case of many
shadow libraries, its infrastructure is often fraught with uncertainties. Less
precarious, but nonetheless tumultuous, are the corporate institutions with
their increasingly short market-driven lifespans. And, at the other end of the
spectrum, we find mass digitization projects embedded in bureaucratic
apparatuses whose lumbering budget processes provide publicly funded mass
digitization projects with more stable infrastructures.

The temporal dimension of mass digitization projects also raises important
questions about the horizon of cultural memory in material terms. Should mass digitization, one might ask, also mean the withering of analog cultural memory? This
question seems relevant not least in cases where institutions consider
digitization as a form of preservation that allows them to discard analog
artifacts once digitized. In digital form, we further have to contend with a
new temporal horizon of cultural memory itself, based not only on remembrance but also on anticipation, in the manner of “If you liked this, you might also like ….” Thus, while cultural memory objects link to objects of the
past, mass digitized cultural memory also gives rise to new methods of
prediction and preemption, for instance in the form of personalization. In
this anticipatory regime, cultural memory becomes subject to perpetual calculatory activities, processing affects and activities in terms of likelihoods and probabilistic outcomes.

Thus, cultural memory has today become embedded in new glocalized
infrastructures. On the one hand, these infrastructures present novel
opportunities. Cultural optimists have suggested that mass digitization has
the potential to give rise to new cosmopolitan public spheres untethered from
the straitjackets of national territorializing forces. On the other hand,
critics argue that there is little evidence that cosmopolitan dynamics are in
fact at work. Instead, new colonial and neoliberal platforms arise from a
complex infrastructural apparatus of private and public institutions and
become shaped by political, financial, and social struggles over
representation, control, and ownership of knowledge.

In summary, it is obvious that the scale of mass digitization, public and
private, licit and illicit, has transformed how we engage with texts, cultural
works, and cultural memory. People today have instant access to a wealth of
works that would previously have required large amounts of money, as well as
effort, to engage with. Most of us enjoy the new cultural freedoms we have
been given to roam the archives, collecting and exploring oddities along the
way, and making new connections between works that would previously have been
held separate by taxonomy, geography, and time in the labyrinthine material
and social infrastructures of cultural memory.

A special attraction of mass digitization no doubt lies in its unfathomable
scale and linked nature, and the fantasy and “spectacle of collecting.”8 The
new cultural environment allows the user to accelerate the pace of information by accessing key works instantly, as well as to ramble idly in the exotic back alleys of digitized culture. Mass digitized archives can be explored to
functional, hedonistic, and critical ends (sometimes all at the same time),
and can be used to exhume forgotten works, forgotten authors, and forgotten
topics. Within this paradigm, the user takes center stage—at least
discursively. Suddenly, a link made between a porn magazine and a Courbet
painting could well be a valued cultural connection instead of a frowned-upon
transgression in the halls of high culture. Users do not just download books;
they also upload new folksonomies, “ego-documents,” and new cultural
constellations, which are all welcomed in the name of “citizen science.”
Digitization also infuses texts with new life due to its new connective
properties that allow readers and writers to intimately and
exhibitionistically interact around cultural works, and it provides new ways
of engaging with texts as digital reading migrates toward service-based rather
than hardware-based models of consumption. Digitization allows users to
digitally collect works themselves and indulge in alluring archival riches in
new ways.

But mass digitization also gives rise to a range of new ethical, political,
aesthetic, and methodological questions concerning the spatio-temporality,
ownership, territoriality, re-use, and dissemination of cultural memory
artifacts. Some of those dimensions have been discussed in detail in the
present work and include questions about digital labor, platformization,
management of visibility, ownership, copyright, and other new forms of control, as well as processes of de- and recentralization and privatization. Others have only
been alluded to but continue to gain in relevance as processes of mass
digitization excavate and make public sensitive and contested archival
material. Thus, as the cultural memories and artifacts of indigenous
populations, colonized territories and other marginalized groups are brought
online, as well as artifacts that attest to the violent regimes of colonialism
and patriarchy, an attendant need has emerged for an ethics of care that goes
beyond simplistic calls for the right to access, to instead attend to the
sensitivity of the digitized material and the ways in which we encounter these
materials.

Combined, these issues show that mass digitization is far from a
straightforward technical affair. Rather, the productive dimensions of mass
digitization emerge from the rubble of disruptive and turbulent political
processes that violently dislocate established frontiers and power dynamics
and give rise to new ones that are yet to be interpreted. Within these
turbulent processes, the familiar narratives of empowered users collecting and
connecting works and ideas in new and transgressive ways all too often leave
out the simultaneous and integrated story of how the labyrinthine infrastructures of mass digitization also write themselves on the backs of the users, collecting them and their thoughts in the process, and subjecting them
to new economic logics and political regimes. As Lisa Nakamura reminds us, “by
availing ourselves of its networked virtual bookshelves to collect and display
our readerliness in a postprint age, we have become objects to be collected.”9
Thus, as we gather vintage images on Pinterest, collect books in Google Books,
and retweet sound files from Europeana, we do best not only to question the
cultural logic and ethics of these actions but also to remember that as we
collect and connect, we are also ourselves collected and connected.

If the power of mass digitization happens at the level of infrastructure,
political resistance will have to take the form of infrastructural
intervention. We play a role in the formulation of the ethics of such
interventions, and as such we have to be willing to abandon the predominant
tropes of scale, access, and acceleration in favor of an infrapolitics of
care—a politics that offers opportunities for mindful, slow, and focused
encounters.

## Notes

1. Hayles 1999, 17.
2. Chun 2008; Chun 2017.
3. Murrell 2017.
4. James Somers, “Torching the Modern-Day Library of Alexandria,” _The Atlantic_, April 20, 2017.
5. Jennifer Howard, “What Happened to Google’s Effort to Scan Millions of University Library Books?,” _EdSurge_, August 10, 2017, scan-millions-of-university-library-books>.
6. Scott Rosenberg, “How Google Books Got Lost,” _Wired_, November 4, 2017, /how-google-book-search-got-lost>.
7. What to make, for instance, of the new trend of employing Google’s neural networks to find one’s museum doppelgänger in the company’s image database? Or the fact that Google Cultural Institute is consistently turning out new cultural memory hacks, such as its cardboard VR glasses, its indoor mapping of museum spaces, and its gigapixel Art Camera, which reproduces artworks in uncanny detail? Or the expansion of its remit from cultural memory institutions to also encompass natural history museums? See, for example, Adrian Chen, “The Google Arts & Culture App and the Rise of the ‘Coded Gaze,’” _New Yorker_, January 26, 2018, the-rise-of-the-coded-gaze-doppelganger>.
8. Nakamura 2013, 240.
9. Ibid., 241.

# References

1. Abbate, Janet. 2012. _Recoding Gender: Women’s Changing Participation in Computing_. Cambridge, MA: MIT Press.
2. Abrahamsen, Rita, and Michael C. Williams. 2011. _Security beyond the State: Private Security in International Politics_. Cambridge: Cambridge University Press.
3. Adler-Nissen, Rebecca, and Thomas Gammeltoft-Hansen. 2008. _Sovereignty Games: Instrumentalizing State Sovereignty in Europe and Beyond_. New York: Palgrave Macmillan.
4. Agre, Philip E. 2000. “The Market Logic of Information.” _Knowledge, Technology & Policy_ 13 (3): 67–77.
5. Aiden, Erez, and Jean-Baptiste Michel. 2013. _Uncharted: Big Data as a Lens on Human Culture_. New York: Riverhead Books.
6. Ambati, Vamshi, N. Balakrishnan, Raj Reddy, Lakshmi Pratha, and C. V. Jawahar. 2006. “The Digital Library of India Project: Process, Policies and Architecture.” _CiteSeer_. .
7. Amoore, Louise. 2013. _The Politics of Possibility: Risk and Security beyond Probability_. Durham, NC: Duke University Press.
8. Anderson, Ben, and Colin McFarlane. 2011. “Assemblage and Geography.” _Area_ 43 (2): 124–127.
9. Anderson, Benedict. 1991. _Imagined Communities: Reflections on the Origin and Spread of Nationalism_. London: Verso.
10. Arms, William Y. 2000. _Digital Libraries_. Cambridge, MA: MIT Press.
11. Arvanitakis, James, and Martin Fredriksson. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
12. Association of Research Libraries. 2009. “ARL Encourages Members to Refrain from Signing Nondisclosure or Confidentiality Clauses.” _ARL News_ , June 5.
13. Auletta, Ken. 2009. _Googled: The End of the World As We Know It_. New York: Penguin Press.
14. Baker, Nicholson. 2002. _The Double Fold: Libraries and the Assault on Paper_. London: Vintage Books.
15. Barthes, Roland. 1977. “From Work to Text” and “The Grain of the Voice.” In _Image Music Text_ , ed. Roland Barthes. London: Fontana Press.
16. Barthes, Roland. 1981. _Camera Lucida: Reflections on Photography_. New York: Hill and Wang.
17. Bates, David W. 2002. _Enlightenment Aberrations: Error and Revolution in France_. Ithaca, NY: Cornell University Press.
18. Batt, William H. 1984. “Infrastructure: Etymology and Import.” _Journal of Professional Issues in Engineering_ 110 (1): 1–6.
19. Bawden, David, and Lyn Robinson. 2009. “The Dark Side of Information: Overload, Anxiety and Other Paradoxes and Pathologies.” _Journal of Information Science_ 35 (2): 180–191.
20. Beck, Ulrich. 1996. “World Risk Society as Cosmopolitan Society? Ecological Questions in a Framework of Manufactured Uncertainties.” _Theory, Culture & Society_ 13 (4): 1–32.
21. Beecher, Lyman. 2012. _Faith Once Delivered to the Saints: A Sermon Delivered at Worcester, Mass., Oct. 15, 1823._ Farmington Hills, MI: Gale, Sabin Americana.
22. Belder, Lucky. 2015. “Cultural Heritage Institutions as Entrepreneurs.” In _Cultivate!: Cultural Heritage Institutions, Copyright & Cultural Diversity in the European Union & Indonesia_, eds. M. de Cock Buning, R. W. Bruin, and Lucky Belder, 157–196. Amsterdam: DeLex.
23. Benjamin, Walter. 1985a. “Central Park.” _New German Critique, NGC_ 34 (Winter): 32–58.
24. Benjamin, Walter. 1985b. “The flaneur.” In _Charles Baudelaire: a Lyric Poet in the Era of High Capitalism_. Translated by Harry Zohn. London: Verso.
25. Benjamin, Walter. 1999. _The Arcades Project_. Cambridge, MA: Harvard University Press.
26. Béquet, Gaëlle. 2009. _Digital Library as a Controversy: Gallica vs Google_. Proceedings of the 9th Conference Libraries in the Digital Age (Dubrovnik, Zadar, May 25–29, 2009). .
27. Berardi, Franco, Gary Genosko, and Nicholas Thoburn. 2011. _After the Future_. Edinburgh, UK: AK Press.
28. Berk, Hillary L. 2015. “The Legalization of Emotion: Managing Risk by Managing Feelings in Contracts for Surrogate Labor.” _Law & Society Review_ 49 (1): 143–177.
29. Bishop, Catherine. 2016. “The Serendipity of Connectivity: Piecing Together Women’s Lives in the Digital Archive.” _Women’s History Review_ 26 (5): 766–780.
30. Bivort, Olivier. 2013. “Le romantisme et la ‘langue de Voltaire.’” _Revue Italienne d’études Françaises_ 3. DOI: 10.4000/rief.211.
31. Bjelić, Dušan I. 2016. _Intoxication, Modernity, and Colonialism: Freud’s Industrial Unconscious, Benjamin’s Hashish Mimesis_. New York: Palgrave Macmillan.
32. Blair, Ann, and Peter Stallybrass. 2010. “Mediating Information, 1450–1800”. In _This Is Enlightenment_ , eds. Clifford Siskin and William B. Warner. Chicago: University of Chicago Press.
33. Blair, Ann. 2003. “Reading Strategies for Coping with Information Overload ca. 1550–1700.” _Journal of the History of Ideas_ 64 (1): 11–28.
34. Bloom, Harold. 2009. _The Labyrinth_. New York: Bloom’s Literary Criticism.
35. Bodó, Balazs. 2015. “The Common Pathways of Samizdat and Piracy.” In _Samizdat: Between Practices and Representations_ , ed. V. Parisi. Budapest: CEU Institute for Advanced Study. Available at SSRN; .
36. Bodó, Balazs. 2016. “Libraries in the Post-Scarcity Era.” In _Copyrighting Creativity: Creative Values, Cultural Heritage Institutions and Systems of Intellectual Property_ , ed. Helle Porsdam. New York: Routledge.
37. Bogost, Ian, and Nick Montfort. 2009. “Platform Studies: Frequently Asked Questions.” _Proceedings of the Digital Arts and Culture Conference_.
38. Borges, Jorge Luis. 1978. “An Autobiographical Essay.” In _The Aleph and Other Stories, 1933–1969: Together with Commentaries and an Autobiographical Essay_. New York: E. P. Dutton.
39. Borges, Jorge Luis. 2001. “The Total Library.” In _The Total Library: Non-fiction 1922–1986_. London: Penguin.
40. Borges, Jorge Luis, and L. S. Dembo. 1970. “An Interview with Jorge Luis Borges.” _Contemporary Literature_ 11 (3): 315–325.
41. Borghi, Maurizio. 2012. “Knowledge, Information and Values in the Age of Mass Digitisation.” In _Value: Sources and Readings on a Key Concept of the Globalized World_ , ed. Ivo de Gennaro. Leiden, the Netherlands: Brill.
42. Borghi, Maurizio, and Stavroula Karapapa. 2013. _Copyright and Mass Digitization: A Cross-Jurisdictional Perspective_. Oxford: Oxford University Press.
43. Borgman, Christine L. 2015. _Big Data, Little Data, No Data: Scholarship in the Networked World_. Cambridge, MA: MIT Press.
44. Bottando, Evelyn. 2012. _Hedging the Commons: Google Books, Libraries, and Open Access to Knowledge_. Iowa City: University of Iowa.
45. Bowker, Geoffrey C., Karen Baker, Florence Millerand, and David Ribes. 2010. “Toward Information Infrastructure Studies: Ways of Knowing in a Networked Environment.” In _The International Handbook of Internet Research_ , eds. Jeremy Hunsinger, Lisbeth Klastrup, and Matthew Allen. Dordrecht, the Netherlands: Springer.
46. Bowker, Geoffrey C., and Susan L. Star. 1999. _Sorting Things Out: Classification and Its Consequences_. Cambridge, MA: MIT Press.
47. Brin, Sergey. 2009. “A Library to Last Forever.” _New York Times_ , October 8.
48. Brin, Sergey, and Lawrence Page. 1998. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” _Computer Networks and ISDN Systems_ 30 (1–7): 107. .
49. Buckholtz, Alison. 2016. “New Ideas for Financing American Infrastructure: A Conversation with Henry Petroski.” _World Bank Group, Public-Private Partnerships Blog_ , March 29.
50. Buck-Morss, Susan. 2006. “The flaneur, the Sandwichman and the Whore: The Politics of Loitering.” _New German Critique_ (39): 99–140.
51. Budds, Diana. 2016. “Rem Koolhaas: ‘Architecture Has a Serious Problem Today.’” _CoDesign_ 21 (May). .
52. Burkart, Patrick. 2014. _Pirate Politics: The New Information Policy Contests_. Cambridge, MA: MIT Press.
53. Burton, James, and Daisy Tam. 2016. “Towards a Parasitic Ethics.” _Theory, Culture & Society_ 33 (4): 103–125.
54. Busch, Lawrence. 2011. _Standards: Recipes for Reality_. Cambridge, MA: MIT Press.
55. Caley, Seth. 2017. “Digitization for the Masses: Taking Users Beyond Simple Searching in Nineteenth-Century Collections Online.” _Journal of Victorian Culture : JVC_ 22 (2): 248–255.
56. Cadogan, Garnette. 2016. “Walking While Black.” Literary Hub. July 8. .
57. Callon, Michel, Madeleine Akrich, Sophie Dubuisson-Quellier, Catherine Grandclément, Antoine Hennion, Bruno Latour, Alexandre Mallard, et al. 2016. _Sociologie des agencements marchands: Textes choisis_. Paris: Presses des Mines.
58. Cameron, Fiona, and Sarah Kenderdine. 2007. _Theorizing Digital Cultural Heritage: A Critical Discourse_. Cambridge, MA: MIT Press.
59. Canepi, Kitti, Becky Ryder, Michelle Sitko, and Catherine Weng. 2013. _Managing Microforms in the Digital Age_. Association for Library Collections & Technical Services. .
60. Carey, Quinn Ann. 2015. “Maksim Moshkov and lib.ru: Russia’s Own ‘Gutenberg.’” _TeleRead: Bring the E-Books Home_ , December 5.
61. Carpentier, Nico. 2011. _Media and Participation: A Site of Ideological-Democratic Struggle_. Bristol, UK: Intellect.
62. Carr, Nicholas. 2006. “The Engine of Serendipity.” _Rough Type_ , May 18.
63. Cassirer, Ernst. 1944. _An Essay on Man: An Introduction to a Philosophy of Human Culture_. New Haven, CT: Yale University Press.
64. Castells, Manuel. 1996a. _The Rise of the Network Society_. Malden, MA: Blackwell Publishers.
65. Castells, Manuel. 1996b. _The Informational City: Information Technology, Economic Restructuring, and the Urban-Regional Process_. Cambridge: Blackwell.
66. Castells, Manuel, and Gustavo Cardoso. 2012. “Piracy Cultures: Editorial Introduction.” _International Journal of Communication_ 6 (1): 826–833.
67. Chabal, Emile. 2013. “The Rise of the Anglo-Saxon: French Perceptions of the Anglo-American World in the Long Twentieth Century.” _French Politics, Culture & Society_ 31 (1): 24–46.
68. Chartier, Roger. 2004. “Languages, Books, and Reading from the Printed Word to the Digital Text.” _Critical Inquiry_ 31 (1): 133–152.
69. Chen, Ching-chih. 2005. “Digital Libraries and Universal Access in the 21st Century: Realities and Potential for US-China Collaboration.” In _Proceedings of the 3rd China-US Library Conference, Shanghai, China, March 22–25_ , 138–167. Beijing: National Library of China.
70. Chrisafis, Angelique. 2008. “Dante to Dialects: EU’s Online Renaissance.” _Guardian_ , November 21. .
71. Chun, Wendy H. K. 2006. _Control and Freedom: Power and Paranoia in the Age of Fiber Optics_. Cambridge, MA: MIT Press.
72. Chun, Wendy Hui Kyong. 2008. “The Enduring Ephemeral, or the Future Is a Memory.” _Critical Inquiry_ 35 (1): 148–171.
73. Chun, Wendy H. K. 2017. _Updating to Remain the Same_. Cambridge, MA: MIT Press.
74. Clarke, Michael Tavel. 2009. _These Days of Large Things: The Culture of Size in America, 1865–1930_. Ann Arbor: University of Michigan Press.
75. Cohen, Jerome Bernard. 2006. _The Triumph of Numbers: How Counting Shaped Modern Life_. New York: W.W. Norton.
76. Conway, Paul. 2010. “Preservation in the Age of Google: Digitization, Digital Preservation, and Dilemmas.” _The Library Quarterly: Information, Community, Policy_ 80 (1): 61–79.
77. Courant, Paul N. 2006. “Scholarship and Academic Libraries (and Their Kin) in the World of Google.” _First Monday_ 11 (8).
78. Coyle, Karen. 2006. “Mass Digitization of Books.” _Journal of Academic Librarianship_ 32 (6): 641–645.
79. Darnton, Robert. 2009. _The Case for Books: Past, Present, and Future_. New York: Public Affairs.
80. Daston, Lorraine. 2012. “The Sciences of the Archive.” _Osiris_ 27 (1): 156–187.
81. Davison, Graeme. 2009. “Speed-Relating: Family History in a Digital Age.” _History Australia_ 6 (2). .
82. Deegan, Marilyn, and Kathryn Sutherland. 2009. _Transferred Illusions: Digital Technology and the Forms of Print_. Farnham, UK: Ashgate.
83. de la Durantaye, Katharine. 2011. “H Is for Harmonization: The Google Book Search Settlement and Orphan Works Legislation in the European Union.” _New York Law School Law Review_ 55 (1): 157–174.
84. DeLanda, Manuel. 2006. _A New Philosophy of Society: Assemblage Theory and Social Complexity_. London: Continuum.
85. Deleuze, Gilles. 1997. “Postscript on Control Societies.” In _Negotiations 1972–1990_ , 177–182. New York: Columbia University Press.
86. Deleuze, Gilles. 2013. _Difference and Repetition_. London: Bloomsbury Academic.
87. Deleuze, Gilles, and Félix Guattari. 1987. _A Thousand Plateaus: Capitalism and Schizophrenia_. Minneapolis: University of Minnesota Press.
88. DeNardis, Laura. 2011. _Opening Standards: The Global Politics of Interoperability_. Cambridge, MA: MIT Press.
89. DeNardis, Laura. 2014. “The Social Media Challenge to Internet Governance.” In _Society and the Internet: How Networks of Information and Communication Are Changing Our Lives_ , eds. Mark Graham and William H. Dutton. Oxford: Oxford University Press.
90. Derrida, Jacques. 1996. _Archive Fever: A Freudian Impression_. Chicago: University of Chicago Press.
91. Derrida, Jacques. 2005. _Paper Machine_. Stanford, CA: Stanford University Press.
92. Dewey, Melvin. 1926. “Our Next Half-Century.” _Bulletin of the American Library Association_ 20 (10): 309–312.
93. Dinshaw, Carolyn. 2012. _How Soon Is Now?: Medieval Texts, Amateur Readers, and the Queerness of Time_. Durham, NC: Duke University Press.
94. Doob, Penelope Reed. 1994. _The Idea of the Labyrinth: From Classical Antiquity Through the Middle Ages_. Ithaca, NY: Cornell University Press.
95. Dörk, Marian, Sheelagh Carpendale, and Carey Williamson. 2011. “The Information flaneur: A Fresh Look at Information Seeking.” _Conference on Human Factors in Computing Systems—Proceedings_ , 1215–1224.
96. Doward, Jamie. 2009. “Angela Merkel Attacks Google’s Plans to Create a Global Online Library.” _Guardian_ , October 11. .
97. Duguid, Paul. 2007. “Inheritance and Loss? A Brief Survey of Google Books.” _First Monday_ 12 (8). .
98. Earnshaw, Rae A., and John Vince. 2007. _Digital Convergence: Libraries of the Future_. London: Springer.
99. Easley, David, and Jon Kleinberg. 2010. _Networks, Crowds, and Markets: Reasoning About a Highly Connected World_. New York: Cambridge University Press.
100. Easterling, Keller. 2014. _Extrastatecraft: The Power of Infrastructure Space_. Verso.
101. Eckstein, Lars, and Anja Schwarz. 2014. _Postcolonial Piracy: Media Distribution and Cultural Production in the Global South_. London: Bloomsbury.
102. Eco, Umberto. 2014. _The Name of the Rose_. Boston: Mariner Books.
103. Edwards, Paul N. 2003. “Infrastructure and Modernity: Force, Time and Social Organization in the History of Sociotechnical Systems.” In _Modernity and Technology_ , eds. Thomas J. Misa, Philip Brey, and Andrew Feenberg. Cambridge, MA: MIT Press.
104. Edwards, Paul N., Steven J. Jackson, Melissa K. Chalmers, Geoffrey C. Bowker, Christine L. Borgman, David Ribes, Matt Burton, and Scout Calvert. 2012. _Knowledge Infrastructures: Intellectual Frameworks and Research Challenges_. Report of a workshop sponsored by the National Science Foundation and the Sloan Foundation University of Michigan School of Information, May 25–28. .
105. Ensmenger, Nathan. 2012. _The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise_. Cambridge, MA: MIT Press.
106. Eyal, Nir. 2014. _Hooked: How to Build Habit-Forming Products_. Princeton, NJ: Princeton University Press.
107. Featherstone, Mike. 1998. “The flaneur, the City and Virtual Public Life.” _Urban Studies_ 35 (5–6): 909–925.
108. Featherstone, Mike. 2000. “Archiving Cultures.” _British Journal of Sociology_ 51 (1): 161–184.
109. Fiske, John. 1987. _Television Culture_. London: Methuen.
110. Fjeld, Per Olaf, and Sverre Fehn. 2009. _Sverre Fehn: The Pattern of Thoughts_. New York: Monacelli Press.
111. Flyverbom, Mikkel, Paul M. Leonardi, Cynthia Stohl, and Michael Stohl. 2016. “The Management of Visibilities in the Digital Age.” _International Journal of Communication_ 10 (1): 98–109.
112. Foucault, Michel. 2002. _Archaeology of Knowledge_. London: Routledge.
113. Foucault, Michel. 2004. _Death and the Labyrinth: The World of Raymond Roussel_. Continuum International Publishing Group Ltd.
114. Foucault, Michel. 2009. _Security, Territory, Population: Lectures at the College de France, 1977–1978_. Basingstoke, UK: Palgrave Macmillan.
115. Fredriksson, Martin, and James Arvanitakis. 2014. _Piracy: Leakages from Modernity_. Sacramento, CA: Litwin Books.
116. Freedgood, Elaine. 2013. “Divination.” _PMLA_ 128 (1): 221–225.
117. Fuchs, Christian. 2014. _Digital Labour and Karl Marx_. New York: Routledge.
118. Fuller, Matthew, and Andrew Goffey. 2012. _Evil Media_. Cambridge, MA: MIT Press.
119. Galloway, Alexander R. 2013a. _The Interface Effect_. Cambridge: Polity Press.
120. Galloway, Alexander R. 2013b. “The Poverty of Philosophy: Realism and Post-Fordism.” _Critical Inquiry_ 39 (2): 347–366.
121. Gardner, Carolyn Caffrey, and Gabriel J. Gardner. 2017. “Fast and Furious (at Publishers): The Motivations behind Crowdsourced Research Sharing.” _College & Research Libraries_ 78 (2): 131–149.
122. Garrett, Jeffrey. 1999. “Redefining Order in the German Library, 1775–1825.” _Eighteenth-Century Studies_ 33 (1): 103–123.
123. Gibbon, Peter, and Lasse F. Henriksen. 2012. “A Standard Fit for Neoliberalism.” _Comparative Studies in Society and History_ 54 (2): 275–307.
124. Giesler, Markus. 2006. “Consumer Gift Systems.” _Journal of Consumer Research_ 33 (2): 283–290.
125. Gießmann, Sebastian. 2015. _Medien der Kooperation_. Siegen, Germany: Universitätsverlag.
126. Gillespie, Tarleton. 2010. “The Politics of ‘Platforms.’” _New Media & Society_ 12 (3): 347–364.
127. Gillespie, Tarleton. 2017. “Algorithmically Recognizable: Santorum’s Google Problem, and Google’s Santorum Problem.” _Information Communication and Society_ 20 (1): 63–80.
128. Gladwell, Malcolm. 2000. _The Tipping Point: How Little Things Can Make a Big Difference_. Boston: Little, Brown.
129. Goldate, Steven. 1996. “The Cyberflaneur: Spaces and Places on the Internet.” _Art Monthly Australia_ 91:15–18.
130. Goldsmith, Jack L., and Tim Wu. 2006. _Who Controls the Internet?: Illusions of a Borderless World_. New York: Oxford University Press.
131. Goldsmith, Kenneth. 2007. “UbuWeb Wants to Be Free.” Last modified July 18, 2007. .
132. Golumbia, David. 2009. _The Cultural Logic of Computation_. Cambridge, MA: Harvard University Press.
133. Goriunova, Olga. 2012. _Art Platforms and Cultural Production on the Internet_. New York: Routledge.
134. Gradmann, Stephan. 2009. “Interoperability: A Key Concept for Large Scale, Persistent Digital Libraries.” 1st DL.org Workshop at 13th European Conference on Digital Libraries (ECDL).
135. Greene, Mark. 2010. “MPLP: It’s Not Just for Processing Anymore.” _American Archivist_ 73 (1): 175–203.
136. Grewal, David S. 2008. _Network Power: The Social Dynamics of Globalization_. New Haven, CT: Yale University Press.
137. Hacking, Ian. 1995. _Rewriting the Soul: Multiple Personality and the Sciences of Memory_. Princeton, NJ: Princeton University Press.
138. Hacking, Ian. 2010. _The Taming of Chance_. Cambridge: Cambridge University Press.
139. Hagel, John. 2012. _The Power of Pull: How Small Moves, Smartly Made, Can Set Big Things in Motion_. New York: Basic Books.
140. Haggerty, Kevin D., and Richard V. Ericson. 2000. “The Surveillant Assemblage.” _British Journal of Sociology_ 51 (4): 605–622.
141. Hall, Gary. 2008. _Digitize This Book!: The Politics of New Media, or Why We Need Open Access Now_. Minneapolis: University of Minnesota Press.
142. Hall, Mark, et al. 2012. “PATHS—Exploring Digital Cultural Heritage Spaces.” In _Theory and Practice of Digital Libraries. TPDL 2012_ , vol. 7489, 500–503. Lecture Notes in Computer Science. Berlin: Springer.
143. Hall, Stuart, and Fredric Jameson. 1990. “Clinging to the Wreckage: a Conversation.” _Marxism Today_ (September): 28–31.
144. Hardt, Michael, and Antonio Negri. 2007. _Empire_. Cambridge, MA: Harvard University Press.
145. Hardt, Michael, and Antonio Negri. 2009. _Commonwealth_. Cambridge, MA: Harvard University Press.
146. Hartmann, Maren. 1999. “The Unknown Artificial Metaphor or: The Difficult Process of Creation or Destruction.” In _Next Cyberfeminist International_ , ed. Cornelia Sollfrank. Hamburg, Germany: obn. .
147. Hartmann, Maren. 2004. _Technologies and Utopias: The Cyberflaneur and the Experience of “Being Online.”_ Munich: Fischer.
148. Hayles, N. Katherine. 1993. “Seductions of Cyberspace.” In _Lost in Cyberspace: Essays and Far-Fetched Tales_ , ed. Val Schaffner. Bridgehampton, NY: Bridge Works Pub. Co.
149. Hayles, N. Katherine. 2005. _My Mother Was a Computer: Digital Subjects and Literary Texts_. Chicago: University of Chicago Press.
150. Helmond, Anne. 2015. “The Platformization of the Web: Making Web Data Platform Ready.” _Social Media + Society_ 1 (2). .
151. Hicks, Marie. 2018. _Programmed Inequality: How Britain Discarded Women Technologists and Lost its Edge in Computing_. Cambridge, MA: MIT Press.
152. Higgins, Vaughan, and Wendy Larner. 2010. _Calculating the Social: Standards and the Reconfiguration of Governing_. Basingstoke, UK: Palgrave Macmillan.
153. Holzer, Boris, and P. S. Mads. 2003. “Rethinking Subpolitics: Beyond the ‘Iron Cage’ of Modern Politics?” _Theory, Culture & Society_ 20 (2): 79–102.
154. Huyssen, Andreas. 2015. _Miniature Metropolis: Literature in an Age of Photography and Film_. Cambridge, MA: Harvard University Press.
155. Imerito, Tom. 2009. “Electrifying Knowledge.” _Pittsburgh Quarterly Magazine_. Summer. .
156. Janssen, Olaf. D. 2011. “Digitizing All Dutch Books, Newspapers and Magazines—730 Million Pages in 20 Years—Storing It, and Getting It Out There.” In _Research and Advanced Technology for Digital Libraries_ , eds. S. Gradmann, F. Borri, C. Meghini, and H. Schuldt, 473–476. TPDL 2011. Lecture Notes in Computer Science, vol. 6966. Berlin: Springer.
157. Jasanoff, Sheila. 2013. “Epistemic Subsidiarity—Coexistence, Cosmopolitanism, Constitutionalism.” _European Journal of Risk Regulation_ 4 (2) 133–141.
158. Jeanneney, Jean N. 2007. _Google and the Myth of Universal Knowledge: A View from Europe_. Chicago: University of Chicago Press.
159. Jones, Elisabeth A., and Joseph W. Janes. 2010. “Anonymity in a World of Digital Books: Google Books, Privacy, and the Freedom to Read.” _Policy & Internet_ 2 (4): 43–75.
160. Jøsevold, Roger. 2016. “A National Library for the 21st Century—Knowledge and Cultural Heritage Online.” _Alexandria: The Journal of National and International Library and Information Issues_ 26 (1): 5–14.
161. Kang, Minsoo. 2011. _Sublime Dreams of Living Machines: The Automaton in the European Imagination_. Cambridge, MA: Harvard University Press.
162. Karaganis, Joe. 2011. _Media Piracy in Emerging Economies_. New York: Social Science Research Council.
163. Karaganis, Joe. 2018. _Shadow Libraries: Access to Educational Materials in Global Higher Education_. Cambridge, MA: MIT Press.
164. Kaufman, Peter B., and Jeff Ubois. 2007. “Good Terms—Improving Commercial-Noncommercial Partnerships for Mass Digitization.” _D-Lib Magazine_ 13 (11–12). .
165. Kelley, Robin D. G. 1994. _Race Rebels: Culture, Politics, and the Black Working Class_. New York: Free Press.
166. Kelly, Kevin. 1994. _Out of Control: The Rise of Neo-Biological Civilization_. Reading, MA: Addison-Wesley.
167. Kenney, Anne R., Nancy Y. McGovern, Ida T. Martinez, and Lance J. Heidig. 2003. “Google Meets eBay: What Academic Librarians Can Learn from Alternative Information Providers.” _D-Lib Magazine_ 9 (6).
168. Kiriya, Ilya. 2012. “The Culture of Subversion and Russian Media Landscape.” _International Journal of Communication_ 6 (1): 446–466.
169. Koevoets, Sanne. 2013. _Into the Labyrinth of Knowledge and Power: The Library as a Gendered Space in the Western Imaginary_. Utrecht, the Netherlands: Utrecht University.
170. Kolko, Joyce. 1988. _Restructuring the World Economy_. New York: Pantheon Books.
171. Komaromi, Ann. 2012. “Samizdat and Soviet Dissident Publics.” _Slavic Review_ 71 (1): 70–90.
172. Kramer, Bianca. 2016a. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 1.” _I&M / I&O 2.0_ , June 20.
173. Kramer, Bianca. 2016b. “Sci-Hub: Access or Convenience? A Utrecht Case Study, Part 2.” .
174. Krysa, Joasia. 2006. _Curating Immateriality: The Work of the Curator in the Age of Network Systems_. Brooklyn, NY: Autonomedia.
175. Kurgan, Laura. 2013. _Close up at a Distance: Mapping, Technology, and Politics_. Brooklyn, NY: Zone Books.
176. Labi, Aisha. 2005. “France Plans to Digitize Its ‘Cultural Patrimony’ and Defy Google’s ‘Domination.’” _Chronicle of Higher Education_ (March): 21.
177. Larkin, Brian. 2008. _Signal and Noise: Media, Infrastructure, and Urban Culture in Nigeria_. Durham, NC: Duke University Press.
178. Latour, Bruno. 2005. _Reassembling the Social: An Introduction to Actor-Network Theory_. Oxford: Oxford University Press.
179. Latour, Bruno. 2007. “Beware, Your Imagination Leaves Digital Traces.” _Times Higher Literary Supplement_ , April 6.
180. Latour, Bruno. 2008. _What Is the Style of Matters of Concern?: Two Lectures in Empirical Philosophy_. Assen, the Netherlands: Koninklijke Van Gorcum.
181. Lavoie, Brian F., and Lorcan Dempsey. 2004. “Thirteen Ways of Looking at Digital Preservation.” _D-Lib Magazine_ 10 (July/August). .
182. Leetaru, Kalev. 2008. “Mass Book Digitization: The Deeper Story of Google Books and the Open Content Alliance.” _First Monday_ 13 (10). .
183. Lefebvre, Henri. 2009. _The Production of Space_. Malden, MA: Blackwell.
184. Lefler, Rebecca. 2007. “‘Europeana’ Ready for Maiden Voyage.” _Hollywood Reporter_ , March 23. .
185. Lessig, Lawrence. 2005a. “Lawrence Lessig on Interoperability.” _Creative Commons_ , October 19. .
186. Lessig, Lawrence. 2005b. _Free Culture: The Nature and Future of Creativity_. New York: Penguin Books.
187. Lessig, Lawrence. 2010. “For the Love of Culture—Will All of Our Literary Heritage Be Available to Us in the Future? Google, Copyright, and the Fate of American Books.” _New Republic_ 24.
188. Levy, Steven. 2011. _In the Plex: How Google Thinks, Works, and Shapes Our Lives_. New York: Simon & Schuster.
189. Lewis, Jane. 1987. _Labour and Love: Women’s Experience of Home and Family, 1850–1940_. Oxford: Blackwell.
190. Liang, Lawrence. 2009. “Piracy, Creativity and Infrastructure: Rethinking Access to Culture,” July 20.
191. Liu, Jean. 2013. “Interactions: The Numbers Behind #ICanHazPDF.” _Altmetric_ , May 9. .
192. Locke, John. 2003. _Two Treatises of Government: And a Letter Concerning Toleration_. New Haven, CT: Yale University Press.
193. Martin, Andrew, and George Ross. 2004. _Euros and Europeans: Monetary Integration and the European Model of Society_. New York: Cambridge University Press.
194. Mbembe, Achille. 2002. “The Power of the Archive and its Limits.” In _Refiguring the Archive_ , ed. Carolyn Hamilton. Cape Town, South Africa: David Philip.
195. McDonough, Jerome. 2009. “XML, Interoperability and the Social Construction of Markup Languages: The Library Example.” _Digital Humanities Quarterly_ 3 (3). .
196. McPherson, Tara. 2012. “U.S. Operating Systems at Mid-Century: The Intertwining of Race and UNIX.” In _Race After the Internet_ , eds. Lisa Nakamura and Peter Chow-White. New York: Routledge.
197. Meckler, Alan M. 1982. _Micropublishing: A History of Scholarly Micropublishing in America, 1938–1980_. Westport, CT: Greenwood Press.
198. Medak, Tomislav, et al. 2016. _The Radiated Book_. .
199. Merton, Robert K., and Elinor Barber. 2004. _The Travels and Adventures of Serendipity: A Study in Sociological Semantics and the Sociology of Science_. Princeton, NJ: Princeton University Press.
200. Meunier, Sophie. 2003. “France’s Double-Talk on Globalization.” _French Politics, Culture & Society_ 21:20–34.
201. Meunier, Sophie. 2007. “The Distinctiveness of French Anti-Americanism.” In _Anti-Americanisms in World Politics_ , eds. Peter J. Katzenstein and Robert O. Keohane. Ithaca, NY: Cornell University Press.
202. Michel, Jean-Baptiste, et al. 2011. “Quantitative Analysis of Culture Using Millions of Digitized Books.” _Science_ 331 (6014):176–182.
203. Midbon, Mark. 1980. “Capitalism, Liberty, and the Development of the Library.” _Journal of Library History_ 15 (2): 188–198.
204. Miksa, Francis L. 1983. _Melvil Dewey: The Man and the Classification_. Albany, NY: Forest Press.
205. Mitropoulos, Angela. 2012. _Contract and Contagion: From Biopolitics to Oikonomia_. Brooklyn, NY: Minor Compositions.
206. Mjør, Kåre Johan. 2009. “The Online Library and the Classic Literary Canon in Post-Soviet Russia: Some Observations on ‘The Fundamental Electronic Library of Russian Literature and Folklore.’” _Digital Icons: Studies in Russian, Eurasian and Central European New Media_ 1 (2): 83–99.
207. Montagnani, Maria Lillà, and Maurizio Borghi. 2008. “Promises and Pitfalls of the European Copyright Law Harmonisation Process.” In _The European Union and the Culture Industries: Regulation and the Public Interest_ , ed. David Ward. Aldershot, UK: Ashgate.
208. Murrell, Mary. 2017. “Unpacking Google’s Library.” _Limn_ (6). .
209. Nakamura, Lisa. 2002. _Cybertypes: Race, Ethnicity, and Identity on the Internet_. New York: Routledge.
210. Nakamura, Lisa. 2013. “‘Words with Friends’: Socially Networked Reading on Goodreads.” _PMLA_ 128 (1): 238–243.
211. Nava, Mica, and Alan O’Shea. 1996. _Modern Times: Reflections on a Century of English Modernity_ , 38–76. London: Routledge.
212. Negroponte, Nicholas. 1995. _Being Digital_. New York: Knopf.
213. Neubert, Michael. 2008. “Google’s Mass Digitization of Russian-Language Books.” _Slavic & East European Information Resources_ 9 (1): 53–62.
214. Nicholson, William. 1819. “Platform.” In _British Encyclopedia: Or, Dictionary of Arts and Sciences, Comprising an Accurate and Popular View of the Present Improved State of Human Knowledge_. Philadelphia: Mitchell, Ames, and White.
215. Niggemann, Elisabeth. 2011. _The New Renaissance: Report of the “Comité Des Sages.”_ Brussels: Comité des Sages.
216. Noble, Safiya Umoja, and Brendesha M. Tynes. 2016. _The Intersectional Internet: Race, Sex, Class and Culture Online_. New York: Peter Lang Publishing.
217. Nord, Deborah Epstein. 1995. _Walking the Victorian Streets: Women, Representation, and the City_. Ithaca, NY: Cornell University Press.
218. Norvig, Peter. 2012. “Colorless Green Ideas Learn Furiously: Chomsky and the Two Cultures of Statistical Learning.” _Significance_ (August): 30–33.
219. O’Neill, Paul, and Søren Andreasen. 2011. _Curating Subjects_. London: Open Editions.
220. O’Neill, Paul, and Mick Wilson. 2010. _Curating and the Educational Turn_. London: Open Editions.
221. Ong, Aihwa, and Stephen J. Collier. 2005. _Global Assemblages: Technology, Politics, and Ethics As Anthropological Problems_. Malden, MA: Blackwell Pub.
222. Otlet, Paul, and W. Boyd Rayward. 1990. _International Organisation and Dissemination of Knowledge_. Amsterdam: Elsevier.
223. Palfrey, John G. 2015. _Bibliotech: Why Libraries Matter More Than Ever in the Age of Google_. New York: Basic Books.
224. Palfrey, John G., and Urs Gasser. 2012. _Interop: The Promise and Perils of Highly Interconnected Systems_. New York: Basic Books.
225. Parisi, Luciana. 2004. _Abstract Sex: Philosophy, Bio-Technology and the Mutations of Desire_. London: Continuum.
226. Patra, Nihar K., Bharat Kumar, and Ashis K. Pani. 2014. _Progressive Trends in Electronic Resource Management in Libraries_. Hershey, PA: Information Science Reference.
227. Paulheim, Heiko. 2015. “What the Adoption of Schema.org Tells About Linked Open Data.” _CEUR Workshop Proceedings_ 1362:85–90.
228. Peatling, G. K. 2004. “Public Libraries and National Identity in Britain, 1850–1919.” _Library History_ 20 (1): 33–47.
229. Pechenick, Eitan A., Christopher M. Danforth, Peter S. Dodds, and Alain Barrat. 2015. “Characterizing the Google Books Corpus: Strong Limits to Inferences of Socio-Cultural and Linguistic Evolution.” _PLoS One_ 10 (10).
230. Peters, John Durham. 2015. _The Marvelous Clouds: Toward a Philosophy of Elemental Media_. Chicago: University of Chicago Press.
231. Pfanner, Eric. 2011. “Quietly, Google Puts History Online.” _New York Times_ , November 20.
232. Pfanner, Eric. 2012. “Google to Announce Venture With Belgian Museum.” _New York Times_ , March 12. .
233. Philip, Kavita. 2005. “What Is a Technological Author? The Pirate Function and Intellectual Property.” _Postcolonial Studies: Culture, Politics, Economy_ 8 (2): 199–218.
234. Pine, Joseph B., and James H. Gilmore. 2011. _The Experience Economy_. Boston: Harvard Business Press.
235. Ping-Huang, Marianne. 2016. “Archival Biases and Cross-Sharing.” _NTIK_ 5 (1): 55–56.
236. Pollock, Griselda. 1998. “Modernity and the Spaces of Femininity.” In _Vision and Difference: Femininity, Feminism and Histories of Art_ , ed. Griselda Pollock, 245–256. London: Routledge & Kegan Paul.
237. Ponte, Stefano, Peter Gibbon, and Jakob Vestergaard. 2011. _Governing Through Standards: Origins, Drivers and Limitations_. Basingstoke, UK: Palgrave Macmillan.
238. Pörksen, Uwe. 1995. _Plastic Words: The Tyranny of a Modular Language_. University Park: Pennsylvania State University Press.
239. Proctor, Nancy. 2013. “Crowdsourcing—an Introduction: From Public Goods to Public Good.” _Curator_ 56 (1): 105–106.
240. Puar, Jasbir K. 2007. _Terrorist Assemblages: Homonationalism in Queer Times_. Durham, NC: Duke University Press.
241. Purdon, James. 2016. _Modernist Informatics: Literature, Information, and the State_. New York: Oxford University Press.
242. Putnam, Robert D. 1988. “Diplomacy and Domestic Politics: The Logic of Two-Level Games.” _International Organization_ 42 (3): 427–460.
243. Rabinow, Paul. 2003. _Anthropos Today: Reflections on Modern Equipment_. Princeton, NJ: Princeton University Press.
244. Rabinow, Paul, and Michel Foucault. 2011. _The Accompaniment: Assembling the Contemporary_. Chicago: University of Chicago Press.
245. Raddick, M., et al. 2009. “Galaxy Zoo: Exploring the Motivations of Citizen Science Volunteers.” _Astronomy Education Review_ 9 (1).
246. Ratto, Matt, and Boler Megan. 2014. _DIY Citizenship: Critical Making and Social Media_. Cambridge, MA: MIT Press.
247. Reichardt, Jasia. 1969. _Cybernetic Serendipity: The Computer and the Arts_. New York: Frederick A. Praeger.
248. Ridge, Mia. 2013. “From Tagging to Theorizing: Deepening Engagement with Cultural Heritage through Crowdsourcing.” _Curator_ 56 (4): 435–450.
249. Rieger, Oya Y. 2008. _Preservation in the Age of Large-Scale Digitization: A White Paper_. Washington, DC: Council on Library and Information Resources.
250. Rodekamp, Volker, and Bernhard Graf. 2012. _Museen zwischen Qualität und Relevanz: Denkschrift zur Lage der Museen_. Berlin: G+H Verlag.
251. Rogers, Richard. 2012. “Mapping and the Politics of Web Space.” _Theory, Culture & Society_ 29:193–219.
252. Romeo, Fiona, and Lucinda Blaser. 2011. “Bringing Citizen Scientists and Historians Together.” Museums and the Web. .
253. Russell, Andrew L. 2014. _Open Standards and the Digital Age: History, Ideology, and Networks_. New York: Cambridge University Press.
254. Said, Edward. 1983. “Traveling Theory.” In _The World, the Text, and the Critic_ , 226–247. Cambridge, MA: Harvard University Press.
255. Samimian-Darash, Limor, and Paul Rabinow. 2015. _Modes of Uncertainty: Anthropological Cases_. Chicago: The University of Chicago Press.
256. Samuel, Henry. 2009. “Nicolas Sarkozy Fights Google over Classic Books.” _The Telegraph_ , December 14. .
257. Samuelson, Pamela. 2010. “Google Book Search and the Future of Books in Cyberspace.” _Minnesota Law Review_ 94 (5): 1308–1374.
258. Samuelson, Pamela. 2011. “Why the Google Book Settlement Failed—and What Comes Next?” _Communications of the ACM_ 54 (11): 29–31.
259. Samuelson, Pamela. 2014. “Mass Digitization as Fair Use.” _Communications of the ACM_ 57 (3): 20–22.
260. Samyn, Jeanette. 2012. “Anti-Anti-Parasitism.” _The New Inquiry_ , September 18.
261. Sanderhoff, Merethe. 2014. _Sharing Is Caring: Åbenhed Og Deling I Kulturarvssektoren_. Copenhagen: Statens Museum for Kunst.
262. Sassen, Saskia. 2008. _Territory, Authority, Rights: From Medieval to Global Assemblages_. Princeton, NJ: Princeton University Press.
263. Schmidt, Henrike. 2009. “‘Holy Cow’ and ‘Eternal Flame’: Russian Online Libraries.” _Kultura_ 1, 4–8. .
264. Schmitz, Dawn. 2008. _The Seamless Cyberinfrastructure: The Challenges of Studying Users of Mass Digitization and Institutional Repositories_. Washington, DC: Digital Library Federation, Council on Library and Information Resources.
265. Schonfeld, Roger, and Liam Sweeney. 2017. “Inclusion, Diversity, and Equity: Members of the Association of Research Libraries.” _Ithaka S+R_ , August 30. .
266. Schüll, Natasha Dow. 2014. _Addiction by Design: Machine Gambling in Las Vegas_. Princeton, NJ: Princeton University Press.
267. Scott, James C. 2009. _Domination and the Arts of Resistance: Hidden Transcripts_. New Haven, CT: Yale University Press.
268. Seddon, Nicholas. 2013. _Government Contracts: Federal, State and Local_. Annandale, Australia: The Federation Press.
269. Serres, Michel. 2013. _The Parasite_. Minneapolis: University of Minnesota Press.
270. Sherratt, Tim. 2013. “From Portals to Platforms: Building New Frameworks for User Engagement.” National Library of Australia, November 5. .
271. Shukaitis, Stevphen. 2009. “Infrapolitics and the Nomadic Educational Machine.” In _Contemporary Anarchist Studies: An Introductory Anthology of Anarchy in the Academy_ , ed. Randall Amster. London: Routledge.
272. Smalls, James. 2003. “‘Race’ As Spectacle in Late-Nineteenth-Century French Art and Popular Culture.” _French Historical Studies_ 26 (2): 351–382.
273. Snyder, Francis. 2002. “Governing Economic Globalisation: Global Legal Pluralism and EU Law.” In _Regional and Global Regulation of International Trade_ , 1–47. Oxford: Hart Publishing.
274. Solá-Morales, Rubió I. 1999. _Differences: Topographies of Contemporary Architecture_. Cambridge, MA: MIT Press.
275. Sollfrank, Cornelia. 2015. “Nothing New Needs to Be Created. Kenneth Goldsmith’s Claim to Uncreativity.” In _No Internet—No Art. A Lunch Byte Anthology_ , ed. Melanie Bühler. Eindhoven: Onomatopee. .
276. Somers, Margaret R. 2008. _Genealogies of Citizenship: Markets, Statelessness, and the Right to Have Rights_. Cambridge: Cambridge University Press.
277. Sparks, Peter G. 1992. _A Roundtable on Mass Deacidification._ Report on a Meeting Held September 12–13, 1991, in Andover, Massachusetts. Washington, DC: Association of Research Libraries.
278. Spivak, Gayatri C. 2000. “Megacity.” _Grey Room_ 1 (1): 8–25.
279. Srnicek, Nick. 2017. _Platform Capitalism_. Cambridge: Polity Press.
280. Stanley, Amy D. 1998. _From Bondage to Contract: Wage Labor, Marriage, and the Market in the Age of Slave Emancipation_. Cambridge: Cambridge University Press.
281. Stelmakh, Valeriya D. 2008. “Book Saturation and Book Starvation: The Difficult Road to a Modern Library System.” _Kultura_ , September 4.
282. Stiegler, Bernard. n.d. “Amateur.” Ars Industrialis: Association internationale pour une politique industrielle des technologies de l’esprit. .
283. Star, Susan Leigh. 1999. “The Ethnography of Infrastructure.” _American Behavioral Scientist_ 43 (3): 377–391.
284. Steyerl, Hito. 2012. “Defense of the Poor Image.” In _The Wretched of the Screen_. Berlin, Germany: Sternberg Presss.
285. Stiegler, Bernard. 2003. _Aimer, s’aimer, nous aimer_. Paris: Éditions Galilée.
286. Suchman, Mark C. 2003. “The Contract as Social Artifact.” _Law & Society Review_ 37 (1): 91–142.
287. Sumner, William G. 1952. _What Social Classes Owe to Each Other_. Caldwell, ID: Caxton Printers.
288. Tate, Jay. 2001. “National Varieties of Standardization.” In _Varieties of Capitalism: The Institutional Foundations of Comparative Advantage_ , ed. Peter A. Hall and David Soskice. Oxford: Oxford University Press.
289. Tawa, Michael. 2012. “Limits of Fluxion.” In _Architecture in the Space of Flows_ , eds. Andrew Ballantyne and Chris Smith. Abingdon, UK: Routledge.
290. Tay, J. S. W., and R. H. Parker. 1990. “Measuring International Harmonization and Standardization.” _Abacus_ 26 (1): 71–88.
291. Tenen, Dennis, and Maxwell Henry Foxman. 2014. “ _Book Piracy as Peer Preservation_.” Columbia University Academic Commons. doi: 10.7916/D8W66JHS.
292. Teubner, Gunther. 1997. _Global Law Without a State_. Aldershot, UK: Dartmouth.
293. Thussu, Daya K. 2007. _Media on the Move: Global Flow and Contra-Flow_. London: Routledge.
294. Tiffen, Belinda. 2007. “Recording the Nation: Nationalism and the History of the National Library of Australia.” _Australian Library Journal_ 56 (3): 342.
295. Tsilas, Nicos. 2011. “Open Innovation and Interoperability.” In _Opening Standards: The Global Politics of Interoperability_ , ed. Laura DeNardis. Cambridge, MA: MIT Press.
296. Tygstrup, Frederik. 2014. “The Politics of Symbolic Forms.” In _Cultural Ways of Worldmaking: Media and Narratives_ , ed. Ansgar Nünning, Vera Nünning, and Birgit Neumann. Berlin: De Gruyter.
297. Vaidhyanathan, Siva. 2011. _The Googlization of Everything: (and Why We Should Worry)_. Berkeley: University of California Press.
298. van Dijck, José. 2012. “Facebook as a Tool for Producing Sociality and Connectivity.” _Television & New Media_ 13 (2): 160–176.
299. Veel, Kristin. 2003. “The Irreducibility of Space: Labyrinths, Cities, Cyberspace.” _Diacritics_ 33:151–172.
300. Venn, Couze. 2006. “The Collection.” _Theory, Culture & Society_ 23:35–40.
301. Verhoeven, Deb. 2016. “As Luck Would Have It: Serendipity and Solace in Digital Research Infrastructure.” _Feminist Media Histories_ 2 (1): 7–28.
302. Vise, David A., and Mark Malseed. 2005. _The Google Story_. New York: Delacorte Press.
303. Voltaire. 1786. _Dictionaire Philosophique_ (Oeuvres Completes de Voltaire, Tome Trente-Huiteme). Gotha, Germany: Chez Charles Guillaume Ettinger, Librarie.
304. Vul, Vladimir Abramovich. 2003. “Who and Why? Bibliotechnoye Delo,” _Librarianship_ 2 (2). .
305. Walker, Kevin. 2006. “Story Structures: Building Narrative Trails in Museums.” In _Technology-Mediated Narrative Environments for Learning_ , eds. G. Dettori, T. Giannetti, A. Paiva, and A. Vaz, 103–114. Dordrecht: Sense Publishers.
306. Walker, Neil. 2003. _Sovereignty in Transition_. Oxford: Hart.
307. Weigel, Moira. 2016. _Labor of Love: The Invention of Dating_. New York: Farrar, Straus and Giroux.
308. Weiss, Andrew, and Ryan James. 2012. “Google Books’ Coverage of Hawai’i and Pacific Books.” _Proceedings of the American Society for Information Science and Technology_ 49 (1): 1–3.
309. Weizman, Eyal. 2006. “Lethal Theory.” _Log_ 7:53–77.
310. Wilson, Elizabeth. 1992. “The Invisible flaneur.” _New Left Review_ 191 (January–February): 90–110.
311. Wolf, Gary. 2003. “The Great Library of Amazonia.” _Wired_ , November.
312. Wolff, Janet. 1985. “The Invisible Flâneuse. Women and the Literature of Modernity.” _Theory, Culture & Society_ 2 (3): 37–46.
313. Yeo, Richard R. 2003. “A Solution to the Multitude of Books: Ephraim Chambers’s ‘Cyclopaedia’ (1728) as ‘the Best Book in the Universe.’” _Journal of the History of Ideas_ 64 (1): 61–72.
314. Young, Michael D. 1988. _The Metronomic Society: Natural Rhythms and Human Timetables_. Cambridge, MA: Harvard University Press.
315. Yurchak, Alexei. 1997. “The Cynical Reason of Late Socialism: Power, Pretense, and the Anekdot.” _Public Culture_ 9 (2): 161–188.
316. Yurchak, Alexei. 2006. _Everything Was Forever, Until It Was No More: The Last Soviet Generation_. Princeton, NJ: Princeton University Press.
317. Yurchak, Alexei. 2008. “Suspending the Political: Late Soviet Artistic Experiments on the Margins of the State.” _Poetics Today_ 29 (4): 713–733.
318. Žižek, Slavoj. 2009. _The Plague of Fantasies_. London: Verso.
319. Zuckerman, Ethan. 2008. “Serendipity, Echo Chambers, and the Front Page.” _Nieman Reports_ 62 (4). .

© 2018 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was set in ITC Stone Sans Std and ITC Stone Serif Std by Toppan Best-set Premedia Limited. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Names: Thylstrup, Nanna Bonde, author.
Title: The politics of mass digitization / Nanna Bonde Thylstrup.
Description: Cambridge, MA : The MIT Press, [2018] | Includes bibliographical references and index.
Identifiers: LCCN 2018010472 | ISBN 9780262039017 (hardcover : alk. paper) | eISBN 9780262350044
Subjects: LCSH: Library materials--Digitization. | Archival materials--Digitization. | Copyright and digital preservation.
Classification: LCC Z701.3.D54 T49 2018 | DDC 025.8/4--dc23 | LC record available at
Medak
Death and Survival of Dead Labor
2016


# Death and Survival of Dead Labor

by Tomislav Medak — Jan 08, 2016

![](https://schloss-post.com/content/uploads/public-library_wuerttembergischer-kunstverein-600x450.jpg)

»Public Library. Rethinking the Infrastructures of Knowledge Production«
Exhibition at Württembergischer Kunstverein Stuttgart, 2014

**The present-day social model of authorship is co-substantive with the
normative regime of copyright. Copyright’s avowed role is to triangulate a
balance between the rights of authors, cultural industries, and the public.
Its legal foundation is in the natural right of the author over the products
of intellectual labor. The recurrent claims of the death of the author,
disputing the primacy of the author over the work, have failed to do much to
displace the dominant understanding of the artwork as an extension of the
personality of the author.**

The structuralist criticism positing an impersonal structuring structure within which the work operates; the hypertextual criticism dissolving the boundaries of the work in the arborescent web of referentiality; or the remix culture’s hypostatisation of the collective and re-appropriative nature of all creativity – while changing the meaning we ascribe to the works of culture – have all failed to leave an impact on how the production of works is normativized and regulated.

And yet the nexus author–work–copyright has transformed in fundamental ways – though in ways opposite to what these openings in our social epistemology would suggest. The figure of the creator, with the attendant apotheosis of individual creativity and originality, is nowadays mobilized and animated more forcefully than ever before by the efforts to expand the exclusive realm of exploitation of the work under copyright. That forcefulness, though, speaks of a deep-seated neurosis, intimating that the purported balance might not be what copyright advocates claim it to be. Much is revealed as we descend into the hidden abode of production.

## _Of Copyright and Authorship_

Copyright has principally an economic function: to unambiguously establish individualized property in the products of intellectual labor. Once the legal title is unambiguously assigned, there is a property holder with whose consent the contracting, commodification, and marketing of the work can proceed. In that respect, copyright is not very different from the formal freedom granted to the laborer to contract out their own labor power as a commodity to capital, which then allows capital to maximize productivity and appropriate the products of the worker’s labor – in Marx’s terms, »dead labor.« In fact, the analogy between the contracting of labor power and the contracting of intellectual work does not stop there: they also share a common history.

The liberalism of rights and the commodification of labor emerged from the context of waning absolutism and incipient capitalism in the Europe of the seventeenth and eighteenth centuries. Before publishers and authors could have their monopoly over the exploitation of their publications instituted in the form of copyright, they had to obtain a privilege to print a book from royal censors. The first printing privileges granted to publishers, for instance in early seventeenth-century Great Britain, came with the obligation to facilitate censorship and control over the dissemination of the growing body of printed matter in the aftermath of the invention of movable-type printing.

The evolution of the regulatory mechanisms of contemporary copyright from the context of absolutism and early capitalism comes into full relief if one considers how peer review emerged as a self-censoring mechanism within the Royal Academy and the Académie des sciences. [1] The internal peer review process helped the academies maintain the privilege to print the works of their members, which was given to them only on the condition that the works they published limited themselves to matters of science and made no political statements that could sour the benevolence of the monarch. Once they expanded to printing in their almanacs, journals, and books the works of authors outside the academy ranks, they extended both their scientific authority and their regulating function to the entire nascent field of modern science.

The transition from the privilege tied to the publisher to the privilege tied to the natural person of the author would unfold only later. In Great Britain this occurred as the guild of printers, the Stationers’ Company, failed to secure an extension of its printing privilege and, in order to continue in the business of printing books, decided to advocate a copyright for authors instead, which resulted in the passing of the Copyright Act of 1709, also known as the Statute of Anne. Thus the author became the central figure in the regulation of literary and scientific production. Not only did the author now receive exclusive rights to the work; the author was also made – as Foucault has famously analyzed – the identifiable subject of scrutiny, censorship, and political sanction by the absolutist state or the church.

And yet, although the romantic author now took center stage, the economic compensation for the work under copyright regulation would long remain no more than honorary. Until well into the eighteenth century, literary writing and creativity in general were regarded as resulting from divine inspiration, not from the individual genius of the author. Money earned in the growing business with books mostly stayed in the hands of the publishers, while the author received an honorarium, a flat sum that served as a »token of esteem.« [2] It was only as authors grew increasingly vocal in demanding material and political independence from patronage and authority that they started to make claims for rightful remuneration.

## _Of Compensation and Exploitation_

The moment of full-blown affirmation of the romantic author-function marks a historic moment of redistribution and of compromise between the right of publishers to the economic exploitation of works and the right of authors to rightful compensation for their works. Economically, this was made possible by the expanding market for printed books in the eighteenth and nineteenth centuries; politically, it was catalyzed by the growing desire for autonomy of scientific and literary production from the system of feudal patronage and censorship in gradually liberalizing modern capitalist societies. The autonomy of production was, however, substantially coupled to production for the market. The irenic balance could not last unobstructed: once the production of culture and science was subsumed under the exigencies of the market, it had to follow the laws of commodification and competition that no commodity production can escape.

With the development of the big corporation and monopoly capitalism, [3] the purported balance between the author and the publisher, the innovator or scientist and the company, labor and capital, public circulation and the pressures of monetization has become unhinged. While legislative expansions of protections, court decisions, and multilateral treaties are legitimated on the basis of the rights of creators, they have become the economic basis on which the monopolies dominating the commanding heights of the global economy protect their dominant position in the world market. The levels of concentration in the industries with large portfolios of various forms of intellectual property rights are staggering. The film industry is a US$88 billion industry dominated by six major studios. The recorded music industry is an almost US$20 billion industry dominated by three major labels. The publishing industry is a US$120 billion industry, where the leading ten groups earn more in revenues than the next 40 largest publishing groups. Among patent-holding industries the situation is a little more diversified, but big patent portfolios in general dictate the dynamics of market power.

Academic publishing in particular throws the state of play into stark relief. It is a US$10 billion industry dominated by five publishers and financed up to 75% from library subscriptions. It is notorious for extreme year-on-year profit margins – in the case of Reed Elsevier regularly well over 20%, with Taylor & Francis, Springer, and Wiley-Blackwell only just lagging behind. [4] Given that the work of contributing authors is not paid but financed by their institutions (provided they are employed at an institution), and that publications nowadays mostly take the form of electronic articles licensed under subscription to libraries for temporary use rather than sold as printed copies, the public interest could be served at a much lower cost by leaving commercial closed-access publishers out of the equation. However, given the entrenched position of these publishers and their control over the moral economy of reputation in academia, the public disservice they do cannot be addressed within the historic ambit of copyright. It requires politicization.

## _Of Law and Politics_

When we look back on the history of copyright, before there was legality there
was legitimacy. In the context of an almost completely naturalized and
harmonized global regulation of copyright the political question of legitimacy
seems to be no longer on the table. An illegal copy is an object of exchange
that unsettles the existing economies of cultural production. And yet,
copyright nowadays marks a production model that serves the power of
appropriation from the author and market power of the publishers much more
than the labor of cultural producers. Hence the illegal copy is again an
object begging the question as to what do we do at a rare juncture when a
historic opening presents itself to reorganize how a good, such as knowledge
and culture, is produced and distributed in a society. We are at such a
juncture, a juncture where the regime regulating legality and illegality might
be opened to the questioning of its legitimacy or illegitimacy.

1. For a more detailed account of this development, as well as for the history of printing privilege in Great Britain, see Mario Biagioli: »From Book Censorship to Academic Peer Review,« in: _Emergences: Journal for the Study of Media & Composite Cultures_ 12, no. 1 [2002], pp. 11–45.
2. The transition of authorship from honorific to professional is traced in Martha Woodmansee: _The Author, Art, and the Market: Rereading the History of Aesthetics_. New York 1996.
3. When referencing monopoly markets, we do not imply purely monopolistic markets, where one company is the only enterprise selling a product, but rather markets where a small number of companies hold most of the market. In monopolistic competition, oligopolies profit from not competing on prices. Rather, »all the main players are large enough to survive a price war, and all it would do is shrink the size of the industry revenue pie that the firms are fighting over. Indeed, the price in an oligopolistic industry will tend to gravitate toward what it would be in a pure monopoly, so the contenders are fighting for slices of the largest possible revenue pie.« Robert W. McChesney: _Digital Disconnect: How Capitalism Is Turning the Internet Against Democracy_. New York 2013, pp. 37f. The immediate effect of monopolistic competition in culture is that consumption is shaped to conform to the needs of the large enterprise, i.e. to accommodate economies of scale, narrowing the range of styles, expressions, and artists published and promoted in public.
4. Vincent Larivière, Stefanie Haustein, and Philippe Mongeon: »The Oligopoly of Academic Publishers in the Digital Era,« in: _PLoS ONE_ 10, no. 6 [June 2015]: e0127502, doi:10.1371/journal.pone.0127502.


[Tomislav Medak](https://schloss-post.com/person/tomislav-medak/),
Zagreb/Croatia — Performing Arts, Solitude fellow 2013–2015

Tomislav Medak is a philosopher with interests in contemporary political
philosophy, media theory and aesthetics. He is coordinating the theory program
and publishing activities of the Multimedia Institute/MAMA (Zagreb/Croatia),
and works in parallel with the Zagreb-based theatre collective BADco.


Constant
Tracks in Electronic fields
2009


[photo pages – captioned figures only]

figure 1 E-traces: In the reductive world of Web 2.0 there are no insignificant actors because once added up, everybody counts
figure 3 Dmytri Kleiner: Web 2.0 is a business model, it capitalises on community created values
figure 4 Christophe Lazaro: Sociologists and anthropologists are trying to stick the notion of ‘social network’ to the specificities of digital networks, that is to say to their horizontal character
figure 5 The Robot Syndicat: Destined to survive collectively through multi-agent systems and colonies of social robots
figure 12 Destination port: Every single passing of a visitor triggers the projection of a simultaneous registration
figure 19 Doppelgänger: The electronic double (duplicate, twin) in a society of control and surveillance
figure 20 CookieSensus: Cookies found on washingtonpost.com ...
figure 21 ... and cookies sent by tacodo.net
figure 22 Image Tracer: Images and data accumulate into layers as the query is repeated over time
figure 23 Shmoogle: In one click, Google hierarchy crumbles down
figure 24 Jussi Parikka: We move onto a baroque world, a mode of folding and enveloping new ways of perception and movement
figure 26 Extended Speakers: A netting of thin metal wires suspends from the ceiling of the haunted house in the La Bellone courtyard
figure 80 Elgaland-Vargaland: Since November 2007, the Embassy permanently resides in La Bellone
figure 81 Ambassadors Yves Poliart and Wendy Van Wynsberghe
figure 87 It could be the result of psychic echoes from the past, psychokinesis, or the thoughts of aliens or nature spirits
figure 89 Manu Luksch: Our digital selves are many dimensional, alert, unforgetting
figure 103 Audio-geographic dérive: Listening to the electro-magnetic spectrum of Brussels
figure 113 Michael Murtaugh: Rather than talking about leaning forward or backward, a more useful split might be between reading and writing
figure 115 Adrian Mackenzie: This opacity reflects the sheer number of operations that have to be compressed into code ...
figure 116 ... in order for digital signal processing to work
figure 119 Sabine Prokhoris and Simon Hecquet: What happens precisely when one decides to consider these margins, these ‘supplementen’, as fullgrown creations – neither slave nor attachment?
figure 120 Praticable: Making the body as a locus of knowledge production tangible
figure 126 Mutual Motions Video Library: A physical exchange between existing imagery, real-time interpretation, experiences and context
figure 127 Modern Times: His gestures are burlesque responses to the adversity in his life, or just plain ‘exuberant’
figure 131 Michael Terry: We really want to have lots of people looking at it, and considering it, and thinking about the implications
figure 133 Görkem Çetin: There’s a lack of a usability bug reporting tool which can be used to submit, store, modify and maintain user-submitted videos, audio files and pictures
figure 134 Simon Yuill: It is here where contingency and notation meet, but it is here also that error enters
figure 144 Séverine Dusollier: I think amongst many of the movements that are made, most are not ‘a work’, they are subconscious movements, movements that are translations of gestures that are simply banal or necessary
figure 146 Sadie Plant: It is this kind of deep collectivity, this profound sense of micro-collaboration, which has often been tapped into

Verbindingen/Jonctions 10
Tracks in electr(on)ic fields
EN, NL, FR

Contents

Introduction – 25 (EN, NL, FR)
E-Traces – 35 (EN, NL, FR)
Nicolas Malevé, Michel Cleempoel, E-traces en contexte – 38 (NL, FR)
Dmytri Kleiner, Brian Wyrick, InfoEnclosure 2.0 – 47 (NL)
Christophe Lazaro – 58
Marc Wathieu – 65
Michel Cleempoel, Destination port – 70 (FR)
Métamorphoz, Doppelgänger – 71 (EN, NL, FR)
Andrea Fiore, Cookiesensus – 73 (FR, NL, EN)
Tsila Hassine, Shmoogle and Tracer – 75 (EN)
Around us, magnetic fields resonate unseen waves – 77 (EN, NL, FR)
Jussi Parikka, Insects, Affects and Imagining New Sensoriums – 81 (EN)
Pierre Berthet, Concert with various extended objects – 93 (EN, NL, FR)
Leiff Elgren, CM von Hausswolff, Elgaland-Vargaland – 95 (EN, NL, FR)
CM von Hausswolff, Guy-Marc Hinant, Ghost Machinery – 98 (EN, NL)
Read Feel Feed Real – 101 (EN, NL, FR)
Manu Luksch, Mukul Patel, Faceless: Chasing the Data Shadow – 104 (EN)
Julien Ottavi, Electromagnetic spectrum Research code 0608 – 119 (FR)
Michael Murtaugh, Active Archives or: What’s wrong with the YouTube documentary? – 131 (EN)
Mutual Motions – 139 (EN, NL, FR)
Femke Snelting – 143 (NL)
Adrian Mackenzie, Centres of envelopment and intensive movement in digital signal processing – 155 (EN)
Elpueblodechina, El Curanto – 174 (EN)
Alice Chauchat, Frédéric Gies – 181
Dance (notation) – 184 (EN)
Sabine Prokhoris, Simon Hecquet – 188
Mutual Motions Video Library – 198 (EN, NL, FR)
Inès Rabadan, Does the repetition of a gesture irrevocably lead to madness? – 215
Michael Terry (interview), Data analysis as a discourse – 217 (EN)
… – 233
… – 254
Sadie Plant, A Situated Report – 275 (EN)
Biographies – 287 (EN, NL, FR)
License register – 311
Vocabulary – 313
The Making-of – 323 (EN)
Colophon – 331
Introduction
Tracks in electr(on)ic fields documents the 10th edition of Verbindingen/Jonctions of the same name, a bi-annual multidisciplinary festival organised by Constant, association for arts and media. It is a meeting point for a diverse public that, from an artistic, activist and/or theoretical perspective, is interested in experimental reflections on technological culture.

Not for the first time, but during this edition more explicitly than ever, we put the question of the interaction between body and technology on the table. How to think about the actual effects of surveillance, the ubiquitous presence of cameras and public safety procedures that can only regard individuals as an amalgam of analysable data? What is the status of ‘identity’ when it appears both elusive and unchangeable? How are we conditioned by the technology we use? What is the relationship between commitment and reward, between flexibility of work and a healthy life? Which traces does technology leave in our thinking, behavior, our routine movements? And what residue do we leave behind ourselves on electr(on)ic fields through our presence in forums, social platforms, databases, log files?

The dual nature of the term ‘notation’ formed an important source of inspiration. Systems that choreographers, composers and computer programmers use to record ideas and observations can then be interpreted as instruction, as a command which puts an actor, software, performing artist or machine into motion. From punch card to musical scale, from programming language to Laban notation, we were interested in the standards and protocols needed to make such documents work. It was the reason
to organise the festival inside the documentation, library and workshop for theater and dance, ‘maison du spectacle’ La Bellone. Located in the heart of Brussels, La Bellone offered hospitality to a diverse group of thinkers, dancers, artists, programmers, interface designers and others, and its meticulously renovated 17th-century façade formed the perfect backdrop for this intense program.

Throughout the festival we worked with a number of themes, meant not to isolate areas of thinking but rather to serve as ‘spider threads’ interlinking various projects:

E-traces (p. 35) subjected the current reality of Web 2.0 to a number of critical considerations. How do we regain control over the abundant data correlation that mega-companies such as Google and Yahoo produce in exchange for our usage of their services? How do we understand ‘service’ when we are confronted with their corporate Janus face: on one side a friendly interface, on the other Machiavellian user licenses?

Around us, magnetic fields resonate unseen waves (p. 77) took the ghostly presence of technology as a starting point, and Read Feel Feed Real (p. 101) listened to unheard sounds and looked behind the curtains in Do-It-Yourself walks and urban interventions. Through the analysis of radio waves and their use in artistic installations, by making electro-magnetic fields heard, we made unexplained phenomena tangible.

As machines learn about bodies, bodies learn about machines, and the movements that emerge as a result are not readily reduced to cause and effect. Mutual movements (p. 139) started in the kitchen, the perfect place to
reconsider human-machine configurations, without having
to separate these from everyday life and the patterns that
are ingrained in it. Would a different idea of ‘user' also
change our approach to ‘use'?
At the end of the adventure Sadie Plant remarked in
her ‘situated report' on Tracks in electr(on)ic fields (p.
275): “It is ultimately very difficult to distinguish between
the user and the developer, or the expert and the amateur. The experiment, the research, the development is
always happening in the kitchen, in the bedroom, on the
bus, using your mobile or using your computer. (...) this
sense of repetitive activity, which is done in many trades
and many lines, and that really is the deep unconscious
history of human activity. And arguably that's where the
most interesting developments happen, albeit in a very unsung, unseen, often almost hidden way. It is this kind of
deep collectivity, this profound sense of micro-collaboration, which has often been tapped into.”
Constant, October 2009

E-Traces

How does the information we enter into search engines circulate, and what happens to our data entered in the social networking sites, health records, news sites, forums and chat services we use? Who is interested in it? How does the ‘market’ of the electronic profile function? These questions constitute the framework of the E-traces project.

For this, we started to work on Yoogle!, an online game. This game, still in an early phase of development, will allow users to play with the parameters of the Web 2.0 economy and to exchange roles between the different actors of this economy. We presented a first demo of this game, accompanied by a public discussion with lawyers, artists and developers. The discussion and lecture were meant to analyse more deeply the mechanisms of this economy behind its friendly interface – the speculation on profiling, the exploitation of free labor – but also to develop the scenario of the game further.


DMYTRI KLEINER, BRIAN WYRICK
License: Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use encouraged. Attribution optional.
Text first published in English in Mute: http://www.metamute.org/InfoEnclosure-2.0. For translations in
Polish and Portuguese, see http://www.telekommunisten.net



MICHEL CLEEMPOEL
License: Free Art License

Destination port
During the Jonctions festival, Destination port registered the flux of visitors in the entrance hall of La Bellone. Every single passing of a visitor triggered the projection of a simultaneous registration in the hall, in superposition with formerly captured images of visitors, thus creating temporary and unlikely encounters between persons.
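The principle at work – detect a passer-by, then project the live capture blended with an earlier one – can be sketched in a few lines. What follows is a speculative reconstruction, not the installation’s actual code; the camera index and motion threshold are assumptions made for illustration:

```python
# Speculative sketch of the Destination port principle, assuming a single
# webcam and OpenCV; the threshold is illustrative, not the artist's value.
import random
import cv2

cap = cv2.VideoCapture(0)                        # camera in the entrance hall
subtractor = cv2.createBackgroundSubtractorMOG2()
archive = []                                     # frames of former visitors

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)               # foreground pixels = movement
    passing = cv2.countNonZero(mask) > 50_000    # someone walks past
    if passing and archive:
        past = random.choice(archive)            # a formerly captured visitor
        # superimpose the live registration on the archived one
        out = cv2.addWeighted(frame, 0.5, past, 0.5, 0)
    else:
        out = frame
    if passing:
        archive.append(frame.copy())             # keep this passage for later
    cv2.imshow("Destination port", out)          # the projection in the hall
    if cv2.waitKey(1) == 27:                     # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```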

Doppelgänger
Born in September 2001, and represented here by Valérie Cordy and Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary association that creates installations, spectacles and transdisciplinary performances mixing artistic experiments and digital practices. With the project Doppelgänger, the collective focuses on the theme of the electronic double (duplicate, twin) in a society of control and surveillance.
“Our electronic identity, symbol of this new society of control, duplicates our organic and social identity. But isn’t this legal obligation to be assigned a unique, stable and unforgeable identity, in the end, a danger for our fundamental freedom to claim identities which are irreducibly multiple for each of us?”

ANDREA FIORE
License: Creative Commons Attribution-NonCommercial-ShareAlike

Cookiecensus
Although still largely perceived as a private activity, web surfing leaves persistent trails. While users browse and interact through the web, sites watch them read, write, chat and buy. Even a few basic web publishing experiences suffice to conclude that most web servers record ‘by default’ their entire clickstream in persistent ‘log’ files.
‘Web cookies’ are a sort of digital label sent by websites to web browsers in order to assign them a unique identity and automatically recognize their users over several visits. Today this technology, introduced with the first version of the Netscape browser in 1994, constitutes the de facto standard upon which a wide range of interactive functionalities are built that were not conceived in the early web protocol design. Think, for example, of user accounts and authentication, personalized content and layouts, e-commerce and shopping carts.
While it has undeniably contributed to the development and the social spread of the new medium, web cookie technology is still to be considered problematic. Especially the so-called ‘third-party cookies’ issue – a technological loophole enabling marketeers and advertisement firms to invisibly track users over large networks of syndicated websites – has been the object of a serious controversy involving a varied set of actors and stakeholders.
Cookiecensus is a software prototype: a wannabe info tool for studying electronic surveillance in one of its natively digital environments. Its core functionality consists of mapping and analyzing third-party cookie distribution patterns within a given web of sites, in order to identify its trackers and its network of syndicated sites. A further feature of the tool is the possibility to inspect the content of a web page in relation to its third-party cookie sources.

figure 20 Cookies found on Washingtonpost.com; figure 21 Cookies sent by Tacodo.net


It is an attempt to deconstruct the perceived unity and consistency
of web pages by making their underlying content assemblage and their
related attention flows visible.
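In outline, such a census can be approximated with very little code. The sketch below is an assumption-laden illustration, not Fiore’s prototype: it fetches a page, extracts the hosts of embedded resources with a naive regular expression, and records which third-party hosts answer with Set-Cookie headers.

```python
# Minimal illustration of a third-party cookie census (not the actual
# Cookiecensus tool): list which embedded hosts try to set cookies.
import re
import urllib.parse
import urllib.request

def cookie_census(page_url):
    page_host = urllib.parse.urlparse(page_url).hostname
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    # Naive extraction of embedded resource URLs (scripts, images, iframes).
    resources = set(re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html))
    found = {}
    for res in resources:
        host = urllib.parse.urlparse(res).hostname
        if host is None or host == page_host:
            continue                      # first-party: not of interest here
        try:
            reply = urllib.request.urlopen(res, timeout=5)
        except OSError:
            continue
        cookies = reply.headers.get_all("Set-Cookie") or []
        if cookies:                       # a third-party tracker candidate
            found.setdefault(host, []).extend(cookies)
    return found

if __name__ == "__main__":
    for host, cookies in cookie_census("https://www.washingtonpost.com").items():
        print(host, len(cookies), "cookie(s)")
```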


TSILA HASSINE
License: Free Art License

Shmoogle and Tracer
What is Shmoogle? Shmoogle is a Google randomizer. In one
click, Google hierarchy crumbles down. Results that were usually exiled to pages beyond user attention get their ‘15 seconds of PageRank
fame'. While also being a useful tool for internet research, Shmoogle
is a comment, a constant reminder that the Google order is not necessarily ‘the good order', and that sometimes chaos is more revealing
than order. While Google serves the users with information ready for
immediate consumption, Shmoogle forces its users to scroll down and
make their own choices. If Google is a search engine, then Shmoogle
is a research engine.
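The mechanism is simple enough to state directly. Here is a minimal sketch of the idea – an assumption about the mechanism, not Hassine’s implementation – which takes whatever ranked list a search engine returns and discards the ranking:

```python
# A Google "randomizer" in miniature: return results in random order,
# so that rank no longer encodes authority (a sketch, not Shmoogle itself).
import random

def shmoogle(ranked_results):
    shuffled = list(ranked_results)   # leave the original ranking intact
    random.shuffle(shuffled)          # uniform random permutation
    return shuffled

ranked = [f"result {i}" for i in range(1, 11)]
print(shmoogle(ranked))               # result 9 gets its 15 seconds of fame
```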


In Image Tracer, order is important. Image Tracer is a collaboration between the artist group De Geuzen and myself. Tracer was born out of our mutual interest in the traces images leave behind on their networked paths. In Tracer, images and data accumulate into layers as the query is repeated over time. Boundaries between image and data are blurred further as the image is deliberately reduced to thumbnail size and emphasis is placed on the image’s context: the neighbouring images and the metadata related to that image. Image Tracer builds up an archive of juxtaposed snapshots of the web. As these layers accumulate, patterns and processes reveal themselves, and trace a historiography in the making.
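The layering principle lends itself to a compact sketch as well. The following is a hedged illustration of the archive structure described above – one dated layer appended per repetition of the query – and not the actual De Geuzen/Hassine tool; the file name and record fields are invented for the example:

```python
# Append one dated "layer" per repeated query to a JSON-lines archive,
# so patterns can later be read across the accumulated snapshots.
import datetime
import json

def record_layer(archive_path, query, thumbnails):
    layer = {
        "query": query,
        "taken": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # emphasis on context: neighbours and metadata, not the full image
        "thumbnails": thumbnails,
    }
    with open(archive_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(layer) + "\n")

record_layer("tracer.jsonl", "example query",
             [{"thumb": "img1.jpg", "neighbours": ["img0.jpg", "img2.jpg"]}])
```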


Around us, magnetic fields resonate unseen waves

In computer terminology many words refer to chimerical beings such as bots, daemons and ghosts. Dr. Konstantin Raudive, a Latvian psychologist, and the Swedish film producer Friedrich Jurgenson went a step further and explored the territory of electronic voice phenomena (EVP): speech or speech-like sounds heard on electronic recordings, although they were not present at the time the recording was made. Some believe these could be of paranormal origin.
For this part of the V/J10 programme, we chose a metaphorical approach, working with bodiless entities and hidden processes, finding inspiration in The Embassy of Elgaland-Vargaland, the semi-fictional kingdoms consisting of all Border Territories (Geographical, Mental & Digital). These kingdoms were founded by Leiff Elgren and CM von Hausswolff. Elgren stated: “All dead people are inhabitants of the country Elgaland-Vargaland, unless they stated that they did not want to be an inhabitant”.


JUSSI PARIKKA
License: Creative Commons Attribution-NonCommercial-ShareAlike

Insects, Affects and Imagining New Sensoriums
A Media Archaeological Rewiring from Geniuses to Animals
An insect media artist or a media archaeologist imagining a potential weird medium might end up with something that sounds quite mundane to us humans. For the insect probe head, the question of what it feels like to perceive with two eyes and ears and move with two legs would be a novel one, instead of the multiple legs and compound eyes that it has to use to manoeuvre through space. The uncanny formations often used in science fiction to describe something radically inhuman (like the killing-machine insects of the Alien movies) differ from the human being in their anatomy, behaviour and morals. The human brain might be a much more efficient problem solver, the human hands are quite handy tool-making metatools, and the human body could be seen as an original form of any model of technics, as Ernst Kapp already suggested at the end of the 19th century. But still, such realisations do not take away the fascination that emerges from the question of what it would be like to move, perceive and think differently; what a becoming-animal entails.
I am of course taking my cue here from the philosopher Manuel DeLanda, who in his 1991 book War in the Age of Intelligent Machines asked what the history of warfare would look like from the viewpoint of a future robot historian. An exercise perhaps in creative imagination, DeLanda’s question also served other ends relating to the physics of self-organization. My point is not to discuss DeLanda, or the history of war machines, but I want to pick an idea from this kind of approach, an idea that could be integrated into media archaeological considerations concerning actual or imaginary media. As already said, imagining alternative worlds is not the endpoint of this exercise
in ‘insect media’, but a way to dip into an alternative understanding of media and technology, where such general categories as ‘humans’ and ‘machines’ are merely the endpoints of intensive flows, capacities, tendencies and functions. Such a stance takes much of its force from Gilles Deleuze’s philosophical ontology of abstract materialism, which focuses primarily on a Spinozian ontology of intensities, capacities and functions. In this sense, the human being is not a distinct being in the world with secondary qualities, but a “capacity to signify, exchange, and communicate”, as Claire Colebrook has pointed out in her article ‘The Sense of Space’ (Postmodern Culture). This opens up a new agenda focused not on ‘beings’ and their tools, but on the capacities and tendencies that construct and create beings – a move which emphasizes Deleuze’s interest in the pre-Kantian worlds of the baroque. In addition, this move includes a multiplication of the subjectivities and objects of the world, a certain autonomy of the material world beyond the privileged observer. As everybody who has done gardening knows: there is a world teeming with life outside the human sphere, with every bush and tree being a whole society in itself.
To put it shortly, still following Colebrook’s recent writing on the concept of affect, what Deleuze found in the baroque worlds of windowless monads was a capacity of perception that does not stem from a universalising idea of perception in general. Man, or any general condition of perception, is not the primary privileged position of perception; rather, perceptions and creations of space and temporality are multiplied in the numerous monadic worlds, a distributed perception of a kind that according to Deleuze later found resonance in the philosophy of A. N. Whitehead. For Whitehead, the perceiving subject is more akin to a ‘superject’, a second-order construction from the sum of its perceptions. It is the world perceived that makes up superjects, and based on the variations of perceptions also alternative worlds. Baroque worlds, argues Deleuze in his book Le Pli from 1988, are characterised by the primacy of variation and perspectivism, which is a much more radical notion than a relativist idea of different subjects having different perspectives on the world. Instead, “the subject will be what comes to the point of view”, and where “the point of view is not what varies with the subject, at least in the first instance; it is, to
the contrary, the condition in which an eventual subject apprehends a variation (metamorphosis)...”.
Now why this focus on philosophy, this short excursion that merely sketches some themes around variation and imagination? What I am after is an idea of how to smuggle certain notions of variation, modulation and perception into considerations of media culture, media archaeology and potentially also imaginary media, where imaginary media become less a matter of a Lacanian mirror phase looking for utopian communication offering unity than a deterritorialising way of understanding the distributed ontology of the world and media technologies. Variation and imagination become something other than the imaginations of a point of view – quite the contrary, imagination and variation give rise to points of view, which opens up a whole new agenda of a past paradoxically not determined and, even further, of a future open to variation. This would mean taking into account perceptions unheard of, unfelt, unthought-of, but still real in their intensive potentiality; a becoming-other of the sensorium, so to speak. Hence imagination becomes not a human characteristic but an epistemological tool that interfaces the analytics of media theory and history with the world of animals and novel affects.
Imaginary media and variations at the heart of media cultural modes of seeing and hearing have been discussed in various recent books. The most obvious one is The Book of Imaginary Media, edited by Eric Kluitenberg. According to the introduction, all media consist of a real and an imagined part, a functional coupling of material characteristics and discursive dreams which fabricate the crucial features of modern communication, tied intimately to utopian ideals. Imaginary media – or actual media imagined beyond their real capacities – have been dreamed up to compensate for insufficient communication, a realisation that Kluitenberg elaborates with the argument that “central to the archaeology of imaginary media in the end are not the machines, but the human aspirations that more often than not are left unresolved by the machines...”. Powers of imagination are then based in the human beings doing the imagining, in the human powers able to transcend the actual and factual ways of perception and to

grasp the unseen, unheard and unthought-of media creations. Variation remains connected to the principle of the central point where variation is perceived.
Talking of the primacy of variation, we are easily reminded of Siegfried Zielinski’s application of the idea of ‘variantology’ as an ‘anarchaeology of media’, a task dedicated to the primacy of variation resisting the homogeneous drive of commercialised media spheres. Excavating dreams of past geniuses, from Empedocles to Athanasius Kircher’s cosmic machines and communication networks to Ernst Florens Friedrich Chladni’s visualisation of sound, Zielinski has been underlining the creative potential in the exercise of imagining media. In this context, he defines the term ‘imaginary media’ in threefold fashion in his chapter in The Book of Imaginary Media:
• Untimely media/apparatus/machines: “Media devised and designed either much too late or much too early...”
• Conceptual media/apparatus/machines: “Artefacts that were only ever sketched as models... but never actually built.”
• Impossible media/apparatus/machines: “Imaginary media in the true sense, by which I mean hermetic and hermeneutic machines... they cannot actually be built, and whose implied meanings nonetheless have an impact on the factual world of media.”
A bit reminiscent of the baroque idea, variation is primary, claims Zielinski. Whereas the capitalist-orientated consumer media culture is working towards a psychopathia medialis of homogenized media-technological environments, variantology is committed to promoting heterogeneity, finding dynamic moments of the media archaeological past, and excavating radical experiments that push the limits of what can be seen, heard and thought. Variantology is then implicitly suggested as a mode of ontogenesis, of bringing forth, of modulation and change – an active mode of creation instead of distanced contemplation.
Indeed, the aim of promoting diversity is a much-welcomed one, but I would like to propose a slight adjustment to this task, something that I engage under the banner of ‘insect media’. Whereas Zielinski and much of the existing media archaeological research still
starts off from the human world of male inventor-geniuses, I propose a slightly more distributed look at the media archaeology of affects, capacities, modes of perception and movement, which are primarily not attached to a specific substance (animal, technology) but, since the 19th century at least, refer to a certain passage, a vector from animals to technology and vice versa. Here, a mode of baroque thought, a thought tuned in terms of variations, becomes unravelled with the help of an animality that is not to be seen as a metaphor but as a metamorphosis: as ‘teachings’ in weird perceptions, novel ways of moving, new ways of sensing, opening up to the world of sensations and contracting them. Instead of looking for variations through the inventions of people, we can turn to the ‘storehouses of invention’ of, for example, insects, which from the 19th century on were introduced as an alien form of media in themselves. Next I will elaborate how we can use these tiny animals as philosophical and media archaeological tools to address media and technology as intensities that signal weird sensory experiences.
Novel Sensoriums

During the latter half of the 19th century, insects were seen as uncanny but powerful forms of media in themselves, capable of weird sensory and kinaesthetic experiences. Examples range from popular newspaper discourse to scientific measurements and such early best-sellers as An Introduction to Entomology; or, Elements of the Natural History of Insects: Comprising an Account of Noxious and Useful Insects, of Their Metamorphoses, Hybernation, Instinct (1815–1826) by William Kirby and William Spence.
Since the 19th century, insect and animal affects are found not only in biology but also in art, technology and popular culture. In this sense, the 19th-century interest in insects offers a valuable perspective on the intertwining of biology (entomology), technology and art, where the basics of perception are radically detached from human-centred models and moved towards the animal kingdom. In addition, this science-technology-art trio presents a challenge to rethink the forces which form what we habitually refer to as ‘media’ as modes of perception. By expanding our notions of ‘media’ from the technological
apparatuses to the more comprehensive assemblages that connect biological, technological, social and aesthetic issues, we are also able to bring forth novel contexts for the contemporary analysis and design of media systems. In a way, then, the concept of the ‘insect’ functions here as a displacing and deterritorialising force that questions where, and under what kind of conditions, we approach media technologies. This is perhaps an approach that moves beyond a focus on technology per se, but still does not remain blind to the material forces of the world. It presents an alternative to the ‘substance approaches’ that start from a stability or a ground like ‘technology’ or ‘humans’. It is my claim that Deleuzian biophilosophy, which has taken elements from Spinozian ontology, von Uexküll’s ethology, Whitehead’s ideas, as well as Simondon’s notions of individuation, is able to approach the world as media in itself: a contracting of forces, analysing them in terms of their affects, movements, speeds and slownesses. These affects are the primary defining capacities of an entity, instead of a substance or a class it belongs to, as Deleuze explains in his short book Spinoza: Practical Philosophy. From this perspective we can adopt a novel media archaeological rewiring that looks at media history not as one of inventors, geniuses and solid technologies, but as a field of affects, interactions and modes of sensation and perception.
Examples from 19th-century popular discourse are illustrative. In 1897, the New York Times addressed spiders as ‘builders, engineers and weavers’, and also as ‘the original inventors of a system of telegraphy’. Spiders’ webs offer themselves as ingenious communication systems which do not merely signal according to a binary setting (something has hit the web/has not hit the web) but transmit information regarding the “general character and weight of any object touching it (...)”. Or take for example the book Beautés et merveilles de la nature et des arts by Eliçagaray from the 18th century, which lists both technological and animal wonders – bees and ants, electricity and architectural constructions – as marvels of artifice and nature.
Similar accounts abound from the mid 19th century on. Insects sense, move, build, communicate and even create art in various ways that raised wonder and awe, for example in U.S. popular culture. An apt
example of the 19th-century insect mania is the New York Times story (May 29, 1880) about the ‘cricket mania’ of a certain young lady who collected and trained crickets as musical instruments:
200 crickets in a wirework-house, filled with ferns and shells, which she called a ‘fernery’. The constant rubbing of the wings of these insects, producing the sounds so familiar to thousands everywhere, seemed to be the finest music to her ears. She admitted at once that she had a mania for capturing crickets.
Besides entertainment, and in a much earlier framework, the classic of modern entomology, the aforementioned An Introduction to Entomology by Kirby and Spence, already implicitly presented throughout its four-volume best-seller the idea of a primitive technics of nature – insect technics immanent to their surroundings.
Kirby and Spence’s take probably attracted the attention it did because of its catchy language, but also because of what could be called its ethological touch. Insects were approached as living and interacting entities intimately coupled with their environment. Insects intertwine with human lives (“Direct and indirect injuries caused by insects, injuries to our living vegetable property but also direct and indirect benefits derived from insects”), but also engage in ingenious building projects, stratagems, sexual behaviour and other expressive modes of motion, perception and sensation. Instead of pertaining to a taxonomic account of the interrelations between insect species, their forms, growth or, for example, structural anatomy, An Introduction to Entomology (vol. 1) is traversed by a curiosity-cabinet kind of touch on the ethnographics of insects. Here insects are, for example, war machines, like the horse-fly (Tabanus L.): “Wonderful and various are the weapons that enable them to enforce their demand. What would you think of any large animal that should come to attack you with a tremendous apparatus of knives and lancets issuing from its mouth?”
From Kirby and Spence to later entomologists and other writers, insects’ powers of building continuously attracted the early entomological gaze. Buildings of nature were described as more fabulous than
the pyramids of Egypt or the aqueducts of Rome. Suddenly, in this
weird parallel world, such minuscule and admittedly small-brained
entities like termites were pictured as alike to the ancient monarchies
and empires of Western civilization. The Victorian appreciation of
ancient civilization could also incorporate animal kingdoms and their
buildings of monarchic measurements. Perhaps the parallel was not
to be taken literally, but in any case it expressed a curious interest
towards microcosmical worlds. A recurring trope was that of ‘insect
geometrics', which seemed, with an accuracy paralleled only in mathematics, to follow and fold nature's resources into micro versions of
emerging urban culture. To quote Kirby and Spence's An Introduction to Entomology, vol. 2:
No thinking man ever witnesses the complexness and yet regularity and efficiency of a great establishment, such as the Bank
of England or the Post Office without marvelling that even human reason can put together, with so little friction and such
slight deviations from correctness, machines whose wheels are
composed not of wood and iron, but of fickle mortals of a thousand different inclinations, powers, and capacities. But if such
establishments be surprising even with reason for their prime
mover, how much more so is a hive of bees whose proceedings
are guided by their instincts alone!
Whereas the imperialist powers of Europe headed for overseas conquests, the mentality of exposition and mapping new terrains turned
also towards other fields than the geographical. The Seeing Eye – a
key figure of hierarchical modern power – could also be a non-human
eye, as with the fly, which according to Steven Connor recurs as a
“radically alien mode of entomological vision”
with its huge eyes consisting of 4000 sensors. Hence, it is fitting that
in 1898 the idea of “photographing through a fly's eye” was suggested
as a mode of experimental vision – able also to catch Queen Victoria
with “the most infinitesimal lens known to science”, that of a dragonfly.

Jean-Jacques Lecercle explains how the Victorian enthusiasm for
entomology and insect worlds is related to a general discourse of natural history that, as a genre, labelled the century. Through the themes
of ‘exploration' and ‘taxonomy', Lecercle claims that Alice in Wonderland can be read as a key novel of the era in its evaluation and
classification of various life worlds beyond the human. As with Alice in
the 1865 novel, new landscapes and exotic species are offered for
armchair exploration of worlds not merely extensive but also opened
up by an intensive gaze into microcosms. Uncanny phenomenal worlds
are what tie together the entomological quest, Darwinian-inspired biological accounts of curious species and Alice's adventures into imaginative worlds of twisting logic. In taxonomic terms, the entomologist
is surrounded by a new cult of private and public archiving. New
modes of visualizing and representing insect life produce a new phase
of taxonomy becoming a public craze instead of merely a scientific
tool. Again the wonder worlds of Alice or Edward Lear, the Victorian nonsense poet, are the ideal point of reference for the 19th-century
natural historian and entomologist, as Lecercle writes:
And it is part of a craze for discovering and classifying new
species. Its advantage over natural history is that it can invent those species (like the Snap-dragon-fly) in the imaginative
sense, whereas natural history can invent them only in the
archaeological sense, that is discover what already exists. Nonsense is the entomologist's dream come true, or the Linnaean
classification gone mad, because gone creative (. . . )
For Alice, the feeling of not being herself and “being so many different sizes in a day is very confusing”, which of course is something
incomprehensible to the Caterpillar she encounters. It is not queer for
the Caterpillar, whose mode of being is defined by metamorphosis
and the various perception/action modulations it brings about. It
is only the suddenness of the becoming-insect of Alice that dizzies
her. A couple of years later, in The Population of an Old Pear-Tree,
or Stories of Insect Life (1870), an everyday meadow is disclosed as
a vivacious microcosm in itself. The harmonious scene, “like a great
amphitheatre”, is filled with life that easily escapes the (human) eye.
Like Alice, the protagonist wandering in the meadow is “lulled and
benumbed by dreamy sensations” which however transport him suddenly into new perceptions and bodily affects. What is revealed to
our boy hero in this educational novel fashioned in the style of travel
literature (connecting it thus to the colonialist contexts of its age)
is a world teeming with sounds, movements, sensations and insect
beings (huge spiders, cruel mole-crickets, energetic bees) that are beyond the human form (despite the constant tension of such narratives
as educational and moralising tales that anthropomorphize affective
qualities into human characteristics). True to entomological classification, a large part is reserved for the structural-anatomical differences
of insect life, but the affect-life of how insects relate to their
surroundings is also under scrutiny.
As precursors of ethology, such natural historical quests (whether
archaeological, entomological or imaginative) were expressing an appreciation of phenomenal worlds differing from that of the human
with its two hands, two eyes and two feet. In a way, this entailed a
kind of extended Kantianism interested not only in the conditions
of possibility of experience, but also in the emergence of alternative potentials on the immanent level of life that functions through a technics of
nature. Curiously, the fascination with new phenomenal worlds was
connected to the emergence of new technologies of movement, sensation and communication (all challenging the Kantian apperception of
Man as the historically constant basis of knowledge and perception).
Nature was gradually becoming the “new storehouse of invention”
(New York Times, August 4, 1901) that was to entice inventors into
perfecting their developments. What I argue is that this theme can
also be read as an expression of a shift in understanding technology
– a shift that marked the rise of modern discourse concerning media
technologies since the end of the 19th century and that has usually
been attributed to an anthropological and ethnological turn in understanding technology. I also address this theme in another text of
mine, ‘Insect Technics'. For several writers, such as Ernst Kapp, who
became one of the predecessors of later theories of media as ‘extensions of man', it was the human body that served as a storage house
of potential media. However, at the same time, another undercurrent
proposed to think of technologies, inventions and solutions to problems posed by life as stemming from a very different class of
bodies, namely insects.
So beyond Kant, we move onto a baroque world – not as a period of
art, but as a mode of folding and enveloping new ways of perception
and movement. The early years and decades of technical media were
characterized by a new imaginary of communication, from the work
of inventors such as Nikola Tesla to the various modes of spiritualism recently analyzed in the art works of Zoe Beloff. However, one
can radicalize the viewpoint even further and take an animal turn, looking
not for alien but for animal and insect ways of sensing the world.
Naturally, this is exactly what is being proposed in a variety of media
art pieces and exhibitions. Insects have made their appearance for
example in Toshio Iwai's Music Insects (1990), Sarah Peebles' electroacoustic Insect Grooves as an example of imaginary soundscapes,
David Dunn's acoustic ecology pieces with insect sounds, the Sci-Art:
Bio-Robotic Choreography project (2001, with Stelarc as one of the
participators), and Laura Beloff's Spinne (2002), a networked spider installation that works according to the web spider/ant/crawler
technology.
Here we are dealing not just with representing the insect, but engaging with the animal affects, indistinguishable from those of the
technological, as in Stelarc's work where the experimentation with
new bodily realities is a form of becoming-insect of the technological
human body. Imagining by doing is a way to engage directly with
affects of becoming-animal of media where the work of sound and
body artists doubles the media archaeological analysis of historical
strata. In other words, one should not reside on the level of intriguing representations of imagined ways of communication, or imagined
apparatuses that never existed, but realize the overabundance of real
sensations and perceptions to contract and to fold – the neomaterialist view
towards imagined media.

Literature
Ernest van Bruyssel, The population of an old pear-tree; or, Stories
of insect life. (New York: Macmillan and co., 1870).
Lewis Carroll, Alice's Adventures in Wonderland and Through the
Looking Glass. Edited with an Introduction and Notes by Roger
Lancelyn Green. (Oxford: Oxford University Press, 1998).
Claire Colebrook, ‘The Sense of Space. On the Specificity of Affect
in Deleuze and Guattari.' In: Postmodern Culture, vol. 15, issue 1,
2004.
Steven Connor, Fly. (London: Reaktion Books, 2006).
Manuel DeLanda, War in the Age of Intelligent Machines. (New
York: Zone Books, 1991).
Gilles Deleuze, Spinoza: Practical Philosophy. Transl. Robert
Hurley. (San Francisco: City Lights, 1988).
Gilles Deleuze, The Fold. Transl. Tom Conley. (Minneapolis:
University of Minnesota Press, 1993).
Ernst Kapp, Grundlinien einer Philosophie der Technik: Zur Entstehungsgeschichte der Kultur aus neuen Gesichtspunkten. (Braunschweig:
Druck und Verlag von George Westermann, 1877).
William Kirby & William Spence, An Introduction to Entomology,
or Elements of the Natural History of Insects. Volumes 1 and 2.
Unabridged facsimile of the 1843 edition. (London: Elibron, 2005).
Eric Kluitenberg (ed.), Book of Imaginary Media. Excavating the
Dream of the Ultimate Communication Medium. (Rotterdam: NAi
publishers, 2006).
Jean-Jacques Lecercle, Philosophy of Nonsense: The Intuitions of
Victorian Nonsense Literature. (London: Routledge, 1994).
Jussi Parikka, ‘Insect Technics: Intensities of Animal Bodies.' In:
(Un)Easy Alliance - Thinking the Environment with Deleuze/Guattari, edited by Bernd Herzogenrath. (Newcastle: Cambridge Scholars
Press, Forthcoming 2008).
Siegfried Zielinski, ‘Modelling Media for Ignatius Loyola. A Case
Study on Athanasius Kircher's World of Apparatus between the Imaginary and the Real.' In: Book of Imaginary Media, edited by Kluitenberg. (Rotterdam: NAi, 2006).

PIERRE BERTHET
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Extended speakers
& Concert with various extended objects
We invited Belgian artist Pierre Berthet to create an installation
for V/J10 that explores the resonance of EVP voices. He made a
netting of thin metal wires which he suspended from the ceiling of
the haunted house in the La Bellone courtyard.
Through these metal wires, loudspeakers without membranes were
connected to a network of resonating cans. Sine tones and radio
recordings were transmitted through the speakers, making the metal
wires vibrate which, in their turn, caused the cans to resonate.

figure 26
A netting of thin metal wires suspended from the ceiling of the haunted house in the La Bellone courtyard

figure 27


Concert with various extended objects

LEIF ELGGREN, CM VON HAUSSWOLFF
License: Fully Restricted Copyright
EN

Elgaland-Vargaland
The Embassy of the Kingdoms of Elgaland-Vargaland
(KREV)
The Kingdoms were proclaimed in 1992 and consist of all ‘Border
Territories': geographical, mental and digital. Elgaland-Vargaland is
the largest – and most populous – realm on Earth, incorporating all
boundaries between other nations as well as ‘Digital Territory' and
other states of existence. Every time you travel somewhere, and every
time you enter another form of being, such as the dream state, you
visit Elgaland-Vargaland, the kingdom founded by Leif Elggren and
CM von Hausswolff.
During the Venice Biennale, Elggren stated that all dead people
are inhabitants of the country Elgaland-Vargaland unless they had
declared that they did not want to be an inhabitant.
Since V/J10, the Elgaland-Vargaland Embassy permanently resides in La Bellone.

figure 80
Since V/J10, the Elgaland-Vargaland Embassy permanently resides in La Bellone

figure 82

figure 81
Ambassadors Yves Poliart and Wendy Van Wynsberghe

figure 83

figure 85

figure 86

NL

Elgaland-Vargaland
figure 84
Every time you travel somewhere, and every time you enter another form of being, you visit Elgaland-Vargaland.


CM VON HAUSSWOLFF, GUY-MARC HINANT
License: Creative Commons Attribution-NonCommercial-ShareAlike
figure 88
Drawings by Dominique Goblet, EVP sounds by Carl Michael von Hausswolff, images by Guy-Marc Hinant

figure 87
EVP could be the result of psychic echoes from the past, psychokinesis, or the thoughts of aliens or nature spirits.

For more information on EVP, see: http://en.wikipedia.org/wiki/Electronic_voice_phenomenon#_note-fontana1
EN

Ghost Machinery
During V/J10 we showed an audiovisual installation entitled Ghost
Machinery, with drawings by Dominique Goblet, EVP sounds by Carl
Michael von Hausswolff, and images by Guy-Marc Hinant, based on
Dr. Stempnick's Electronic Voice Phenomena recordings.
EVP has been studied primarily by paranormal researchers since
the 1950s, who have concluded that the most likely explanation for
the phenomena is that they are produced by the spirits of the deceased. In 1959, Attila von Szalay first claimed to have recorded the
‘voices of the dead', which led to the experiments of Friedrich Jürgenson. The 1970s brought increased interest and research, including
the work of Konstantin Raudive. In 1980, William O'Neill, backed by
industrialist George Meek, built a ‘Spiricom' device, which was said to
facilitate very clear communication between this world and the spirit
world.
Investigation of EVP continues today through the work of many
experimenters, including Sarah Estep and Alexander McRae. In addition to spirits, paranormal researchers have claimed that EVP could
be due to psychic echoes from the past, psychokinesis unconsciously
produced by living people, or the thoughts of aliens or nature spirits.
Paranormal investigators have used EVP in various ways, including
as a tool in an attempt to contact the souls of dead loved ones and in
ghost hunting. Organizations dedicated to EVP include the American
Association of Electronic Voice Phenomena, the International Ghost
Hunters Society, as well as the skeptical Rorschach Audio project.

Read Feel Feed Real

EN

Electromagnetic fields of ordinary objects acted as
source material for an audio performance, surveillance
cameras and legislation are ingredients for a science fiction film, live annotation of video streaming with the help
of IRC chats. . .
A mobile video laboratory was set up during the festival, to test out how to bring together scripting, annotation, data readings and recordings in digital archives.
Operating somewhere between surveillance and observation, the Open Source video team mixed hands-on Icecast
streaming workshops with experiments looking at the way
movements are regulated through motion control and vice
versa.

MANU LUKSCH, MUKUL PATEL
License: Creative Commons Attribution-NonCommercial-ShareAlike
figure 94
CCTV sculpture in a park in London

EN

Faceless: Chasing the Data Shadow
Stranger than fiction
Remote-controlled UAVs (Unmanned Aerial Vehicles) scan the city
for anti-social behaviour. Talking cameras scold people for littering
the streets (in children's voices). Biometric data is extracted from
CCTV images to identify pedestrians by their face or gait. A housing project's surveillance cameras stream images onto the local cable
channel, enabling the community to monitor itself.

figure 95
Poster in London

These are not projections of the science fiction film that this text
discusses, but techniques that are in use today in Merseyside 1, Middlesbrough 2, and Newham and Shoreditch 3 in the UK.

1 “Police spy in the sky fuels ‘Big Brother fears'”, Philip Johnston, Telegraph, 23/05/2007, http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2007/05/22/ndrone22.xml. The Guardian has reported that the MoD rents out an RAF-staffed spy plane for public surveillance, carrying reconnaissance equipment able to monitor telephone conversations on the ground. It can also be used for automatic number plate recognition: “Cheshire police recently revealed they were using the Islander [aircraft] to identify people speeding, driving when using mobile phones, overtaking on double white lines, or driving erratically.”
2 “‘Talking' CCTV scolds offenders”, BBC News, 4 April 2007, http://news.bbc.co.uk/2/hi/uk_news/england/6524495.stm
3 “If the face fits, you're nicked”, Nick Huber, Independent, Monday, 1 April 2002, http://www.independent.co.uk/news/business/analysis-and-features/if-the-face-fits-youre-nicked-656092.html. “In 2001 the Newham system was linked to a central control room operated by the London Metropolitan Police Force. In April 2001 the existing CCTV system in Birmingham city centre was upgraded to smart CCTV. People are routinely scanned by both systems and have their faces checked against the police databases.” Centre for Computing and Social Responsibility, http://www.ccsr.cse.dmu.ac.uk/resources/general/ethicol/Ecv12no1.html

In terms of both density and sophistication, the UK
leads the world in the deployment of surveillance technologies. With
an estimated 4.2 million CCTV cameras in place, its inhabitants are
the most watched in the world. 4 Many London buses have five or more
cameras inside, plus several outside, including one recording cars that
drive in bus lanes.
But CCTV images of our bodies are only one of many traces of
data that we leave in our wake, voluntarily and involuntarily. Vehicles are tracked using Automated Number Plate Recognition systems, our movements revealed via location-aware devices (such as
cell phones), the trails of our online activities recorded by Internet
Service Providers, our conversations overheard by the international
communications surveillance system Echelon, shopping habits monitored through store loyalty cards, individual purchases located using
RFID (radio-frequency identification) tags, and our meal preferences
collected as part of PNR (flight passenger) data. 5 Our digital selves
are many dimensional, alert, unforgetting.

4 A Report on the Surveillance Society. For the Information Commissioner by the Surveillance Studies Network, September 2006, p. 19. Available from http://www.ico.gov.uk
5 ‘e-Borders' is a £ 1.2bn passenger-screening programme to be introduced in 2009 and
to be complete by 2014. The single border agency, combining immigration, customs
and visa checks, includes a £ 650m contract with consortia Trusted Borders for a passenger-screening IT system: anyone entering or leaving Britain are to give 53 pieces
of information in advance of travel. This information, taken when a travel ticket is
bought, will be shared among police, customs, immigration and the security services
for at least 24 hours before a journey is due to take place. Trusted Borders consists
of US military contractor Raytheon Systems who will work with Accenture, Detica,
Serco, QinetiQ, Steria, Capgemini, and Daon. Ministers are also said to be considering
the creation of a list of ‘disruptive' passengers. It is expected to cost travel companies
£ 20million a year compiling the information. These costs will be passed on to customers via ticket prices, and the Government is considering introducing its own charge
on travellers to recoup costs. A pilot of the e-borders technology, known as Project
Semaphore, has already screened 29 million passengers.
Similarly, the arms manufacturer Lockheed Martin, the biggest defence contractor in
the U.S., that undertakes intelligence work as well as contributing to the Trident programme in the UK, is bidding to run the UK 2011 Census. New questions in the 2011
Census will include information about income and place of birth, as well as existing
questions about languages spoken in the household and many other personal details.
The Canadian Federal Government granted Lockheed Martin a $43.3 million deal to
conduct its 2006 Census. Public outcry against it resulted in only civil servants handling the actual data, and a new government task force being set up to monitor privacy
during the Census.
http://censusalert.org.uk/
http://www.vivelecanada.ca/staticpages/index.php/20060423184107361

Increasingly, these data traces are arrayed and administered in
networked structures of global reach. It is not necessary to posit a
totalitarian conspiracy behind this accumulation – data mining is an
exigency of both market efficiency and bureaucratic rationality. Much
has been written on the surveillance society and the society of control,
and it is not the object here to construct a general critique of data
collection, retention and analysis. However, it should be recognised
that, in the name of efficiency and rationality – and, of course, security – an ever-increasing amount of data is being shared (also sold,
lost and leaked 6) between the keepers of such seemingly unconnected
records as medical histories, shopping habits, and border crossings.
6 Sales: “Personal details of all 44 million adults living in Britain could be sold to
private companies as part of government attempts to arrest spiralling costs for the new
national identity card scheme, set to get the go-ahead this week. [...] ministers have
opened talks with private firms to pass on personal details of UK citizens for an initial
cost of £ 750 each.”
“Ministers plan to sell your ID card details to raise cash”, Francis Elliott, Andy McSmith and Sophie Goodchild, Independent, Sunday 26 June 2005
http://www.independent.co.uk/news/uk/politics/ministers-plan-to-sell-your-id-card-details
-to-raise-cash-496602.html
Losses: In January 2008, hundreds of documents with passport photocopies, bank
statements and benefit claims details from the Department of Work and Pensions were
found on a road near Exeter airport, following their loss from a TNT courier vehicle.
There were also documents relating to home loans and mortgage interest, and details
of national insurance numbers, addresses and dates of birth.
In November 2007, HM Revenue and Customs (HMRC) posted, unrecorded and unregistered via TNT, computer discs containing personal information on 25 million people
from families claiming child benefit, including the bank details of parents and the dates
of birth and national insurance numbers of children. The discs were then lost.
Also in November, HMRC admitted a CD containing the personal details of thousands
of Standard Life pension holders has gone missing, leaving them at heightened risk
of identity theft. The CD, which contained data relating to 15,000 Standard Life
pensions customers including their names, National Insurance numbers and pension
plan reference numbers was lost in transit from the Revenue office in Newcastle to the
company's headquarters in Edinburgh by ‘an external courier'.
Thefts: In November 2007, MoD acknowledged the theft of a laptop computer containing the personal details of 600,000 Royal Navy, Royal Marines, and RAF recruits
and of people who had expressed interest in joining, which contained, among other
information, passport, and national insurance numbers and bank details.
In October 2007, a laptop holding sensitive information was stolen from the boot of
an HMRC car. A staff member had been using the PC for a routine audit of tax
information from several investment firms. HMRC refused to comment on how many
individuals may be at risk, or how many financial institutions have had their data
stolen as well. BBC suggest the computer held data on around 400 customers with
high value individual savings accounts (ISAs), at each of five different companies –
including Standard Life and Liontrust. (In May, Standard Life sent around 300 policy
documents to the wrong people.)

Legal frameworks intended to safeguard a conception of privacy by
limiting data transfers to appropriate parties exist. Such laws, and in
particular the UK Data Protection Act (DPA, 1998) 7, are the subject
of investigation of the film Faceless.
From Act to Manifesto
“I wish to apply, under the Data Protection Act,
for any and all CCTV images of my person held
within your system. I was present at [place] from
approximately [time] onwards on [date].” 8
For several years, ambientTV.NET conducted a series of exercises
to visualise the data traces that we leave behind, to render them
into experience and to dramatise them, to watch those who watch
us. These experiments, scrutinising the boundary between public
and private in post-9/11 daily life, were run under the title ‘the Spy
School'. In 2002, the Spy School carried out an exercise to test the
reach of the UK Data Protection Act as it applies to CCTV image
data.
The Data Protection Act 1998 seeks to strike a balance between
the rights of individuals and the sometimes competing interests
of those with legitimate reasons for using personal information.
The DPA gives individuals certain rights regarding information
held about them. It places obligations on those who process information (data controllers) while giving rights to those who are
the subject of that data (data subjects). Personal information
covers both facts and opinions about the individual. 9

7 The full text of the DPA (1998) is at http://www.opsi.gov.uk/ACTS/acts1998/19980029.htm
9 Data Protection Act Fact Sheet, available from the UK Information Commissioner's Office, http://www.ico.gov.uk

The original DPA (1984) was devised to ‘permit and regulate'
access to computerised personal data such as health and financial
records. A later EU directive broadened the scope of data protection
and the remit of the DPA (1998) extended to cover, amongst other
data, CCTV recordings. In addition to the DPA, CCTV operators
‘must' comply with other laws related to human rights, privacy, and
procedures for criminal investigations, as specified in the CCTV Code
of Practice (http://www.ico.gov.uk).
As the first subject access request letters were successful in delivering CCTV recordings for the Spy School, it then became pertinent
to investigate how robust the legal framework was. The Manifesto for
CCTV filmmakers was drawn up, permitting the use only of recordings obtained under the DPA. Art would be used to probe the law.

figure 92
Still from Faceless, 2007

figure 94
Multiple, conflicting timecode stamps

A legal readymade
Vague spectres of menace caught on time-coded surveillance
cameras justify an entire network of peeping vulture lenses. A
web of indifferent watching devices, sweeping every street, every
building, to eliminate the possibility of a past tense, the freedom
to forget. There can be no highlights, no special moments: a
discreet tyranny of now has been established. Real time in its
most pedantic form. 10
Faceless is a CCTV science fiction fairy tale set in London, the city
with the greatest density of surveillance cameras on earth. The film
is made under the constraints of the Manifesto – images are obtained
from existing CCTV systems by the director/protagonist exercising
her/his rights as a surveilled person under the DPA. Obviously the
protagonist has to be present in every frame. To comply with privacy
legislation, CCTV operators are obliged to render other people in
the recordings unidentifiable – typically by erasing their faces, hence
the faceless world depicted in the film. The scenario of Faceless thus
derives from the legal properties of CCTV images.
10 Iain Sinclair: Lights out for the territory, Granta, London, 1998, p. 91

“RealTime orients the life of every citizen. Eating, resting, going
to work, getting married – every act is tied to RealTime. And every
act leaves a trace of data – a footprint in the snow of noise...” 11
The film is set in an eerily familiar city, where the reformed RealTime calendar has dispensed with the past and the future, freeing
citizens from guilt and regret, anxiety and fear. Without memory or
anticipation, faces have become vestigial – the population is literally
faceless. Unimaginable happiness abounds – until a woman recovers
her face...
There was no traditional shooting script: the plot evolved during
the four-year long process of obtaining images. Scenes were planned
in particular locations, but the CCTV recordings were not always
obtainable, so the story had to be continually rewritten.
Faceless treats the CCTV image as an example of a legal readymade (‘objet trouvé'). The medium, in the sense of raw materials
that are transformed into artwork, is not adequately described as
simply video or even captured light. More accurately, the medium
comprises images that exist contingent on particular social and legal
circumstances – essentially, images with a legal superstructure. Faceless interrogates the laws that govern the video surveillance of society
and the codes of communication that articulate their operation, and
in both its mode of coming into being and its plot, develops a specific
critique.
Reclaiming the data body
Through putting the DPA into practice and observing the consequences over a long exposure, close-up, subtle developments of the
law were made visible and its strengths and lacunae revealed.
“I can confirm there are no such recordings of
yourself from that date, our recording system was
not working at that time.” (11/2003)

11 Faceless, 2007

Many data requests had negative outcomes because either the surveillance camera, or the recorder, or the entire CCTV system in question
was not operational. Such a situation constitutes an illegal use of
CCTV: the law demands that operators “comply with the DPA by
making sure [...] equipment works properly.” 12
In some instances, the non-functionality of the system was only
revealed to its operators when a subject access request was made. In
the case below, the CCTV system had been installed two years prior
to the request.
“Upon receipt of your letter [...] enclosing the
required 10£ fee, I have been sourcing a company
who would edit these tapes to preserve the privacy of other individuals who had not consented
to disclosure. [...] I was informed [...] that all
tapes on site were blank. [.. W]hen the engineer
was called he confirmed that the machine had not
been working since its installation.
Unfortunately there is nothing further that can be
done regarding the tapes, and I can only apologise
for all the inconvenience you have been caused.”
(11/2003)
Technical failures on this scale were common. Gross human errors
were also readily admitted to:

12 CCTV Systems and the Data Protection Act 1998, available from http://www.ico.gov.uk

“As I had advised you in my previous letter, a request was made to remove the tape and for it not
to be destroyed. Unhappily this request was not
carried out and the tape was wiped according with
the standard tape retention policy employed by
[deleted]. Please accept my apologies for this and
assurance that steps have been taken to ensure a
similar mistake does not happen again.” (10/2003)

figure 98
The Rotakin Test, devised by the UK Home Office Police Scientific Development Branch, measures surveillance camera performance.

Some responses, such as the following, were just mysterious (data
request made after spending an hour below several cameras installed
in a train carriage).
“We have carried out a careful review of all relevant tapes and we confirm that we have no images of
you in our control.” (06/2005)
Could such a denial simply be an excuse not to comply with the costly
demands of the DPA? Many older cameras deliver image quality so poor
that faces are unrecognisable; in such cases the operator fails in the
obligation to run CCTV for the declared purposes.
“You will note that yourself and a colleague's faces
look quite indistinct in the tape, but the picture
you sent to us shows you wearing a similar fur coat,
and our main identification had been made through
this and your description of the location.”
(07/2002)

To release data on the basis of such weak identification compounds
the failure.
Much confusion is caused by the obligation to protect the privacy
of third parties in the images. Several data controllers claimed that
this relieved them of their duty to release images:
“[... W]e are not able to supply you with the images you requested because to do so would involve
disclosure of information and images relating to
other persons who can be identified from the tape
and we are not in a position to obtain their consent to disclosure of the images. Further, it is
simply not possible for us to eradicate the other
images. I would refer you to section 7 of the Data
Protection Act 1998 and in particular Section 7
(4).” (11/2003)
Even though the section referred to states that it is:
“not to be construed as excusing a data controller
from communicating so much of the information
sought by the request as can be communicated without disclosing the identity of the other individual concerned, whether by the omission of names or
other identifying particulars or otherwise.”
Where video is concerned, anonymisation of third parties is an expensive, labour-intensive procedure – one common technique is to occlude
each head with a black oval. Data controllers may only charge the
statutory maximum of £ 10 per request, though not all seemed to be
aware of this:

“It was our understanding that a charge for production of the tape should be borne by the person
making the enquiry, of course we will now be checking into that for clarification. Meanwhile please
accept the enclosed video tape with compliments of
[deleted], with no charge to yourself.” (07/2002)

figure 90
Off with their heads!
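The black-oval occlusion described here is easy to picture in code. The following is purely an illustrative sketch, not the procedure of any actual data controller: it assumes the Python bindings of OpenCV (the opencv-python package, which bundles a stock face detector) and hypothetical file names.

import cv2

# Illustrative sketch: occlude each detected face with a filled black
# oval, in the spirit of the anonymisation technique described above.
# The Haar cascade file ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("cctv_frame.png")  # hypothetical single CCTV frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                             minNeighbors=5):
    centre = (x + w // 2, y + h // 2)
    axes = (w // 2, int(h * 0.7))  # oval slightly taller than the face box
    cv2.ellipse(frame, centre, axes, 0, 0, 360, (0, 0, 0), thickness=-1)

cv2.imwrite("cctv_frame_masked.png", frame)

As the paragraph below notes, such masking addresses the face but not everything that makes a person identifiable.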

Visually provocative and symbolically charged as the occluded heads
are, they do not necessarily guarantee anonymity. The erasure of a
face may be insufficient if the third party is known to the person requesting images. Only one data controller undeniably (and elegantly)
met the demands of third party privacy, by masking everything but
the data subject, who was framed in a keyhole. (This was an uncommented second offering; the first tape sent was unprocessed.) One
CCTV operator discovered a useful loophole in the DPA:
“I should point out that we reserve the right, in
accordance with Section 8(2) of the Data Protection
Act, not to provide you with copies of the information requested if to do so would take disproportionate effort.” (12/2004)
What counts as ‘disproportionate effort'? The gold standard was set
by an institution whose approach was almost baroque – they delivered
hard copies of each of the several hundred relevant frames from the
time-lapse camera, with third parties' heads cut out, apparently with
nail scissors.
Two documents had (accidentally?) slipped in between the printouts – one a letter from a junior employee tendering her resignation
(was it connected with the beheading job?), and the other an ironic
memo:

“And the good news -- I enclose the 10 £ fee to be
passed to the branch sundry income account.” (Head
of Security, internal communication 09/2003)
From 2004, the process of obtaining images became much more difficult.
“It is clear from your letter that you are aware
of the provisions of the Data Protection Act and
that being the case I am sure you are aware of
the principles in the recent Court of Appeal decision in the case of Durant vs. Financial Services Authority. It is my view that the footage you
have requested is not personal data and therefore
[deleted] will not be releasing to you the footage
which you have requested.” (12/2004)
Under Common Law, judgements set precedents. The decision in
the case Durant vs. Financial Services Authority (2003) redefined
‘personal data'; since then, simply featuring in raw video data does
not give a data subject the right to obtain copies of the recording.
Only if something of a biographical nature is revealed does the subject
retain the right.

“Having considered the matter carefully, we do not
believe that the information we hold has the necessary relevance or proximity to you. Accordingly
we do not believe that we are obligated to provide
you with a copy pursuant to the Data Protection Act
1988. In particular, we would remark that the video
is not biographical of you in any significant way.”
(11/2004)
Further, with the introduction of cameras that pan and zoom, being
filmed as part of a crowd by a static camera is no longer grounds for
a data request.
“[T]he Information Commissioners office has indicated that this would not constitute your personal
data as the system has been set up to monitor the
area and not one individual.” (09/2005)
As awareness of the importance of data rights grows, so the actual
provision of those rights diminishes:

figure 89
Still from Faceless, 2007

"I draw your attention to CCTV systems and the Data
Protection Act 1998 (DPA) Guidance Note on when the
Act applies. Under the guidance notes our CCTV system is no longer covered by the DPA [because] we:
• only have a couple of cameras
• cannot move them remotely
• just record on video whatever the cameras pick
up
• only give the recorded images to the police to
investigate an incident on our premises"
(05/2004)
Data retention periods (which data controllers define themselves)
also constitute a hazard to the CCTV filmmaker:
“Thank you for your letter dated 9 November addressed to our Newcastle store, who have passed
it to me for reply. Unfortunately, your letter was
delayed in the post to me and only received this
week. [...] There was nothing on the tapes that you
requested that caused the store to retain the tape
beyond the normal retention period and therefore
CCTV footage from 28 October and 2 November is no
longer available.” (12/2004)
Amidst this sorry litany of malfunctioning equipment, erased tapes,
lost letters and sheer evasiveness, one CCTV operator did produce
reasonable justification for not being able to deliver images:

“We are not in a position to advise whether or not
we collected any images of you at [deleted]. The
tapes for the requested period at [deleted] had
been passed to the police before your request was
received in order to assist their investigations
into various activities at [deleted] during the
carnival.” (10/2003)

figure 91
Still from Faceless, 2007

In the shadow of the shadow
There is debate about the efficacy, value for money, quality of
implementation, political legitimacy, and cultural impact of CCTV
systems in the UK. While CCTV has been presented as being vital in solving some high profile cases (e.g. the 1999 London nail
bomber, or the 1993 murder of James Bulger), at other times it has
been strangely, publicly, impotent (e.g. the 2005 police killing of Jean
Charles de Menezes). The prime promulgators of CCTV may have
lost some faith: during the 1990s the UK Home Office spent 78% of
its crime prevention budget on installing CCTV, but in 2005, an evaluation report by the same office concluded that “the CCTV schemes
that have been assessed had little overall effect on crime levels.” 13
An earlier, 1992, evaluation reported CCTV's broadly positive
public reception due to its assumed effectiveness in crime control,
acknowledging “public acceptance is based on limited and partly inaccurate knowledge of the functions and capabilities of CCTV systems
in public places.” 14
By the 2005 assessment, support for CCTV still “remained high in
the majority of cases” but public support was seen to decrease after
implementation by as much as 20%. This “was found not to be the
reflection of increased concern about privacy and civil liberties, as
this remained at a low rate following the installation of the cameras,”
13 Gill, M. and Spriggs, A., Assessing the impact of CCTV. London: Home Office Research, Development and Statistics Directorate 2005, pp. 60-61. www.homeoffice.gov.uk/rds/pdfs05/hors292.pdf
14 http://www.homeoffice.gov.uk/rds/prgpdfs/fcpu35.pdf

but “that support for CCTV was reduced because the public became
more realistic about its capabilities” to lower crime.
Concerns, however, have begun to be voiced about function creep
and the rising costs of such systems, prompted, for example, by the
disclosure that the cameras policing London's Congestion Charge remain switched on outside charging hours and that the Met are to
have live access to them, having been exempted from parts of the
Data Protection Act to do so. 15 As such realities of CCTV's daily
operation become more widely known, existing acceptance may be
somewhat tempered.
Physical bodies leave data traces: shadows of presence, conversation, movement. Networked databases incorporate these traces into
data bodies, whose behaviour and risk are priorities for analysis and
commodification, by business and by government. The securing of
a data body is supposedly necessary to secure the human body, either preventatively or as a forensic tool. But if the former cannot
be assured, as is the case, what grounds are there for trust in the
hollow promise of the latter? The all-seeing eye of the panopticon is
not complete, yet. Regardless, could its one-way gaze ever assure an
enabling conception of security?

15 Surveillance State Function Creep – London Congestion Charge “real-time bulk data” to be automatically handed over to the Metropolitan Police etc. http://p10.hostingprod.com/@spyblog.org.uk/blog/2007/07/surveillance_state_function_creep_london_congestion_charge_realtime_bulk_data.html

MICHAEL MURTAUGH

figure 113
Start broadcasting yourself!

License: Free Art License
EN

Active Archives
or: What's wrong with the YouTube documentary?
As someone who has shot video and programmed web-based interfaces to video over the past decade, it has been exciting to see how
distributing video via the Internet has become increasingly popularized, thanks in large part to video sharing sites like YouTube. At the
same time, I continue to design and write software in search of new
forms of collaborative and ‘evolving' documentaries; and for myself,
and others around me, I feel a disinterest in, even an aversion to, posting
videos on YouTube. This essay has two threads: (1) I revisit an
earlier essay describing the ‘Evolving Documentary' model to get at
the roots of my enthusiasm for working with video online, and (2) I
examine why I find YouTube problematic, and more a reflection of
television than the possibilities that the web offers.
In 1996, I co-authored an essay with Glorianna Davenport, then
my teacher and director of the Interactive Cinema group at the MIT
Media Lab, called Automatist storyteller systems and the shifting
sands of story. 1 In it, we described a model for supporting ‘Evolving
Documentaries', or an “approach to documentary storytelling that
celebrates electronic narrative as a process in which the author(s), a
networked presentation system, and the audience actively collaborate
in the co-construction of meaning.” In this paper, Glorianna included
a section entitled ‘What's wrong with the Television Documentary?'
The main points of this argument were as follows:

figure 114
Join the largest worldwide video-sharing community!

1 http://www.research.ibm.com/journal/sj/363/davenport.html

1.
[... T]elevision consumes the viewer. Sitting passively in front
of a TV screen, you may appreciate an hour-long documentary;
you may even find the story of interest; however, your ability to
learn from the program is less than what it might be if you were
actively engaged with it, able to control its shape and probe its
contents.
Here, it is crucial to understand what is meant by the word ‘active'.
In a naive comparison between the activities of watching television
and surfing the web, one might say that the latter is inherently more
active in the sense that the process is ‘driven' by the choices of the
user; in the early days of the web it became popular to refer to this
split as ‘lean back vs. lean forward' media. Of course, if one means
to talk about cognitive activity, this is clearly misleading as aimlessly surfing the net can be achieved at near comatose levels of brain
function (as any late night surfer can attest) and watching a particularly sharp television program can be incredibly engaging, even
life changing. Glorianna would often describe her frustration with
traditional documentary by observing the vast difference between her
own sense of engagement with a story gained through the process of
shooting and editing, versus the experience of an audience member
from simply viewing the end result. Thus ‘active' here relates to the
act of authoring and the construction of meaning. Rather than talking about leaning forward or backward, a more useful split might be
between reading and writing. Rather than being a question of bad
versus good access, the issue becomes about two interconnected cognitive processes, both hopefully ‘active' and involving thought. An
ideal platform for online documentary would be one that facilitates a
fluid movement between moments of reflection (reading) and of construction (writing).

2.
Television severely limits the ways in which an author can
‘grow' a story. A story must be composed into a fixed, unchanging form before the audience can see and react to it: there is no
obvious way to connect viewers to the process of story construction. Similarly, the medium offers no intrinsic, immediately
available way to interconnect the larger community of viewers
who wish to engage in debate about a particular story.
Part of the promise of crossing video with computation is the potential to combine the computer's ability to construct models and
run simulations with the random access possibilities of digitized media. Instead of editing a story down into a fixed form or ‘final cut',
one can program a ‘storytelling system' that can act as an ‘editor in
software'. Thus the system can maintain a dynamic representation
of the context of a particular telling, on which to base (or support a
viewer in making) editing decisions ‘on the fly'. The ‘Evolving Documentary' was intended to support complex stories that would develop
over time, and which could best be told from a variety of points of
view.
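To make the idea concrete, an ‘editor in software' can be surprisingly small. The sketch below is only an illustration of the principle, not the Interactive Cinema group's actual system; the clip names, tags and the overlap-based selection rule are all invented (Python 3.8+).

import random

# A toy storytelling system: choose each next clip 'on the fly',
# favouring thematic overlap with what was just seen while avoiding
# repetition. Clip names and tags are invented for this sketch.
CLIPS = {
    "intro_street":   {"tags": {"city", "morning"}},
    "market_vendor":  {"tags": {"city", "voices"}},
    "night_traffic":  {"tags": {"city", "night"}},
    "interview_anna": {"tags": {"voices", "memory"}},
}

def next_clip(history):
    candidates = [name for name in CLIPS if name not in history]
    if not candidates:
        return None  # every clip has been played once
    if not history:
        return random.choice(candidates)  # an arbitrary opening
    last_tags = CLIPS[history[-1]]["tags"]
    # pick the unseen clip sharing the most tags with the previous one
    return max(candidates,
               key=lambda name: len(CLIPS[name]["tags"] & last_tags))

history = []
while (clip := next_clip(history)) is not None:
    history.append(clip)
print(history)  # one possible telling; another run may start elsewhere

Because the ‘final cut' is replaced by a selection rule plus a growing pool of material, adding footage changes future tellings without re-editing anything.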
3.
Like published books and movies, television is designed for
unidirectional, one-to-many transmission to a mass audience,
without variation or personalization of presentation. The remote-control unit and the VCR (videocassette recorder) - currently the only devices that allow the viewer any degree of independent control over the play-out of television - are considered
anathema by commercial broadcasters. Grazing, time-shifting,
and ‘commercial zapping' run contrary to the desire of the industry for a demographically correct audience that passively
absorbs the programming - and the intrusive commercial messages - that the broadcasters offer.
Adding a decentralized means of distribution and feedback such
as the Internet provides the final piece of the puzzle in creating a
compelling new medium for the evolving documentary. No longer
would footage have to be excluded for reasons of reaching a ‘broad'
or average audience. An ideal storytelling system would be one that
could connect an individual viewer to whatever material was most
personally relevant. The Internet is a unique ‘mass media' in its
potential support for enabling access to non-mainstream, individually
relevant and personal subject matter.
What's wrong with the YouTube documentary?
YouTube has massively popularized the sharing and consumption
of video online. That said, most of the core concerns made in the
arguments related to television are still relevant to YouTube when
considered as a platform for online collaborative documentary.
Clips are primarily ‘view-only'
Already in its name, ‘YouTube' consciously invokes the television
set, thus inviting visitors to ‘lean back' and watch. The YouTube
interface functions primarily as a showcase of static monolithic elements. Clips are presented as fixed and finished, to be commented
upon, rated, and possibly bookmarked, but no more. The clip is
‘atomic' in the sense that it's not possible to make selections within a
clip, to export images or sound, or even to link to a particular starting
point. Without special plugins, the site doesn't even allow downloading of the clip. While users are encouraged ‘to embed' YouTube content in other websites (by cutting and pasting special HTML codes
that refer back to the YouTube site), the resulting video plays using
the YouTube player, complete with ‘related' links back into the service. It is in fact a violation of the YouTube terms of use to attempt
to display videos from the service in any other way.

The format of the clip is fixed and uniform for all kinds
of content
Technically, YouTube places some rather arbitrary limits on the
format of clips: all clips must contain an image and a sound track
and may not be longer than 10 minutes. Furthermore, all
clips are treated equally: there is no notion of a ‘lecture' versus a
‘slideshow' versus a ‘music video', nor any sense that these
different kinds of material might need to be handled differently. Each
clip is compressed in a uniform way, meaning at the moment into a
flash format video file of fixed data rate and screen size.
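The uniformity is easy to state in code. As a minimal sketch (YouTube's actual parameters and pipeline are not public; the bit rates, frame size and the reliance on a locally installed ffmpeg binary below are all assumptions), every upload, whatever its nature, would pass through one and the same transcoding call:

import subprocess

# Illustration only: flatten every upload to one uniform target,
# regardless of what kind of material it is.
def transcode_uniform(source, target):
    subprocess.run([
        "ffmpeg",
        "-i", source,            # whatever the user uploaded
        "-vf", "scale=320:240",  # fixed screen size
        "-b:v", "300k",          # fixed video data rate
        "-b:a", "64k",           # fixed audio data rate
        target,                  # Flash video container
    ], check=True)

transcode_uniform("lecture.mov", "lecture.flv")
transcode_uniform("music_video.avi", "music_video.flv")  # treated identically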
Clips have no history
Despite these limitations, users of YouTube have found workarounds
to, for instance, download clips to then rework them into derived clips.
Although the derived works are often placed back again on YouTube,
the system itself has no means of representing this kind of relationship.
(There is a mechanism for posting video responses to other clips, but
this kind of general purpose solution seems not to be understood or
used to track this kind of ‘derived' relationship.) The system is unable to model or otherwise make available the ‘history' of a particular
piece of media. Contrast this with a system like Wikipedia, where the
full history of an article, with a record of what was changed, by whom,
when, and even ‘meta-level' discussions about the changes (including
possible disagreement) is explicitly facilitated.
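As a thought experiment, the missing history could begin with something as small as each clip carrying a pointer to the work it derives from, so that lineage – who reworked what, and on what basis – becomes part of the archive itself. A minimal sketch; the class and all names below are invented:

from dataclasses import dataclass

# A sketch of the 'derived from' relationship the text finds missing:
# every clip records its parent work, Wikipedia-style.
@dataclass
class Clip:
    title: str
    author: str
    derived_from: "Clip | None" = None
    note: str = ""

    def lineage(self):
        """Walk back through the chain of derived works."""
        clip, chain = self, []
        while clip is not None:
            chain.append(clip)
            clip = clip.derived_from
        return chain

original = Clip("street_scene", "alice")
remix = Clip("street_scene_remix", "bob", derived_from=original,
             note="re-edited, new soundtrack")
for c in remix.lineage():
    print(c.title, "by", c.author, "-", c.note or "original upload")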
Weak or ‘flat' narrative structure
YouTube's primary model for narrative is a broad (and somewhat
obscure) sense of ‘relatedness' (based on user-defined tags) modulated
by popularity. As with many ‘social networking' and media sharing
sites, YouTube relies on ‘positive feedback' popularity mechanisms,
such as view counts, ‘star' ratings and favorites, to create ranked lists
of clips. Entry points like ‘Videos being watched right now', ‘Most
Viewed', ‘Top Favorites', only close the loop of featuring what's already popular to begin with. In addition, YouTube's commercial

model of enabling special paid levels of membership leads to ambiguous selection criteria, complicated by language as in the ‘Promoted
Videos' and ‘Featured Videos' of YouTube's front page (promoting
what?, featured by whom?).
The ‘editing logic' threading the user through the various clips is
flat, in that a clip is shown the same way regardless of what has been
viewed before it. Thus YouTube makes no visible use of a particular viewing history (though the fact that this information is stored
has been brought to the attention of the public via the ongoing Viacom lawsuit, http://news.bbc.co.uk/2/hi/technology/7506948.stm).
In this way it's difficult to get a sense of being in a particular ‘story
arc' or thread when moving from clip to clip in YouTube as in a sense
each click and each clip restarts the narrative experience.
No licenses for sharing / reuse
The lack of a download feature in YouTube could be said to protect the interests of those who wish to assert a claim of copyright.
However, YouTube ignores and thus obscures the question of license
altogether. One can find for instance the early films of Hitchcock,
now part of the public domain, in 10 minute chunks on YouTube;
despite this status (not indicated on the site), these clips are, like all
YouTube clips, unavailable for any kind of manipulation. This approach, and the limitations it places on the use of YouTube material,
highlights the fact that YouTube is primarily focused on getting users
to consume YouTube material, framed in YouTube's media player, on
YouTube's terms.
Traditional models for (software) authorship
While YouTube is built using open source software (Python and
ffmpeg for instance), the source code of the system itself is closed,
leaving little room for negotiation about how the software of the
site itself operates. This is a pity on a variety of levels. Free and
open source software is inextricably bound to the web, not only in
terms of providing much of the underlying software (like the Apache
web server), but also in reverse, as the possibilities for collaborative development that the web provides have catalyzed the process of
open source development. Software designed to support collaborative
work on code, like Subversion and other CVS's (concurrent versioning systems), and platforms for tracking and discussing software (like
TRAC), provide much richer models of use and relationship to work
than those which YouTube offers for video production.
Broadcasting over coherence
From its slogan (‘Broadcast yourself') to the language the service
uses around joining and uploading videos (see images), YouTube falls
very much into a traditional model of commercial broadcast television. In this model, sharing means getting others to watch your clips,
with the more eyeballs the better.
The desire for broadness and the building of a ‘worldwide' community united only by a desire to ‘broadcast one's self' means creating
coherence is not a top priority. YouTube comments, for instance,
seem to suffer from this lack of coherence and context. Given no
particular focus, comments seem doomed to be similarly ungrounded
and broad. Indeed, comments in YouTube often seem to take on
more the character of public toilets than of public broadcasting, replete with the kind of sexism, racism, and homophobia that more or
less anonymous ‘blank wall' access seems to encourage.
A problematic space for ‘sharing'
The combination of all these aspects makes YouTube for many a
problematic space for ‘sharing' - particularly when the material is of
a personal or particular nature. While on the one hand appearing
to pose an alternative platform to television, YouTube unfortunately
transposes many of that form's limitations and conventions onto the
web.
Looking to the future, what still remains challenging is figuring
out how to fuse all those aspects that make the Internet so compelling
as a medium and enable them in the realm of online video: the net's
decentralized nature, the possibilities for participatory/collaboration
production, the ability to draw on diverse sources of knowledge (from
‘amateur' and home-based, to ‘expert'). How can the successful examples of collaborative text-based projects like Wikipedia inspire new
forms of collaborative video online, in a way that escapes the
‘heaviness' and inertia of traditional forms of film/video? This fusion
can and needs to take place on a variety of levels, from the concept
of what a documentary is and can be, to the production tools and
content management systems media makers use, to a legal status of
media that reflects an understanding that culture is something which
is shared, down to the technical details of the formats and codecs
carrying the media in a way that facilitates sharing, instead of complicating it.

EN
NL
FR

Mutual Motions

Whether we operate a computer with the help of a command line interface, or by using buttons, switches and
clicks. . . the exact location of interaction often serves as a
conduit for mutual knowledge - machines learn about bodies and bodies learn about machines. Dialogues happen
at different levels and in various forms: code, hardware,
interface, language, gestures, circuits.
Those conversations are sometimes gentle in tone - ubiquitous requests almost go unnoticed - and other times
they take us by surprise because of their authoritative
and demanding nature: “Put That There”. How can we
think about such feedback loops in productive ways?
How are interactions translated into software, and how
does software result in interaction? Could the practice of
using and producing free software help us find a middle
ground between technophobia and technofetishism? Can
we imagine ourselves and our realities differently, when we
try to re-design interfaces in a collaborative environment?
Would a different idea about ‘user' change our approach
to ‘use' as well?


7

“Classic puff pastry begins with a basic dough called a détrempe (pronounced day-trahmp) that is rolled out and
wrapped around a slab of butter. The
dough is then repeatedly rolled, folded,
and turned.”, Molly Stevens, A Shortcut to Flaky Puff Pastry. http://www.taunton.com/finecooking/articles/how-to/rough-puff-pastry.aspx 2008

figure XI

figure XIII

ADRIAN MACKENZIE
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Centres of envelopment and intensive movement
in digital signal processing

figure 115
Adrian
Mackenzie
at V/J10

Abstract
The paper broadly concerns algorithmic processes commonly found
in wireless networks, video and audio compression. The problem it
addresses is how to account for the convoluted nature of digital signal processing (DSP). Why is signal processing so complex and relatively inaccessible? The paper argues that we can only understand
what is at stake in these labyrinthine calculations by switching focus away from abstract understandings of calculation to the dynamic
re-configuration of space and movement occurring in signal processing. The paper works through one example in detail of this reconfigured
movement in order to illustrate how digital signal processing enables
different experiences of proximity, intimacy, co-location and distance.
It explores how wireless signal processing algorithms envelop heterogeneous spaces in the form of hidden states and logistical networks.
Importantly, it suggests that the ongoing dynamism of signal processing could be understood in terms of intensive movement produced by
a centre of envelopment. Centres of envelopment generate extensive
changes, but they also change the nature of change itself.
From sets to signals: digital signal processing
In new media art, in new media theory and in various forms of
media activism, there has been so much work that seizes on the possibilities of using digital technologies to design interactions, sound,
image, text, and movement that challenge dominant forms of experience, habit and selfhood. In various ways, the processes of branding,
commodification, consumption, control and surveillance associated
with contemporary media have been critically interrogated and challenged.
However, there are some domains of contemporary technological
and media culture that are really hard to work with. They may
be incredibly important, they may be an intimate part of everyday
life, yet remain relatively intractable. They resist contestation, and engagement with them may even seem pointless. This is because they may
contain intractable materials, or be organised in such complicated
ways that they are hard to change.
This paper concerns one such domain, digital signal processing
(DSP). I am not saying that new media has not engaged with DSP. Of
course it has, especially in video art and sound art, but there is little
work that helps us make sense of how the sensations, textures, and
movements associated with DSP come to be taken for granted, come
to appear as normal, and everyday, or how they could be contested.
A promotional video from Intel for the UltraMobilePC 1 promotes
change in relation to mobile media. Intel, because it makes semiconductors, is highly invested in digital signal processing in various forms.
In any case, video itself is a prime example of contemporary DSP at
work. Two aspects of this promotional video for the UMPC, the UltraMobile PC, relate to digital signal processing. There is much signal
processing here. It connects individuals' eyes, mouths and ears
to screens that display information services of various kinds. There
is also much signal processing in the wireless network infrastructures
that connect all these gadgets to each other and to various information services (maps, calendars, news feeds). In just this example,
sound, video, speech recognition, fibre, wireless and satellite, imaging
technologies in medicine all rely on DSP. We could say a good portion
of our experience is DSP-based.
This paper is an attempt to develop a theory of digital signal processing, a theory that could be used to talk about ways of contesting,
critiquing, or making alternatives. The theory under development
here relies a lot on two notions, ‘intensive movement' and ‘centre
of envelopment' that Deleuze proposed in Difference and Repetition.

figure 117
A promotional video
from Intel
for the UltraMobilePC

1

http://youtube.com/watch?v=GFS2TiK3AI


However, I want to keep the philosophy in the background as much as
possible. I basically want to argue that we need to ask: why does so
much have to be enveloped or interiorised in wireless or audiovisual
DSP?
How does DSP differ from other algorithmic processes?
What can we say about DSP? Firstly, influenced by recent software
studies-based approaches (Fuller, Chun, Galloway, Manovich), I think
it is worth comparing the kinds of algorithmic processes that take
place in DSP with those found in new media more generally. Although
it is an incredibly broad generalisation, I think it is safe to say that
DSP does not belong to the set-based algorithms and data-structures
that form the basis of much interest in new media interactivity or
design.
DSP differs from set-based code. If we think of social software such
as Flickr, Google, or Amazon, if we think of basic information infrastructures such as relational databases or networks, if we think of
communication protocols or search engines, all of these systems rely
on listing, enumerating, and sorting data. The practices of listing,
indexing, addressing, enumerating and sorting, all concern sets. Understood in a fairly abstract way, this is what much software and code
does: it makes and changes sets. Even areas that might seem quite
remote from set-making, such as the 3D-projective geometry used in
computer game graphics, are often reduced algorithmically to complicated set-theoretical operations on shapes (polygons). Even many
graphic forms are created and manipulated using set operations.
The elementary constructs of most programming languages reflect
this interest in set-making. For instance, networks or, in computer science terms, graphs, are visually represented using lines and boxes. But in terms of code, they are presented as edge or ‘adjacency' lists, like this: 2
graph = {'A': ['B', 'C'],
         'B': ['C', 'D'],
         'C': ['D'],
         'D': ['C'],
         'E': ['F'],
         'F': ['C']}

2

http://www.python.org/doc/essays/graphs/

A graph or network can be seen as a list of lists. This kind of
representation in code of relations is very neat and nice. It means that
something like the structure of the internet, as a hybrid of physical
and logical relations, can be recorded, stored, sorted and re-ordered
in code. Importantly, it is highly open to modification and change.
Social software, or Web2.0, as exemplified in websites like Facebook or
YouTube also can be understood as massive deployments of set theory
in the form of code. Their sociality is very much dependent on set
making and set changing operations, both in the composition of the
user interfaces and in the underlying databases that constantly seek to attach new relations to data, to link identities and attributes.
In terms of activism and artwork, relations that can be expressed in the form of sets and operations on sets are highly manipulable. They
can be learned relatively easily, and they are not too difficult to work
with. For instance, scripts that crawl or scrape websites have been
widely used in new media art and activism.
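To make the set-based character of such code concrete, here is a lightly modernised version of the find_path function from the same python.org essay that the adjacency list above comes from (a sketch: the comments and the example call are ours, not the essay's):

def find_path(graph, start, end, path=[]):
    # extend the path so far: a pure list operation
    path = path + [start]
    if start == end:
        return path
    # membership test on the set of nodes
    if start not in graph:
        return None
    for node in graph[start]:
        # another membership test, here to avoid cycles
        if node not in path:
            newpath = find_path(graph, node, end, path)
            if newpath:
                return newpath
    return None

# e.g. find_path(graph, 'A', 'D') returns ['A', 'B', 'C', 'D']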
By contrast, DSP code is not based on set-making. It relies on
a different ordering of the world that lies closer to streams of signals that come from systems such as sensors, transducers, cameras,
and that propagate via radio or cable. Indeed, although it is very
widely used, DSP is not usually taught as part of computer science or software engineering. The textbooks in these areas often do
not mention DSP. The distinction between DSP and other forms of
computation is clearly defined in a textbook of DSP:
Digital Signal Processing is distinguished from other areas in
computer science by the unique type of data it uses: signals.
In most cases, these signals originate as sensory data from the
real world: seismic vibrations, visual images, sound waves, etc.
DSP is the mathematics, the algorithms, and the techniques
used to manipulate these signals after they have been converted
into a digital form. (Smith, 2004)
While it draws on some of the logical and set-based operations
found in code in general, DSP code deals with signals that usually involve some kind of sensory data – vibrations, waves, electromagnetic
radiation, etc. These signals often involve forms of rapid movement,
rhythms, patterns or fluctuations. Sometimes these movements are
embodied in physical senses, such as the movements of air involved in
hearing, or the flux of light involved in seeing. Because they are often
irregular movements, they cannot be easily captured in the forms of
movement idealised in classical mechanics – translation, rotation, etc.
Think for instance of a typical photograph of a city street. Although
there are some regular geometrical forms, the way in which light is
reflected, the way shadows form, is very difficult to describe geometrically. It is much easier, as we will see, to think of an image as a
signal that distributes light and colour in space. Once an image or
sound can be seen as a signal, it can undergo digital signal processing.
What distinguishes DSP from other algorithmic processes is its
reliance on transforms rather than functions. This is a key difference.
The ‘transform' deals with many values at once. This is important
because it means it can deal with things that are temporal or spatial,
such as sounds, images, or signals in short. This brings algorithms
much closer to sensation, and to what bodies feel. While there is
codification going on, since the signal has to be treated digitally as
discrete numerical values, it is less reducible to the sequence of steps or
operations that characterise set-theoretical coding. Here for instance
is an important section of the code used in MPEG video encoding in
the free software ffmpeg package:

figure 116
The simplest
mpeg encoder

/**
 * @file mpegvideo.c
 * The simplest mpeg encoder (well, it was the simplest!).
 * ...
 */

/* for jpeg fast DCT */
#define CONST_BITS 14
static const uint16_t aanscales[64] = {
    /* precomputed values scaled up by 14 bits */
    16384, 22725, 21407, 19266, 16384, 12873,  8867,  4520,
    22725, 31521, 29692, 26722, 22725, 17855, 12299,  6270,
    21407, 29692, 27969, 25172, 21407, 16819, 11585,  5906,
    19266, 26722, 25172, 22654, 19266, 15137, 10426,  5315,
    16384, 22725, 21407, 19266, 16384, 12873,  8867,  4520,
    12873, 17855, 16819, 15137, 12873, 10114,  6967,  3552,
     8867, 12299, 11585, 10426,  8867,  6967,  4799,  2446,
     4520,  6270,  5906,  5315,  4520,  3552,  2446,  1247
};
...
for (i = 0; i < 64; i++) {
    const int j = dsp->idct_permutation[i];
    qmat[qscale][i] = (int)((UINT64_C(1) << (QMAT_SHIFT + 14)) /
                            (aanscales[i] * qscale * quant_matrix[j]));
}
I don't think we need to understand this code in detail. There is
only one thing I want to point out in this code: the list of ‘precomputed' numerical values is used for ‘jpeg fast DCT'. This is a typical
piece of DSP type code. It refers to the way in which video frames are encoded using Fast Fourier Transforms. The key point here is that
these values have been carefully worked out in advance to scale different colour and luminosity components of the image differently. The
transform, DCT (Discrete Cosine Transform), is applied to chunks of
sensation – video frames – to make them into something that can be
manipulated, stored, changed in size or shape, and circulated. Notice
that the code here is quite opaque in comparison to the graph data
structures discussed previously. This opacity reflects the sheer number of operations that have to be compressed into code in order for
digital signal processing to work.
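As a rough illustration of what the transform itself does (a sketch using numpy and scipy's general-purpose dct, not ffmpeg's hand-tuned integer arithmetic), an 8x8 block of pixels becomes 64 frequency coefficients, and discarding the high-frequency ones is the crude core of this kind of compression:

import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    # 2-D DCT-II: transform the rows, then the columns
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

block = np.random.randint(0, 256, (8, 8)).astype(float)  # one 8x8 fragment of a frame
coeffs = dct2(block)    # 64 pixel values become 64 frequency coefficients
coeffs[4:, :] = 0       # crude 'quantisation': drop the higher frequencies
coeffs[:, 4:] = 0
approx = idct2(coeffs)  # an approximate, compressed version of the block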
Working with DSP: architecture and geography
So we can perhaps see from the two code examples above that there
is something different about DSP in comparison to set-based processing. DSP seems highly numerical and quantified, while the
set-based code is symbolic and logical. What is at stake in this difference? I would argue that it is something coming into the code from
outside, something that is difficult to read in the code itself because
it is so opaque and convoluted. Why is DSP code hard to understand
and also hard to write?
You will remember that I said at the outset that there are some
facets of technological cultures that resist appropriation or intervention. I think the mathematics of DSP is one of those facets. If I just
started explaining some of the mathematical models that have been
built into the contemporary world, I think it would be shoring up
or reinforcing a certain resistance to change associated with DSP, at
least in its main mathematical formalisations. I do think the mathematical models are worth engaging with, partly because they look
so different from the set-based operations found in much code today.
The mathematical models can tell us why DSP is difficult to intervene
in at a low level.
However, I don't think it is the mathematics as such that makes
digital signal processing hard to grapple with. The mathematics is an
architectural response to a geographical problem, a problem of where
code can go and be in the world. I would argue that it is the relation
between the architecture and geography of digital signal processing
itself that we should grapple with. It has something to do with the immersion in everyday life, the proximity to sensation, the shifting
multi-sensory patterning of sociality, the movements of bodies across
variable distances, and the effervescent sense of impending change
that animates the convoluted architecture of DSP.

We could think of the situations in which DSP is commonly found.
For instance, in the background of the scenes in the daily lives of
businessmen shown in Intel's UMPC video, lie wireless infrastructures
and networks. Audiovisual media and wireless networks both use
signal processing, but for different reasons. Although they seem quite
disparate from each other in terms of how we embody them, they
actually sometimes use the same DSP algorithms. (In other work, I have discussed video codecs.) 3
3

The case of video codecs
In the foreground of the UMPC vision, stand images, video images in particular, and
to a lesser extent, sounds. They form a congested mass, created by media and information networks. People in electronic media cultures constantly encounter images in
circulation. Millions of images flash across TV, cinema and computer screens. DVDs shower down on us. The internet is loaded down with video at the moment (Google
Video, YouTube.com, Yahoo video, etc.). A powerful media-technological imagining of
video moving everywhere, every which way, has taken root.
The growth of video material culture is associated with a key dynamic: the proliferation
of software and hardware codecs. Codecs generate linear transforms of images and
sound. Transformed images move through communication networks much more quickly
than uncompressed audiovisual materials. Without codecs, an hour of raw digital video
would need 165 CD-ROMs or take roughly 24 hours to move across a standard computer
network (10Mbit/sec ethernet). Instead of 165 CDs, we take a single DVD on which a
film has been encoded by a codec. We play it on a DVD player that also has a codec,
usually implemented in hardware. Instead of 32Mbyte/sec, between 1-10 MByte/sec
streams from the DVD into the player and then onto the television screen.
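A quick back-of-envelope check of these figures, sketched in Python (the 700 MB CD-ROM capacity is our assumption):

raw_rate = 32e6                     # bytes/sec of raw digital video, the figure above
one_hour = raw_rate * 3600          # ~115 GB for an hour of raw video
cds = one_hour / 700e6              # ~165 CD-ROMs of 700 MB each
hours = one_hour * 8 / 10e6 / 3600  # ~26 hours over 10 Mbit/sec ethernet, roughly a day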
The economic and technical value of codecs can hardly be overstated. DVD, the transmission formats for satellite and cable digital television (DVB and ATSC), HDTV
as well as many internet streaming formats such as RealMedia and Windows Media,
third generation mobile phones and voice-over-IP (VoIP), all depend on video and audio codecs. They form a primary technical component of contemporary audiovisual
culture.
Physically, codecs take many forms, in software and hardware. Today, codecs nestle in
set-top boxes, mobile phones, video cameras and webcams, personal computers, media
players and other gizmos. Codecs perform encoding and decoding on a digital data
stream or signal, mainly in the interest of finding what is different in a signal and what
is mere repetition. They scale, reorder, decompose and reconstitute perceptible images
and sounds. They only move the differences that matter through information networks
and electronic media. This performance of difference and repetition of video comes at
a cost. Enormous complication must be compressed in the codec itself.
Much is at stake in this logistics from the perspective of cultural studies of technology
and media. On the one hand, codecs analyse, compress and transmit images that
fascinate, bore, fixate, horrify and entertain billions of spectators. Many of these
videos are repetitive or cliched. There are many re-runs of old television series or
Hollywood classics. YouTube.com, a video upload site, offers 13,500 wedding videos.
Yet the spatio-temporal dynamics of these images matters deeply. They open new
patterns of circulation. To understand that circulation matters deeply, we could think
of something we don't want to see, for instance, the execution of hostages (Daniel Pearl, Nick Berg, and others) in Jihadist videos since 2002. Islamist and ‘shock-site' web servers streamed these videos across the internet using the low-bitrate Windows Media Video codec, a proprietary variant of the industry-standard MPEG-4. The shock of such events – the sight of a beheading, the sight of a journalist pleading for her life – depends on its circulation through online and broadcast media. A video beheading lies at the outer limit of the ordinary visual pleasures and excitations attached to video cultures. Would that beheading, a corporeal event that takes video material culture to its limits, occur without codecs and networked media?

While images are visible, wireless signals are relatively hard to
sense. So they are a ‘hard case' to analyse. We know they surround
us, but we hardly have any sensation of them. A tightly packed
labyrinth of digital signal processing lies between antenna and what
reaches the business travellers' eyes and ears. Much of what they look at and listen to has passed through wireless chipsets. The chipsets,
produced by Broadcom, Intel, Texas Instruments, Motorola, Airgo or
Pico, are tiny (1 cm) fragments that support highly convoluted and
concatenated paths on nanometre scales. In wireless networks such
as Wi-fi, Bluetooth, and 3G mobile phones with their billions of
miniaturised chipsets, we encounter a vast proliferation of relations.
What is at stake in these convoluted, compressed packages of relationality, these densely patterned architectures dedicated to wireless
communication?
Take for instance the picoChip, a latest-generation wireless digital
signal processing chip, designed by a ‘fabless' semiconductor company,
picoChip Designs Ltd, in Bath, UK. The product brief describes the
chip as:
[t]he architecture of choice for next-generation wireless. Expressly designed to address the new air-interfaces, picoChip's
multi-core DSP is the most powerful baseband processor on
the market. Ideally suited to WiMAX, HSPA, UMTS-LTE,
802.16m, 802.20 and others, the picoArray delivers ten-times
better MIPS/$ than legacy approaches. Crucially, the picoArray is easy to program, with a robust development environment
and fast learning curve. (PicoChip, 2007)
Written for electronics engineers, the key points here are that the
chip is designed for wireless communication or ‘air-interface', that
its purpose is to receive and transmit information wirelessly, and
that it accommodates a variety of wireless communication standards
(WiMAX, HSPA, 802.16m, etc). In this context, much of the terminology of performance and low cost is familiar. The chip combines computing performance and value for money (“ten times better
MIPS/$ – Million Instructions Per Second/$”) as a ‘baseband processor'. That means that it could find its way into many different versions of hardware being produced for applications that range between large-scale wireless information infrastructures and small consumer electronics applications. Only the last point is surprisingly emphatic: “[c]rucially, the picoArray is easy to program, with a robust development environment and fast learning curve.” Why should
ease of programming be important?
And why should so many processors be needed for wireless
signal processing?
The architecture of the picoChip stands on shifting ground. We
are witnessing, as Nigel Thrift writes, “a major change in the geography of calculation. Whereas ‘computing' used to consist of centres
of calculation located at definite sites, now, through the medium of
wireless, it is changing its shape” (Thrift, 2004, 182). The picoChip's
architecture is a response to the changing geographies of calculation.
Calculation is not carried out at definite sites, but at almost any
site. We can see the picoChip as an architectural response to the
changing geography of computing. The architecture of the picoChip
is typical in the ways that it seeks to make a constant re-shaping
of computation possible, normal, affordable, accessible and programmable. This is particularly evident in the parallel character of its
architecture. Digital signal processing requires massive parallelisation: more chips everywhere, and chips that do more in parallel. The
advanced architecture of the picoChip is typical of the shape of things
more generally:
[t]he picoArray™ is a tiled processor architecture in which hundreds of processors are connected together using a deterministic interconnect. The level of parallelism is relatively fine grained with each processor having a small amount of local memory. ... Multiple picoArray™ devices may be connected together to form systems containing thousands of processors using on-chip peripherals which effectively extend the on-chip bus structure. (Panesar, et al., 2006, 324)
The array of processors shown, then, is a partial representation, an armature for a much more extensive diffusion of processors in wireless digital signal processing: in wireless base stations, 3G phones, mobile computing, local area networks, municipal, community and domestic Wi-fi networks, in femtocells, picocells, in backhaul, last-mile or first-mile infrastructures.

figure 118
Typical contemporary
wireless infrastructure
DSP chip architecture
PicoChip202

Architectures and intensive movement
It is as if the picoChip is a miniaturised version of the urban geography that contains the many gadgets, devices, and wireless and wired
infrastructures. However, this proliferation of processors is more than
a diffusion of the same. The interconnection between these arrays of
processors is not just extensive, as if space were blanketed by an ever
finer and wider grid of points occupied by processors at work shaping
signals. As we will see, the interconnection between processors in DSP
seeks to potentialise an intensive movement. It tries to accommodate
a change in the nature of movement. Since all movement is change,
intensive movement is a change in change. When intensive movement
occurs, there is always a change in kind, a qualitative change.
Intensive movements always respond to a relational problem. The
crux of the relational problem of wirelessness is this: how can many
things (signals, messages, flows of information) occupy the same space
at the same time, yet all be individualised and separate? The flow of
information and messages promises something highly individualised
(we saw this in the UMPC video from Intel). In terms of this individualising change, the movement of images, messages and data, and the
movement of people, have become linked in very specific ways today.
The greater the degree of individualization, the more dense becomes
the mobility of people and the signals they transmit and receive. And
as people mobilise, they drag personalised flows of communication on
the move with them. Hence flows of information multiply massively,
and networks must proliferate around those flows. The networks need
to become more dense, and imbricate lived spaces more closely in response to individual mobility.
This poses many problems for the architecture of communication infrastructure. The infrastructural problems of putting networks everywhere are increasingly, albeit only partially, solved by packing radio-frequency waves with more and more intricately modulated signal
patterns. This is the core response of DSP to the changing geography
of calculation, and to the changing media embodiments associated
with it. To be clear on this: were it not for digital signal processing,
the problems of interference, of unrelated communications mixing together, would be potentially insoluble. The very possibility of mobile
devices and mobility depends on ways of increasing the sheer density
of wireless transmissions. Radio spectrum becomes an increasingly
valuable, tightly controlled resource. For any one individual communication, not much space or time can be available. And even when
there is space, it may be noisy and packed with other people and
things trying to communicate. Different kinds of wireless signals are
constantly added to the mix. Signals may have to work their way
through crowds of other signals to reach a desired receiver. Communication does not take place in open, uncluttered space. It takes
place in messy configurations of buildings, things and people, which
obstruct waves and bounce signals around. The same signal may
be received many times through different echoes (‘multipath echo'). Because of the presence of crowds of other signals, and the limited spectrum available for any one transmission, wirelessness needs
to be very careful in its selection of paths if experience is to stream
rather than just buzz. The problem for wireless communication is to
micro-differentiate many paths and to allow them to interweave and
entwine with each other without coming into relation.
So the changing architectures of code and computation associated with DSP in wireless networks do more, I would argue, than fit in with the changing geography of computing. They belong to a more intensive, enveloped, and enveloping set of movements. To begin addressing this dynamic, we might say that wireless DSP is the armature
of a centre of envelopment. This is a concept that Gilles Deleuze
proposes late in Difference and Repetition. ‘Centres of envelopment'
are a way of understanding how extensive movements arise from intensive movement. Such centres crop up in ‘complex systems' when
differences come into relation:
to the extent that every phenomenon finds its reason in a difference of intensity which frames it, as though this constituted
the boundaries between which it flashes, we claim that complex
systems increasingly tend to interiorise their constitutive differences: the centres of envelopment carry out this interiorisation
of the individuating factors. (Deleuze, 2001, 256)
Much of what I have been describing as the intensive movement
that folds spaces and times inside DSP can be understood in terms
of an interiorisation of constitutive differences. An intensive movement always entails a change in the nature of change. In this case,
a difference in intensity arises when many signals need to co-habit the same place and moment. The problem is: how can many signals
move simultaneously without colliding, without interfering with each
other? How can many signals pass by each other without needing
more space? These problems induce the compression and folding of
spaces inside wireless processing, the folding that we might understand as a ‘centre of envelopment' in action.
The Fast Fourier Transform: transformations between time
and space
I have been arguing that the complications of the mathematics and the convoluted nature of the code or hardware used in DSP stem from an intensive movement or constitutive difference that is
interiorised. We can trace this interiorisation in the DSP used in
wireless networks. I do not have time to show how this happens
in detail, but hopefully one example of DSP that occurs but in the
video codecs and wireless networks will illustrate how this happens
in practice.

Late in the encoding process, and much earlier in the decoding
process in contemporary wireless networks, a fairly generic computational algorithm comes into action: the Fast Fourier Transform
(FFT). In some ways, it is not surprising to find the FFT in wireless networks or in digital video. Dating from the mid-1960s, FFTs have long been used to analyse electrical signals in many scientific and engineering settings. It provides the component frequencies of a time-varying signal or waveform. Hence, in ‘spectral analysis', the FFT can show the spectrum of frequencies present in a signal.
The notion of the Fourier transform is mathematical and has been
known since the early 19th century: it is an operation that takes
an arbitrary waveform and turns it into a set of periodic waves (sinusoids) of different frequencies and amplitudes. Some of these sinusoids
make more important contributions to the overall shape of the waveform
than others. Added together again, these sine or cosine waves should
exactly re-constitute the original signal. Crucially, a Fourier transform can turn something that varies over time (a signal) into a set of
simple components (sine or cosine waves) that do not vary over time.
Put more technically, it switches between ‘time' and ‘frequency' domains. Something that changes in time, a signal, becomes a set of
distinct components that can be handled separately. 4
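A minimal numpy sketch of this switch between domains (illustrative only, not the fixed-point FFTs running on actual DSP chips): a signal that varies over time becomes a set of static frequency components, and adding those components together again re-constitutes the signal exactly:

import numpy as np

t = np.arange(256) / 256.0                                # one 'window' of time
signal = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)   # two superimposed waves

spectrum = np.fft.fft(signal)           # time domain -> frequency domain
# the spectrum now peaks at bins 5 and 12: static components, outside time
restored = np.fft.ifft(spectrum).real   # frequency domain -> time domain
assert np.allclose(restored, signal)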
In a way, this analysis of a complex signal into simple static component signals means that DSP does use the set-based approaches I
described earlier. Once a complex signal, such as an image, has been
analysed into a set of static components, we can imagine code that

4

Humanities and social science work on the Fast Fourier Transform is hard to find, even though the FFT is the common mathematical basis of contemporary digital image, video and sound compression, and hence of many digital multimedia (in JPEG, MPEG files, in DVDs). In the early 1990s, Friedrich Kittler wrote an article that discussed it (Kittler, 1993). His key point was largely to show that there is no realtime in digital signal processing. The FFT works by defining a sliding window of time for a signal. It treats a complicated signal as a set of blocks that it lifts out of the time domain and transforms into the frequency domain. The FFT effectively plots an event in time as a graph in space. The experience of realtime is epiphenomenal. In terms of the FFT, a signal is always partly in the future or the past. Although Kittler was not referring to the use of the FFT in wireless networks, the same point applies – there is no realtime communication. However, while this point about the impossibility of realtime calculation was important to make during the 1990s, it seems well-established now.

would select the most important or relevant components. This is precisely what happens in video and sound codecs such as MPEG and
MP3.
The FFT treats sounds and images as complicated superimpositions of waveforms. The envelope of a signal becomes something that contains many simple signals. It is interesting that wireless networks tend to use this process in reverse. They deliberately take a well-separated and discrete set of signals – a digital datastream – and turn it into a single complex signal. In contrast to the normal uses of the FFT in separating important from insignificant parts of a signal, in wireless networks, and in many other communications settings, the FFT is used to put signals together in such a way as to contain them in a single envelope. The FFT is found in many wireless computation algorithms because it allows many different digital signals to be put together on a single wave and then extracted from it again.
Why would this superimposition of many signals onto a single complex waveform be desirable? Would it not increase the possibilities of
confusion or interference between signals? In some ways the FFT is used to slow everything down rather than speed it up. Rather than simply spatialising a duration, the FFT as used in wireless networks defines a different way of inhabiting the crowded, noisy space of electromagnetic radiation. Wireless transmitters are better at inhabiting crowded signal spectrum when they don't try to separate themselves off from each other, but actually take the presence of other transmitters into account. How does the FFT allow many transmitters to inhabit the same spectrum, and even use the same frequencies?
The name of this technique is OFDM (Orthogonal Frequency Division Multiplexing). OFDM spreads a single data stream coming
from a single device across a large number of sub-carrier signals (52 in IEEE 802.11a/g). It splits the data stream into dozens of separate signals of slightly different frequency that together evenly use
the whole available radio spectrum. This is done in such a way that
many different transmitters can be transmitting at the same time,
on the same frequency, without interfering with each other. The advantage of spreading a single high speed data stream across many
signals (wideband) is that each individual signal can carry data at a
much slower rate. Because the data is split into 52 different signals,
each signal can be much slower (1/50). That means each bit of data
can be spaced apart more in time. This has great advantages in urban environments where there are many obstacles to signals, and signals
can reflect and echo often. In this context, the slower the data is
transmitted, the better.
At the transmitter, a reverse FFT (IFFT) is used to re-combine the 50 or so signals into one signal. That is, it takes the different sub-carriers produced by OFDM, each of which has a single slightly different, but carefully chosen frequency, and combines them into one complex signal that has a wide spectrum. That is, it fills the available spectrum quite evenly because it contains many different frequency components. The waveform that results from the IFFT looks like ‘white noise': it has no remarkable or outstanding tendency whatsoever, except to a receiver synchronised to exactly the right carrier frequency. At the receiver, this complex signal is transformed, using the FFT, back into a set of 50 or so separate data streams, which are then reconstituted into a single high speed stream.
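A schematic sketch of this transmit/receive symmetry (the 52 sub-carriers follow the 802.11a/g figure given above; the QPSK symbols, carrier layout and variable names are our illustrative assumptions, and pilot tones, guard intervals and channel noise are left out):

import numpy as np

N = 64                                                    # FFT size used in 802.11a/g
data = np.random.choice([1+1j, 1-1j, -1+1j, -1-1j], 52)   # one symbol per sub-carrier

# transmitter: place the 52 symbols on sub-carrier frequencies around DC
carriers = np.zeros(N, dtype=complex)
carriers[1:27] = data[:26]    # positive-frequency sub-carriers
carriers[-26:] = data[26:]    # negative-frequency sub-carriers (bin 0 stays empty)
tx = np.fft.ifft(carriers)    # the IFFT folds 52 slow signals into one noise-like waveform

# receiver: the FFT separates the sub-carriers again
rx = np.fft.fft(tx)
recovered = np.concatenate([rx[1:27], rx[-26:]])
assert np.allclose(recovered, data)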
Even if we cannot come to grips with the techniques of transformation used in DSP in any great detail, I hope that one point stands out. The transformation involves changes in kind. Data does not
simply move through space. It changes in kind in order to move
through space, a space whose geography is understood as too full of
potential relations.
Conclusion
A couple of points in conclusion:
a. The spectrum of different wireless-audiovisual devices competing to do more or less the same thing is a reproduction of the same. Extensive movements associated with wireless networks and digital video occur in various forms: firstly in the constant enveloping of spaces by wireless signals, and secondly in the dense population of wireless spectrum by competing, overlapping signals, vying for market share in highly visible, well-advertised campaigns to dominate spectrum while at the same time allowing for the presence of many others.
b. Actually, in various ways, wirelessness puts the very primacy of
extension as space-making in question. Signals seem to be able to
occupy the same space at the same time, something that should
not happen in space as usually understood. We can understand
this by re-conceptualising movement as intensive. Intensive movement occurs in multiple ways. Here I have emphasised the constant folding inwards or interiorisation of heterogeneous movements via algorithms used in digital signal processing. Intensive movement ensues when a centre of envelopment begins to interiorise differences. While these interiorised spaces are computationally intensive (as exemplified by the picoChip's massive processing power), the spaces they generate are not perceived as calculated, precise or rigid. Wirelessness is a relatively invisible, messy, amorphous, shifting set of depths and distances that lacks
the visible form and organisation of other entities produced by
centres of calculation (for instance, the shape of a CAD-designed
building or car). However, similar processes occur around sound
and images through DSP. In fact, different layers of DSP are increasingly coupled in wireless media devices.
c. Where does this leave the centre of envelopment? The cost of
this freeing up of movement, of mobility, seems to me to be an
interiorisation of constitutive differences, not just in DSP code
but in the perceptual fields and embodiment of the mobile user.
The irony of DSP is that it uses code to quantify sensations
or physical movements that lie at the fringes of representation
or awareness. We can't see DSP as such, but it supports our
seeing and moving. It brings code quite close to the body. It
can work with audio and images in ways that bring them much
closer to us. The proliferation of mobile devices such as mp3 players and digital cameras is one consequence of that. Yet the price DSP
pays for this proximity to sensation, to sounds, movement, and
others, is the envelopment I have been describing. DSP acts as
a centre of envelopment, as something that tends to interiorise
intensive movements, the changing nature of change, the intensive
movements that give rise to it.
d. This brings us back to the UMPC video: it shows two individuals.
Their relation can never, it seems, get very far. The provision
of images, sound and wireless connectivity has come so far that
they hardly need encounter each other at all. There is something
intensely monadological here: DSP is heavily engaged in furnishing the interior walls of the monad, and with orienting the monad
in relation to other monads, but making sure that nothing much
need pass between them. So much has already been pre-processed between them that nothing much need happen between them. They already
have a complete perception of their relation to the other.
e. On a final constructive note, it seems that there is room for contestation here. The question is how to introduce the set-based
code processes that have proven productive in other areas into
the domain of DSP. What would that look like? How would it be
sensed? What could it do to our sensations of video or wireless
media?

References
Deleuze, Gilles. Difference and Repetition. Translated by Paul Patton. Athlone Contemporary European Thinkers. London; New York: Continuum, 2001.
Panesar, Gajinder, Daniel Towner, Andrew Duller, Alan Gray, and Will Robbins. ‘Deterministic Parallel Processing', International Journal of Parallel Programming 34, no. 4 (2006): 323-41.
PicoChip. ‘Advanced Wireless Technologies', 2007. http://www.picochip.com/solutions/advanced_wireless_technologies
PicoChip. ‘Pc202 Integrated Baseband Processor Product Brief', 2007. http://www.picochip.com/downloads/03989ce88cdbebf5165e2f095a1cb1c8/PC202_product_brief.pdf
Smith, Steven W. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing, 2004.
Thrift, Nigel. ‘Remembering the Technological Unconscious by Foregrounding Knowledges of Position', Environment & Planning D: Society & Space 22, no. 1 (2004): 175-91.

ELPUEBLODECHINA A.K.A.
ALEJANDRA MARIA PEREZ NUNEZ
License: ??
EN

El Curanto
Curanto is a traditional method of cooking in the ground by the people of Chiloé, in the south of Chile. This technique is practiced
throughout the world under different names. What follows is a summary of the ELEMENTS and steps enunciated and executed during el
curanto, which was performed in the centre of Brussels during V/J10.

Recipe

For making a curanto you need to take the following steps and arrange the following ELEMENTS:

This image is repeated in many different cultures. It might be an ancient way of cooking. What does this underground cooking imply? Most of all, it takes a lot of TIME.

Free Libre Open Source Curanto in the center of Bruxelles

? An OVEN, a hole in the ground filled with fire resistant STONES.

? Find a way to get a good deal at the market to get fresh MUSSELS for x people. It helps to have a CHARISMATIC WOMAN do it for you.

figure A a slow cooking OVEN

? A BRIGHT WOMAN FRIEND to find out about BELGIAN PORPHYRY and tell you about the mining carrière in Quenast (Hainaut).

? A CAMERA WOMAN to hand you a MARBLE STONE to put inside the OVEN.

figure B a TERRAIN VAGUE in the centre of Brussels and a NEIGHBOUR willing to let you in.

? WENDY or some other MULTITASKING WOMAN who is extremely PATIENT and HUMOURISTIC and who helps you to focus and takes pictures.

? FEMKE and PETER or some EXCENTRIC COUPLE that TRUSTS the carrier of the performance and will tell their STORY about TRAVELING MUSSELS.

figure C A HOLE in the ground 1.5 m deep, 1 m diameter. (It makes me think of a hole in my head.)

A hole in the ground reminds me of the unknown. FOOD cooked inside the ground relates to ideas, creativity and GIFT. It helps to have GUILLAUME or a strong and positive MAN to help you dig the hole. A second PERSON would be of great help, especially if, while digging, he would talk about taxonomies of immaterial labour.

Mussels eaten in the centre of Brussels are grown in Ireland and immersed in Dutch seawater and are then officially called Dutch. After 2 days in Dutch water, they are ready to be exported to Brussels and become Belgian mussels that are in fact Dutch-Irish.

figure D Original curanto STONES are round fire resistant stones. I couldn't find them in Brussels.

figure E A good BUCKET to scoop the rain out of your newly dug HOLE

The only round and granite stones were very expensive design ones. In Chile you just dig a hole anywhere and find them. The only fire resistant rock in Brussels was the STREET itself.

? Square shaped rocks collected randomly throughout the city by means of appropriation. Streets are made of a type of granite rock, might be Belgian porphyry. Note that there is a message on one of the stones we picked up in the centre. It reads ‘watch your head'.

figure F A tent to protect your FIRE from random RAIN

figure G LAIA or some psychonaut, hierophant friend.

Should be someone who is able to transmit confidence to the execution of el curanto and who will keep you company while you are appropriating stones in Brussels.

? A good BOUILLON made of cheap white wine and concentrated bio vegetables and spices is one of the secrets.

figure I GIRL that will randomly come to the place with her MOTHER and speak in Spanish to the carrier of the performance.

She will play the flute, give the OVEN some orders to cook well and sing improvised SONGS. She and some other children will play around by digging holes and making their own CURANTO.

figure J A big FIRE to heat up the wet cold ground of Brussels

figure H You need to find MOAM or some Palestinian fellow to help you keep the fire burning

figure K RED HOT COAL

figure L Using some cabbage leaves to cover the RED HOT COAL to place the FOOD on top of

figure M A SACK CLOTH to cover the food and to retain STEAM for cooking.

figure N DIDIER or some PANIC COOK MAN who is happy to SHARE his expert knowledge and willing to join in the performance.

? HOLE

? MUSSELS

figure O ONIONS, GESTURES and SPECULATIONS.

? While reading VALIS, the carrier of the performance will become reverend TIMOTHY ARCHER and read about TIME (something that has mainly been forgotten in Palestine).

figure P el curanto is to be made together with PEOPLE and for EVERYONE.

? WOOD found in a dismantled house. It helps to find a ride to transport it.

? SPICES, rosemary and bay leaf.

? MICHAEL or some DEDICATED friend that will assist with the execution of the performance and keep the pictures of it afterwards for months.

figure Q You can eat from the shell by using your hands or a little WOODEN SPOON.

If you want to eat later, take the mussels out of their shell, add OLIVE OIL, make a spread and keep it cold in a jar. Find QUEER couples to savour it with BREAD while talking about SEX.
? FIRE

? RED HOT COAL

? FOOD

? NOISE from the cooking MUSSELS. It helps to use ‘hot' PIEZZO MICROPHONES.

Here TIME turns into space. “Time can be overcome”, Mircea Eliade wrote. That's what it's all about.

The great mystery of Eleusis, of the Orphics, of the early Christians, of Sarapis, of the Greco-Roman mystery religions, of Hermes Trismegistos, of the Renaissance Hermetic alchemists, of the Rose Cross Brotherhood, of Apollonius of Tyana, of Simon Magus, of Asklepios, of Paracelsus, of Bruno, consists of the abolition of time. The techniques are there. Dante discusses them in the Comedy. It has to do with the loss of amnesia; when forgetfulness is lost, true memory spreads out backward and forward, into the past and into the future, and also, oddly, into alternate universes; it is orthogonal as well as linear. 1

1

Philip K. Dick, Valis (1981)

ALICE CHAUCHAT, FRÉDÉRIC GIES
License: Attribution-Noncommercial-No Derivative Work
EN

Praticable
Praticable is a collaborative research project between several artists
(currently: Alice Chauchat, Frédéric de Carlo, Frédéric Gies, Isabelle
Schad and Odile Seitz).
Praticable proposes itself as a horizontal work structure, which brings research, creation, transmission and production structures into relation with each other. This structure is the basis for the creation of a variety of performances by either one or several of the project's participants. In one way or another, these performances start from the exploration of body practices, leading to a questioning of the body's representation. More concretely, Praticable takes the form of collective
periods of research and shared physical practices, both of which are
the basis for various creations. These periods of research can either
be independent of the different creation projects or integrated within
them.
During Jonctions/Verbindingen 10, Alice Chauchat and Frédéric
Gies gave a workshop for participants dealing with different ‘body
practices'. On the basis of Body-Mind Centering (BMC) techniques,
the body as a locus of knowledge production was made tangible. The
notation of the Dance performance with which Frédéric Gies concluded the day is reproduced in this book and published under an
open license.

figure 120
Workshop for
participants
with different
body
practices
at V/J10

figure 121
The body as
a locus of
knowledge
production
was made
tangible

figure 122

figure 123

Dance (Notation)
20 sec.
31. INTERCELLULAR FLUID
Initiate movement in your intercellular fluid. Start slowly and
then put more and more energy
and speed in your movement, using intercellular fluid as a pump
to make you jump.

20 sec.
32. VENOUS BLOOD
Initiate movement in your venous
blood, rising and falling and following its waves.

20 sec.
33. VENOUS BLOOD
Initiate movement in your venous blood, slowing down progressively.

Less than 5 sec.
34. TRANSITION
Make visible in your movement a
transition from venous blood to
cerebrospinal fluid. Finish in the
same posture you chose to start
PART 3.

1 min.
35. EACH FLUID
Go through each fluid quality you
have moved with since the beginning of PART 3. The 1st one has
to be cerebrospinal fluid. After
this one, the order is free.

61. ALL GLANDS
Stand up slowly, building your
vertical axis from coccygeal body
to pineal gland. Use this time to
bond with earth through your
feet, as if you were growing roots.

INSTRUMENTAL (during the voice echo)
Down, down, down in your heart
find, find, find the secret
62. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
SMALL PERIMETER
Turn, turn, turn your head around
63. MAMILLARY BODIES
Turn and turn your head around,
initiating this movement in
mamillary bodies. Let your head
drive the rest of your body into
turns.

Baby we can do it
We can do it alright
64. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
Do you believe in love at first sight
It's an illusion, I don't care
Do you believe I can make you feel better
Too much confusion, come on over here
65. HEART BODY
Keep on dancing as if you were
dancing in a club and initiate
movements in your heart body,
connecting with your forearms
and hands.

License: Attribution-Noncommercial-No Derivative Work

Mutual Motions Video Library
To be browsed, a vision to be displaced

figure 126

figure 125

Wearing the video library, performer Isabelle Bats presents a selection of films related to the themes of V/J10. As a living memory, the
discs and media players in the video library are embedded in a dress
designed by artists collective De Geuzen. Isabelle embodies an accessible interface between you (the viewer) and the videos. This human
interface allows for a mutual relationship: viewing the films influences
the experience of other parts of the program, and the situation and
context in which you watch the films play a role in experiencing and
interpreting the videos. A physical exchange between existing imagery, real-time interpretation, experiences and context, emerges as
a result.
The V/J10 video library collects excerpts of performance and dance
video art, and (documentary) film, which reflect upon our complex
body–technique relations. Searching for the indicating, probing, disturbing or subverting gesture(s) in the endless feedback loop between
technology, tools, data and bodies, we collected historical as well as
contemporary material for this temporary archive.

Modern Times or the Assembly Line
Reflects the body in work environments, which are structured by
technology, ranging from the pre-industrial manual work with analogue
tools, to the assembly line, to postmodern surveillance configurations.
24 Portraits
Excerpt from a series of documentary portraits by Alain Cavalier, FR, 1988-1991.
24 Portraits is a series of short documentaries paying tribute to women's manual work. The intriguing and sensitive portraits of 24 women working in different trades reveal the intimacy of bodies and their working tools.

Humain, trop humain
Quotes from a documentary by Louis
Malle, FR, 1972.
A documentary filmed at the Citroën
car factory in Rennes and at the 1972
Paris auto show, documenting the monotonous daily routines of working the
assembly lines, the close interaction
between bodies and machines.

Performing the Border
Video essay by Ursula Biemann, CH,
1999, 45 min.
“Performing the Border is a video
essay set in the Mexican-U.S. border town Ciudad Juarez, where the
U.S. industries assemble their electronic and digital equipment, located
right across El Paso, Texas. The video discusses the sexualization of
the border region through labour division, prostitution, the expression of
female desires in the entertainment industry, and sexual violence in the public sphere. The border is presented
as a metaphor for marginalization and
the artificial maintenance of subjective boundaries at a moment when
the distinctions between body and machine, between reproduction and production, between female and male,
have become more fluid than ever.”
(Ursula Biemann)
http://www.geobodies.org

Maquilapolis (city of factories)
A film by Vicky Funari and Sergio
De La Torre, Mexico/U.S.A., 2006, 68
min.

Carmen works the graveyard shift in
one of Tijuana's maquiladoras, the
multinationally-owned factories that
came to Mexico for its cheap labour.
After making television components
all night, Carmen comes home to a
shack she built out of recycled garage
doors, in a neighbourhood with no
sewage lines or electricity. She suffers
from kidney damage and lead poisoning from her years of exposure to toxic
chemicals. She earns six dollars a day.
But Carmen is not a victim. She is a
dynamic young woman, busy making
a life for herself and her children.
As Carmen and a million other
maquiladora workers produce televisions, electrical cables, toys, clothes,
batteries and IV tubes, they weave
the very fabric of life for consumer nations. They also confront labour violations, environmental devastation and
urban chaos – life on the frontier of
the global economy. In Maquilapolis Carmen and her colleague Lourdes reach beyond the daily struggle for
survival to organize for change: Carmen takes a major television manufacturer to task for violating her labour
rights, Lourdes pressures the government to clean up a toxic waste dump
left behind by a departing factory.
As they work for change, the world
changes too: a global economic crisis
and the availability of cheaper labour
in China begin to pull the factories
away from Tijuana, leaving Carmen,
Lourdes and their colleagues with an
uncertain future.
A co-production of the Independent
Television Service (ITVS), project of
Creative Capital.
http://www.maquilapolis.com

Practices of everyday life
Everyday life as the place of a performative encounter between bodies
and tools, from the U.S.A. of the 70s to contemporary South Africa.

Saute ma ville
Chantal Akerman, B, 1968, 13 min.
A girl returns home happily. She locks herself up in her kitchen and messes up the domestic world. In her first film, Chantal Akerman explores a scattered form of being, where the relationship with the controlled human world literally explodes. Abolition of oneself, explosion of oneself.

Semiotics of the Kitchen

Video by Martha Rosler, U.S.A., 1975,
05:30 min.
Semiotics of the Kitchen adopts the
form of a parodic cooking demonstration in which, Rosler states, “An
anti-Julia Child replaces the domesticated ‘meaning' of tools with a lexicon
of rage and frustration.” In this performance-based work, a static camera is
focused on a woman in a kitchen. On
a counter before her are a variety of
utensils, each of which she picks up,
names and proceeds to demonstrate,
but with gestures that depart from the
normal uses of the tool. In an ironic
grammatology of sound and gesture,
the woman and her implements enter
and transgress the familiar system of
everyday kitchen meanings – the securely understood signs of domestic
industry and food production erupt
into anger and violence. In this alphabet of kitchen implements, Rosler states that, “When the woman speaks, she names her own oppression.”

“I was concerned with something like the notion of ‘language speaking the subject', and with the transformation of the woman herself into a sign in a system of signs that represent a system of food production, a system of harnessed subjectivity.” (Martha Rosler)

Choreography

Video installation preview by Anke
Schäfer, NL/South Africa, 13:07 min
(loop), 2007.
Choreography reflects on the notion
‘Armed Response' as an inner state
of mind. The split screen projection
shows the movements of two women
commuting to their work. On the one
side, the German-South African Edda
Holl, who lives in the rich Northern
suburbs of Johannesburg. Her search
for a safe journey is characterized
by electronic security systems, remote
controls, panic buttons, her constant
cautiousness, the reassuring glances
in the tinted car windows. On the
other side, you see the African-South
African Gloria Fumba, who lives in
Soweto and whose security techniques
are very basic: clutching her handbag to her body, the way she cues for
the bus, avoiding to go home alone
when it's dark. A classical continuity

200

200

200

201

201

editing, as seen fiction film, suggests
at first a narrative storyline, but is
soon interrupted by moments of pause.
These pauses represent the desires of
both women to break with the safety
mechanism that motivates their daily
movements.

Television
Ximena Cuevas, Mexico, 1999, 2 min.
“The vacuum cleaner becomes the device of the feminist ‘liberation', or the
monster that devours us.” (Insite 2000
program, San Diego Museum of Art)

http://www.livemovie.org

Perform the script, write the score
Considers dance and performance as knowledge systems where movement and data interact. With excerpts of performance documents,
interviews and (dance) films. But also the script, the code, as a system of perversion, as an explorative space for the circulation of bodies.
William Forsythe's works
Choreography can be understood as
writing moving bodies into space, a
complex act of inscription, which is
situated on the borderline between
creating and remembering, future and
past. Movement is prescribed and is
passing at the same time. It can be
inscribed into the visceral body memory through constant repetition, but
it is also always undone. As Laurie Anderson says: “You're walking. And you don't always realize it, but you're always falling. With each step you fall forward slightly. And then catch yourself from falling. Over and over, you're falling. And then catching yourself from falling.” (Quoted after Gabriele Brandstetter, ReMembering the Body)
William Forsythe, for instance, considers classical ballet a historical form of a knowledge system, loaded with ideologies about society, the self, the body, rather than a fixed set of rules that can simply be implemented. An arabesque is for him a platonic ideal, a prescription,
but it can't be danced: “There is
no arabesque, there is only everyone's arabesque.” His choreography
is concerned with remembering and
forgetting: referencing classical ballet, creating a geometrical alphabet,
which expands the classical form, and
searching for the moment of forgetfulness, where new movement can arise.
Over the years, he and his company have developed an understanding of dance as a complex system of processing information, with some analogies to computer programming.

Chance favours the prepared mind
Educational dance film, produced by Vlaams Theaterinstituut, Ministerie van Onderwijs dienst Media and Informatie, dir. Anne Quirynen, 1990, 25 min.

Chance favours the prepared mind features discussions and demonstrations by William Forsythe and four Frankfurt Ballet dancers about their understanding of movement and their working methods: “Dance is like writing or drawing, some sort of inscription.” (William Forsythe)

The way of the weed
Experimental dance film featuring
William Forsythe, Thomas McManus
and dancers of the Frankfurt Ballet,
An-Marie Lambrechts, Peter Missotten and Anne Quirynen, soundtrack:
Peter Vermeersch, 1997, 83 min.
In this experimental dance film, investigator Thomas is dropped in a desert
in 7079, not only to investigate the
growth movements of the plant life
there, but also the life's work of the
obscure scientist William F. (William
Forsythe), who has achieved numerous insights and discoveries on the
growth and movement of plants. This
knowledge is stored in the enormous
data bank of an underground laboratory. It is Thomas's task to hack into
his computer and check the professor's secret discoveries. His research
leads him into the catacombs of a
complex building, where he finds people stored in cupboards in a comatose
state. They are loaded with professor F.'s knowledge of vegetation. He
puts the ‘people-plants' into a large
transparent pool of water and notices
that in the water the ‘samples' come to life again... A complex reflection on (body) memory, (digital) archives and movement as repetition and interference.

Rehearsal Last Supper
Video installation preview by Anke Schäfer, NL/South Africa, 16:40 min. (loop), 2007.

The work Rehearsal Last Supper combines a kind of ‘Three Stooges' physical, slapstick-style comedy with far more serious subject matter such as abuse, gender violence, and the general breakdown of family relationships. It is a South African and mixed-couple re-enactment of a similar scene that Bruce Nauman realized in the 70s with a white, middle-aged man and woman.
The experience, the ‘Gestalt' of the experienced violence, the frustration and the unwilling or even forced internalization are felt to the core of the voice and the body. Humour can help to express the suppressed and to use your pain as power.
Actors: Nat Ramabulana, Tarryn Lee,
Megan Reeks, Raymond Ngomane
(from Wits University Drama department), Kekeletso Matlabe, Lebogang
Inno, Thabang Kwebu, Paul Noko
(from Market Theatre Laboratory).
http://www.livemovie.org

Nest Of Tens
Miranda July, U.S.A., 1999, 27 min.
Nest Of Tens comprises four alternating stories, which reveal mundane yet personal methods of control.
These systems are derived from intuitive sources. Children and a retarded
adult operate control panels made out
of paper, lists, monsters, and their
own bodies.
“A young boy, home alone, performing a bizarre ritual with a baby; an uneasy, aborted sexual flirtation between
a teenage babysitter and an older man;
an airport lounge encounter between a
businesswoman (played by July) and a
young girl. Linked by a lecturer enumerating phobias in a quasi-academic
seminar, these three perverse, unnerving scenarios involving children and
adults provide authentic glimpses into
the queasy strangeness that lies behind the everyday.” (New York Video
Festival, 2000)

In the field of players
Jeanne Van Heeswijk & Marten Winters, 2004, NL
Duration: 25.01.2004 – 31.01.2004
Location: TENT.Rotterdam
Participants: 106 through casting, 260
visitors of TENT.
Together with artist Marten Winters,
Van Heeswijk developed a ‘game:set'.
In cooperation with graphic designer
Roger Teeuwen, they marked out a
set of lines and fields on the ground.
Just like in a sporting venue, these
lines had no meaning until used by the
players. The relationship between the
players was revealed by the rules of the
game.
Designer Arienne Boelens created special game cards that were handed out
during the festival by the performance
artists Bliss. Both Bliss and the cards
turned up all over the festival, showing
up at every hot spot or special event.
Through these game cards people were invited to fulfil the various roles of
the game – like ‘Round Miss' (the
girl who walks around the ring holding up a numbered card at the start
of each round at boxing matches),
‘40-plus male in (high) cultural position', ‘Teen girl with star ambitions',
‘Vital 65-plus'. Even ‘Whisperer' and ‘Audience' were specific roles.

Writing Desire
Video essay by Ursula Biemann, CH,
2000, 25 min.

Writing Desire is a video essay on the new dream screen of the Internet and its impact on the global circulation of women's bodies from the ‘Third World' to the ‘First World'. Although underage Philippine ‘pen pals' and post-Soviet mail-order brides were part of the transnational exchange of sex in the post-colonial and post-Cold War marketplace of desire before the digital age, the Internet has accelerated these transactions. The video provides the viewer with a thoughtful meditation on the obvious political, economic and gender inequalities of these exchanges by simulating the gaze of the Internet shopper looking for the imagined docile, traditional, pre-feminist, but Web-savvy mate.
http://www.geobodies.org


INÈS RABADAN
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Does the repetition of a gesture irrevocably
lead to madness?

figure 127: Screening Modern Times at V/J10

A personal introduction to Modern Times
(Charles Chaplin, 1936)
figure 128

One of the most memorable moments of Modern Times is the one where the tramp goes mad after having spent the whole day screwing bolts on the assembly line. He is free: neither husband, nor worker, nor follower of some kind of movement, nor even politically engaged. His gestures are burlesque responses to the adversity in his life, or just plain ‘exuberant'. Through the interaction with the machine, however, he completely goes off the rails and ends up in prison.
Inès Rabadan made two short films in which a female protagonist
is confined by the fast-paced work of the assembly line. Tragically
and mercilessly, the machine changes the woman and reduces her to
a mechanical gesture – a gesture in which she sometimes takes pride,
precisely in order not to lose her sanity. Or else, she really goes mad,
ruined by the machine, eventually managing to free herself.

figure 129

figure 130


MICHAEL TERRY
License: Free Art License
EN

Data analysis as a discourse

figure 131: Michael Terry in between LGM sessions

An interview with Michael Terry
Michael Terry is a computer scientist working at the Human Computer Interaction Lab of the University of Waterloo, Canada. His
main research focus is on improving usability in open source software, and ingimp is the first result of that work.
In a Skype conversation broadcast live in La Bellone during Verbindingen/Jonctions 10, we spoke about ingimp, a clone of the popular image manipulation programme Gimp, but with an important difference: ingimp allows users to record data about their usage into a central database, and subsequently makes this data available to anyone.
At the Libre Graphics Meeting 2008 in Wroclaw, just before Michael
Terry presents ingimp to an audience of Gimp developers and users,
Ivan Monroy Lopez and Femke Snelting meet up with Michael Terry
again to talk more about the project and about the way he thinks
data analysis could be done as a form of discourse.

figure 132: Interview at Wroclaw

Femke Snelting (FS) Maybe we could start this face-to-face conversation with a description of the ingimp project you are developing and – what I am particularly interested in – why you chose to work on usability for Gimp?
Michael Terry (MT) So the project is ‘ingimp', which is an instrumented version of Gimp; it collects information about how the software is used in practice. The idea is you download it, you install it, and then, with the exception of an additional start-up screen, you use it just like regular Gimp. So, our goal is to be as unobtrusive as possible, to make it really easy to get going with it, and then to just
forget about it. We want to get it into the hands of as many people
as possible, so that we can understand how the software is actually
used in practice. There are plenty of forums where people can express
their opinions about how Gimp should be designed, or what's wrong
with it, there are plenty of bug reports that have been filed, there
are plenty of usability issues that have been identified, but what we
really lack is some information about how people actually apply this
tool on a day to day basis. What we want to do is elevate discussion
above just anecdote and gut feelings, and to say, well, there is this
group of people who appear to be using it in this way, these are the
characteristics of their environment, these are the sets of tools they
work with, these are the types of images they work with and so on,
so that we have some real data to ground discussions about how the
software is actually used by people.
You asked me why Gimp? I actually used Gimp extensively
for my PhD work. I had these little cousins come down and hang
out with me in my apartment after school, and I would set them up
with Gimp, and quite often they would start off with one picture,
they would create a sphere, a blue sphere, and then they played with
filters until they got something really different. I would turn to them
looking at what they had been doing for the past twenty minutes,
and would be completely amazed at the results they were getting
just by fooling around with it. And so I thought, this application
has lots and lots of power; I'd like to use that power to prototype
new types of interface mechanisms. So I created JGimp, which is a Java-based extension for the 1.0 Gimp series that I can use as a
back-end for prototyping novel user interfaces. I think that it is a
great application, there is a lot of power to it, and I had already an
investment in its code base, so it made sense to use that as a platform
for testing out ideas of open instrumentation.
FS: What is special about ingimp is that the data you collect is as free to use, run, study and distribute as the software you are studying. Could you describe how that works?

MT: Every bit of data we collect, we make available: you can go to
the website, you can download every log file that we have collected.
The intent really is for us to build tools and infrastructure so that the
community itself can sustain this analysis, can sustain this form of
usability. We don't want to create a situation where we are creating
new dependencies on people, or where we are imposing new tasks on
existing project members. We want to create tools that follow the
same ethos as open source development, where anyone can look at
the source code, where anyone can make contributions, from filing
a bug to doing something as simple as writing a patch, where they
don't even have to have access to the source code repository, to make
valuable contributions. So importantly, we want to have a really low
barrier to participation. At the same time, we want to increase the
signal-to-noise ratio. Yesterday I talked with Peter Sikking, an information architect working for Gimp, and he and I both had this
experience where we work with user interfaces, and since everybody
uses an interface, everybody feels they are an expert, so there can be
a lot of noise. So, not only did we want to create an open environment for collecting this data, and analysing it, but we also wanted to
increase the chance that we are making valuable contributions, and
that the community itself can make valuable contributions. Like I
said, there is enough opinion out there. What we really need to do
is to better understand how the software is being used. So, we have
made a point from the start to try to be as open as possible with
everything, so that anyone can really contribute to the project.
FS: Ingimp has been running for a year now. What are you finding?
MT: I have started analysing the data, and I think one of the things
that we realised early on is that it is a very rich data set; we have lots
and lots of data. So, after a year we've had over 800 installations, and
we've collected about 5000 log files, representing over half a million
commands, representing thousands of hours of the application being
used. And one of the things you have to realise is that when you have
a data set of that size, there are so many different ways to look at it
that my particular perspective might not be enough. Even if you sit
someone down, and you have him or her use the software for twenty
minutes, and you videotape it, then you can spend hours analysing
just those twenty minutes of videotape. And so, I think that one of
the things we realised is that we have to open up the process so that
anyone could easily participate. We have the log files available, but there really wasn't an infrastructure for analysing them. So, we
created this new piece of software called ‘Stats Jam', an extension
to MediaWiki, which allows anyone to go to the website and embed
SQL-queries against the ingimp data set and then visualise those
results within the Wiki text. So, I'll be announcing that today and
demonstrating that, but I have been using that tool now for a week
to complement the existing data analysis we have done.
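
To give a concrete sense of what such an embedded query might look like, here is a minimal sketch in Python; the database file name and the commands table (with log_id, user_id and command_name columns) are assumptions made for this illustration, not the actual ingimp or Stats Jam schema.

    # Minimal sketch: the kind of aggregate query a Stats Jam page might
    # embed. The schema, commands(log_id, user_id, command_name), and the
    # database file name are hypothetical, for illustration only.
    import sqlite3

    conn = sqlite3.connect("ingimp_logs.db")  # assumes this database exists
    query = """
        SELECT command_name, COUNT(*) AS uses
        FROM commands
        GROUP BY command_name
        ORDER BY uses DESC
        LIMIT 10;
    """
    for name, uses in conn.execute(query):
        print(name, uses)
    conn.close()
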
One of the first things that we realized is that we have over 800
installations, but then you have to ask, how many of those are really serious users? A lot of people probably just were curious, they
downloaded it and installed it, found that it didn't really do much
for them and so maybe they don't use it anymore. So, the first thing
we had to do was figure out which data points we should really pay attention to. We decided that a person should have used ingimp on
two different occasions, preferably at least a day apart, where they'd
saved an image on both occasions. We used that as an indication of what a serious user is. So with that filter in place, the ‘800
installations' drops down to about 200 people. So we had about 200
people using ingimp; and looking at the data, this represents about
800 hours of use, about 4000 log files, and again still about half a
million commands. So, it's still a very significant group of people.
200 people are still a lot, and that's a lot of data, representing about
11000 images they have been working on – there's just a lot.
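
As a sketch of how such a filter could be expressed, the function below applies the heuristic just described to per-installation session summaries; the session record format is invented for the example, but the rule (two image-saving sessions at least a day apart) follows the criterion described above.

    # Sketch of the 'serious user' filter described above: an installation
    # counts if it has two sessions, at least a day apart, that each saved
    # an image. The session record format here is hypothetical.
    from datetime import date

    def is_serious_user(sessions):
        """sessions: list of (day, saved_image) pairs for one installation."""
        saving_days = sorted(day for day, saved in sessions if saved)
        return len(saving_days) >= 2 and (saving_days[-1] - saving_days[0]).days >= 1

    # Example: two image-saving sessions two days apart -> serious user.
    print(is_serious_user([(date(2007, 5, 1), True), (date(2007, 5, 3), True)]))
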
From that group, what we found is that use of ingimp is really
short and versatile. So, most sessions are about fifteen minutes or
less, on average. There are outliers, there are some people who use it
for longer periods of time, but really it boils down to them using it for
about fifteen minutes, and they are applying fewer than a hundred
operations when they are working on the image. I should probably
be looking at my data analysis as I say this, but they are very quick,
short, versatile sessions, and when they use it, they use fewer than 10 different tools, or they apply fewer than 10 different commands.
What else did we find? We found that the two most popular monitor resolutions are 1280 by 1024, and 1024 by 768. So, those represent
collectively 60 % of the resolutions, and really 1280 by 1024 represents
pretty much the maximum for most people, although you have some
higher resolutions. So one of the things that's always contentious
about Gimp, is its window management scheme and the fact that it
has multiple windows, right? And some people say, well you know,
this works fine if you have two monitors, because you can throw out
the tools on one monitor and then your images are on another monitor. Well, about 10 to 15 % of ingimp users have two monitors, so
that design decision is not working out for most of the people, if that
is the best way to work. These are things I think people have been aware of; it's just that now we have some actual concrete numbers you can turn to and say: this is how people are using it.
There is a wide range of tasks that people are performing with the
tool, but they are really short, quick tasks.
FS: Every time you start up ingimp, a screen comes up asking
you to describe what you are planning to do and I am interested in
the kind of language users invent to describe this, even when they
sometimes don't know exactly what it is they are going to do. So
inventing language for possible actions with the software has in a
way become a creative process that is now shared between interface
designer, developer and user. If you look at the ‘activity tags' you
are collecting, do you find a new vocabulary developing?
MT: I think there are 300 to 600 different activity tags that people
register within that group of ‘significant users'. I didn't have time to
look at all of them, but it is interesting to see how people are using
that as a medium for communicating to us. Some people will say,
“Just testing out, ignore this!” Or, people are trying to do things like
insert HTML code, to do like a cross-site scripting attack, because,
you have all the data on the website, so they will try to play with
that. Some people are very sparse and they say ‘image manipulation'
or ‘graphic design' or something like that, but then some people are
much more verbose, and they give more of a plan, “This is what I
expect to be doing.” So, I think it has been interesting to see how
people have adopted that and what's nice about it, is that it adds a
really nice human element to all this empirical data.
Ivan Monroy Lopez (IM): I wanted to ask you about the data; without getting too technical, could you explain how these data are structured? What do the log files look like?
MT: So the log files are all in XML, and generally we compress
them, because they can get rather large. And the reason that they
are rather large is that we are very verbose in our logging. We want
to be completely transparent with respect to everything, so that if
you have some doubts or if you have some questions about what kind
of data has been collected, you should be able to look at the log file,
and figure out a lot about what that data is. That's how we designed
the XML log files, and it was really driven by privacy concerns and
by the desire to be transparent and open. On the server side we take
that log file and we parse it out, and then we throw it into a database,
so that we can query the data set.
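
As an illustration of that pipeline (and only as an illustration: the element names below are invented, and real ingimp logs are far more verbose), a log file along these lines could be parsed and loaded into a queryable table as follows.

    # Illustration of parsing an XML usage log into a database table.
    # The XML structure here is invented for the sketch; actual ingimp
    # log files are much more verbose than this.
    import sqlite3
    import xml.etree.ElementTree as ET

    sample_log = """
    <session version="2.2">
      <command name="gimp-crop" time="1024"/>
      <command name="gimp-scale" time="2311"/>
    </session>
    """

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE commands (name TEXT, time INTEGER)")
    for cmd in ET.fromstring(sample_log.strip()).iter("command"):
        conn.execute("INSERT INTO commands VALUES (?, ?)",
                     (cmd.get("name"), int(cmd.get("time"))))
    print(conn.execute("SELECT COUNT(*) FROM commands").fetchone()[0])  # 2
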
FS: Now we are talking about privacy... I was impressed by the
work you have done on this; the project is unusually clear about why
certain things are logged, and other things not; mainly to prevent
the possibility of ‘playing back' actions so that one could identify
individual users from the data set. So, while I understand there are
privacy issues at stake I was wondering... what if you could look at the
collected data as a kind of scripting for use, as writing a choreography
that might be replayed later?
MT: Yes, we have been fairly conservative with the type of information that we collect, because this really is the first instance where anyone has captured such rich data about how people are using software on a day-to-day basis, and then made all that data publicly
available. When a company does this, they will keep the data internally, so you don't have this risk of someone outside figuring something out about a user that wasn't intended to be discovered. We
have to deal with that risk, because we are trying to go about this
in a very open and transparent way, which means that people may
be able to subject our data to analysis or data mining techniques
that we haven't thought of, and extract information that we didn't intend to record in our log files, but which is still there. So there are
fairly sophisticated techniques where you can do things like look at
audio recordings of typing and the timings between keystrokes, and
then work backwards with the sounds made to figure out the keys
that people are likely pressing. So, just with keyboard audio and keystroke timings alone, you often have enough information to be able to reconstruct what people are actually typing. So we are always sort of wary about how much information is in there.
While it might be nice to be able to do something like record people's actions and then share that script, I don't think that that is
really a good use of ingimp. That said, I think it is interesting to
ask: could we characterize people's use enough, so that we can start
clustering groups of people together and then providing a forum for
these people to meet and learn from one another? That's something
we haven't worked out. I think we have our work cut out for us
right now just to characterize how the community is using it.
FS: It was not meant as a feature request, but as a way to imagine
how usability research could flip around and also become productive
work.
MT: Yes, totally. One of the things we found when we brought people in to look at the basic usability of the ingimp software and website is that people like looking at what commands other
people are using, what the most frequently used commands are; and
part of the reason that they like that, is because of what it teaches
them about the application. So they might see a command they were
unaware of. So we have toyed with the idea of then providing not
only the command name, but also a link from that command name to the documentation – I didn't have time to implement it, but certainly there are possibilities like that, you can imagine.
FS: Maybe another group can figure something out like that? That's
the beauty of opening up your software plus data set of course.
Well, just a bit more on what is logged and what not... Maybe you
could explain where and why you put the limit, and what kind of use
you might miss out on as a result?
MT: I think it is important to keep in mind that whatever instrument you use to study people, you are going to have some kind of
bias; you are going to get some information at the cost of other information. So if you do a videotaped observation of a user and you
just set up a camera, then you are not going to find details about
the monitor maybe, or maybe you are not really seeing what their
hands are doing. No matter what instrument you use, you are always
getting a particular slice.
I think you have to work backwards and ask what kind of things
do you want to learn. And so the data that we collect right now, was
really driven by what people have done in the past in the area of instrumentation, but also by us bringing people into the lab, observing
them as they are using the application, and noticing particular behaviours and saying, hey, that seems to be interesting, so what kind of
data could we collect to help us identify those kind of phenomena, or
that kind of performance, or that kind of activity? So again, the data
that we were collecting was driven by watching people, and figuring
out what information will help us to identify these types of activities.
As I've said, this is really the first project that is doing this, and
we really need to make sure we don't poison the well. So if it happens that we collect some bit of information, that then someone can
later say, “Oh my gosh, here is the person's file system, here are the
names they are using for the files” or whatever, then it's going to
make the normal user population wary of downloading this type of
instrumented application. The thing that concerns me most about
open source developers jumping into this domain, is that they might
not be thinking about how you could potentially impact privacy.
IM: I don't know, I don't want to get paranoid. But if you are
doing it, then there is a possibility someone else will do it in a less
considerate way.
MT: I think it is only a matter of time before people start doing
this, because there are a lot of grumblings about, “We should be
doing instrumentation, someone just needs to sit down and do it.”
Now there is an extension out for Firefox that will collect this kind of data as well, so you know...
IM: Maybe users could talk with each other, and if they are aware
that this type of monitoring could happen, then that would add a
different social dimension. . .
MT: It could. I think it is a matter of awareness, really. We have a lengthy consent agreement that details the type of information we are collecting and the ways your privacy could be impacted, but people don't read it.
FS: So concretely... what information are you recording, and what
information are you not recording?
MT: We record every command name that is applied to a document,
to an image. Where your privacy is at risk with that, is that if you
write a custom script, then that custom script's name is going to be
inserted into a log file. And so if you are working for example for Lucas
or DreamWorks or something like that, or ILM, in some Hollywood
movie studio and you are using ingimp and you are writing scripts,
then you could have a script like ‘fixing Shrek's beard', and then that
is getting put into the log file and then people are going to know that
the studio uses ingimp.

We collect command names, we collect things like what windows
are on the screen, their positions, their sizes, and we take hashes of
layer names and file names. We take a string and then we create a
hash code for it, and we also collect information about how long the string is, how many alphabetical characters, numbers; things like
that, to get a sense of whether people are using the same files, the
same layer names time and time again, and so on. But this is an
instance where our first pass at this actually left open the possibility
of people taking those hashes and then reconstructing the original
strings from that. Because we have the hash code, we have the length
of the string – all you have to do is generate all possible strings of
that length, take the hash codes and figure out which hashes match.
And so we had to go back and create a new scheme for recording this
type of information where we create a hash and we create a random
number, we pair those up on the client machine but we only log the
random number. So, from log to log then, we can track if people
use the same image names, but we have no idea of what the original
string was.
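
Both the flaw and the repair can be sketched in a few lines. The hash function, alphabet and token size below are choices made for this example, but the logic follows the description above: a hash plus a known length can simply be enumerated for short strings, whereas a random number paired with the hash only on the client machine reveals nothing about the original string.

    # Sketch of the privacy flaw and its fix, as described above.
    # Hash function, alphabet and token length are illustrative choices.
    import hashlib
    import itertools
    import secrets

    def leaky_record(name):
        # First scheme: log the hash of the string plus its length.
        return hashlib.sha1(name.encode()).hexdigest(), len(name)

    def brute_force(record, alphabet="abcdefghijklmnopqrstuvwxyz"):
        # With hash and length known, short names can be enumerated.
        digest, length = record
        for letters in itertools.product(alphabet, repeat=length):
            candidate = "".join(letters)
            if hashlib.sha1(candidate.encode()).hexdigest() == digest:
                return candidate

    print(brute_force(leaky_record("cat")))  # recovers 'cat'

    # Repaired scheme: pair the hash with a random number on the client
    # and log only the random number. Logs can still be correlated with
    # one another, but the original string cannot be reconstructed.
    client_pairing = {}  # stays on the client machine, never uploaded

    def safe_record(name):
        digest = hashlib.sha1(name.encode()).hexdigest()
        return client_pairing.setdefault(digest, secrets.token_hex(8))

    print(safe_record("cat") == safe_record("cat"))  # True: stable token
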
There are these little ‘gotchas' like that, that I don't think most people are aware of, and this is why I get really concerned about instrumentation efforts right now, because there isn't this body of experience about what kind of data we should collect, and what we shouldn't.
FS: As we are talking about this, I am already more aware of what data I would allow to be collected. Do you think opening up this data set, and the transparent process of collecting and not collecting, will help educate users about these kinds of risks?
MT: It might, but honestly I think the thing that would educate people the most is if there were a really large privacy error that got a lot of news, because then people would become more aware of it – and this is not to say that we want that to happen with ingimp. Right now, when we bring people in and ask them, “Are you concerned about privacy?”, they say “No”, and when we ask “Why?” – well, they inherently trust us, but the
fact is that open source also lends a certain amount of trust to it,
because they expect that since it is open source, the community will
in some sense police it and identify potential flaws with it.
FS: Is that happening? Are you in dialogue with the open source
community about this?
MT: No, I think probably five to ten people have looked at the
ingimp code – realistically speaking I don't think a lot of people looked
at it. Some of the Gimp developers took a gander at it to see “How
could we put this upstream?” But I don't want it upstream, because
I want it to always be an opt-in, so that it can't be turned on by
mistake.
FS: You mean you have to download ingimp and use it as a separate
program? It functions in the same way as Gimp, but it makes the
fact that it is a different tool very clear.
MT: Right. You are more aware, because you are making that
choice to download that, compared to the regular version. There is
this awareness about that.
We have this lengthy text-based consent agreement that talks about
the data we collect, but less than two percent of the population reads
license agreements. And, most of our users are actually non-native
English speakers, so there are all these things that are working against
us. So, for the past year we have really been focussing on privacy, not
only in terms of how we collect the data, but how we make people
aware of what the software does.
We have been developing wordless diagrams to illustrate how the
software functions, so that we don't have to worry about localisation
errors as much. And so we have these illustrations that show someone
downloading ingimp, starting it up, a graph appears, there is a little
icon of a mouse and a keyboard on the graph, and they type and you
see the keyboard bar go up, and then at the end when they close the
application, you see the data being sent to a web server. And then
we show snapshots of them doing different things in the software, and
then show a corresponding graph change. So, we developed these by
bringing in both native and non-native speakers, having them look at
the diagrams and then tell us what they meant. We had to go through
about fifteen people and continual redesign until most people could
understand and tell us what they meant, without giving them any
help or prompts. So, this is an ongoing research effort, to come up
with techniques that not only work for ingimp, but also for other
instrumentation efforts, so that people can become more aware of the
implications.
FS: Can you say something about how this type of research relates
to classic usability research and in particular to the usability work
that is happening in Gimp?
MT: Instrumentation is not new; commercial software companies
and researchers have been doing instrumentation for at least ten years,
probably ten to twenty years. So, the idea is not new, but what is
new – in terms of the research aspects of this – is how do we do this
in a way where we can make all the data open? The fact that you
make the data open really impacts your decision about the type of
data you collect and how you are representing it. And you need to
really inform people about what the software does.
But I think your question is... how does it impact the Gimp usability process? Not at all, right now. But that is because we have intentionally stayed off to the side until we got to the point where we had an infrastructure where the entire community could really participate in the data analysis. We really want this to be a self-sustaining infrastructure; we don't want to create a system where you have to rely on just one other person for this to work.
IM: What approach did you take in order to make this project
self-sustainable?

MT: Collecting data is not hard. The challenge is to understand the data, and I don't want to create a situation where the community is relying on only one person to do that kind of analysis, because this is dangerous for a number of reasons. First of all, you are creating a dependency on an external party, and that party might have other obligations and commitments, and might have to leave at some point. If that is the case, then you need to be able to pass the baton to someone else, even if that could take a considerable amount of time and so on.
You also don't want to have this external dependency because, given the richness of the data, you really need to have multiple people looking at it, trying to understand and analyse it. So how are
we addressing this? It is through this Stats Jam extension to the
MediaWiki that I will introduce today. Our hope is that this type
of tool will lower the barrier for the entire community to participate
in the data analysis process, whether they are simply commenting on
the analysis we made or taking the existing analysis, tweaking it to
their own needs, or doing something brand new.
When I talk with members of the Gimp project here at the Libre Graphics Meeting, they start asking questions like, “So how many people are doing this, how many are doing this, and how many that?” They'll ask me while we are sitting in a café, and I will be able to pop the database open and say, “A certain number of people have done this”, or, “No one has actually used this tool at all.”
The danger is that this data is very rich and nuanced, and you
can't really reduce these kinds of questions to an answer of “N people
do this”, you have to understand the larger context. You have to
understand why they are doing it, why they are not doing it. So, the
data helps to answer some questions, but it generates new questions.
The data gives you some understanding of how people are using it, but then it generates new questions: “Why is this the case?” Is this
because these are just the people using ingimp, or is this some more
widespread phenomenon?
They asked me yesterday how many people are using this colour
picker tool – I can't remember the exact name – so I looked and there
was no record of it being used at all in my data set. So I asked them
when did this come out, and they said, “Well it has been there at
least since 2.4.” And then you look at my data set, and you notice
that most of my users are in the 2.2 series, so that could be part of
the reasons. Another reason could be, that they just don't know that
it is there, they don't know how to use it and so on. So, I can answer
the question, but then you have to sort of dig a bit deeper.
FS: You mean you can't say that because it is not used, it doesn't
deserve any attention?
MT: Yes, you just can't jump to conclusions like that, which is
again why we want to have this community website, which shows the
reasoning behind the analysis: here are the steps we had to go through
to get this result, so you can understand what that means, what the
context means – because if you don't have that context, then it's sort
of meaningless. It's like asking, “What are the most frequently used
commands?” This is something that people like to ask about. Well
really, how do you interpret that? Is it the numbers of times it has
been used across all log files? Is it the number of people that have
used it? Is it the number of log files where it has been used at least
once? There are lots and lots of ways in which you can interpret
this question. So, you really need to approach this data analysis as
a discourse, where you are saying: here are my assumptions, here is
how I am getting to this conclusion, and this is what it means for
this particular group of people. So again, I think it is dangerous if one person does that and you come to rely on that one person. We
really want to have lots of people looking at it, and considering it,
and thinking about the implications.
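
The ambiguity can be made concrete against the same hypothetical commands(log_id, user_id, command_name) table used in the earlier sketches: each reading of ‘most frequently used' is a different aggregation, and they need not agree on the answer.

    # Three readings of 'the most frequently used command' over the same
    # hypothetical commands(log_id, user_id, command_name) table: total
    # occurrences, distinct users, and distinct log files can disagree.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE commands (log_id, user_id, command_name)")
    conn.executemany("INSERT INTO commands VALUES (?, ?, ?)", [
        (1, "a", "crop"), (1, "a", "crop"), (1, "a", "crop"),
        (2, "b", "scale"), (3, "c", "scale"),
    ])

    measures = {
        "total occurrences": "COUNT(*)",
        "distinct users": "COUNT(DISTINCT user_id)",
        "distinct log files": "COUNT(DISTINCT log_id)",
    }
    for label, measure in measures.items():
        top = conn.execute(
            f"SELECT command_name, {measure} AS n FROM commands "
            "GROUP BY command_name ORDER BY n DESC LIMIT 1").fetchone()
        print(label, "->", top)
    # 'crop' wins on total occurrences, but 'scale' wins on distinct
    # users and on distinct log files: same data, three answers.
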
FS: Do you expect that this will impact the kind of interfaces that
can be done for Gimp?
MT: I don't necessarily think it is going to impact interface design,
I see it really as a sort of reality check: this is how communities are
using the software and now you can take that information and ask,
do we want to better support these people or do we... For example, in my data set, most people are working on relatively small images
for short periods of time, the images typically have one or two layers,
so they are not really complex images. So regarding your question,
one of the things you can ask is, should we be creating a simple tool
to meet these people's needs? All the people are just doing cropping
and resizing, fairly common operations, so should we create a tool
that strips away the rest of the stuff? Or, should we figure out why
people are not using any other functionality, and then try to improve
the usability of that?
There are so many ways to use data – I don't really know how
it is going to be used, but I know it doesn't drive design. Design
happens from a really good understanding of the users, the types of
tasks they perform, the range of possible interface designs that are
out there, lots of prototyping, evaluating those prototypes and so on.
Our data set really is a small potential part of that process. You can say, well, according to this data set, it doesn't look like many people are using this feature, so let's not focus too much on that, let's focus on these other features; or conversely, let's figure out why they are not using them... Or you might even look at things like how big their
monitor resolutions are, and say, well, given the size of the monitor
resolution, maybe this particular design idea is not feasible. But I
think it is going to complement the existing practices, in the best
case.
FS: And do you see a difference in how interface design is done in
free software projects, and in proprietary software?
MT: Well, I have been mostly involved in the research community,
so I don't have a lot of exposure to design projects. I mean, in my
community we are always trying to look at generating new knowledge,
and not necessarily at how to get a product out the door. So, the
goals or objectives are certainly different.

I think one of the dangers in your question is that you sort of
lump a lot of different projects and project styles into one category
of ‘open source'. ‘Open source' ranges from volunteer driven projects
to corporate projects, where they are actually trying to make money
out of it. There is a huge diversity of projects that are out there;
there is a wide diversity of styles, there is as much diversity in the
open source world as there is in the proprietary world.
One thing you can probably say, is that for some projects that are
completely volunteer driven like Gimp, they are resource strapped.
There is more work than they can possibly tackle with the number of
resources they have. That makes it very challenging to do interface
design; I mean, interface code can account for 50 or 75 % of a code base. That is not insignificant; it is very difficult to hack,
and you need to have lots of time and manpower to be able to do
significant things. And that's probably one of the biggest differences
you see for the volunteer driven projects: it is really a labour of
love for these people and so very often the new things interest them,
whereas in a commercial software company, developers sometimes have to do things they don't like, because that is what is going to sell the product.


SADIE PLANT
License: Creative Commons Attribution-NonCommercial-ShareAlike
Interwoven with her own thoughts and experiences, Sadie Plant gave a situated report on the Mutual Motions track, and responded to the issues discussed during the weekend.

figure 146: Sadie Plant reports at V/J10

EN

A Situated Report
I have to begin with many thanks to Femke and Laurence, because
it really has been a great pleasure for me to have been here this weekend. It's nearly five years since I came to an event like this, believe
it or not, and I really cannot say enough how much I have enjoyed it,
and how stimulating I have found it. So yes, a big thank you to both
for getting me here. And as you say, it's ten years since I wrote Zeros
+ Ones, and you are marking ten years of this festival too, so it's an
interesting moment to think about a lot of the issues that have come
up over the weekend. This is a more or less spontaneous report, very
much an ‘open performance', to use Simon Yuill's words, and not to
be taken as any kind of definitive account of what has happened this
weekend. But still I hope it can bring a few of the many and varied strands of this event together, not to form a true conclusion, but
perhaps to provide some kind of digestif after a wonderful meal.
I thought I should begin as Femke very wisely began, with the
theme of cooking. Femke gave us a recipe at the beginning of the
weekend, really a kind of recipe for the whole event, with cooking as
an example of the fact that there are many models, many activities,
many things that we do in our everyday lives, which might inform and expand our ideas about technologies and how we work with them.
So, I too will begin with this idea of cooking, which is as Femke
said a very magical, transformative experience. Femke's clip from
the Cathérine Deneuve film was a really lovely instance of the kind
of deep elemental, magical chemistry which goes on in cooking. It is
this that makes it such an instructive and interesting candidate, for a
model to illuminate the work of programming, which itself obviously
has this same kind of potential to bring something into effect in a very
direct and immediate sense. And cooking is also the work behind the
scenes, the often forgotten work, again a little bit like programming,
that results in something which – again like a lot of technology – can
operate on many different scales. Cooking is in one sense the most
basic kind of activity, a simple matter of survival, but it can also
work on a gourmet level too, where it becomes the most refined – and
well paid – kind of work. It can be the most detailed, fiddly, sort of
decorative work; it can be the most backbreaking, heavy industrial
work – bread making for example as well. So it really covers the whole
panoply of these extremes.
If we think about a recipe, and ask ourselves about the machine that
the recipe requires, it's obviously running on an incredibly complex
assemblage: you have the kitchen, you have all the ingredients, you
have machines for cooling things, machines for heating things, you
have the person doing the cooking, the tools in question. We really
are talking here about a complex process, and not just an end result.
The process is also, again, a very ‘open' activity. Simon Yuill defined an ‘open performance' as a partial composition completed in the performance.
Cooking is always about experimentation and the kitchen really is
a kind of lab. The instructions may be exact, the conditions may be
more or less precise, but the results are never the same twice. There
are just too many variables, too many contingencies involved. Of
course like any experimental work, it can go completely wrong, it
often does go wrong: sometimes it really is all about process, and
not about eating at all! But as Simon again said today, quoting Sun
Ra: there are no real mistakes, there are no truly wrong things. This
was certainly the case with the fantastic cooking process that we
had throughout the whole day yesterday, which ended with us eating
these fantastic mussels, which I am sure elpueblodechina thought in
fact were not as they should have been. But only she knew what
she was aiming at: for the people who ate them they were delicious,
their flavour enhanced by the whole experience of their production.
elpueblodechina's meal made us ask: what does it mean for something
to go wrong? She was using a cooking technique which has come out
of generations and generations of errors, mistakes, probings, fallings
back, not simply a continuous story of progress, success,
and forward movement. So the mistakes are clearly always a very big
part of how things work in life, in any context in life, but especially
of course in the context of programming and working with software
and working with technologies, which we often still tend to assume
are incredibly reliable, logical systems, but in fact are full of glitches
and errors. As thinkers and activists resistant to and critical of mainstream methods and cultures, this is something that we need to keep
encouraging.
I have for a long time been interested in textiles, and I can't resist mentioning the fact that the word ‘recipe' was the old word for
knitting patterns: people didn't talk about knitting patterns, but
‘recipes' for knitting. This brings us to another interesting junction
with another set of very basic, repetitive kinds of domestic and often
overlooked activities, which are nevertheless absolutely basic to human existence. Just as we all eat food, so we all wear clothes. As with
cooking, the production of textiles again has this same kind of sense
of being very basic to our survival, very elemental in that sense, but
it can also function at a high level of detailed, refined activity as well.
With a piece of knitting it is difficult to see the ways in which a single
thread becomes looped into a continuous textile. But if you look at a
woven pattern, the program that has led to the pattern is right there
in front of you, as you see the textile itself. This makes weaving a
very nice, basic and early example of how this kind of immediacy can
be brought into operation. What you look at in a piece of woven cloth
is not just a representation of something that can happen somewhere
else, but the actual instructions for producing and reproducing that
piece of woven cloth as well. So that's the kind of deep intuitive connection that it has with computer programming, as well as the more
linear historical connections of which I have often spoken.
There are some other nice connections between textiles, cooking
and programming as well. Several times yesterday there was a lot
of talk about both experts and amateurs, and developers and users.
These are divisions which constantly, and often perhaps with good
reason, reassert themselves, and often carry gendered connotations
too. In the realm of cooking, you have the chef on the one hand,
who is often male and enjoys the high status of the inventive, creative expert, and the cook on the other, who is more likely to be
female and works under quite a different rubric. In reality, it might
be said that the distinction is far from precise: the very practice of
using computers, of cooking, of knitting, is almost inevitably one of
constantly contributing to their development, because they are all relatively open systems and they all evolve through people's constant,
repetitive use of them. So it is ultimately very difficult to distinguish
between the user and the developer, or the expert and the amateur.
The experiment, the research, the development is always happening
in the kitchen, in the bedroom, on the bus, using your mobile or
using your computer. Fernand Braudel speaks about these kinds of ‘micro-histories', this sense of repetitive activity, which is done in many trades and many lines, and that really is the deep unconscious history
of human activity. And arguably that's where the most interesting
developments happen, albeit in a very unsung, unseen, often almost
hidden way. It is this kind of deep collectivity, this profound sense of
micro-collaboration, which has often been tapped into this weekend.
Still, of course, the social and conceptual divisions persist, and
still, just as we have our celebrity chefs, so we have our celebrity
programmers and dominant corporate software developers. And just
as we have our forgotten and overlooked cooks, so we have people who
are dismissed, or even dismiss themselves, as ‘just computer users'.
The technological realities are such that people are often forced into
this role, with programmes that really are so fixed and closed that
almost nothing remains for the user to contribute. The structural
and social divisions remain, and are reproduced on gendered lines as
well.
In the 1940s, computer programming was considered to be extremely menial, and not at all a glamorous or powerful activity.
Then of course, the business of dealing with the software was strictly
women's work, and it was with the hardware of the system that the
most powerful activity lay. That was where the real solid development was done, and that was where the men were working, with what
were then the real nuts and bolts of the machines. Now of course, it
has all turned around. It is women who are building the chips and
putting the hardware – such as it is these days – together, while the
male expertise has shifted to the writing of software. In only half a
century, the evolution of the technology has shifted the whole notion
of where the power lies. No doubt – and not least through weekends
like this – the story will keep moving on.
But as the world of computing does move more and more into
software and leave the hardware behind, it is accompanied by the
perceived danger that the technology and, by extension, the cultures
around it, tend to become more and more disembodied and intangible.
This has long been seen as a danger because it tends to reinforce what
have historically, in the Western world at least, been some of the more
oppressive tendencies to affect women and all the other bodies that
haven't quite fitted the philosophical ideal. Both the Platonic and Christian traditions have tended to dismiss or repress the body,
and with it all the kind of messy, gritty, tangible stuff of culture,
as transient, difficult, and flawed. And what has been elevated is of
course the much more formal, idealist, disembodied kind of activities
and processes. This is a site of continual struggle, and I guess part of
the purpose of a weekend like this is to keep working away, re-injecting
some sense of materiality, of physicality, of the body, of geography,
into what are always in danger of becoming much more formal and
disembodied worlds. What Femke and Laurence have striven to remind us this weekend is that however elevated and removed our work
appears to be from the matter of bodies and physical techniques,
we remain bodies, complex material processes, working in a complex material world.
Once again, there still tends to be something of a gendered divide.
The dance workshop organised this morning by Alice Chauchat and
Frédéric Gies was an inspiring but also difficult experience for many
of us, unused as we are to using our bodies in such literally physical
and public ways. It was not until we came out of the workshop into
a space which was suddenly mixed in terms of gender, that I realised
that the participants in the workshop had been almost exclusively
female. It was only the women who had gone to this kind of more
physical, embodied, and indeed personally challenging part of the
weekend. But we all need to continually re-engage with this sense
of the body, all this messiness and grittiness, which it is in many
vested interests to constantly cleanse from the world. We have to
make ourselves deal with all the embarrassment, the awkwardness,
and the problematic side of this more tangible and physical world.
For that reason it has been fantastic that we have had such strong
input from people involved in dance and physical movement, people
working with bodies and the real sense of space. Sabine Prokhoris
and Simon Hecquet made us think about what it means to transcribe
the movements of the body; Séverine Dusollier and Valérie Laure
Benabou got us to question the legal status of such movements too.
And what we have gained from all of this is this sense that we are all
always working with our bodies, we are always using our bodies, with
more or less awareness and talent, of course, whether we are dancing
or baking or knitting or slumped over our keyboards. In some ways we
shouldn't even need to say it, but the fact that we do need to remind
ourselves of our embodiment shows just how easy it is for us to forget
our physicality. This morning's dance workshop really showed some
of the virtues of being able to turn off one's self-consciousness, to
dismiss the constantly controlling part of one's self and to function
on a different, slightly more automatic level. Or perhaps one might
say just to prioritise a level of bodily activity, of bodily awareness,
of a sense of spatiality that is so easy to forget in our very cerebral
society.
What Frédéric and Alice showed us was not simply about using the
body, but rather how to overcome the old dualism of thinking of the
body as a kind of servant of the mind. Perhaps this is how we should
think about our relationships to our technologies as well, not just to
see them as our servants, and ourselves as the authors or subjects of
the activity, but rather to perceive the interactivity, the sense of an
interplay, not between two dualistic things, the body and the mind, or
the agent and the tool, the producer and the user, but to try and see
much more of a continuum of different levels and different kinds and
different speeds of material activity, some very big and clunky, others at extremely complex micro-levels. During the dance workshop,
Frédéric talked about all the synaptic connections that are happening as one moves one's body, in order to instil in us this awareness
of ourselves as physical, material, thinking machines, assemblages of
many different kinds of activity. And again, I think this idea of bringing together dance, food, software, and brainpower, to see ourselves
operating at all these different levels, has been extremely rewarding.
Femke asked a question of Sabine and Simon yesterday, which perhaps never quite got answered, but expressed something about how
as people living in this especially wireless world, we are now carrying more and more technical devices, just as I am now holding this
microphone, and how these additional machines might be changing
our awareness of ourselves. Again it came up this morning in the
workshop when we were asked to imagine that we might have different parts of our bodies, another head, or our feet may have mirrors
in them, or in one brilliant example that we might have magnets,
so that we were forced to have parts of our bodies drawn together
in unlikely combinations, just to imagine a different kind of sense of
self that you get from that experience, or a different way of moving
through space. But in many ways, because of our technologies now,
we don't need to imagine such shifts: we are most of us now carrying
some kind of telecommunicating device, for example, and while we
are not physically attached to our machines – not yet anyway –, we
are at least emotionally attached to them. Often they are very much
with us and part of us: the mobile phone in your pocket is to hand,
it is almost a part of us. And I too am very interested in how that
has changed not only our more intellectual conceptions of ourselves,
but also our physical selves. The fact that I am holding this thing
[the microphone] obviously does change my body, its capacities, and
its awareness of itself. We are all aware of this to some extent: everyone knows that if you put on very formal clothes, for example, you
behave in different ways, your body and your whole experience of its
movement and spatiality changes. Living in a very conservative part
of Pakistan a few years ago, where I had to really be completely covered up and just show my eyes, gave me an acute sense of this kind
of change: I had to sit, stand, walk and turn to look at things in an
entirely new set of ways. In a less dramatic but equally affective way,
wirelessness obviously introduces a new sense of our bodies, of what
we can do with our bodies, of what we carry with us on our bodies,
and consequently of who we are and how we interact with our environment. And in this sense wirelessness has also brought the body
back into play, rescuing us from what only ten years ago seemed to
be the very real dangers of a more formal and disembodied sense of a
virtual world, which was then imagined as some kind of ‘other place', a notion of cyberspace, up there somehow, in an almost heavenly
conception. Wirelessness has made it possible for computer devices to
operate in an actual, geographical environment: they can now come
with us. We can almost start to talk more realistically about a much
more interesting notion of the cyborg, rather than some big clunky
thing trailing wires. It really can start to function as a more interesting idea, and I am very interested in the political and philosophical
implications of this development as well, and in the way it reintroduces the body to what was, as I say, in danger of becoming a very abstract and formal kind of cyberspace. It brings us back into
touch with ourselves and our geographies.
The interaction between actual space and virtual space has been
another theme of this weekend; this ability to translate, to move between different kinds of spaces, to move from the analogue to the
digital, to negotiate the interface between bodies and machines. Yesterday we heard from Adrian Mackenzie about digital signal processing, the possibility of moving between that real sort of analogue world
of human experience and the coding necessary to computing. Sabine
and Simon talked about the possibilities of translating movement into
dance, and this also has come up several times today, and also with
Simon's work in relation to music and notation. Simon and Sabine
made the point that with the transcription and reading of a dance,
one is offered – rather as with a recipe – the same ingredients, the
same list of instructions, but once again as with cooking, you will
never get the same dance, or you will never get the same food as a
consequence. They were interested in the idea of notation, not to
preserve or to conserve, but rather to be able to send food or dance
off into the future, to make it possible in the future. And Simon
referred to these fantastic diagrams from The Scratch Orchestra, as
an entirely different way of conceiving and perceiving music, not as
a score, a notation in this prescriptive, conserving sense of the word,
but as the opportunity to take something forward into the future.
And to do so not by writing down the sounds, or trying to capture
the sounds, but rather as a way of describing the actions necessary
to produce those sounds, is almost to conceive the production of music as a kind of dance, and again to emphasise its embodiment and
physicality.
This sense of performance brings into play the idea of ‘play' itself,
whether ‘playing' a musical instrument, ‘playing' a musical score, or
‘playing' the body in an effort to dance. I think in some dance traditions one speaks about ‘playing the body'; in Tai Chi it is certainly
said that one plays the body, as though it was an instrument. And
when I think about what I have been doing for the last five years,
it's involved having children, it's involved learning languages, it's involved doing lots of cooking, and lots of playing, funnily enough. And
what has been lovely for me about this weekend is that all of these
things have been discussed, but they haven't been just discussed, they
have actually been done as well. So we have not only thought about
cooking, but cooking has happened, not only with the mussels, but
also with the fantastic food that has been provided all weekend. We
haven't just thought about dancing, but dancing has actually been
done. We haven't just thought about translating, but with great
thanks to the translators – who I think have often had a very difficult job – translating has happened as well. And in all of these cases we have seen how what might so easily have been a simply theoretical discussion has itself been translated into real bodily activity:
they have all been, literally, brought into play. And this term ‘play', which spans a kind of mathematical play of numbers, in relation to
software and programming, and also the world of music and dance,
has enormous potential for us all: Simon talked about ‘playing free'
as an alternative term to ‘improvisation', and this notion of ‘playing
free' might well prove very useful in relation to all these questions of
making music, using the body, and even playing the system in terms
of subverting or hacking into the mainstream cultural and technical
programs with which we are presented.

This weekend was inspired by several desires and impulses to which
I feel very sympathetic, and which remain very urgent in all our debates about technology. As we have seen, one of the most important
of those desires is to reinsert the body into what is always in danger of becoming a disembodied realm of computing and technology.
And to reinsert that body not as a kind of Chaplinesque cog in the
wheel that we saw when Inès Rabadán introduced Modern Times last
night, but as something more problematic, something more complex
and more interesting. And also not to do so nostalgically, with some
idea of some kind of lost natural activity that we need to regain, or to
reassert, or to reintroduce. There is no true body, there is no natural
body, that we can recapture from some mythical past and bring back
into play. At the same time we need to find a way of moving forward,
and inserting our senses of bodies and physicality into the future, to
insist that there is something lively and responsive and messy and
awkward always at work in what could have the tendency otherwise
to be a world of closed systems and dead loops.
One of the ways of doing this is to constantly problematise both
individualised conceptions of the body and orthodox notions of communities and groups. Michael Terry's presentation about ingimp, developed in order to imagine the community of people who are using
his image manipulation software, raised some very problematic issues
about the notion of community, which were also brought up again by
Simon today, with his ideas about collaboration and collectivity, and
what exactly it means to come together and try to escape an individualised notion of one's own work. Femke's point to Michael exemplified
the ways in which the notion of community has some real dangers:
Michael or his team had done the representations of the community
themselves – so if people told them they were graphic artists, they
had found their own kind of symbols for what a graphic artist would
look like – and when Femke suggested that people – especially if
they were graphic artists – might be capable of producing their own
representations and giving their own way of imagining themselves,
Michael's response was to the effect that people might then come up
with what he and his team would consider to be ‘undesirable images'
of themselves. And this of course is the age old problem with the idea
of a community: an open, democratic grouping is great when you're
in it and you all agree what's desirable, but what happens to all the
people that don't quite fit the picture? How open can one afford to
be? We need some broader, different senses of how to come together
which, as Alice and Frédéric discussed, are ways of collaborating
without becoming a new fixed totality. If we go back to the practices
of cooking, weaving, knitting, and dancing, these long histories of
very everyday activities that people have performed for generation
after generation, in every culture in the world – it is at this level that
we can see a kind of collective activity, which is way beyond anything
one might call a ‘community' in the self-conscious sense of the term.
And it's also way beyond any simple notion of a distributed collection of individuals: it is perhaps somewhere at the junction of these
modes, an in-between way of working which has come together in its
own unconscious ways over long periods of time.
This weekend has provided a rich menu of questions and themes to
feed in and out of the writing and use of software, as well as all our
other ways of dealing with our machines, ourselves, and each other.
To keep the body and all its flows and complexities in play, in a lively
and productive sense; to keep all the interruptive possibilities alive;
to stop things closing down; to keep or to foster the sense of collectivity in a highly individualised and totalising world; to find new
ways – constantly find new ways – of collaborating and distributing
information: these are all crucial and ongoing struggles in which we
must all remain continually engaged. And I notice even now that I
used this term ‘to keep', as though there was something to conserve
and preserve, as though the point of making the recipes and writing
the programs is to preserve something. But the ‘keeping' in question
here is much more a matter of ‘keeping on', of constantly inventing
and producing without, as Simon said earlier, leaving ourselves too
vulnerable to all the new kinds of exploitation, the new kinds of territorialisation, which are always waiting around the corner to capture
even the most fluid and radical moves we make. This whole weekend
has been an energising reminder, a stimulating and inspiriting call to
keep problematising things, to keep inventing and to keep reinventing, to keep on keeping on. And I thank you very much for giving me
the chance to be here and share it all. Thank you.
A quick postscript. After this ‘spontaneous report' was made,
the audience moved upstairs to watch a performance by the dancer
Frédéric Gies, who had co-hosted the morning's workshop. I found
the energy, the vulnerability, and the emotion with which he danced
quite overwhelming. The Madonna track – Hung Up (Time Goes by so Slowly) – to which he danced ran through my head for the whole
train journey back to Birmingham, and when I got home and checked
out the Madonna video on YouTube I was even more moved to see
what a beautiful commentary and continuation of her choreography
Frédéric had achieved. This really was an example not only of playing
the body, the music, and the culture, but also of effecting the kind of
‘free play' and ‘open performance', which had resonated through the
whole weekend and inspired us all to keep our work and ourselves in
motion. So here's an extra thank you to Frédéric Gies. Madonna will
never sound the same to me.


Biographies
Valérie Laure Benabou
http://www.juriscom.net/minicv/vlb
EN

Valérie Laure Benabou is Associate Professor at the University of Versailles-Saint-Quentin and teaches at
the Ecole des Mines. She is a member of the Centre d'Etude et de
Recherche en Droit de l'Immatériel
(CERDI), and of the Editorial Board
of Propriétés Intellectuelles. She also
teaches civil law at the University
of Barcelona and taught international
commercial law at the Law University
in Phnom Penh, Cambodia. She was a
member of the Commission de réflexion du Conseil d'Etat sur Internet et
les réseaux numériques, co-ordinated
by Ms Falque-Pierrotin, which produced the Rapport du Conseil d'Etat,
(La Documentation française, 1998).
She is the author of a number of works
and articles, including ‘La directive
droit d'auteur, droits voisins et société
de l'information: valse à trois temps
avec l'acquis communautaire', in Europe, No. 8-9, September 2001, p.
3, and in Communication Commerce
Electronique, October 2001, p. 8., and
‘Vie privée sur Internet: le traçage', in
Les libertés individuelles à l'épreuve
des NTIC, PUL, 2001, p. 89.

Pierre Berthet
http://pierre.berthet.be/
EN

Studied percussion with André Van Belle and Georges-Elie Octors, improvisation with Garrett List, composition with Frederic Rzewski, and music theory with Henri Pousseur. Designs and builds sound objects and installations (composed of steel, plastic,
water, magnetic fields etc.). Presents
them in exhibitions and solo or duo
performances with Brigida Romano
(CD Continuum asorbus on the Sub
Rosa label) or Frédéric Le Junter (CD
Berthet Le Junter on the Vandœuvres
label). Collaborated with 13th tribe
(CD Ping pong anthropology). Played
percussion in Arnold Dreyblatt's Orchestra of excited strings (CD Animal magnetism, label Tzadik; CD The
sound of one string, label Table of the
elements).

Alice Chauchat
http://www.theselection.net/dance/
EN

Member of the Praticable collective. Alice Chauchat was born in 1977 in Saint-Etienne (France) and lives in Paris. She studied at the Conservatoire National Supérieur de Lyon and P.A.R.T.S in Brussels. She is a founding member of the collective B.D.C. With other members such as Tom Plischke, Martin Nachbar and Hendrik Laevens she created Events for Television, Affects and (Re)sort, between 1999 and 2001. In 2001 she presented her first solo Quotation marks me. In 2003 she collaborated with Vera Knolle (A Number of Classics in the Age of Performance). In 2004 she made J'aime, together with Anne Juren, and CRYSTALLL, a collaboration with Alix Eynaudi. She also takes part in other people's projects, such as Projet, initiated by Xavier Le Roy, or


Michel Cleempoel
http://www.michelcleempoel.be/
EN

Graduated from the National Superior Art School La Cambre in Brussels.
Author of numerous digital art works
and exhibitions. Worked in collaboration with Nicolas Malevé:
http://www.deshabillez-vous.be

De Geuzen
http://www.geuzen.org/
EN

Femke Snelting, Renée Turner and Riek Sijbring form the art and design collective De Geuzen (a foundation for multi-visual research). De Geuzen develop various strategies on and off line, to explore their interests in female identity, critical resistance, representation and narrative archives.

Séverine Dusollier
http://www.fundp.ac.be/universite/personnes/page_view/01003580/
EN

Doctor in Law, Professor at the University of Namur (Belgium), Head of the Department of Intellectual Property Rights at the Research Center for Computer and Law of the University of Namur, and Project Leader of Creative Commons Belgium, Namur.
Leif Elggren
EN

Leif Elggren (born 1950, Linköping, Sweden) is a Swedish artist who lives and works in Stockholm.
Active since the late 1970s, Leif
Elggren has become one of the most
constantly surprising conceptual artists
to work in the combined worlds of
audio and visual. A writer, visual
artist, stage performer and composer,
he has many albums to his credit, solo and with the Sons of God, on labels such as Ash International, Touch, Radium and his own Firework Edition. His music, often conceived as the soundtrack to a visual
installation or experimental stage performance, usually presents carefully
selected sound sources over a long
stretch of time and can range from
mesmerising quiet electronics to harsh
noise. His wide-ranging and prolific
body of art often involves dreams and
subtle absurdities, social hierarchies
turned upside-down, hidden actions
and events taking on the quality of
icons.
Together with artist Carl Michael
von Hausswolff, he is a founder of
the Kingdoms of Elgaland-Vargaland
(KREV), where he enjoys the title of
King.

elpueblodechina
EN

elpueblodechina a.k.a. Alejandra Perez Nuñez is a sound artist and performer working with open source
tools, electronic wiring and essay writing. In collaborative projects with the Barcelona-based group Redactiva, she
works on psychogeography and social science fiction projects, developing narratives related to the mapping of collective imagination. She received an MA in Media Design at the
Piet Zwart Institute in 2005, and has
worked with the organization V2_ in
Rotterdam. She is currently based in
Valparaíso, Chile, where she is developing a practice related to appropriation, civil society and self-mediation
through electronic media.



EN

Born in Bari (Italy) in 1980, and graduated in May 2005 in Communication
Sciences at the University of Rome
La Sapienza, with a dissertation thesis on software as cultural and social
artefact. His educational background
is mostly theoretical: Humanities and
Media Studies. More recently, he has
been focussing on programming and
the development of web based applications, mostly using open source technologies. In 2007 he received an M.A.
in Media Design at the Piet Zwart Institute in Rotterdam.
His areas of interest are: social software, actor network theory, digital archives, knowledge management, machine readability, semantic web, data mining, information visualization, profiling, privacy, ubiquitous computing, locative media.

Frédéric Gies
EN

After studying ballet and contemporary dance, Frédéric Gies worked with various choreographers such as Daniel Larrieu, Bernard Glandier, Jean-François Duroure, Olivia Grandville and Christophe Haleb. In 1995, he created a duet in collaboration with Odile Seitz (Because I love). In 1998 he started working with Frédéric De Carlo. Together they have created various performances such as Le principal défaut (CND, Paris), Le principal défaut-solo (Tipi de Beaubourg, Paris), En corps (CND, Paris), Post porn traffic (Macba, Barcelona), In bed with Rebecca (Vooruit, Gent), (don't) Show it! (Scène nationale, Dieppe) and Second hand vintage collector (sometimes we like to mix it up!) (Ausland, Berlin). In 2004 he danced in The better you look, the more you see,



Dominique Goblet
http://www.dominique-goblet.be/
EN

Visual artist. She shows her work in
galleries and publishes her stories in
magazines and books. In all cases,
what she tries to pursue is an art of
the multi-faceted narrative. Her exhibitions of paintings – from frame to
frame and in the whole space of the
gallery – could be ‘read' as fragmented
stories. Her comic books question the
deep or thin relations between human
beings. As an author, she has taken
part in almost all the Frigobox series
published by Fréon (Brussels) and to
several Lapin magazines, published by
L'Association (Paris). A silent comic
book was published in the gigantic
Comix 2000 (L'Association). At the beginning of 2002, a second book was published by the same publisher: Souvenir d'une journée parfaite (Memories of a Perfect Day), a complex story that combines autobiographical facts and fictions.

Tsila Hassine
http://www.missdata.org/

EN

Tsila Hassine is a media artist / designer.
Her interests lie with the
hidden potentialities withheld in the
electronic data mines. In her practice she endeavours to extrude undercurrents of information and traces of
processes that are not easily discerned
through regular consumption of mass
networked media. This she accomplishes through repetitive misuse of
available platforms.
She completed a BSc in Mathematics and Computer Science and spent
2003 at the New Media department
of the HGK Zürich.
In 2004 she
joined the Piet Zwart Institute in Rotterdam, where she pursued an MA
in Media Design, until graduating in
June 2006 with Google randomizer
Shmoogle.
She is currently a researcher at the Design department of
the Jan van Eyck Academie.

Simon Hecquet
EN

Dancer and choreographer. Educated
in classical and contemporary dance,
Hecquet has worked with many different dance companies, specialised
in contemporary as well as baroque
dance.
During this time, he also
studied different notation systems to
describe movement, after which he
wrote scores for several dance pieces
from the contemporary choreographic
repertory. He also contributed, with the Quatuor Knust among others, to projects that restaged important dance pieces of the 20th century. Together with Sabine Prokhoris he made
a movie, Ceci n'est pas une danse
chorale (2004), and a book, Fabriques
de la Danse (PUF, 2007). He teaches

transcription systems for movement,
among others, at the department of
Dance at the Université de Paris VIII.


Guy Marc Hinant
EN

Guy Marc Hinant is a filmmaker whose films include The Garden is full of Metal (1996), Éléments d'un Merzbau oublié (1999), The Pleasure of Regrets
– a Portrait of Léo Kupper (2003),
Luc Ferrari face to his Tautology
(2006) and I never promised you a
rose garden – a portrait of David
Toop through his record collection
(2008), all developed together with
Dominique Lohlé. He is the curator
of An Anthology of Noise and Electronic Music CD Series, and manages
the Sub Rosa label. He writes fragmented fictions and notes on aesthetics (some of his texts have been published by Editions de l'Heure, Luna
Park, Leonardo Music Journal etc.).

Dmytri Kleiner
http://www.telekommunisten.net/
EN

Dmytri Kleiner is a USSR-born, Canadian software developer and cultural
producer. In his work, he investigates the intersections of art, technology and political economy. He is a
founder of Telekommunisten, an anarchist technology collective, and lives
in Berlin with his wife Franziska and
his daughter Henriette.


Bettina Knaup
EN

Cultural producer and curator with a
background in theatre and film studies, political science and gender studies. She is interested in the interface
of live arts, politics and knowledge
production, and has curated and/or
produced transnational projects such
as the public arts and science program ‘open space' of the International Women's University (Hannover,
1998-2000), and the transdisciplinary
performing arts laboratory, IN TRANSIT (Berlin, House of World Cultures
2002-2003). Between 2001 and 2004,
she co-curated and co-directed
the international festival of contemporary arts, CITY OF WOMEN (Ljubljana). After directing the new European platform for cultural exchange
LabforCulture during its launch phase
(Amsterdam, 2004-06), Knaup works
again as an independent curator with
a base in Berlin.


Christophe Lazaro
EN

Christophe Lazaro is a scientific collaborator at the Law department
of the Facultés Notre-Dame de la
Paix, Namur, and researcher at the
Research Centre for Computer and
Law. His interest in legal matters is
complemented by socio-anthropological research on virtual communities
(free software community), the human/artefact relationship (prothesis,
implants, RfiD chips), transhumanism and posthumanism.

Manu Luksch
EN

Manu Luksch, founder of ambientTV.NET, is a filmmaker who works outside the frame. The ‘moving image', and in particular the evolution of film in the digital or networked age, has been a core theme of her works. Characteristic is the blurring of boundaries between linear and hypertextual narrative, directed work and multiple authorship, and post-produced and self-generative pieces. Expanding the idea of the viewing environment is also of importance; recent works have been shown on electronic billboards in public space.



Adrian Mackenzie
EN

He has recently been working on signal processing, looking at how artists, activists, development projects, and community groups are making alternate or competing communication infrastructures.

Nicolas Malevé
EN

Since 1998 multimedia artist Nicolas Malevé has been an active member of the organization Constant. As such, he has taken part in organizing various activities connected with alternatives to copyright, such as ‘Copy.cult



MéTAmorphoZ
EN

Born in September 2001, represented here by Valérie Cordy and Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary association that creates installations, spectacles and transdisciplinary performances mixing artistic experiments and digital practices.

Michael Murtaugh
http://automatist.org/
EN

Freelance developer of (tools for) online documentaries and other forms of digital archives. He works and lives in the Netherlands and online at automatist.org. He teaches at the MA Media Design program at the Piet Zwart Institute in Rotterdam.

Julien Ottavi
http://www.noiser.org/

Ottavi is the founder, artistic programmer, audio computer researcher
(networks and audio research) and
sound artist of the experimental music
organization Apo33. Founded in 1997,
Apo33 is a collective of artists, musicians, sound artists, philosophers and
computer scientists, who aim to promote new types of music and sound
practices that do not receive large media coverage. The purpose of Apo33
is to create the conditions for the development of all of the kinds of music
and sound practices that contribute
to the advancement of sound creation,
including electronic music, concrete
music, contemporary written music,
sound poetry, sound art and other
practices which as yet have no name.
Apo33 refers to all of these practices
as ‘Audio Art'.

Jussi Parikka
EN

Jussi Parikka teaches and writes on
the cultural theory and history of new
media. He has a PhD in Cultural
History from the University of Turku,
Finland, and is Senior Lecturer in
Media Studies at Anglia Ruskin University, Cambridge, UK. Parikka has
published a book on ‘cultural theory
in the age of digital machines' (Koneoppi, in Finnish) and his Digital
Contagions: A Media Archaeology of
Computer Viruses has been published
by Peter Lang, New York, Digital Formations-series (2007). Parikka is currently working on a book on ‘Insect
Media', which focuses on the media
theoretical and historical interconnections of biology and technology.


Sadie Plant

Sadie Plant is the author of The Most
Radical Gesture, Zeros and Ones,
and Writing on Drugs.
She has
taught in the Department of Cultural
Studies, University of Birmingham,
and the Department of Philosophy,
University of Warwick. For the last
ten years she has been working independently and living in Birmingham,
where she is involved with the Ikon
Gallery, Stan's Cafe Theatre Company, and the Birmingham Institute
of Art and Design.




Praticable
EN

Praticable proposes itself as a horizontal work structure, which brings into
relation research, creation, transmission and production structure. This
structure is the basis for the creation
of many performances that will be
signed by one or more participants in
the project. These performances are
grounded, in one way or another, in
the exploration of body practices to
approach representation. Concretely,
the form of Praticable is periods of
common research of/on physical practices which will be the soil for the various creations. The creation periods
will be part of the research periods.
Thus, each specific project implies the
involvement of all participants in the
practice, the research and the elaboration of the practice from which the
piece will ensue.

Sabine Prokhoris

EN

Psychoanalyst and author of, among
others, Witch's Kitchen:
Freud,
Faust, and the Transference (Cornell
University Press, 1995), and co-author
with Simon Hecquet of Fabriques de la
Danse (PUF, 2007). She is also active
in contemporary dance, as a critic and
a choreographer. In 2004 she made the
film Ceci n'est pas une danse chorale
together with Simon Hecquet.



Inès Rabadan
EN

After obtaining a master's degree in
Philosophy and Letters, Inès Rabadan
studied film at the IAD. Her short
films (Vacance, Surveiller les Tortues,
Maintenant, Si j'avais dix doigts,
Le jour du soleil), were shown at
about sixty festivals. Surveiller les
tortues and Maintenant won awards
at the festivals of Clermont, Vendôme,
Chicago, Aix, Grenoble, Brest and
Namur. Occasionally she supervises
scenario workshops.
Her first feature film, Belhorizon, was selected
for the festivals of Montréal, Namur, Créteil, Buenos Aires, Santiago de Chile, Santo Domingo and
Mannheim-Heidelberg.
At the end
of 2006, it was released in Belgium,
France and Switzerland.

Antoinette Rouvroy
EN

Antoinette Rouvroy is researcher at
the Law department of the Facultés
Notre-Dame de la Paix in Namur,
and at the Research Centre for Computer and Law. Her domains of expertise range from rights and ethics
of biotechnologies, philosophy of Law
and ‘critical legal studies' to interdisciplinary questions related to privacy
and non-discrimination, science and
technology studies, law and language.

Femke Snelting
EN

Femke Snelting is a member of the art and design collective De Geuzen and of the experimental design agency OSP.


Michael Terry
http://www.ingimp.org/

Computer Scientist, University of Waterloo, Canada.

Carl Michael von Hausswolff

Von Hausswolff was born in 1956 in
Linköping, Sweden.
He lives and
works in Stockholm. Since the end
of the 70s, von Hausswolff has been
working as a composer using the tape
recorder as his main instrument and
as a conceptual visual artist working with performance art, light- and
sound installations and photography.
His audio compositions from 1979 to
1992, constructed almost exclusively
from basic material taken from earlier audiovisual installations and performance works, essentially consist of
complex macromal drones with a surface of aesthetic elegance and beauty.
In later works, von Hausswolff retained the aesthetic elegance and the
drone, and added a purely isolationistic sonic condition to composing.


Marc Wathieu
http://www.erg.be/sdr/blog/

Marc Wathieu teaches at Erg (digital arts) and HEAJ (visual communication). He is a digital artist (he
works with the Brussels based collective LAB[au]) and sound designer.
He is also an official representative of the Robots Trade Union to the human institutions. During V/J10 he
presented the Robots Trade Union's
Chart and ambitions.


Peter Westenberg
EN

Peter Westenberg is an artist and film and video maker, and a member of Constant. His projects evolve from an interest in social cartography, urban anomalies and the relationships between locative identity and cultural

Brian Wyrick
EN

Brian Wyrick is an artist, filmmaker and web developer working in Berlin and Chicago. He is also co-founder of Group 312 films, a Chicago-based film group.


Simon Yuill
http://www.spring-alpha.org/
EN

Artist and programmer based in Glasgow, Scotland. He is a developer in
the spring_alpha and Social Versioning System (SVS) projects. He has
helped to set up and run a number
of hacklabs and free media labs in
Scotland including the Chateau Institute of Technology (ChIT) and Electron Club, as well as the Glasgow
branch of OpenLab. He has written
on aspects of Free Software and cultural praxis, and has contributed to
publications such as Software Studies
(MIT Press, 2008), the FLOSS Manuals and Digital Artists Handbook project (GOTO10 and Folly).


License Register

?? 65, 174

a
Attribution-Noncommercial-No Derivative Work 181, 188

c
Copyright Presses Universitaires de France, 2007 188
Creative Commons Attribution-NonCommercial-ShareAlike 58, 71, 73, 81, 93, 98, 155, 215, 254, 275
Creative Commons Attribution - NonCommercial - ShareAlike license 104

d
Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use encouraged. Attribution optional. 47

f
Free Art License 38, 70, 75, 131, 143, 217
Fully Restricted Copyright 95

g
GNU FDL 119

t
The text is under a GPL. The images are a little trickier as none of
them belong to me. The images from ap and David Griffiths can
be GPL as well, the Scratch Orchestra images (the graphic music
scores) were always published ‘without copyright' so I guess are
public domain. The photograph of the Scratch Orchestra performance can be GPL or public domain and should be credited to
Stefan Szczelkun. The other images, Sun Ra, Black Arts Group
and Lester Bowie would need to mention ‘contact the photographers'. Sorry the images are complicated but they largely come
from a time before copyleft was widespread. 233


The Making Of

This publication was produced with a set of digital tools that are rarely used outside the world of scientific publishing: TeX, LaTeX and ConTeXt. As early as the summer of 2008, when most contributions and translations to Tracks in electronic fields were reaching their final stage, we started discussing at OSP [1] how we could design and produce a book in a way that responded to the theme of the festival itself. OSP is a design collective working with Free Software, and our relation to the software we design with is particular on purpose. At the core of our design practice is the ongoing investigation of the intimate connection between form, content and technology. What follows is a report of an experiment that stretched out over a little more than a year.

[1] Open Source Publishing http://ospublish.constantvzw.org
For the production of previous books, OSP used Scribus, an Open Source Desktop Publishing tool which resembles its proprietary variants PageMaker, InDesign or QuarkXpress. In this type of software, each single page is virtually present as a ‘canvas' that has the same proportions as a physical page, and each of these ‘pages' can be individually altered by adding or manipulating the virtual objects on it. Templates or ‘master pages' allow the automatic placement of repeated elements such as page numbers and text blocks, but as in a paper-based design workflow, each single page can be treated as an autonomous unit that can be moved, duplicated and, when necessary, removed. Scribus would certainly have been fit for this job, though the rapidly developing project is currently at a stage where the production of books of more than 40 pages can become tedious. Users are advised to split up such documents into multiple sections, which means that, in order to keep continuity between pages, design decisions are best made beforehand. As a result, the design workflow is rendered less flexible than you would expect from state-of-the-art
creative software. In previous projects, Scribus' rigid workflow challenged us to relocate our creative energy to another territory: that of computation. We experimented with its powerful Python scripting API to create 500 unique books. In another project, we transformed a text block over a sequence of pages with the help of a fairy-tale script. But for Tracks in electronic fields we dreamed of something else.

Pierre Huyghebaert takes on the responsibility for the design of the book. He had been using various generations of lay-out software since the early 1990s, and had gathered an extensive body of knowledge about their potential and limitations. More than once he brought up the desire to try out a legendary typesetting system called TeX, a sublime typographic engine that allegedly implemented the work of grandmaster Jan Tschichold [2] with mathematical precision.

[2] In Die neue Typographie (1928), Jan Tschichold formulated the classic canon of modernist book design.

TeX is a computer language designed by Donald Knuth in the 1970s, specifically for typesetting mathematical and other scientific material. Powerful algorithms automate widow and orphan control and can handle intelligent image placement. It is renowned for being extremely stable, for running on many different kinds of computers and for being virtually bug free. In the academic tradition of free knowledge exchange, Knuth decided to make TeX available ‘for no monetary fee' and modifications of or experimentations with the source code are encouraged. In typical self-referential style, the near perfection of its software design is expressed in a version number which is converging to π. [3]

[3] The value of π (3.141592653589793...) is the ratio of any circle's circumference to its diameter and its decimal representation never repeats. The current version number of TeX is 3.141592.

For OSP, TeX represents the potential of doing design differently. Through shifting our software habits, we try to change our way of working too. But Scribus, like the kinds of proprietary software it is modeled on, has a ‘productionalist' view of design built into it [4], which
is undeniably seeping through in the way we use it. An exotic Free Software tool like TeX, rooted firmly in an academic context rather than in commercial design, might help us to re-imagine the familiar skill of putting type on a page. By making this kind of ‘domain shift' [5] we hope to discover another experience of making, and find a more constructive relation between software, content and form. So when Pierre suggests that this V/J10 publication is possibly the right occasion to try, we respond with enthusiasm.

[4] “A DTP program is the equivalent of a final assembly in an industrial process” Christoph Schäfer, Gregory Pittman et al. The Official Scribus Manual. FLES Books, 2009

[5] See: Richard Sennett. The Craftsman. Allen Lane (Penguin Press), 2008
By the end of 2008, Pierre starts carving out a path in the dense forest of manuals, advice and tips-and-tricks with the help of Ivan Monroy Lopez. Ivan is trained as a mathematician and more or less familiar with the exotic culture of TeX. They decide to use the popular macro-package LaTeX [6] to interface with TeX and find out about the tongue-in-cheek concept of ‘badness' (depending on the tension put on hyphenated paragraphs, compiling a .tex document produces ‘badness' for each block on a scale from 0 to 10.000), and encounter a long history of wonderful but often incoherent layers of development that envelop the mysterious lasagna beauty of TeX's typographic algorithms.

[6] LaTeX is a high-level markup language that was first developed by Leslie Lamport in 1985. Lamport is a computer scientist also known for his work on distributed systems and multi-threading algorithms.
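To give a flavour of what this means in practice, here is a minimal sketch of a LaTeX file (invented for this report, not a fragment of the actual book). Compiling it with a TeX engine will typically print ‘badness' warnings for lines it cannot space out comfortably:

    \documentclass{article}
    \begin{document}
    % A deliberately narrow measure: while justifying these lines,
    % TeX may report warnings on the command line such as
    % 'Underfull \hbox (badness 10000)' for blocks whose spaces
    % it cannot stretch or shrink acceptably.
    \parbox{2cm}{The quick brown fox jumps over the lazy dog,
    again and again and again.}
    \end{document}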
Laying-out a publication in LaTeX is an entirely different experience from working with a canvas-based software. First of all, design decisions are executed through the application of markup which is vaguely reminiscent of working with CSS or HTML. The actual design is only complete after ‘compiling' the document, and this is where the TeX magic happens. The software passes several times over a marked-up .tex file, incrementally deciding where to hyphenate a word, or place a paragraph or image. In principle, the concept of a page only applies after compilation is complete. Design work therefore radically shifts from the act of absolute placement to co-managing a flow. All elements remain relatively placed until the last tour has passed, and while error messages, warnings and hyphenation decisions scroll by on the command line, the sensation of elasticity is almost tangible. And
indeed, when the acceptable ‘stretch' of the program's placement of a paragraph is exceeded, words literally break out of the grid (see the example on page 34).
When I join Pierre to continue the work in January 2009, the book is still far from finished. By now, we can produce those typical academic-style documents with ease, but we still have not managed to use our own fonts [7]. Flipping back and forth in the many manuals and handbooks that exist, we enjoy discovering a new culture. Though we occasionally cringe at the paternalist humour that seems to have infected every corner of the TeX community, and which is clearly inspired by the witticisms of the founding father, Donald Knuth himself, we experience how the lightweight, flexible document structure of TeX allows for a less hierarchical and non-linear workflow, making it easier to collaborate on a project. It is an exhilarating experience to produce a lay-out in dialogue with a tool, and the design process takes on an almost rhythmical quality, iterative and incremental. It also starts to dawn on us that souplesse comes with a price.

“Users only need to learn a few easy-to-understand commands that specify the logical structure of a document” promises The Not So Short Introduction to LaTeX. “They almost never need to tinker with the actual layout of the document”. It explains why using LaTeX stops being easy to understand once you attempt to expand its strict model of ‘book', ‘article' or ‘thesis': the ‘users' that LaTeX addresses are not designers and editors like us. At this point, we doubt whether to give up or push through, and decide to set ourselves a limit of a week in which we should be able to tick off a minimal number of items from a list of essential design elements. Custom page size and headers, working with URLs... they each require a separate ‘package' that may or may not be compatible with another one. At the end of the week, just when we start to regain confidence in the usability of LaTeX for our purpose, our document breaks beyond repair when we try to use custom paper size with custom headers at the same time.

[7] “Installing fonts in LaTeX has the name of being a very hard task to accomplish. But it is nothing more than following instructions. However, the problem is that, first, the proper instructions have to be found and, second, the instructions then have to be read and understood”. http://www.ntg.nl/maps/29/13.pdf
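The kind of package juggling we ran into might look like the following sketch (the page dimensions and header text are invented for illustration; they are not the settings of the actual book). Each design requirement pulls in its own package, and it is the combination that proves fragile:

    \documentclass{book}
    % each design decision requires its own package:
    \usepackage[paperwidth=17cm,paperheight=24cm,margin=2.2cm]{geometry} % custom page size
    \usepackage{fancyhdr} % custom running headers
    \usepackage{url}      % line-breakable URLs
    \pagestyle{fancy}
    \fancyhead[LE,RO]{Tracks in electr(on)ic fields}
    \begin{document}
    Sources are available from \url{http://osp.constantvzw.org/sources/vj10}.
    \end{document}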


In February, more than six months into the process, we briefly consider switching to OpenOffice instead (which we had never tried for such a large publication) or going back to Scribus (which would mean, for Pierre, learning a new tool). Then we remember ConTeXt, a relatively young ‘macro package' that uses the TeX engine as well. “While LaTeX insulates the writer from typographical details, ConTeXt takes a complementary approach by providing structured interfaces for handling typography, including extensive support for colors, backgrounds, hyperlinks, presentations, figure-text integration, and conditional compilation” [8]. This is what we have been looking for.

ConTeXt was developed in the 1990s by a Dutch company specialised in ‘Advanced Document Engineering'. They needed to produce complex educational materials and workplace manuals and came up with their own interface to TeX. “The development was purely driven by demand and configurability, and this meant that we could optimize most workflows that involved text editing”. [9]

However frustrating it is to re-learn yet another type of markup (even if both are based on the same TeX language, most of the LaTeX commands do not work in ConTeXt and vice versa), many of the things that we could only achieve by means of a ‘hack' in LaTeX are built in and readily available in ConTeXt. With the help of the very active ConTeXt mailing list we find a way to finally use our own fonts, and while plenty of questions, bugs and dark areas remain, it feels like we are close to producing the kind of multilingual, multi-format, multi-layered publication we imagine Tracks in Electr(on)ic fields to be.

[8] Interview with Hans Hagen http://www.tug.org/interviews/interview-files/hans-hagen.html

[9] Interview with Hans Hagen http://www.tug.org/interviews/interview-files/hans-hagen.html
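For comparison, a rough sketch of the same requirements in ConTeXt (again with invented dimensions rather than the real settings of this book): page size, headers and fonts are all addressed through the same family of built-in setup commands, without hunting for extra packages:

    % page size, headers and fonts share one built-in interface:
    \definepapersize[vj10][width=17cm,height=24cm]
    \setuppapersize[vj10][vj10]
    \setupbodyfont[palatino,10pt]
    \setupheadertexts[Tracks in electr(on)ic fields]
    \starttext
    Chapters, colours, backgrounds and layers are configured
    with the same kind of setup commands.
    \stoptext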
However, Pierre and I are working on different versions of Ubuntu, respectively on a Mac and on a PC, and we soon discover that our installations of ConTeXt produce different results. We can't find a solution in the nerve-wrackingly incomplete, fragmented though extensive documentation of ConTeXt, and by June 2009 we still have not managed to print the book. As time passes, we find it increasingly
difficult to allocate concentrated time for learning, and it is a humbling experience that acquiring some sort of fluency seems to pull us in all directions. The stretched-out nature of the process also feeds our insecurity: Maybe we should have tried this package also? Have we read that manual correctly? Have we read the right manual? Did we really understand those instructions? If we were computer scientists ourselves, would we know what to do? Paradoxically, the more we invest into this process, mentally and physically, the harder it is to let go. Are we refusing to see the limits of this tool, or even scarier, our own limitations? Can we accept that the experience we'd hoped for is a lot more banal than the sublime results we secretly expected? A fellow Constant member suggests in desperation: “You can't just make a book, can you?”

In July, Pierre decides to pay for a consultation with the developers of ConTeXt themselves, to solve once and for all some of the issues we continue to struggle with. We drive up expectantly to the headquarters of Pragma in Hasselt (NL) and discuss our problems, seated in the recently redecorated rooms of a former bank building. Hans Hagen himself reinstalls MkIV (the latest in ConTeXt) on Pierre's machine, while his colleague Ton Otten tours me through samples of the colorful publications produced by Pragma. In the afternoon, Hans gathers up some code examples that could help us place thumbnail images, and before we know it we are on our way south again. Our visit confirms the impression we had from the awkwardly written manuals and peculiar syntax: that ConTeXt is in essence a one-man mission. It is hard to imagine that a tool written to solve the particular problems of a certain document engineer will ever grow into the kind of tool that we desire as well.

In August, as I type up this report, the book is more or less ready to go to print. Although it looks ‘handsome' according to some, due to unexpected bugs and time constraints we have had to let go of some of the features we hoped to implement. Looking at it now, just before going to print, it has certainly not turned out to be the kind of eye-opening typographic experience we dreamt of, and sadly we will never know whether that is due to our own limited understanding of TeX, LaTeX and ConTeXt, to the inherent limits of those tools
themselves, or to the crude decision to finally force through a lay-out in two weeks. Probably a mix of all of the above; it is first of all a relief that the publication finally exists. Looking back at the process, I am reminded of the wise words of Joseph Weizenbaum, who observed that “Only rarely, if indeed ever, are a tool and an altogether original job it is to do, invented together” [10].

While this book nearly crumbled under the weight of the projections it had to carry, I often thought that outside academic publishing, the power of TeX is much like a Fata Morgana. Mesmerizing and always out of reach, TeX continues to represent a promise of an alternative technological landscape that keeps our dream of changing software habits alive.

[10] Joseph Weizenbaum. Computer power and human reason: from judgment to calculation. MIT, 1976

Femke Snelting (OSP), August 2009


Colophon
Tracks in electr(on)ic fields is a publication of Constant, Association for Art
and Media, Brussels.
Translations: Steven Tallon, Anne Smolar, Yves Poliart, Emma Sidgwick
Copy editing: Emma Sidgwick, Femke Snelting, Wendy Van Wynsberghe
English editing and translations: Sophie Burm
Design: Pierre Huyghebaert, Femke Snelting (OSP)
Photos, unless otherwise noted: Constant (Peter Westenberg). Figure 5-9: Marc Wathieu, Figure 31-96: Constant (Christina Clar, video stills), Figure 102-104: Leif Elggren, CM von Hausswolff, Figure 107-116: Manu Luksch, Figure A-Q: elpueblodechina, Figure 151 + 152: Pierre Huyghebaert, Figure 155: Cornelius Cardew, Figure 160-162: Scratch Orchestra, Figure 153 + 154: Michael E. Emrick (Courtesy of Ben Looker), Figure 156-157 + 159: photographer unknown, Figure 158: David Griffiths, pages 19, 25, 35, 77 and 139: public domain or unknown.
This book was produced in ConTeXt, based on the TeX typesetting engine, and other Free Software (OpenOffice, Gimp, Inkscape). For a written account of the production process see The Making Of on page 323.
Printing: Drukkerij Geers Offset, Gent

Copyright © 2009, Constant.
Copyleft: this book is free. You can distribute and modify it according to the
terms of the Free Art Licence. You can find an example of this licence on the
site ‘Copyleft Attitude' http://www.artlibre.org
This book can be downloaded from: http://www.constantvzw.org/verlag. Sources
are available from http://osp.constantvzw.org/sources/vj10


Figure 148: De Vlaamse Minister van Cultuur, Jeugd, Sport en Brussel (The Flemish Minister of Culture, Youth, Sport and Brussels)

Figure 149: De Vlaamse Gemeenschapscommissie (The Flemish Community Commission)
