Medak, Mars & WHW
Public Library
2015


Public Library

May • 2015
price 50 kn

This publication is realized along with the exhibition
Public Library • 27/05–13/06 2015 • Gallery Nova • Zagreb
Editors
Tomislav Medak • Marcell Mars • What, How & for Whom / WHW

Publishers / Izdavači
Što, kako i za koga / WHW • Multimedijalni institut

ISBN 978-953-55951-3-7 [Što, kako i za koga/WHW]
ISBN 978-953-7372-27-9 [Multimedijalni institut]

A CIP catalog record for this book is available from the
National and University Library in Zagreb under 000907085

With the support of the Creative Europe Programme of the
European Union

Zagreb • May 2015

Public Library

1. Marcell Mars, Manar Zarroug & Tomislav Medak
   Public Library (essay)

2. Paul Otlet
   Transformations in the Bibliographical Apparatus of the Sciences
   (Repertory — Classification — Office of Documentation)

3. McKenzie Wark
   Metadata Punk

4. Tomislav Medak
   The Future After the Library
   UbuWeb and Monoskop's Radical Gestures


Marcell Mars,
Manar Zarroug
& Tomislav Medak

Public library (essay)

In What Was Revolutionary about the French Revolution?01 Robert Darnton considers how a complete collapse of the social order (when absolutely
everything — all social values — is turned upside
down) would look. Such trauma happens often in
the life of individuals but only rarely on the level
of an entire society.
In 1789 the French had to confront the collapse of
a whole social order—the world that they defined
retrospectively as the Ancien Régime — and to find
some new order in the chaos surrounding them.
They experienced reality as something that could
be destroyed and reconstructed, and they faced
seemingly limitless possibilities, both for good and
evil, for raising a utopia and for falling back into
tyranny.02
The revolution bootstraps itself.
01 Robert H. Darnton, What Was Revolutionary about the
French Revolution? (Waco, TX: Baylor University Press,
1996), 6.
02 Ibid.


In the dictionaries of the time, the word revolution was said to derive from the verb to revolve and
was defined as “the return of the planet or a star to
the same point from which it parted.” 03 French political vocabulary spread no further than the narrow
circle of the feudal elite in Versailles. The citizens,
revolutionaries, had to invent new words, concepts
… an entire new language in order to describe the
revolution that had taken place.
They began with the vocabulary of time and space.
In the French revolutionary calendar used from 1793
until 1805, time started on 1 Vendémiaire, Year 1, a date which marked the abolition of the old monarchy (22 September 1792 in the Gregorian calendar). With a decree in 1795, the metric system was
adopted. As with the adoption of the new calendar,
this was an attempt to organize space in a rational
and natural way. The gram became the unit of mass.
In Paris, 1,400 streets were given new names.
Every reminder of the tyranny of the monarchy
was erased. The revolutionaries even changed their
names and surnames. Le Roy or Leveque, commonly
used until then, were changed to Le Loi or Liberté.
To address someone, out of respect, with vous was
forbidden by a resolution passed on 24 Brumaire,
Year 2. Vous was replaced with tu. People are equal.
The watchwords Liberté, égalité, fraternité (freedom, equality, brotherhood)04 were built through literacy, new epistemologies, classifications, declarations, standards, reason, and rationality. What first comes to mind about the revolution will never again be the return of a planet or a star to the same point from which it departed. Revolution bootstrapped, revolved, and hermeneutically circularized itself.

03 Ibid.
04 Slogan of the French Republic, France.fr, n.d., http://www.france.fr/en/institutions-and-values/slogan-french-republic.html.
Melvil Dewey was born in the state of New York in 1851.05 His thirst for knowledge found its satisfaction in libraries. His knowledge about how to
gain knowledge was developed by studying libraries.
Grouping books on library shelves according to the
color of the covers, the size and thickness of the spine,
or by title or author’s name did not satisfy Dewey’s
intention to develop appropriate new epistemologies in the service of the production of knowledge
about knowledge. At the age of twenty-four, he had
already published the first of nineteen editions of
A Classification and Subject Index for Cataloguing
and Arranging the Books and Pamphlets of a Library,06 the classification system that still bears its
author’s name: the Dewey Decimal System. Dewey
had a dream: for his twenty-first birthday he had
announced, “My World Work [will be] Free Schools
and Free Libraries for every soul.”07
05 Richard F. Snow, “Melvil Dewey”, American Heritage 32,
no. 1 (December 1980),
http://www.americanheritage.com/content/melvil-dewey.
06 Melvil Dewey, A Classification and Subject Index for Cataloguing and Arranging the Books and Pamphlets of a
Library (1876), Project Gutenberg e-book 12513 (2004),
http://www.gutenberg.org/files/12513/12513-h/12513-h.htm.
07 Snow, “Melvil Dewey”.


His dream came true. Public Library is an entry
in the catalog of History where a fantastic decimal08
describes a category of phenomenon that—together
with free public education, free public healthcare,
the scientific method, the Universal Declaration of
Human Rights, Wikipedia, and free software, among
others—we, the people, are most proud of.
The public library is a part of these invisible infrastructures that we start to notice only once they
begin to disappear. A utopian dream—about the
place from which every human being will have access to every piece of available knowledge that can
be collected—looked impossible for a long time,
until the egalitarian impetus of social revolutions,
the Enlightenment idea of the universality of knowledge, and the exceptional suspension of the commercial
barriers to access to knowledge made it possible.
The internet has, as in many other situations, completely changed our expectations and imagination
about what is possible. The dream of a catalogue
of the world — a universal approach to all available
knowledge for every member of society — became
realizable. A question merely of the meeting of
curves on a graph: the point at which the line of
global distribution of personal computers meets
that of the critical mass of people with access to
the internet. Today nobody lacks the imagination
necessary to see public libraries as part of a global infrastructure of universal access to knowledge
for literally every member of society. However, the emergence and development of the internet are taking place precisely at the point at which an institutional crisis—one with traumatic and inconceivable consequences—has also begun.

08 “Dewey Decimal Classification: 001.”, Dewey.info, 27 October 2014, http://dewey.info/class/001/2009-08/about.en.
The internet is a new challenge, creating experiences commonly proffered as ‘revolutionary’. Yet the true revolution of the internet is the universal access to all knowledge that it makes possible. However, unlike the new epistemologies developed during the French Revolution, the tendency is to keep the
‘old regime’ (of intellectual property rights, market
concentration and control of access). The new possibilities for classification, development of languages,
invention of epistemologies which the internet poses,
and which might launch off into new orbits from
existing classification systems, are being suppressed.
In fact, the reactionary forces of the ‘old regime’
are staging a ‘Thermidor’ to prevent public libraries from pursuing their mission. Today public
libraries cannot acquire, cannot even buy digital
books from the world’s largest publishers.09 The few e-books that they have been able to acquire they must destroy after only twenty-six loans.10

09 “American Library Association Open Letter to Publishers on E-Book Library Lending”, Digital Book World, 24 September 2012, http://www.digitalbookworld.com/2012/american-library-association-open-letter-to-publishers-on-e-book-library-lending/.
10 Jeremy Greenfield, “What Is Going On with Library E-Book Lending?”, Forbes, 22 June 2012, http://www.forbes.com/sites/jeremygreenfield/2012/06/22/what-is-going-on-with-library-e-book-lending/.

Libraries and the principle of universal
access to all existing knowledge that they embody
are losing, in every possible way, the battle with a
market dominated by new players such as Amazon.com, Google, and Apple.
In 2012, Canada’s Conservative Party–led government cut financial support for Library and Archives Canada (LAC) by Can$9.6 million, which
resulted in the loss of 400 archivist and librarian
jobs, the shutting down of some of LAC’s internet
pages, and the cancellation of the further purchase
of new books.11 In only three years, from 2010 to
2012, some 10 percent of public libraries were closed
in Great Britain.12
The commodification of knowledge, education,
and schooling (which are the consequences of a
globally harmonized, restrictive legal regime for intellectual property), combined with neoliberal austerity politics,
curtails the possibilities of adapting to new sociotechnological conditions, let alone further development, innovation, or even basic maintenance of
public libraries’ infrastructure.
Public libraries are an endangered institution,
doomed to extinction.
Petit bourgeois denial prevents society from confronting this disturbing insight. As in many other
fields, the only way out offered is innovative market-based entrepreneurship.

11 Aideen Doran, “Free Libraries for Every Soul: Dreaming of the Online Library”, The Bear, March 2014, http://www.thebear-review.com/#!free-libraries-for-every-soul/c153g.
12 Alison Flood, “UK Lost More than 200 Libraries in 2012”, The Guardian, 10 December 2012, http://www.theguardian.com/books/2012/dec/10/uk-lost-200-libraries-2012.

Some have even suggested that the public library should become an
open software platform on top of which creative
developers can build app stores13 or Internet cafés
for the poorest, ensuring that they are only a click
away from the Amazon.com catalog or the Google
search bar. But these proposals overlook, perhaps
deliberately, the fundamental principles of access
upon which the idea of the public library was built.
Those who are well-meaning, intelligent, and
tactful will try to remind the public of all the many
sides of the phenomenon that the public library is:
major community center, service for the vulnerable,
center of literacy, informal and lifelong learning; a
place where hobbyists, enthusiasts, old and young
meet and share knowledge and skills.14 Fascinating. Unfortunately, for purely tactical reasons, this
reminder to the public does not always contain an
explanation of how these varied effects arise out of
the foundational idea of a public library: universal
access to knowledge for each member of society produces knowledge, produces knowledge about
knowledge, produces knowledge about knowledge
transfer: the public library produces sociability.
The public library does not need the sort of creative crisis management that wants to propose what
the library should be transformed into once our society, obsessed with market logic, has made it impossible for the library to perform its main mission.

13 David Weinberger, “Library as Platform”, Library Journal, 4 September 2012, http://lj.libraryjournal.com/2012/09/future-of-libraries/by-david-weinberger/.
14 Shannon Mattern, “Library as Infrastructure”, Design Observer, 9 June 2014, http://places.designobserver.com/entryprint.html?entry=38488.

Such
proposals, if they do not insist on universal access
to knowledge for all members, are Trojan horses for
the silent but galloping disappearance of the public
library from the historical stage. Sociability—produced by public libraries, with all the richness of its
various appearances—will be best preserved if we
manage to fight for the values upon which we have
built the public library: universal access to knowledge for each member of our society.
Freedom, equality, and brotherhood need brave librarians practicing civil disobedience.
Library Genesis, aaaaarg.org, Monoskop, UbuWeb
are all examples of fragile knowledge infrastructures
built and maintained by brave librarians practicing
civil disobedience, on which the world of researchers in the humanities relies. These projects are reinventing the public library in the gap left by today’s
institutions in crisis.
Library Genesis15 is an online repository with over
a million books and is the first project in history to
offer everyone on the Internet free download of its
entire book collection (as of this writing, about fifteen terabytes of data), together with all the metadata (MySQL dump) and the PHP/HTML/JavaScript code for its webpages.

15 See http://libgen.org/.

The most popular earlier repositories, such as Gigapedia (later Library.nu), handled
their upload and maintenance costs by selling advertising space to the pornographic and gambling
industries. Legal action was initiated against them,
and they were closed.16 News of the termination of
Gigapedia/Library.nu strongly resonated among
academic and book-enthusiast circles and was
even noted in the mainstream Internet media, just
like other major world events. The decision by Library Genesis to share its resources has resulted
in a network of identical sites (so-called mirrors)
through the development of an entire range of Net
services of metadata exchange and catalog maintenance, thus ensuring an exceptionally resistant
survival architecture.
aaaaarg.org, started by the artist Sean Dockray, is
an online repository with over 50,000 books and
texts. A community of enthusiastic researchers from
critical theory, contemporary art, philosophy, architecture, and other fields in the humanities maintains,
catalogs, annotates, and initiates discussions around
it. It also serves as a courseware extension to the self-organized education platform The Public School.17
16 Andrew Losowsky, “Library.nu, Book Downloading Site,
Targeted in Injunctions Requested by 17 Publishers,” Huffington Post, 15 February 2012, http://www.huffingtonpost.
com/2012/02/15/librarynu-book-downloading-injunction_
n_1280383.html.
17 “The Public School”, The Public School, n.d.,
https://www.thepublicschool.org/.


UbuWeb18 is the most significant and largest online
archive of avant-garde art; it was initiated and is led
by conceptual artist Kenneth Goldsmith. UbuWeb,
although still informal, has grown into a relevant
and recognized critical institution of contemporary
art. Artists want to see their work in its catalog and
thus agree to a relationship with UbuWeb that has
no formal contractual obligations.
Monoskop is a wiki for the arts, culture, and media
technology, with a special focus on the avant-garde,
conceptual, and media arts of Eastern and Central
Europe; it was launched by Dušan Barok and others.
In the form of a blog, Dušan uploads to Monoskop.org/log an online catalog of curated titles (at the
moment numbering around 3,000), and, as with
UbuWeb, it is becoming more and more relevant
as an online resource.
Library Genesis, aaaaarg.org, Kenneth Goldsmith,
and Dušan Barok show us that the future of the
public library does not need crisis management,
venture capital, start-up incubators, or outsourcing but simply the freedom to continue extending
the dreams of Melvil Dewey, Paul Otlet19 and other
visionary librarians, just as the public library did before the emergence of the internet.

18 See http://ubu.com/.
19 “Paul Otlet”, Wikipedia, 27 October 2014,
http://en.wikipedia.org/wiki/Paul_Otlet.


With the emergence of the internet and software
tools such as Calibre and “[let’s share books],”20 librarianship has been given an opportunity, similar to astronomy and the project SETI@home21, to
include thousands of amateur librarians who will,
together with the experts, build a distributed peer-to-peer network to care for the catalog of available
knowledge, because
a public library is:
— free access to books for every member of society
— library catalog
— librarian
With books ready to be shared, meticulously
cataloged, everyone is a librarian.
When everyone is librarian, library is
everywhere.22


20 “Tools”, Memory of the World, n.d.,
https://www.memoryoftheworld.org/tools/.
21 See http://setiathome.berkeley.edu/.
22 “End-to-End Catalog”, Memory of the World, 26 November 2012,
https://www.memoryoftheworld.org/end-to-end-catalog/.
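
How might such a peer-to-peer catalog look in practice? Below is a minimal sketch, in Python, of the kind of endpoint an amateur librarian might run: a small HTTP server offering its book metadata as JSON for other librarians to fetch and merge into their own catalogs. The record fields, the port and the JSON-over-HTTP exchange are illustrative assumptions, not the actual protocol of Calibre or [let's share books].

    # A hedged sketch of an amateur librarian's catalog endpoint.
    # Record fields and port are hypothetical; this is not the actual
    # Calibre or [let's share books] protocol.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    CATALOG = [
        {"author": "Paul Otlet", "title": "Traité de documentation", "year": 1934},
        {"author": "Melvil Dewey", "title": "A Classification and Subject Index",
         "year": 1876},
    ]

    class CatalogHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Serve the whole catalog as JSON so that peers can mirror
            # the metadata and merge it into their own catalogs.
            body = json.dumps(CATALOG).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # Every catalog served this way is one node of the distributed,
        # peer-to-peer network of amateur librarians described above.
        HTTPServer(("", 8000), CatalogHandler).serve_forever()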


Paul Otlet

Transformations
in the Bibliographical Apparatus
of the Sciences [1]
Repertory — Classification — Office
of Documentation
1. Because of its length, its extension to all countries,
the profound harm that it has created in everyone’s
life, the War has had, and will continue to have, repercussions for scientific productivity. The hour for
the revision of the old order is about to strike. Forced
by the need for economies of men and money, and
by the necessity of greater productivity in order to
hold out against all the competition, we are going to
have to introduce reforms into each of the branches
of the organisation of science: scientific research, the
preservation of its results, and their wide diffusion.
Everything happens simultaneously and the distinctions that we will introduce here are only to
facilitate our thinking. Always adjacent areas, or
even those that are very distant, exert an influence
on each other. This is why we should recognize the
impetus, growing each day even greater in the organisation of science, of the three great trends of
our times: the power of associations, technological
progress and the democratic orientation of institutions. We would like here to draw attention to some
of their consequences for the book in its capacity as an instrument for recording what has been discovered and as a necessary means for stimulating
new discoveries.
The Book, the Library in which it is preserved,
and the Catalogue which lists it, have seemed for
a long time as if they had achieved their heights of
perfection or at least were so satisfactory that serious
changes need not be contemplated. This may have
been so up to the end of the last century. But for a
score of years great changes have been occurring
before our very eyes. The increasing production of
books and periodicals has revealed the inadequacy of
older methods. The increasing internationalisation
of science has required workers to extend the range
of their bibliographic investigations. As a result, a
movement has occurred in all countries, especially
Germany, the United States and England, for the
expansion and improvement of libraries and for
an increase in their numbers. Publishers have been
searching for new, more flexible, better-illustrated,
and cheaper forms of publication that are better-coordinated with each other. Cataloguing enterprises
on a vast scale have been carried out, such as the
International Catalogue of Scientific Literature and
the Universal Bibliographic Repertory. [2]
Three facts, three ideas, especially merit study
for they represent something really new which in
the future can give us direction in this area. They
are: The Repertory, Classification and the Office of
Documentation.
•••


2. The Repertory, like the book, has gradually been
increasing in size, and improvements in it suggest
the emergence of something new which will radically modify our traditional ideas.
From the point of view of form, a book can be
defined as a group of pages cut to the same format
and gathered together in such a way as to form a
whole. It was not always so. For a long time the
Book was a roll, a volumen. The substances which
then took the place of paper — papyrus and parchment — were written on continuously from beginning to end. Reading required unrolling. This was
certainly not very practical for the consultation of
particular passages or for writing on the verso. The
codex, which was introduced in the first centuries of
the modern era and which is the basis of our present
book, removed these inconveniences. But its faults
are numerous. It constitutes something completed,
finished, not susceptible of addition. The Periodical
with its successive issues has given science a continuous means of concentrating its results. But, in
its turn, the collections that it forms run into the
obstacle of disorder. It is impossible to link similar
or connected items; they are added to one another
pell-mell, and research requires handling great masses of heavy paper. Of course indexes are a help and
have led to progress — subject indexes, sometimes
arranged systematically, sometimes analytically,
and indexes of names of persons and places. These
annual indexes are preceded by monthly abstracts
and are followed by general indexes cumulated every
five, ten or twenty-five years. This is progress, but
the Repertory constitutes much greater progress.


The aim of the Repertory is to detach what the
book amalgamates, to reduce all that is complex to
its elements and to devote a page to each. Pages, here,
are leaves or cards according to the format adopted.
This is the “monographic” principle pushed to its
ultimate conclusion. No more binding or, if it continues to exist, it will become movable, that is to
say, at any moment the cards held fast by a pin or a
connecting rod or any other method of conjunction
can be released. New cards can then be intercalated,
replacing old ones, and a new arrangement made.
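
The mechanics of intercalation can be shown in a few lines. A minimal sketch in Python, with a hypothetical card format: the Repertory is an ordered file of cards, and a new card slips into its proper place without the whole file having to be re-done.

    # A sketch of the Repertory as a data structure: cards kept in
    # classification order, open to intercalation at any moment.
    # The card format is an illustrative assumption.
    import bisect

    repertory = [("331.2", "A study of wages"), ("535.7", "On vision")]

    def intercalate(repertory, class_number, title):
        # Insert the new card at its proper place; nothing else in
        # the file needs to move, unlike a bound register.
        bisect.insort(repertory, (class_number, title))

    intercalate(repertory, "535.3", "Notes on mirrors")
    print([number for number, _ in repertory])  # 331.2, 535.3, 535.7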
The Repertory was born of the Catalogue. In
such a work, the necessity for intercalations was
clear. Nor was there any doubt as to the unitary or
monographic notion: one work, one title; one title,
one card. As a result, registers which listed the same
collections of books for each library but which had
constantly to be re-done as the collections expanded,
have gradually been discarded. This was practical
and justified by experience. But upon reflection one
wonders whether the new techniques might not be
more generally applied.
What is a book, in fact, if not a single continuous line which has initially been cut to the length
of a page and then cut again to the size of a justified
line? Now, this cutting up, this division, is purely
mechanical; it does not correspond to any division
of ideas. The Repertory provides a practical means
of physically dividing the book according to the
intellectual division of ideas.
Thus, the manuscript library catalogue on cards
has been quickly followed by catalogues printed on
cards (American Library Bureau, the Catalogue of
the Library of Congress in Washington) [3]; then by
bibliographies printed on cards (International Institute of Bibliography, Concilium Bibliographicum)
[4]; next, indices of species have been published on
cards (Index Speciorum) [5]. We have moved from
the small card to the large card, the leaf, and have
witnessed compendia abandoning the old form for
the new (Jurisclasseur, or legal digests in card form).
Even the idea of the encyclopedia has taken this
form (Nelson’s Perpetual Cyclopedia [6]).
Theoretically and technically, we now have in
the Repertory a new instrument for analytically or
monographically recording data, ideas, information. The system has been improved by divisionary cards of various shapes and colours, placed in
such a way that they express externally the outline
of the classification being used and reduce search
time to a minimum. It has been improved further
by the possibility of using, by cutting and pasting,
materials that have been printed on large leaves or
even books that have been published without any
thought of repertories. Two copies, the first providing the recto, the second the verso, can supply
all that is necessary. One has gone even further still
and, from the example of statistical machines like
those in use at the Census of Washington (sic) [7],
extrapolated the principle of “selection machines”
which perform mechanical searches in enormous
masses of materials, the machines retaining from
the thousands of cards processed by them only those
related to the question asked.
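
The principle of these "selection machines" can be stated in a few lines of code. A minimal sketch in Python, with illustrative card fields: from a mass of cards, retain only those that answer the question asked.

    # A sketch of the "selection machine" principle: scan a mass of
    # cards and retain only those related to the question asked.
    # The card fields are illustrative assumptions.
    cards = [
        {"class_number": "535.7", "title": "On the physiology of vision"},
        {"class_number": "331.2", "title": "A study of wages"},
        {"class_number": "535.7", "title": "Notes on colour perception"},
    ]

    def select(cards, question):
        # Mechanical search: keep a card only when its classification
        # number matches the question.
        return [card for card in cards if card["class_number"] == question]

    print(select(cards, "535.7"))  # retains the two optical-physiology cards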
•••


3. But such a development, like the Repertory before it, presupposes a classification. This leads us to
examine the second practical idea that is bringing
about the transformation of the book.
Classification plays an enormous role in scientific thought. If one could say that a science was a
well-made language, one could equally assert that
it is a completed classification. Science is made up
of verified facts which are organised in a structure
of systems, hypotheses, theories, laws. If there is
a certain order in things, it is necessary to have it
also in science which reflects and explains nature.
That is why, since the time of Greek thought until
the present, constant efforts have been made to improve classification. These have taken three principal directions: classification studied as an activity
of the mind; the general classification and sequence
of the sciences; the systematization appropriate to
each discipline. The idea of order, class, genus and
species has been studied since Aristotle, by way of Porphyry, by the scholastic philosophers and by
modern logicians. The classification of knowledge
goes back to the Greeks and owes much to the contributions of Bacon and the Renaissance. It was posed
as a distinct and separate problem by D’Alembert
and the Encyclopédie, and by Ampère, Comte, and
Spencer. The recent work of Manouvrier, Durand
de Cros, Goblot, Naville, de la Grasserie, has focussed on various aspects of it. [8] As to systematics,
one can say that this has become the very basis of
the organisation of knowledge as a body of science.
When one has demonstrated the existence of 28 million stars, a million chemical compounds, 300,000 vegetable species, 200,000 animal species, etc., it is
necessary to have a means, an Ariadne’s thread, of
finding one’s way through the labyrinth formed by
all these objects of study. Because there are sciences of beings as well as sciences of phenomena, and
because they intersect with each other as we better
understand the whole of reality, it is necessary that
this means be used to retrieve both. The state of development of a science is reflected at any given time
by its systematics, just as the general classification
of the sciences reflects the state of development of
the encyclopedia, of the philosophy of knowledge.
The need has been felt, however, for a practical
instrument of classification. The classifications of
which we have just spoken are constantly changing, at least in their detail if not in broad outline. In
practice, such instability, such variability which is
dependent on the moment, on schools of thought
and individuals, is not acceptable. Just as the Repertory had its origin in the catalogue, so practical
classification originated in the Library. Books represent knowledge and it is necessary to arrange them
in collections. Schemes for this have been devised
since the Middle Ages. The elaboration of grand
systems occurred in the 17th and 18th centuries
and some new ones were added in the 19th century. But when bibliography began to emerge as an
autonomous field of study, it soon began to develop
along the lines of the catalogue of an ideal library
comprising the totality of what had been published.
From this to drawing on library classifications was
but a step, and it was taken under certain conditions
which must be stressed.


Up to the present time, 170 different classifications
have been identified. Now, no cooperation is possible if everyone stays shut up in his own system. It
has been necessary, therefore, to choose a universal
classification and to recommend it as such in the
same way that the French Convention recognized
the necessity of a universal system of weights and
measures. In 1895 the first International Conference
of Bibliography chose the Decimal Classification
and adopted a complete plan for its development. In
1904, the edition of the expanded tables appeared. A
new edition was being prepared when the war broke out. Brussels, headquarters of the International Institute of Bibliography, which was doing this work, was part of the invaded territory.
In its latest state, the Decimal Classification has
become an instrument of great precision which
can meet many needs. The printed tables contain
33,000 divisions and they have an alphabetical index consisting of about 38,000 words. Learning is
here represented in its entire sweep: the encyclopedia of knowledge. Its principle is very simple. The
empiricism of an alphabetical classification by subject-heading cannot meet the need for organising
and systematizing knowledge. There is scattering;
there is also the difficulty of dealing with the complex expressions which one finds in the modern terminology of disciplines like medicine, technology,
and the social sciences. Above all, it is impossible
to achieve any international cooperation on such
a national basis as language. The Decimal Classification is a vast systematization of knowledge, “the
table of contents of the tables of contents” of all treatises. But, as it would be impossible to find a
particular subject’s relative place by reference to
another subject, a system of numbering is needed.
This is decimal, which an example will make clear.
Optical Physiology would be classified thus:
5th Class           Natural Sciences
3rd Group           Physics
5th Division        Optics
7th Sub-division    Optical Physiology

or 535.7
This number 535.7 is called decimal because all
knowledge is taken as one of which each science is
a fraction and each individual subject is a decimal
subdivided to a lesser or greater degree. For the sake
of abbreviation, the zero of the complete number,
which would be 0.5357, has been suppressed because
the zero would be repeated in front of each number.
The numbers 5, 3, 5, 7 (which one could call five hundred and thirty-five point seven and which could
be arranged in blocks of three as for the telephone,
or in groups of twos) form a single number when
the implied words, “class, group, division and subdivision,” are uttered.
The classification is also called decimal because
all subjects are divided into ten classes, then each
of these into at least ten groups, and each group
into at least ten divisions. All that is needed for the
number 535.7 always to have the same meaning is
to translate the tables into all languages. All that is
needed to deal with future scientific developments in optical physiology in all of its ramifications is to subdivide this number by further decimal numbers corresponding to the subdivisions of the subject. Finally, all that is needed to ensure that any document or item pertaining to optical physiology finds its place within the sum total of scientific subjects is to write this number on it. In the alphabetic index to the tables, references are made from each word to the classification number, just as the index of a book refers to page numbers.
This first remarkable principle of the decimal
classification is generally understood. Its second,
which has been introduced more recently, is less
well known: the combination of various classification numbers whenever there is some utility in expressing a compound or complex heading. In the
social sciences, statistics is 31 and salaries, 331.2. By
a convention these numbers can be joined by the
simple sign : and one may write 31:331.2 statistics
of salaries.01
This indicates a general relationship, but a subject also has its place in space and time. The subject
may be salaries in France limited to a period such as
the 18th century (that is to say, from 1700 to 1799).
01 The first ten divisions are: 0 Generalities, 1 Philosophy, 2
Religion, 3 Social Sciences, 4 Philology, Language, 5 Pure
Sciences, 6 Applied Science, Medicine, 7 Fine Arts, 8 Literature, 9 History and Geography. The Index number 31 is
derived from: 3rd class social sciences, 1st group statistics. The
Index number 331.2 is derived from 3rd class social sciences,
3rd group political economy, 1st division topics about work,
2nd subdivision salaries.


The sign that characterises division by place being
the parenthesis and that by time quotation marks
or double parentheses, one can write:
31:331.2 (44) «17» statistics — of salaries — in France — in the 18th century
or ten figures and three signs to indicate, in terms
of the universe of knowledge, four subordinated
headings comprising 42 letters. And all of these
numbers are reversible and can be used for geographic or chronologic classification as well as for
subject classification:
(44) 31:331.2 «17»
France — Statistics — Salaries — 18th Century
«17» (44) 31:331.2
18th Century — France — Statistics — Salaries
The subdivisions of relation and location explained
here are completed by documentary subdivisions
for the form and the language of the document (for
example, periodical, in Italian), and by functional
subdivisions (for example, in zoology all the divisions by species of animal being subdivided by biological aspects). It follows by virtue of the law of
permutations and combinations that the present
tables of the classification permit the formulation
at will of millions of classification numbers. Just as
arithmetic does not give us all the numbers readymade but rather a means of forming them as we
need them, so the classification gives us the means of creating classification numbers insofar as we have
compound headings that must be translated into a
notation of numbers.
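
A short sketch in Python makes the combinatorial point tangible: the facets of the examples above, joined by the conventional signs and permuted, already yield six classification numbers from three headings. The code is only an illustration of the arithmetic, not of any actual cataloguing software.

    # A sketch of compound classification numbers: facets joined by
    # the conventional signs, then permuted so the same item can be
    # filed by subject, place or period.
    from itertools import permutations

    subject = "31:331.2"  # statistics : salaries
    place = "(44)"        # France
    time = "«17»"         # 1700-1799

    for ordering in permutations([subject, place, time]):
        print(" ".join(ordering))
    # Among the six orderings printed:
    #   31:331.2 (44) «17»   subject first
    #   (44) 31:331.2 «17»   place first
    #   «17» (44) 31:331.2   period first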
Like chemistry, mathematics and music, bibliography thus has its own extremely simple notations:
numbers. Immediately and without confusion, it
allows us to find a place for each idea, for each thing
and consequently for each book, article, or document and even for each part of a book or document
Thus it allows us to take our bearings in the midst
of the sources of knowledge, just as the system of
geographic coordinates allows us to take our bearings on land or sea.
One may well imagine the usefulness of such a
classification to the Repertory. It has rid us of the
difficulty of not having continuous pagination. Cards
to be intercalated can be placed according to their
class number and the numbering is that of tables
drawn up in advance, once and for all, and maintained with an unvarying meaning. As the classification has a very general use, it constitutes a true
documentary classification which can be used in
various kinds of repertories: bibliographic repertories; catalogue-like repertories of objects, persons,
phenomena; and documentary repertories of files
made up of written or printed materials of all kinds.
The possibility can be envisaged of encyclopedic
repertories in which are registered and integrated
the diverse data of a scientific field and which draw
for this purpose on materials published in periodicals. Let each article, each report, each item of news
henceforth carry a classification number and, automatically, by clipping, encyclopedias on cards can be created in which all the results of international
scientific cooperation are brought together at the
same number. This constitutes a profound change
in the technology of the Book, since the repertory
thus formed is simultaneously a constantly updated book and a cooperative book in which are found
printed elements produced in all locations.
•••
4. If we can realize the third idea, the Office of Documentation, then reform will be complete. Such an
office is the old library, but adapted to a new function. Hitherto the library has been a museum of
books. Works were preserved in libraries because
they were precious objects. Librarians were keepers.
Such establishments were not organised primarily
for the use of documents. Moreover, their outmoded
regulations, if they did not exclude the most modern forms of publication, at least did not admit them.
They have poor collections of journals; collections
of newspapers are nearly nonexistent; photographs,
films, phonograph discs have no place in them, nor
do film negatives, microscopic slides and many other “documents.” The subject catalogue is considered
secondary in the library so long as there is a good
register for administrative purposes. Thus there is
little possibility of developing repertories in the
library, that is to say of taking publications to pieces and redistributing them in a more directly and
quickly accessible form. For want of personnel to
arrange them, there has not even been a place for
the cards that are received already printed.


The Office of Documentation, on the contrary, is
conceived of in such a way as to achieve all that is
lacking in the library. Collections of books are the
necessary basis for it, but books, far from being
considered as finished products, are simply materials which must be developed more fully. This
development consists in establishing the connections each individual book has with all of the other
books and forming from them all what might be
called The Universal Book. It is for this that we use
repertories: bibliographic repertories; repertories of
documentary dossiers gathering pamphlets and extracts together by subject; catalogues; chronological
repertories of facts or alphabetical ones of names;
encyclopedic repertories of scientific data, of laws,
of patents, of physical and technical constants, of
statistics, etc. All of these repertories will be set up
according to the method described above and arranged by the same universal classification. As soon
as an organisation to contain these repertories is
created, the Office of Documentation, one may be
sure that what happened to the book when libraries
first opened — scientific publication was regularised
and intensified — will happen to them. Then there
will be good reason for producing in bibliographies,
catalogues, and above all in books and periodicals
themselves, the rational changes which technology and the creative imagination suggest. What is
still an exception today will be common tomorrow.
New possibilities will exist for cooperative work
and for the more effective organisation of science.
•••


5. Repertory, Classification, Office of Documentation are therefore the three related elements of a
single reform in our methods of registering scientific discoveries and making them available to the
greatest number of people. Already one must speak
less of experiments and uncertain trials than of the
beginning of serious achievement. The International Institute of Bibliography in Brussels constitutes
a vast intellectual cooperative whose members are
becoming more numerous each day. Associations,
scientific establishments, periodical publications,
scientific and technical workers of every kind are
affiliating with it. Its repertories contain millions of
cards. There are sections in several countries.02 But
this was before the War. Since its outbreak, a movement in France, England and the United States has
been emerging everywhere to improve the organisation of the Book. The Office of Documentation has
been suggested as the solution for the requirements
that have been discussed.
It is important that the world of science and
technology should support this movement and
above all that it should endeavour to apply the new
methods to the works which it will be necessary to
re-organise. Among the most important of these is
the International Catalogue of Scientific Literature,
that fine and great work begun at the initiative of the
Royal Society of London. Until now, this work has
been carried on without relation to other works of
the same kind: it has not recognised the value of a
card repertory or a universal classification. It must
recognise them in the future.03 ❧

02 In France, the Bureau Bibliographique de Paris and great associations such as the Société pour l’encouragement de l’industrie nationale, l’Association pour l’avancement des sciences, etc., are affiliated with it.

03 See Paul Otlet, “La Documentation et l’information au service de l’industrie”, Bulletin de la Société d’encouragement de l’industrie nationale, June 1917. — La Documentation au service de l’invention. Euréka, October 1917. — L’Institut International de Bibliographie, Bibliographie de la France, 21 December 1917. — La Réorganisation du Catalogue international de la littérature scientifique. Revue générale des sciences, 15 February 1918. The publications of the Institute, especially the expanded tables of the Decimal Classification, have been deposited at the Bureau Bibliographique de Paris, 44 rue de Rennes at the apartments of the Société de l’encouragement. — See also the report presented by General Sebert [9] to the Congrès du Génie civil, in March 1918 and whose conclusions about the creation in Paris of a National Office of Technical Documentation have been adopted.


Editor’s Notes
[1] “Transformations operées dans l’appareil bibliographique
des sciences,” Revue scientifique 58 (1918): 236-241.
[2] The International Catalogue of Scientific Literature, an enormous work, was compiled by a Central Bureau under the
sponsorship of the Royal Society from material sent in from
Regional Bureaus around the world. It was published annually beginning in 1902 in 17 parts each corresponding to
a major subject division and comprising one or more volumes. Publication was effectively suspended in 1914. By the
time war broke out, the Universal Bibliographic Repertory
contained over 11 million entries.
[3] For card publication by the Library Bureau and Library of
Congress, see Edith Scott, “The Evolution of Bibliographic
Systems in the United States, 1876–1945” and Editor’s Note
36 to the second paper and Note 5 to the seventh paper in
International Organisation and Dissemination of Knowledge; Selected Essays of Paul Otlet, translated and edited by
W. Boyd Rayward. Amsterdam: Elsevier, 1990: 148–156.
[4] Otlet refers to the Concilium Bibliographicum also in Paper
No. 7, “The Reform of National Bibliographies...” in International Organisation and Dissemination of Knowledge; Selected
Essays of Paul Otlet. See also Editor’s Note 5 in that paper
for the major bibliographies published by the Concilium
Bibliographicum.
[5] A possible example of what Otlet is referring to here is the
Gray Herbarium Index. This was “planned to provide cards
for all the names of vascular plant taxa attributable to the Western Hemisphere beginning with the literature of 1886”
(Gray Herbarium Index, Preface, p. iii). Under its first compiler, 20 instalments consisting in all of 28,000 cards were
issued between 1894 and 1903. It has been continued after
that time and was for many years “issued quarterly at the
rate of about 4,000 cards per year.” At the time the cards
were reproduced in a printed catalogue by G. K. Hall in 1968,
there were 85 subscribers to the card sets.
[6] Nelson’s Perpetual Loose-Leaf Encyclopedia was a popular,
12-volume work which went through many editions, its
principle being set down at the beginning of the century.
It was published in binders and the publisher undertook to
supply a certain number of pages of revisions (or renewals)
semi-annually after each edition, the first of which appeared
in 1905. An interesting reference presumably to this work
occurs in a notice, “An Encyclopedia on the Card-Index System,” in the Scientific American 109 (1913): 213. The Berlin
Correspondent of the journal reports a proposal made in
Berlin which contains “an idea, in a sense ... already carried
out in an American loose-leaf encyclopedia, the publishers
of which supply new pages to take the place of those that
are obsolete” (Nelsons, an English firm, set up a New York
branch in 1896. Publication in the U.S. of works to be widely
circulated there was a requirement of the copyright law.)
The reporter observes that the principle suggested “affords
a means of recording all facts at present known as well as
those to be discovered in the future, with the same safety
and ease as though they were registered in our memory, by
providing a universal encyclopedia, incessantly keeping
abreast of the state of human knowledge.” The “bookish”
form of conventional encyclopedias acts against its future
success. “In the case of a mere storehouse of facts the infinitely more mobile form of the card index should however
be adopted, possibly,” the author goes on making a most interesting reference, “in conjunction with Dr. Goldschmidt’s
Microphotographic Library System.” The need for a central
institute, the nature of its work, the advantages of the work
so organised are described in language that is reminiscent
of that of Paul Otlet (see also the papers of Goldschmidt
and Otlet translated in International Organisation and
Dissemination of Knowledge; Selected Essays of Paul Otlet).
[7] These machines were derived from Herman Hollerith’s
punched cards and tabulating machines. Hollerith had
introduced them under contract into the U.S. Bureau of
the Census for the 1890 census. This equipment was later
modified and developed by the Bureau. Hollerith, his invention and his business connections lie at the roots of the
present IBM company. The equipment and its uses in the
census from 1890 to 1910 are briefly described in John H.
Blodgett and Claire K. Schultz, “Herman Hollerith: Data
Processing Pioneer,” American Documentation 20 (1969):
221-226. As they observe, suggesting the accuracy of Otlet’s
extrapolation, “his was not simply a calculating machine,
it performed selective sorting, an operation basic to all information retrieval.”
[8] The history of the classification of knowledge has been treated
in English in detail by E.C. Richardson in his Classification
Theoretical and Practical, the first edition of which appeared
in 1901 and was followed by editions in 1912 and 1930. A
different treatment is given in Robert Flint’s Philosophy as
Scientia Scientarium: a History of the Classification of the
Sciences which appeared in 1904. Neither of these works
deals with Manouvrier, a French anthropologist, or Durand
de Cros. Joseph-Pierre Durand, sometimes called Durand
de Cros after his birth place, was a French physiologist and
philosopher who died in 1900. In his Traité de documentation,
in the context of his discussion of classification, Otlet refers
to an Essai de taxonomie by Durand published by Alcan. It
seems that this is an error for Aperçus de taxonomie (Alcan,
1899).
[9] General Hippolyte Sebert was President of the Association française pour l’avancement des sciences, and the Société d’encouragement pour l’industrie nationale. He had
been active in the foundation of the Bureau bibliographique
de Paris. For other biographical information about him see
Editor’s Note 9 to Paper no 17, “Henri La Fontaine”, in International Organisation and Dissemination of Knowledge;
Selected Essays of Paul Otlet.

English translation of Paul Otlet’s text published with the
permission of W. Boyd Rayward. The translation was originally
published as Paul Otlet, “Transformations in the Bibliographical
Apparatus of the Sciences: Repertory–Classification–Office of
Documentation”, in International Organisation and Dissemination of Knowledge; Selected Essays of Paul Otlet, translated and
edited by W. Boyd Rayward, Amsterdam: Elsevier, 1990: 148–156.

[image: public library — http://aaaaarg.org/]

McKenzie Wark

Metadata Punk

So we won the battle but lost the war. By “we”, I
mean those avant-gardes of the late twentieth century whose mission was to free information from the
property form. It was always a project with certain
nuances and inconsistencies, but overall it succeeded beyond almost anybody’s wildest dreams. Like
many dreams, it turned into a nightmare in the end,
the one from which we are now trying to awake.
The place to start is with what the situationists
called détournement. The idea was to abolish the
property form in art by taking all of past art and
culture as a commons from which to copy and correct. We see this at work in Guy Debord’s texts and
films. They do not quote from past works, as to do
so acknowledges their value and their ownership.
The elements of détournement are nothing special.
They are raw materials for constructing theories,
narratives, affects of a subjectivity no longer bound
by the property form.
Such a project was recuperated soon enough
back into the art world as “appropriation.” Richard
Prince is the dialectical negation of Guy Debord, in that appropriation both values the original fragment and contributes not to a subjectivity outside of property but to a career as an art world star for the appropriating artist. Of such dreams is
mediocrity made.
If there was a more promising continuation of
détournement it had little to do with the art world.
Détournement became a social movement in all but
name. Crucially, it involved an advance in tools,
from Napster to BitTorrent and beyond. It enabled
the circulation of many kinds of what Hito Steyerl
calls the poor image. Often low in resolution, these
détourned materials circulated thanks both to the compression of information and to the addition of information. There might be less data
but there’s added metadata, or data about data, enabling its movement.
Needless to say the old culture industries went
into something of a panic about all this. As I wrote
over ten years ago in A Hacker Manifesto, “information wants to be free but is everywhere in chains.”
It is one of the qualities of information that it is indifferent to the medium that carries it and readily
escapes being bound to things and their properties.
Yet it is also one of its qualities that access to it can
be blocked by what Alexander Galloway calls protocol. The late twentieth century was — among other
things — about the contradictory nature of information. It was a struggle between détournement and
protocol. And protocol nearly won.
The culture industries took both legal and technical steps to strap information once more to fixity
in things and thus to property and scarcity. Interestingly, those legal steps were not just a question of
pressuring governments to make free information
a crime. It was also a matter of using international
trade agreements as a place outside the scope of democratic oversight to enforce the old rules of property. Here the culture industries join hands with the
drug cartels and other kinds of information-based
industry to limit the free flow of information.
But laws are there to be broken, and so are protocols of restriction such as encryption. These were
only ever delaying tactics, meant to shore up old
monopoly business for a bit longer. The battle to
free information was the battle that the forces of
détournement largely won. Our defeat lay elsewhere.
While the old culture industries tried to put information back into the property form, there were
other kinds of strategy afoot. The winners were not
the old culture industries but what I call the vulture
industries. Their strategy was not to try to stop the
flow of free information but rather to see it as an
environment to be leveraged in the service of creating a new kind of business. “Let the data roam free!”
says the vulture industry (while quietly guarding
their own patents and trademarks). What they aim
to control is the metadata.
It’s a new kind of exploitation, one based on an
unequal exchange of information. You can have the
little scraps of détournement that you desire, in exchange for performing a whole lot of free labor—and
giving up all of the metadata. So you get your little
bit of data; they get all of it, and more importantly,
any information about that information, such as
the where and when and what of it.


It is an interesting feature of this mode of exploitation that you might not even be getting paid for your
labor in making this information—as Trebor Scholz has pointed out. You are working for information
only. Hence exploitation can be extended far beyond
the workplace and into everyday life. Only it is not
so much a social factory, as the autonomists call it.
This is more like a social boudoir. The whole of social
space is in some indeterminate state between public
and private. Some of your information is private to
other people. But pretty much all of it is owned by
the vulture industry — and via them ends up in the
hands of the surveillance state.
So this is how we lost the war. Making information free seemed like a good idea at the time. Indeed, one way of seeing what transpired is that we
forced the ruling class to come up with these new
strategies in response to our own self-organizing
activities. Their actions are reactions to our initiatives. In this sense the autonomists are right, only
it was not so much the actions of the working class
to which the ruling class had to respond in this case,
as what I call the hacker class. They had to recuperate a whole social movement, and they did. So our
tactics have to change.
In the past we were acting like data-punks. Not
so much “here’s three chords, now form your band.”
More like: “Here’s three gigs, now go form your autonomous art collective.” The new tactic might be
more a question of being metadata-punks. On the one
hand, it is about freeing information about information rather than the information itself. We need
to move up the order of informational density and control. On the other hand, it might be an idea to
be a bit discreet about it. Maybe not everyone needs
to know about it. Perhaps it is time to practice what
Zach Blas calls informatic opacity.
Three projects seem to embody much of this
spirit to me. One I am not even going to name or
discuss, as discretion seems advisable in that case.
It takes matters off the internet and out of circulation among strangers. Ask me about it in person if
we meet in person.
The other two are Monoskop Log and UbuWeb.
It is hard to know what to call them. They are websites, archives, databases, collections, repositories,
but they are also a bit more than that. They could be
thought of also as the work of artists or of curators;
of publishers or of writers; of archivists or researchers. They contain lots of files. Monoskop is mostly
books and journals; UbuWeb is mostly video and
audio. The work they contain is mostly by or about
the historic avant-gardes.
Monoskop Log bills itself as “an educational
open access online resource.” It is a component part
of Monoskop, “a wiki for collaborative studies of
art, media and the humanities.” One commenter
thinks they see the “fingerprint of the curator” but
nobody is named as its author, so let’s keep it that
way. It is particularly strong on Eastern European
avant-garde material. UbuWeb is the work of Kenneth Goldsmith, and is “a completely independent
resource dedicated to all strains of the avant-garde,
ethnopoetics, and outsider arts.”
There’s two aspects to consider here. One is the
wealth of free material both sites collect. For anybody trying to teach, study or make work in the
avant-garde tradition these are very useful resources.
The other is the ongoing selection, presentation and
explanation of the material going on at these sites
themselves. Both of them model kinds of ‘curatorial’
or ‘publishing’ behavior.
For instance, Monoskop has wiki pages, some
better than Wikipedia, which contextualize the work
of a given artist or movement. UbuWeb offers “top
ten” lists by artists or scholars which give insight
not only into the collection but into the work of the
person making the selection.
Monoskop and UbuWeb are tactics for intervening in three kinds of practices, those of the art world, of publishing and of scholarship. They respond to the current institutional, technical and
political-economic constraints of all three. As it
says in the Communist Manifesto, the forces for social change are those that ask the property question.
While détournement was a sufficient answer to that
question in the era of the culture industries, they try
to formulate, in their modest way, a suitable tactic
for answering the property question in the era of
the vulture industries.
This takes the form of moving from data to metadata, expressed in the form of the move from writing
to publishing, from art-making to curating, from
research to archiving. Another way of thinking this,
suggested by Hiroki Azuma, would be the move from
narrative to database. The object of critical attention
acquires a third dimension, a kind of informational
depth. The objects before us are not just a text or an
image but databases of potential texts and images,
with metadata attached.

The object of any avant-garde is always to practice the relation between aesthetics and everyday
life with a new kind of intensity. UbuWeb and
Monoskop seem to me to be intimations of just
such an avant-garde movement. One that does not
offer a practice but a kind of meta-practice for the
making of the aesthetic within the everyday.
Crucial to this project is the shifting of aesthetic
intention from the level of the individual work to the
database of works. They contain a lot of material, but
not just any old thing. Some of the works available
here are very rare, but not all of them are. It is not
just rarity, or that the works are available for free.
It is more that these are careful, artful, thoughtful
collections of material. There are the raw materials here with which to construct a new civilization.
So we lost the battle, but the war goes on. This
civilization is over, and even its defenders know it.
We live among ruins that accrete in slow motion.
It is not so much a civil war as an incivil war, waged
against the very conditions of existence of life itself.
So even if we have no choice but to use its technologies and cultures, the task is to build another way
of life among the ruins. Here are some useful practices, in and on and of the ruins. ❧


public library

http://midnightnotes.memoryoftheworld.org/


Tomislav Medak

The Future After the Library
UbuWeb and Monoskop’s
Radical Gestures

The institution of the public library has crystallized,
developed and advanced around historical junctures
unleashed by epochal economic, technological and
political changes. A series of crises since the advent
of print have contributed to the configuration of the
institutional entanglement of the public library as
we know it today:01 defined by a publicly available
collection, housed in a public building, indexed and
made accessible with the help of a public catalog, serviced by trained librarians and supported through
public financing. Libraries today embody the idea
of universal access to all knowledge, acting as custodians of a culture of reading, archivists of material
and ephemeral cultural production, go-betweens
of information and knowledge. However, libraries have also embraced a broader spirit of public
service and infrastructure: providing information,
01 For the concept and the full scope of the contemporary library
as institutional entanglement see Shannon Mattern, “Library
as Infrastructure”, Places Journal, accessed April 9, 2015,
https://placesjournal.org/article/library-as-infrastructure/.


education, skills, assistance and, ultimately, shelter
to their communities — particularly their most vulnerable members.
This institutional entanglement, consisting in
a comprehensive organization of knowledge, universally accessible cultural goods and social infrastructure, historically emerged with the rise of (information) science, social regulation characteristic
of modernity and cultural industries. Established
in its social aspect as the institutional exemption
from the growing commodification and economic
barriers in the social spheres of culture, education
and knowledge, it is a result of struggles for institutionalized forms of equality that still reflect the
best in solidarity and universality that modernity
had to offer. Yet, this achievement is marked by
contradictions that beset modernity at its core. Libraries and archives can be viewed as an organon
through which modernity has reacted to the crises
unleashed by the growing production and fixation
of text, knowledge and information through a history of transformations that we will discuss below.
They have been an epistemic crucible for the totalizing formalizations that have propelled both the
advances and pathologies of modernity.
Positioned at a slight monastic distance and indolence toward the forms of pastoral, sovereign or
economic domination that defined the surrounding world that sustained them, libraries could never
close the rift between the universalist aspirations
of knowledge and their institutional compromise.
Hence, they could never avoid being the battlefield
where their own, and modernity’s, ambivalent epistemic and social character was constantly re-examined and ripped asunder. It is this ambivalent
character that has been a potent motor for critical theory, artistic and political subversion — from
Marx’s critique of political economy, psychoanalysis
and historic avant-gardes, to revolutionary politics.
Here we will examine the formation of the library
as an epistemic and social institution of modernity
and the forms of critical engagement that continue
to challenge the totalizing order of knowledge and
appropriation of culture in the present.
Here Comes the Flood02
Prior to the advent of print, the collections held in
monastic scriptoria, royal courts and private libraries
typically contained a limited number of canonical
manuscripts, scrolls and incunabula. In Medieval
and early Renaissance Europe the canonized knowledge considered necessary for the administration of
heavenly and worldly affairs was premised on reading and exegesis of biblical and classical texts. It is
02 The metaphor of the information flood, here incanted in the
words of Peter Gabriel’s song with apocalyptic overtones, as
well as a good part of the historic background of the development of index card catalog in the following paragraphs
are based on Markus Krajewski, Paper Machines: About
Cards & Catalogs, 1548–1929 (MIT Press, 2011). The organizing idea of Krajewski’s historical account, that the index
card catalog can be understood as a Turing machine avant
la lettre, served as a starting point for the understanding
of the library as an epistemic institution developed here.


estimated that by the 15th century in Western Europe
there were no more than 5 million manuscripts held
mainly in the scriptoria of some 21,000 monasteries and a small number of universities. While the
number of volumes had grown sharply from less
than 0.8 million in the 12th century, the number of
monasteries had remained constant throughout that
period. The number of manuscripts averaged
around 1,000 per million inhabitants, with the total
population of Europe peaking around 60 million.03
All in all, the book collections were small, access was
limited and reading culture played a marginal role.
The proliferation of written matter after the invention of mechanical movable type printing would
greatly increase the number of books, but also the
patterns of literacy and knowledge production. Already in the first fifty years after Gutenberg’s invention, 12 million volumes were printed, and from
this point onwards the output of printing presses
grew exponentially to 700 million volumes in the
18th century. In the aftermath of the explosion in
book production the cost of producing and buying
books fell drastically, reducing the economic barriers to literacy, but also creating a material vector
for a veritable shift of the epistemic paradigm. The
03 For an economic history of the book in the Western Europe
see Eltjo Buringh and Jan Luiten Van Zanden, “Charting
the ‘Rise of the West’: Manuscripts and Printed Books in
Europe, A Long-Term Perspective from the Sixth through
Eighteenth Centuries”, The Journal of Economic History 69,
No. 02 (June 2009): 409–45, doi:10.1017/S0022050709000837,
particularly Tables 1-5.


emerging reading public was gaining access to the
new works of a nascent Enlightenment movement,
ushering in the modern age of science. In parallel
with those larger epochal transformations, the explosion of print also created a rising tide of new books
that suddenly inundated the libraries. The libraries
now had to contend both with the orders-of-magnitude greater volume of printed matter and the
growing complexity of systematically storing, ordering, classifying and tracking all of the volumes
in their collection. A once almost static collection
of canonical knowledge became an ever expanding
dynamic flux. This flood of new books, the first of
three to follow, presented principled, infrastructural and organizational challenges to the library that
radically transformed and coalesced its functions.
The epistemic shift created by this explosion of
library holdings led to a revision of the assumption
that the library is organized around a single holy
scripture and a small number of classical sources.
Coextensive with the emergence and multiplication of new sciences, the books that were entering
the library now covered an ever diversified scope
of topics and disciplines. And the sheer number of
new acquisitions demanded the physical expansion of libraries, which in turn required a radical
rethinking of the way the books were stored, displayed and indexed. In fact, the flood caused by the
printing press was nothing short of a revolution in
the organization, formalization and processing of
information and knowledge. This becomes evident
in the changes that unfolded between the 16th and
the early 20th century in the cataloging of library collections.


The initial listings of books were kept in bound
volumes, books in their own right. But as the number of items arriving into the library grew, the constant need to insert new entries made the bound
book format increasingly impractical for library
catalogs. To make things more complicated still,
the diversification of the printed matter demanded
a richer bibliographic description that would allow
better comprehension of what was contained in the
volumes. Alongside the name of the author and the
book’s title, the description now needed to include
the format of the volume, the classification of the
subject matter and the book’s location in the library.
As the pace of new arrivals accelerated, the effort to
create a library catalog became unending, causing a
true crisis in the emerging librarian profession. This
would result in a number of physical and epistemic
innovations in the organization and formalization
of information and knowledge. The requirement
to constantly rearrange the order of entries in the
listing led to the eventual unbinding of the bound
catalog into separate slips of paper and finally to the
development of the index card catalog. The unbound
index cards and their floating rearrangement, not
unlike that of the movable type, would in turn result in the design of filing cabinets. From Conrad
Gessner’s Bibliotheca Universalis, a three-volume
book-format catalog of around 3,000 authors and
10,000 texts, arranged alphabetically and topically,
published in the period 1545–1548; through Gottfried Wilhelm Leibniz’s proposals for a universal library
during his tenure at the Wolfenbüttel library in the
late 17th century; to Gottfried van Swieten’s catalog


of the Viennese court library, the index card catalog and the filing cabinets would develop almost to
their present form.04
The unceasing inflow of new books into the library
prompted the need to spatially organize and classify
the arrangement of the collection. The simple addition of new books to the shelves by size, canonical relevance or alphabetical order made little sense
in a situation where the corpus of printed matter
was quickly expanding and no individual librarian
could retain an intimate overview of the library’s
entire collection. The inflow of books required that
the brimming shelf-space be planned ahead, while
the increasing number of expanding disciplines required that the collection be subdivided into distinct
sections by fields. First the shelves became classified
and then the books individually received a unique
identifier. With the completion of the Josephinian
catalog in the Viennese court library, every book became compartmentalized according to a systematic
plan of sciences and assigned a unique sequence of
a Roman numeral, a Roman letter and an Arabic
numeral by which it could be tracked down regardless of its physical location.05 The physical location
of the shelves in the library no longer needed to be
reflected in the ordering of the catalog, and the catalog became a symbolic representation of the freely
re-arrangeable library. In the technological lingo of
today, the library required storage, index, search
and address in order to remain navigable. It is this
04 Krajewski, Paper Machines, op. cit., chapter 2.
05 Ibid., 30.


formalization of a universal system of classification of the objects in the library, combined with the relative location of objects and a re-arrangeable index, that would then in 1876 receive its present standardized form in Melvil Dewey’s Decimal System.
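To restate that in a contemporary idiom: the catalog behaves like a small database schema. What follows is a minimal sketch in Python, with call numbers, titles and locations invented purely as an analogy rather than as a description of any actual catalog, showing how the symbolic address of a book is decoupled from its physical location, so that the index can be rearranged without moving a single volume:

    from dataclasses import dataclass

    @dataclass
    class CatalogCard:
        call_number: str  # address: a Dewey-style symbolic identifier
        author: str       # index: bibliographic description beyond
        title: str        # author and title alone
        subject: str
        shelf: str        # storage: physical location, changeable at will

    catalog = [
        CatalogCard("025.3 GES", "Conrad Gessner", "Bibliotheca Universalis",
                    "bibliography", "room 1, shelf A"),
        CatalogCard("025.3 KRA", "Markus Krajewski", "Paper Machines",
                    "cataloging", "room 2, shelf F"),
    ]

    # search: the re-arrangeable index answers a query regardless of
    # where the volumes physically sit
    hits = [card for card in catalog if card.subject == "cataloging"]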
The development of the library as an institution of
public access and popular literacy did not proceed
apace with the development of its epistemic aspects.
It was only a series of social upheavals and transformations in the course of the 18th and 19th centuries
that would bring about another flood of books and
political demands, pushing the library to become
embedded in an egalitarian and democratic political culture. The first big step in that direction came
with the decision of the French revolutionary National Assembly from 2 November 1789 to seize all
book collections from the Church and aristocracy.
Millions of volumes were transferred to the Bibliothèque Nationale and local libraries across France.
In parallel, particularly in England, capitalism was
on the rise. It massively displaced the impoverished rural population into growing urban centers,
propelled the development of industrial production and, by the mid-19th century, introduced the
steam-powered rotary press into the book business.
As books became more easily and massively produced,
the commercial subscription libraries catering to the
better-off parts of society blossomed. This brought
the class aspect of the nascent demand for public
access to books to the fore. After the failed attempts
to introduce universal suffrage and end the system
of political representation based on property entitlements in the 1830s and 1840s, the English Chartist


movement started to open reading rooms and cooperative lending libraries that would quickly become
a popular hotbed of social exchanges between the
lower classes. In the aftermath of the revolutionary
upheavals of 1848, the fearful ruling classes heeded
the demand for tax-financed public libraries, hoping
that the access to literature and edification would
ultimately hegemonize the working class for the
benefit of capitalism’s culture of self-interest and
competition.06
The Avant-gardes in the Library
As we have just demonstrated, the public library
in its epistemic and social aspects coalesced in the
context of the broader social transformations of
modernity: early capitalism and processes of nation-building in Europe and the USA. These transformations were propelled by the advancement of
political and economic rationalization, public and
business administration, statistical and archival
procedures. Archives underwent a corresponding and largely concomitant development with the
libraries, responding with a similar apparatus of
classification and ordering to the exponential expansion of administrative records documenting the
social world and to the historicist impulse to capture the material traces of past events. Overlaying
the spatial organization of documentation, rules
06 For the social history of public library see Matthew Battles,
Library: An Unquiet History (Random House, 2014) chapter
5: “Books for all”.


of its classification and symbolic representation of
the archive in reference tools, they tried to provide
a formalization adequate to the passion for capturing historical or present events. Characteristic
of the ascendant positivism of the 19th century, the
archivists’ and librarians’ epistemologies harbored
a totalizing tendency that would become subject to
subversion and displacement in the first decades of
the 20th century.
The assumption that the classificatory form can
fully capture the archival content would become
destabilized over and over by the early avant-gardist
permutations of formal languages of classification:
dadaist montage of the contingent compositional
elements, surrealist insistence on the unconscious
surpluses produced by automatized formalized language, constructivist foregrounding of dynamic and
spatialized elements in the acts of perception and
cognition of an artwork.07 The material composition
of the classified and ordered objects already contained formalizations deposited into those objects
by the social context of their provenance or projected onto them by the social situation of encounter
with them. Form could become content and content
could become form. The appropriations, remediations and displacements exacted by the neo-avant-gardes in the second half of the 20th century produced subversions, resignifications and simulacra that only further blurred the lines between histories and their construction, dominant classifications and their immanent instabilities.

07 Sven Spieker, The Big Archive: Art from Bureaucracy (MIT Press, 2008) provides a detailed account of strategies that the historic avant-gardes and the post-war art have developed toward the classificatory and ordering regime of the archive.
Where does the library fit into this trajectory? Operating around an uncertain and politically embattled universal principle of public access to knowledge
and organization of information, libraries continued being sites of epistemic and social antagonisms,
adaptations and resilience in response to the challenges created by the waves of radical expansion of
textuality and conflicting social interests between
the popular reading culture and the commodification of cultural consumption. This precarious position is presently being made evident by the third
big flood — after those unleashed by movable type
printing and the social context of industrial book
production — that is unfolding with the transition
of the book into the digital realm. Both the historic
mode of the institutional regulation of access and
the historic form of epistemic classification are
swept up in this transformation. While the internet
has made possible a radically expanded access to
digitized culture and knowledge, the vested interests of cultural industries reliant on copyright for
their control over cultural production have deepened the separation between cultural producers and
their readers, listeners and viewers. While the hypertextual capacity for cross-reference has blurred
the boundaries of the book, digital rights management technologies have transformed e-books into
closed silos. Both the decommodification of access
and the overcoming of the reified construct of the


self-enclosed work in the form of a book come at
the cost of illegality.
Even the avant-gardes in all their inappropriable
and idiosyncratic recalcitrance fall no less under
the legally delimited space of copyrightable works.
As they shift format, new claims of ownership and
appropriation are built. Copyright is a normative
classification that is totalizing, regardless of the
effects of leaky networks speaking to the contrary.
Few efforts have insisted on the subverting of juridical classification by copyright more lastingly than
the UbuWeb archive. Espousing the avant-gardes’
ethos of appropriation, for almost 20 years it has
collected and made accessible the archives of the
unknown, outsider, rare and canonized avant-gardes and contemporary art that would otherwise remain reserved for the vaults and restricted access
channels of esoteric markets, selective museological
presentations and institutional archives. Knowing
that asking to publish would amount to aligning itself with the totalizing logic of copyright, UbuWeb
has shunned the permission culture. At the level of
poetical operation, as a gesture of displacing the cultural archive from a regime of limited into a regime
of unlimited access, it has created provocations and
challenges directed at the classifying and ordering
arrangements of property over cultural production.
One can only assume that as such it has become a
mechanism for small acts of treason for the artists,
who, short of turning their back fully on the institutional arrangements of the art world they inhabit,
use UbuWeb to release their own works into unlimited circulation on the net. Sometimes there might


be no way or need to produce a work outside the
restrictions imposed by those institutions, just as
sometimes it is impossible for academics to avoid
the contradictory world of academic publishing,
yet that is still no reason to keep one’s allegiance to
their arrangements.
At the same time UbuWeb has played the game
of avant-gardist subversion: “If it doesn’t exist on
the internet, it doesn’t exist”. Provocation is most
effective when it is ignorant of the complexities of
the contexts that it is directed at. Its effect starts
where fissures in the defense of the opposition start
to show. By treating UbuWeb as massive evidence
for the internet as a process of reappropriation, a
process of “giving to all”, its volunteering spiritus
movens, Kenneth Goldsmith, has been constantly rubbing copyright apologists up the wrong way.
Rather than producing qualifications, evasions and
ambivalences, straightforward affirmation of copy­
ing, plagiarism and reproduction as a dominant
yet suppressed mode of operation of digital culture re-enacts the avant-gardes’ gesture of taking
no hostages from the officially sanctioned systems
of classification. By letting the incumbents of control over cultural production react to the norm of
copying, you let them struggle to dispute the norm
rather than you having to try to defend the norm.
UbuWeb was an early-comer. Starting in 1996 and still functioning today on seemingly similar technology, it’s a child of the early days of the World Wide Web and the promissory period of the experimental internet. It’s resolutely Web 1.0, with
a single maintainer, idiosyncratically simple in its


layout and programmatically committed to eventual obsolescence and sudden abandonment.
No platform, no generic design, no widgets, no
kludges and no community features. Only Beckett
avec links. Endgame.
A Book is an Index is an Index is an Index...
Since the first book flood, the librarian dream of
epistemological formalization has revolved around
the aspiration to cross-reference all the objects in
the collection. Within the physical library the topical designation has been relegated to the confines of
the index card catalog, which remained isolated from the
structure of citations and indexes in the books themselves. With the digital transition of the book, the
time-shifted hypertextuality of citations and indexes
became realizable as the immediate cross-referentiality of the segments of individual text to segments
of other texts and other digital artifacts across now
permeable boundaries of the book.
Developed as a wiki for collaborative studies of
art, media and the humanities, Monoskop.org took
up the task of mapping and describing avant-gardes and media art in Europe. In its approach both
indexical and encyclopedic, it is an extension of
the collaborative editing made possible by wiki
technology. Wikis rose to prominence in the early
2000s allowing everyone to edit and extend websites running on that technology by mastering a
very simple markup language. Wikis have been the
harbinger of a democratization of web publishing
that would eventually produce the largest collaborative website on the internet — Wikipedia, as
well as a number of other collaborative platforms.
Monoskop.org embraces the encyclopedic spirit of
Wikipedia, focusing on its own specific topical and
topological interests. However, from its earliest days
Monoskop.org has also developed as a form of index
that maps out places, people, artworks, movements,
events and venues that compose the dense network
of European avant-gardes and media art.
If we take the index as a formalization of cross-referential relations between names of people, titles
of works and concepts that exist in the books and
across the books, what emerges is a model of a relational database reflecting the rich mesh of cultural
networks. Each book can serve as an index linking
its text to people, other books, segments in them.
To provide a paradigmatic demonstration of that
idea, Monoskop.org has assembled an index of all
persons in Friedrich Kittler’s Discourse Networks,
with each index entry linking both to its location
in the digital version of the book displayed on the
aaaaarg.org archive and to relevant resources for
those persons on Monoskop.org and the internet. Hence, each object in the library, an index
in its own right, potentially allows one to initiate
the relational re-classification and re-organization
of all other works in the library through linkable
information.
Fundamental to the works of the post-socialist
retro-avant-gardes of the last couple of decades has
been the re-writing of a history of art in reverse.
In the works of IRWIN, Laibach or Mladen Stilinović, or the comparable work of Komar & Melamid,


totalizing modernity is detourned by re-appropriating the forms of visual representation and classification that the institutions of modernity used to
construct a linear historical narrative of evolutions
and breaks in the 19th and 20th century. Genealogical
tables, events, artifacts and discourses of the past
were re-enacted, over-affirmed and displaced to
open up the historic past relegated to the archives
to an understanding that transformed the present
into something radically uncertain. The efforts of
Monoskop.org in digitizing the artifacts of the
20th century avant-gardes and playing with the
epistemic tools of early book culture is a parallel
gesture, with a technological twist. If big data and
the control over information flows of today increasingly naturalizes and re-affirms the 19th century
positivist assumptions of the steerability of society,
then the endlessly recombinant relations and affiliations between cultural objects threaten to overflow
that recurrent epistemic framework of modernity’s
barbarism in its cybernetic form.
The institution of the public library finds itself
today under a double attack. One unleashed by
the dismantling of the institutionalized forms of
social redistribution and solidarity. The other by
the commodifying forces of expanding copyright
protections and digital rights management, control
over the data flows and command over the classification and order of information. In a world of
collapsing planetary boundaries and unequal development, those who control the epistemic order


control the future.08 The Googles and the NSAs run
on capturing totality — the world’s knowledge and
communication made decipherable, organizable and
controllable. The instabilities of the epistemic order
that the library continues to instigate at its margins
contribute to keeping the future open beyond the
script of ‘commodify and control’. In their acts of
re-appropriation UbuWeb and Monoskop.org are
but a reminder of the resilient instability of libraries that signals toward a future that can be made
radically open. ❧

08 In his article “Controlling the Future—Edward Snowden and
the New Era on Earth”, (accessed April 13, 2015, http://www.
eurozine.com/articles/2014-12-19-altvater-en.html), Elmar
Altvater makes a comparable argument that the efforts of
the “Five Eyes” to monitor the global communication flows,
revealed by Edward Snowden, and the control of the future
social development defined by the urgency of mitigating the
effects of the planetary ecological crisis cannot be thought
apart.


public library

http://kok.memoryoftheworld.org


Public Library
www.memoryoftheworld.org

Publishers
What, How & for Whom / WHW
Slovenska 5/1 • HR-10000 Zagreb
+385 (0) 1 3907261
whw@whw.hr • www.whw.hr
ISBN 978-953-55951-3-7 [Što, kako i za koga/WHW]
Multimedia Institute
Preradovićeva 18 • HR-10000 Zagreb
+385 (0)1 4856400
mi2@mi2.hr • www.mi2.hr
ISBN 978-953-7372-27-9 [Multimedijalni institut]
Editors
Tomislav Medak • Marcell Mars • What, How & for Whom / WHW
Copy Editor
Dušanka Profeta [Croatian]
Anthony Iles [English]
Translations
Una Bauer
Tomislav Medak
Dušanka Profeta
W. Boyd Rayward
Design & layout
Dejan Kršić @ WHW
Typography
MinionPro [robert slimbach • adobe]

English translation of Paul Otlet’s text published with the permission of W. Boyd
Rayward. The translation was originally published as
Paul Otlet, “Transformations in the Bibliographical
Apparatus of the Sciences: Repertory–Classification–Office
of Documentation”, in International Organisation and
Dissemination of Knowledge; Selected Essays of Paul Otlet,
translated and edited by W. Boyd Rayward, Amsterdam:
Elsevier, 1990: 148–156. ❧
format / size
120 × 200 mm
pages
144
Paper
Agrippina 120 g • Rives Laid 300 g
Printed by
Tiskara Zelina d.d.
Print Run
1000
Price
50 kn
May • 2015

This publication, realized along with the exhibition
Public Library in Gallery Nova, Zagreb 2015, is a part of
the collaborative project This Is Tomorrow. Back to Basics:
Forms and Actions in the Future organized by What, How
& for Whom / WHW, Zagreb, Tensta Konsthall, Stockholm
and Latvian Center for Contemporary Art / LCCA, Riga, as a
part of the book edition Art As Life As Work As Art. ❧

Supported by
Office of Culture, Education and Sport of the City of Zagreb
Ministry of Culture of the Republic of Croatia
Croatian Government Office for Cooperation with NGOs
Creative Europe Programme of the European Commission.
National Foundation for Civil Society Development
Kultura Nova Foundation

This project has been funded with support
from the European Commission. This publication reflects
the views only of the authors, and the Commission
cannot be held responsible for any use which may be
made of the information contained therein. ❧
Publishing of this book is enabled by financial support of
the National Foundation for Civil Society Development.
The content of the publication is the responsibility of
its authors and as such does not necessarily reflect
the views of the National Foundation. ❧
This project is financed
by the Croatian Government Office for Cooperation
with NGOs. The views expressed in this publication
are the sole responsibility of the publishers. ❧

This book is licensed under a Creative
Commons Attribution–ShareAlike 4.0
International License. ❧



Sekulic
Legal Hacking and Space
2015


# Legal hacking and space

## What can urban commons learn from the free software hackers?

* [Dubravka Sekulic](https://www.eurozine.com/authors/sekulic-dubravka/)

4 November 2015

There is now a need to readdress urban commons through the lens of the digital
commons, writes Dubravka Sekulic. The lessons to be drawn from the free
software community and its resistance to the enclosure of code will likely
prove particularly valuable where participation and regulation are concerned.

> Commons are a particular type of institutional arrangement for governing the
use and disposition of resources. Their salient characteristic, which defines
them in contradistinction to property, is that no single person has exclusive
control over the use and disposition of any particular resource. Instead,
resources governed by commons may be used or disposed of by anyone among some
(more or less defined) number of persons, under rules that may range from
"anything goes" to quite crisply articulated formal rules that are effectively
enforced.
> (Benkler 2003: 6)

The above definition of commons, from the seminal paper "The political economy
of commons" by Yochai Benkler, addresses any type of commons, whether analogue
or digital. In fact, the concept of commons entered the digital realm from
physical space in order to interpret the type of communities, relationships
and production that started to appear with the development of the free as
opposed to the proprietary. Peter Linebaugh charted in his excellent book
_Magna Carta Manifesto_, how the creation and development of the concept of
commons were closely connected to constantly changing relationships of people
and communities to the physical space. Here, I argue that the concept was
enriched when it was implemented in the digital field. Readdressing urban
space through the lens of digital commons can enable another imagination and
knowledge to appear around urban commons.

The
notion of commons in (urban) space is often complicated by archaic models of
organization and management - "the pasture we knew how to share". There is a
tendency to give the impression that the solution is in reverting to the past
models. In the realm of digital though, there is no "pasture" from the Middle
Ages to fall back on. Digital commons had to start from scratch and define its
own protocols of production and reproduction (caring and sharing). Therefore,
the digital commons and free software community can be the one to turn to, not
only for inspiration and advice, but also as a partner when addressing
questions of urban commons. Or, as Marcell Mars would put it "if we could
start again with (regulating and defining) land, knowing what we know now
about digital networks, we could come up with something much better and
appropriate for today's world. That property wouldn't be private, maybe not
even property, but something else. Only then can we say we have learned
something from the digital" (2013).

## Enclosure as the trigger for action

The moment we turn to commons in relation to (urban) space is the moment in
which the pressure to privatize public space and to commodify every aspect of
urban life has become so strong that it can be argued that it mirrors a moment
in which Magna Carta Libertatum was introduced to protect the basic
reproduction of life for those whose sustenance was connected to the common
pastures and forests of England in the thirteenth century. At the end of the
twentieth century, urban space became the ultimate commodity, and increasing
privatization not only endangered the reproduction of everyday life in the
city; the rent extraction through privatized public space and housing
endangered bare life itself. Additionally, the city's continuous privatization of its amenities transformed almost every action in the city, no matter how mundane - drinking a glass of water from a tap, for example -
into an action that creates profit for some private entity and extracts it
from the community. Thus every activity became labour, which a citizen-worker
is not only alienated from, but also unaware of. David Harvey's statement
about the city replacing the factory as a site of class war seems to be not
only an apt description of the condition of life in the city, but also a cry
for action.

Richard Stallman's foundational gesture in the creation of free software, the GNU/GPL (General Public Licence), was a reaction to the artificially imposed logic of scarcity in the world of code - and the
increasing and systematic enclosure that took place in the late 1970s and
1980s as "a tidal wave of commercialization transformed software from a
technical object into a commodity, to be bought and sold on the open market
under the alleged protection of intellectual property law" (Coleman 2012:
138). Stallman, who worked as a researcher at MIT's Artificial Intelligence
Laboratory, detected how "[m]any programmers are unhappy about the
commercialization of system software. It may enable them to make more money,
but it requires them to feel in conflict with other programmers in general
rather than feel as comrades. The fundamental act of friendship among
programmers is the sharing of programs; marketing arrangements now typically
used essentially forbid programmers to treat others as friends. The purchaser
of software must choose between friendship and obeying the law. Naturally,
many decide that friendship is more important. But those who believe in law
often do not feel at ease with either choice. They become cynical and think
that programming is just a way of making money" (Stallman 2002: 32).

In the period between 1980 and 1984, "one man [Stallman] envisioned a crusade
to change the situation" (Moglen 1999). Stallman understood that in order to
subvert the system, he would have to intervene in the protocols that regulate
the conditions under which the code is produced, and not the code itself;
although he did contribute some of the best lines of code into the compiler
and text editor - the foundational infrastructure for any development. The
gesture that enabled the creation of a free software community that yielded
the complex field of digital commons was not a perfect line of code. The
creation of the GNU General Public License (GPL) was a legal hack to counteract the imposition of intellectual property law on code. At that time, the only
license available for programmers wanting to keep the code free was public
domain, which gave no protection against the code being appropriated and
closed. The GPL enabled free code to become self-perpetuating. Everything built using free code had to be made available under the same conditions, in order to secure the freedom for programmers to continue sharing without breaking the law. "By working on and using GNU rather than proprietary programs, we can be
hospitable to everyone and obey the law. In addition, GNU serves as an example
to inspire and as a banner to rally others to join in sharing. This can give
us a feeling of harmony, which is impossible if we use software, which is not
free. For about half the programmers I talk to, this is an important happiness
that money cannot replace" (Stallman 2002: 33).
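To make the mechanism of this legal hack concrete, here is a minimal sketch of how the GPL is applied to a source file. The notice follows the standard form recommended by the Free Software Foundation; the file itself is a hypothetical illustration, not a quotation of any existing program.

```python
# free_hello.py - a hypothetical, trivially small GPL-licensed program.
# The legal hack lives in this header, not in the code below it.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Copyleft in one sentence: any distributed derivative of this file must
# be released under these same terms, so the freedom to share propagates
# together with the code.

def greet():
    # the program is deliberately minimal; the licence does the work
    return "free as in freedom"

if __name__ == "__main__":
    print(greet())
```

The decisive design choice is that the condition travels with every copy: unlike a public domain dedication, the GPL leaves downstream distributors no legal route to closing the code again.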

Architects and planners as well as environmental designers have for too long
believed the opposite, that a good enough design can subvert the logic of
enclosure that dominates the production and reproduction of space; that a good
enough design can keep space open and public by the sheer strength of spatial
intervention. Stallman rightfully understands that no design is strong enough
to keep private ownership from claiming what it believes belongs to it.
Digital and urban commons, despite operating in completely different realms
and economies, are under attack from the same threat of "market processes"
that "crucially depend upon the individual monopoly of capitalists (of all
sorts) over ownership of the means of production, including finance and land.
All rent, recall, is a return to the monopoly power of private ownership of
some crucial asset, such as land or a patent. The monopoly power of private
property is therefore both the beginning-point and the end-point of all
capitalist activity" (Harvey 2012: 100). Stallman envisioned a bleak future
(2003: 26-28) but found a way to "relate the means to the ends". He understood
that the emancipatory task of a struggle "is not only what has to be done, but
also how it will be done and who will do it" (Stavrides & De Angelis 2010: 7).
Thus, to produce the necessary requirements - both for a community to emerge,
but also for the basis of future protocols - tools and methodologies are
needed for the community to create both free software and itself.

## Renegotiating (undoing) property, hacking the law, creating community

Property, as an instrument of allocation of resources, is a right that is
negotiated within society and by society and not written in stone or given as
such. The digital, more than any other field, discloses property as being
inappropriate for contemporary relationships between production and
reproduction and, additionally, proves how it is possible to fundamentally
rethink it. The digital offers this possibility as it is non-material, non-
rival and non-exclusive (Meretz 2013), unlike anything in the physical world.
And Elinor Ostrom's lifelong empirical research gives grounds to the belief that eschewing property as the sole instrument of allocation can work even for rival, excludable goods.
The value of information in digital form is not flat, but property is not the
way to protect that value, as the music industry realized during the course of
the last ten years. Once the copy is _out there_, the cost of protecting its
exclusivity on the grounds of property becomes too high in relation to the
potential value to be extracted. For example, the value is extracted from
information through controlling the moment of its release and not through
subsequent exploitation. Stallman decided to tackle the imposition of the
concept of property on computer code (and by extension to the digital realm as
a whole) by articulating it in another field: just as property is the product
of constant negotiations within a society, so are legal regulations. After
some time, he was joined by "[m]any free software developers [who] do not
consider intellectual property instruments as the pivotal stimulus for a
marketplace of ideas and knowledge. Instead, they see them as a form of
restriction so fundamental (or poorly executed) that they need to be
counteracted through alternative legal agreements that treat knowledge,
inventions, and other creative expressions not as property but rather as
speech to be freely shared, circulated, and modified" (Coleman 2012: 26).

The digital sphere can give a valid example of how renegotiating regulation
can transform a resource from scarce to abundant. When the change from
analogue signal to packet switching began to take effect, the division of finite territory and the way the radio frequency spectrum was managed were renegotiated, and the number of slots that could be allocated grew by an order of magnitude while the absolute size of the spectrum stayed the same. This
shift enabled Brecht's dream of a two-sided radio to become reality, thus
enabling what he had suggested: "change this apparatus over from distribution
to communication".1
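A deliberately crude model can illustrate why this renegotiation multiplied the available slots. This is a toy statistical-multiplexing calculation, not a description of actual spectrum regulation, and the five per cent duty cycle is an assumed figure:

```python
# Toy comparison of circuit-style and packet-style sharing of one medium.
CHANNELS = 10  # the fixed "territory": the spectrum itself does not grow

def circuit_capacity(channels):
    # analogue allocation: one exclusive channel per conversation
    return channels

def packet_capacity(channels, duty_cycle=0.05):
    # packet switching: each conversation transmits only a fraction of
    # the time, so the idle time can carry other people's packets
    return int(channels / duty_cycle)

print(circuit_capacity(CHANNELS))  # 10 concurrent conversations
print(packet_capacity(CHANNELS))   # 200 on the same spectrum, by rule change
```

Nothing physical changes between the two functions; only the rule of allocation does.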

According to Lawrence Lessig, what regulates behavior in cyberspace is an
interdependence of four constraints: market, law, architecture and norms
(Lessig 2012: 121-25). Analogously, space can be put in place of cyberspace,
as the regulation of space is the sum of these four constraints. These four
constraints are in a dynamic relationship in which the balance can be tilted
towards one, depending on how much each of these categories puts pressure on
the other three. Changes in any one are reflected in the regulation of the whole.
"Architecture" in Lessig's theory should be understood broadly as the "built
environment" that regulates behaviour in (cyber)space. In the last few decades
we have experienced the domination of the market reconfiguring the basis of
norms, law and architecture. In order to counteract this, the other three
constraints need to be re-negotiated. In digital space, this reconfiguration
happened by declaring the code - that is, the set of instructions written as
highly formalized text in a specific programming language to be executed
(usually) by the computer - to be considered as speech before the law,
and by hacking the law in order to disrupt the way that property relationships
are formed.

To put it simply, in order to create a change in dynamics between the
architecture, norms and the market, the law had to be addressed first. This is
not a novel procedure, "legal hacking is going on all the time, it is just
that politics is doing it under the veil of legality because they are the
parliament, they are Microsoft, which can hire a whole law firm to defend them
and find all the legal loopholes. Legal hacking is the norm actually" (Bailey
2013). When it comes to physical space, one of the most obvious examples of
the reconfiguration of regulations under the influence of the market is to
create legal provisions, norms and architecture to sustain the concept of
developing (and privatizing) public space through public-private partnerships.
The decision of the Italian parliament that the privatization of services
(specifically of water management) is legal and does not obstruct one's access
to water as a human right, is another example of a crude manipulation of the
law by the state in favour of the market. Unlike legal hacks by corporations
that aim to create a favourable legal climate for another round of
accumulation through dispossession, Stallman's hack tries to limit the impact
of the market and to create a space of freedom for the creation of a code and
of sharable knowledge, by questioning one of the central pillars of liberal
jurisprudence: (intellectual) property law.

Similarly, translated into physical space, one of the initiatives in Europe
that comes closest to creating a real existing urban commons, Teatro Valle
Occupato in Rome, is doing the same, "pushing the borders of legality of
private property" by legally hacking the institution of a foundation to "serve
a public, or common, purpose" and having "notarized [a] document registered
with the Italian state, that creates a precedent for other people to follow in
its way" (Bailey 2013). Sounds familiar to Stallman's hack as the fundamental
gesture by which community and the whole eco-system can be formed.

It is obvious that, in order to create and sustain that type of legal hack, it
is a necessity to have a certain level of awareness and knowledge of how
systems, both political and legal, work, i.e. to be politically literate.
"While in general", says Italian commons-activist and legal scholar Saki
Bailey, "we've become extremely lazy [when it comes to politics]. We've
started to become a kind of society of people who give up their responsibility
to participate by handing it over to some charismatic leaders, experts of [a]
different type" (2013). Free software hackers, in order to understand and take
part in a constant negotiation that takes place on a legal level between the
market that seeks to cloister the code and hackers who want to keep it free,
had to become literate in an arcane legal language. Gabriella Coleman notes in
_Coding Freedom_ that hacker forums sometimes tend to produce legal analysis
that is just as serious as one would expect to find in a law office. Like the
occupants of Teatro Valle, free software hackers understand the importance of
devoting time and energy to understand constraints and to find ways to
structurally divert them.

This type of knowledge is not shared and created in isolation, but in
socialization, in discussions in physical or cyber spaces (such as #irc chat
rooms, forums, mailing lists…), the same way free software hackers share their
knowledge about code. Through this process of socializing knowledge, "the
community is formed, developed, and reproduced through practices focused on
common space. To generalize this principle: the community is developed through
commoning, through acts and forms of organization oriented towards the
production of the common" (Stavrides 2012: 588). Thus forming a community is
another crucial element of the creation of digital commons, but even more
important are its development and resilience. The emerging community was not
given something to manage; it created something together, and together devised
rules of self-regulation and decision-making.

The prime example of this principle in the free software community is the
Debian Project, formed around the development of the Debian Linux
distribution. It is a volunteer organization consisting of around 3,000
developers that since its inception in 1993 has defined a set of basic
principles by which the project and its members conduct their affairs. These include the procedures for introducing new people into the community and a founding document called the Debian Social Contract (DSC). A special part of the DSC, the Debian Free Software Guidelines, defines the criteria for "free software", thus regulating technical aspects of the project and also technical relations with the rest of the free software community. The Debian
Constitution, another document created by the community so it can govern
itself, describes the organizational structure for formal decision-making
within the project.

Another example is Wikipedia, where the community that makes the online
encyclopedia also takes part in creating regulations, with some aspects
debated almost endlessly on forums. It is even possible to detect a loose
community of "Internet users" who took to the streets all over the world when
SOPA (Stop Online Piracy Act) and PIPA (Preventing Real Online Threats to
Economic Creativity and Theft of Intellectual Property Act) threatened to
enclose the Internet, as we know it; the proposed legislation was successfully
contested.

Free software projects that represent the core of the digital commons are most
of the time born of the initiative of individuals, but their growth and life
cycle depend on the fact that they get picked up by a community or generate
community around them that is allowed to take part in their regulation and in
decisions about which shape and forms the project will take in the future.
This is an important lesson to be transferred to the physical space in which
many projects fail because they do not get picked up by the intended
community, as the community is not offered a chance to partake in its creation
and, more importantly, its regulation.

## Building common infrastructure and institutions

"The expansion of intellectual property law" as the main vehicle of the trend
to enclose the code that leads to the act of the creation of free software
and, thus, digital commons, "is part and parcel of a broader neoliberal trend
to privatize what was once under public or under the state's aegis, such as
health provision, water delivery, and military services" (Coleman 2012: 16).
The structural fight headed by the GNU/GPL against the enclosure of code
"defines the contractual relationship that serves to secure the freedom of
means of production and to constitute a community of those participating in
the production and reproduction of free resources. And it is this constitutive
character, as an answer to an every time singular situation of appropriation
by the capital, that is a genuine political emancipation striving for an equal
and free collective production" (Mars & Medak 2004). Thus digital commons "is
based on the _communication_ among _singularities_ and emerges through
collaborative social processes of production" (Negri & Hardt 2005: 204).

The most important lesson urban commons can take from its digital counterpart
is at the same time the most difficult one: how to make a structural hack in
the moment of the creation of an urban commons that will enable it to become
structurally self-perpetuating, thus creating fertile ground not only for a
singular spatialization of urban commons to appear, but to multiply and create
a whole new eco-system. Digital commons was the first field in which what
Negri and Hardt (2009: 3-21) called the "republic of property" was challenged.
Urban commons, in order to really emerge as a spatialization of a new type of
relationship, need to start undoing property as well in order to socially re-appropriate the city. Or, in the words of Stavros Stavrides: "the most urgent
and promising task, which can oppose the dominant governance model, is the
reinvention of common space. The realm of the common emerges in a constant
confrontation with state-controlled 'authorized' public space. This is an
emergence full of contradictions, perhaps, quite difficult to predict, but
nevertheless necessary. Behind a multifarious demand for justice and dignity,
new roads to collective emancipation are tested and invented. And, as the
Zapatistas say, we can create these roads only while walking. But we have to
listen, to observe, and to feel the walking movement. Together" (Stavrides
2012: 594).

The big task for both digital and urban commons is "[b]uilding a core common
infrastructure [which] is a necessary precondition to allow us to transition
away from a society of passive consumers buying what a small number of
commercial producers are selling. It will allow us to develop into a society
in which all can speak to all, and in which anyone can become an active
participant in political, social and cultural discourse" (Benkler 2003: 9).
This core common infrastructure has to be porous enough to include people that
are not similar, to provide "a ground to build a public realm and give
opportunities for discussing and negotiating what is good for all, rather than
the idea of strengthening communities in their struggle to define their own
commons. Relating commons to groups of "similar" people bears the danger of
eventually creating closed communities. People may thus define themselves as
commoners by excluding others from their milieu, from their own privileged
commons." (Stavrides 2010). If learning carefully from digital commons, urban
commons need to be conceptualized on the basis of the public, with a self-
regulating community that is open for others to join. That socializes
knowledge and thus produces and reproduces the commons, creating a space for
political emancipation that is capable of judicial arguments for the
protection and extension of regulations that are counter-market oriented.

## References

Bailey, Saki (2013): Interview by Dubravka Sekulic and Alexander de Cuveland.

Benkler, Yochai (2003): "The political economy of commons". _Upgrade_ IV, no.
3, 6-9, [www.benkler.org/Upgrade-
Novatica%20Commons.pdf](http://www.benkler.org/Upgrade-
Novatica%20Commons.pdf).

Benkler, Yochai (2006): _The Wealth of Networks: How Social Production
Transforms Markets and Freedom_. New Haven: Yale University Press.

Brecht, Bertolt (2000): "The radio as a communications apparatus". In: _Brecht
on Film and Radio_, edited by Marc Silberman. Methuen, 41-6.

Coleman, E. Gabriella (2012): _Coding Freedom: The Ethics and Aesthetics of
Hacking_. Princeton University Press / Kindle edition.

Hardt, Michael and Antonio Negri (2005): _Multitude: War and Democracy in the
Age of Empire_. Penguin Books.

Hardt, Michael and Antonio Negri (2011): _Commonwealth_. Belknap Press of
Harvard University Press.

Harvey, David (2012): "The Art of Rent". In: _Rebel Cities: From the Right to the City to the Urban Revolution_, 1st ed. Verso, 94-118.

Hill, Benjamin Mako (2012): Freedom for Users, Not for Software. In: Bollier,
David & Helfrich, Silke (Ed.): _The Wealth of the Commons: a World Beyond
Market and State_. Levellers Press / E-book.

Lessig, Lawrence (2012): _Code: Version 2.0_. Basic Books.

Linebaugh, Peter (2008): _The Magna Carta Manifesto: Liberties and Commons for
All_. University of California Press.

Mars, Marcell (2013): Interview by Dubravka Sekulic.

Mars, Marcell and Tomislav Medak (2004): "Both devil and gnu",
[www.desk.org:8080/ASU2/newsletter.Zarez.N5M.MedakRomicTXT.EnGlish](http://www.desk.org:8080/ASU2/newsletter.Zarez.N5M.MedakRomicTXT.EnGlish).

Martin, Reinhold (2013): "Public and common(s): Places: Design observer",
[placesjournal.org/article/public-and-
commons](https://placesjournal.org/article/public-and-commons).

Meretz, Stefan (2010): "Commons in a taxonomy of goods", [keimform.de/2010
/commons-in-a-taxonomy-of-goods](http://keimform.de/2010/commons-in-a
-taxonomy-of-goods/).

Mitrasinovic, Miodrag (2006): _Total Landscape, Theme Parks, Public Space_, 1st ed. Ashgate.

Moglen, Eben (1999): "Anarchism triumphant: Free software and the death of
copyright", First Monday,
[firstmonday.org/ojs/index.php/fm/article/view/684/594](http://firstmonday.org/ojs/index.php/fm/article/view/684/594).

Stallman, Richard and Joshua Gay (2002): _Free Software, Free Society:
Selected Essays of Richard M. Stallman_. GNU Press.

Stallman, Richard and Joshua Gay (2003): "The Right to Read". _Upgrade_ IV,
no. 3, 26-8.

Stavrides, Stavros (2012) "Squares in movement". _South Atlantic Quarterly_
111, no. 3, 585-96.

Stavrides, Stavros (2013): "Contested urban rhythms: From the industrial city
to the post-industrial urban archipelago". _The Sociological Review_ 61,
34-50.

Stavrides, Stavros, and Massimo De Angelis (2010): "On the commons: A public
interview with Massimo De Angelis and Stavros Stavrides". _e-flux_ 17, 1-17,
[www.e-flux.com/journal/on-the-commons-a-public-interview-with-massimo-de-
angelis-and-stavros-stavrides/](http://www.e-flux.com/journal/on-the-commons-a
-public-interview-with-massimo-de-angelis-and-stavros-stavrides/).

1 "[...] radio is one-sided when it should be two-. It is purely an apparatus for distribution, for mere sharing out. So here is a positive suggestion: change this apparatus over from distribution to communication". See "The radio as a communications apparatus", Brecht 2000.

Published 4 November 2015
Original in English
First published by dérive 61 (2015)

Contributed by dérive © Dubravka Sekulic / dérive / Eurozine



Constant
Tracks in electr(on)ic fields
2009


[Photo documentation of the festival, figures 1–146; the surviving captions follow.]

figure 1 E-traces: In the reductive world of Web 2.0 there are no insignificant actors because, once added up, everybody counts.
figure 3 Dmytri Kleiner: Web 2.0 is a business model, it capitalises on community created values.
figure 4 Christophe Lazaro: Sociologists and anthropologists are trying to stick the notion of ‘social network' to the specificities of digital networks, that is to say to their horizontal character.
figure 5 The Robot Syndicat: Destined to survive collectively through multi-agent systems and colonies of social robots.
figure 12 Destination port: Every single passing of a visitor triggers the projection of a simultaneous registration.
figure 19 Doppelgänger: The electronic double (duplicate, twin) in a society of control and surveillance.
figure 20 CookieSensus: Cookies found on washingtonpost.com ...
figure 21 ... and cookies sent by tacodo.net.
figure 22 Image Tracer: Images and data accumulate into layers as the query is repeated over time.
figure 23 Shmoogle: In one click, Google hierarchy crumbles down.
figure 24 Jussi Parikka: We move onto a baroque world, a mode of folding and enveloping new ways of perception and movement.
figure 26 Extended Speakers: A netting of thin metal wires suspends from the ceiling of the haunted house in the La Bellone courtyard.
figure 80 Elgaland-Vargaland: Since November 2007, the Embassy permanently resides in La Bellone.
figure 81 Ambassadors Yves Poliart and Wendy Van Wynsberghe.
figure 87 It could be the result of psychic echoes from the past, psychokinesis, or the thoughts of aliens or nature spirits.
figure 89 Manu Luksch: Our digital selves are many-dimensional, alert, unforgetting.
figure 103 Audio-geographic dérive: Listening to the electro-magnetic spectrum of Brussels.
figure 113 Michael Murtaugh: Rather than talking about leaning forward or backward, a more useful split might be between reading and writing.
figure 115 Adrian Mackenzie: This opacity reflects the sheer number of operations that have to be compressed into code ...
figure 116 ... in order for digital signal processing to work.
figure 119 Sabine Prokhoris and Simon Hecquet: What happens precisely when one decides to consider these margins, these ‘supplements', as full-grown creations – neither slave, nor attachment?
figure 120 Praticable: Making the body as a locus of knowledge production tangible.
figure 126 Mutual Motions Video Library: A physical exchange between existing imagery, real-time interpretation, experiences and context.
figure 127 Modern Times: His gestures are burlesque responses to the adversity in his life, or just plain ‘exuberant'.
figure 131 Michael Terry: We really want to have lots of people looking at it, and considering it, and thinking about the implications.
figure 133 Görkem Çetin: There's a lack of a usability bug reporting tool which can be used to submit, store, modify and maintain user-submitted videos, audio files and pictures.
figure 134 Simon Yuill: It is here where contingency and notation meet, but it is here also that error enters.
figure 144 Séverine Dusollier: I think amongst many of the movements that are made, most are not ‘a work', they are subconscious movements, movements that are translations of gestures that are simply banal or necessary.
figure 146 Sadie Plant: It is this kind of deep collectivity, this profound sense of micro-collaboration, which has often been tapped into.

Verbindingen/Jonctions 10
Tracks in electr(on)ic fields
EN, NL, FR

Introduction EN, NL, FR 25
E-Traces EN, NL, FR 35
Nicolas Malevé, Michel Cleempoel: E-traces en contexte NL, FR 38
Dmytri Kleiner, Brian Wyrick: InfoEnclosure 2.0 NL 47
Christophe Lazaro 58
Marc Wathieu 65
Michel Cleempoel: Destination port FR 70
Métamorphoz: Doppelgänger EN, NL, FR 71
Andrea Fiore: Cookiesensus FR, NL, EN 73
Tsila Hassine: Shmoogle and Tracer EN 75
Around us, magnetic fields resonate unseen waves 77
Jussi Parikka: Insects, Affects and Imagining New Sensoriums EN 81
Pierre Berthet: Concert with various extended objects EN, NL, FR 93
Leif Elgren, CM von Hausswolff: Elgaland-Vargaland EN, NL, FR 95
CM von Hausswolff, Guy-Marc Hinant: Ghost Machinery EN, NL 98
Read Feel Feed Real EN, NL, FR 101
Manu Luksch, Mukul Patel: Faceless: Chasing the Data Shadow EN 104
Julien Ottavi: Electromagnetic spectrum Research code 0608 FR 119
Michael Murtaugh: Active Archives or: What's wrong with the YouTube documentary? EN 131
Mutual Motions EN, NL, FR 139
Femke Snelting NL 143
Adrian Mackenzie: Centres of envelopment and intensive movement in digital signal processing EN 155
Elpueblodechina: El Curanto EN 174
Alice Chauchat, Frédéric Gies 181
Dance (notation) EN 184
Sabine Prokhoris, Simon Hecquet 188
Mutual Motions Video Library EN, NL, FR 198
Inès Rabadan: Does the repetition of a gesture irrevocably lead to madness? 215
Michael Terry (interview): Data analysis as a discourse EN 217
Sadie Plant: A Situated Report 275
Biographies EN 287
License register EN, NL, FR 311
Vocabulary 313
The Making-of EN 323
Colophon 331

EN

Introduction

Tracks in electr(on)ic fields documents the 10th edition of Verbindingen/Jonctions, the bi-annual multidisciplinary festival of the same name organised by Constant, association for arts and media. The festival is a meeting point for a diverse public that, from an artistic, activist and/or theoretical perspective, is interested in experimental reflections on technological culture.
Not for the first time, but during this edition more explicitly than ever, we put the question of the interaction between body and technology on the table. How to think about the actual effects of surveillance, the ubiquitous presence of cameras and public safety procedures that can only regard individuals as an amalgam of analysable data? What is the status of ‘identity' when it appears both elusive and unchangeable? How are we conditioned by the technology we use? What is the relationship between commitment and reward? Between flexibility of work and a healthy life? What traces does technology leave in our thinking, our behaviour, our routine movements? And what residue do we ourselves leave behind on electr(on)ic fields through our presence in forums, social platforms, databases and log files?
The dual nature of the term ‘notation' formed an important source of inspiration. The systems that choreographers, composers and computer programmers use to record ideas and observations can also be interpreted as instructions, as commands which put an actor, a piece of software, a performing artist or a machine into motion. From punch card to musical scale, from programming language to Laban notation, we were interested in the standards and protocols needed to make such documents work. This was the reason for organising

the festival inside the documentation centre, library and workshop for theatre and dance, ‘maison du spectacle' La Bellone. Located in the heart of Brussels, La Bellone offered hospitality to a diverse group of thinkers, dancers, artists, programmers, interface designers and others, and its meticulously renovated 17th-century façade formed the perfect backdrop for this intense programme.
Throughout the festival we worked with a number of themes, meant not to isolate areas of thinking, but rather to act as ‘spider threads' interlinking various projects:
E-traces (p. 35) subjected the current reality of Web 2.0 to a number of critical considerations. How do we regain control over the abundant data correlations that mega-companies such as Google and Yahoo produce in exchange for our usage of their services? How do we understand ‘service' when we are confronted with their corporate Janus face: on one side a friendly interface, on the other Machiavellian user licenses?
Around us, magnetic fields resonate unseen waves (p. 77) took the ghostly presence of technology as a starting point, and Read Feel Feed Real (p. 101) listened to unheard sounds and looked behind the curtains in Do-It-Yourself, walks and urban interventions. Through the analysis of radio waves and their use in artistic installations, by making electro-magnetic fields audible, we made unexplained phenomena tangible.
As machines learn about bodies, bodies learn about machines, and the movements that emerge as a result are not readily reduced to cause and effect. Mutual movements (p. 139) started in the kitchen, the perfect place to

reconsider human-machine configurations, without having
to separate these from everyday life and the patterns that
are ingrained in it. Would a different idea of ‘user' also
change our approach to ‘use'?
At the end of the adventure, Sadie Plant remarked in her ‘situated report' on Tracks in electr(on)ic fields (p.
275): “It is ultimately very difficult to distinguish between
the user and the developer, or the expert and the amateur. The experiment, the research, the development is
always happening in the kitchen, in the bedroom, on the
bus, using your mobile or using your computer. (...) this
sense of repetitive activity, which is done in many trades
and many lines, and that really is the deep unconscious
history of human activity. And arguably that's where the
most interesting developments happen, albeit in a very unsung, unseen, often almost hidden way. It is this kind of
deep collectivity, this profound sense of micro-collaboration, which has often been tapped into.”
Constant, October 2009

EN

E-Traces
How does the information we enter into search engines circulate? What happens to the data we enter into the social networking sites, health records, news sites, forums and chat services we use? Who is interested? How does the ‘market' of the electronic profile function? These questions constitute the framework of the E-traces project.
For this, we started to work on Yoogle!, an online game. This game, still in an early phase of development, will allow users to play with the parameters of the Web 2.0 economy and to exchange roles between the different actors of this economy. We presented a first demo of the game, accompanied by a public discussion with lawyers, artists and developers. The discussion and lecture were meant to analyse more deeply the mechanisms of this economy behind its friendly interface, the speculation on profiling and the exploitation of free labour, but also to develop the scenario of the game further.


DMYTRI KLEINER, BRIAN WYRICK
License: Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use encouraged. Attribution optional.
Text first published in English in Mute: http://www.metamute.org/InfoEnclosure-2.0. For translations in
Polish and Portuguese, see http://www.telekommunisten.net



MICHEL CLEEMPOEL
License: Free Art License

EN

Destination port
During the Jonctions festival, Destination port registered the flux of visitors in the entrance hall of La Bellone. Every single passing of a visitor triggered the projection of a simultaneous registration in the hall, superimposed on previously captured images of visitors, thus creating temporary and unlikely encounters between persons.

Doppelgänger
Founded in September 2001 and represented here by Valérie Cordy and Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary association that creates installations, spectacles and transdisciplinary performances mixing artistic experiments and digital practices.
With the project Doppelgänger, the MéTAmorphoZ collective focuses on the theme of the electronic double (duplicate, twin) in a society of control and surveillance.
“Our electronic identity, symbol of this new society of control, duplicates our organic and social identity. But isn't this legal obligation to be assigned a unique, stable and unforgeable identity, in the end, a danger for our fundamental freedom to claim identities which are irreducibly multiple for each of us?”

ANDREA FIORE
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Cookiecensus
Although still largely perceived as a private activity, web surfing leaves persistent trails. While users browse and interact through the web, sites watch them read, write, chat and buy. Even a little web publishing experience is enough to see that most web servers record ‘by default' their entire clickstream in persistent ‘log' files.
‘Web cookies' are a sort of digital label sent by websites to web browsers in order to assign them a unique identity and automatically recognize their users over several visits. Today this technology, which was introduced with the first version of the Netscape browser in 1994, constitutes the de facto standard upon which a wide range of interactive functionalities are built that were not conceived in the early web protocol design. Think, for example, of user accounts and authentication, personalized content and layouts, e-commerce and shopping carts.
While it has undeniably contributed to the development and the social spread of the new medium, web cookie technology must still be considered problematic. Especially the so-called ‘third-party cookies' issue – a technological loophole enabling marketers and advertisement firms to invisibly track users over large networks of syndicated websites – has been the object of serious controversy, involving a varied set of actors and stakeholders.
Cookiecensus is a software prototype: a wannabe info tool for studying electronic surveillance in one of its natively digital environments. Its core functionality consists of mapping and analyzing third-party cookie distribution patterns within a given web, in order to identify its trackers and its network of syndicated sites. A further feature of the tool is the possibility to inspect the content of a web page in relation to its third-party cookie sources.


It is an attempt to deconstruct the perceived unity and consistency of web pages by making their underlying content assemblages and the related attention flows visible.
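
To make the mechanism concrete, here is a minimal sketch in the spirit of what Cookiecensus describes, not the tool's actual code: it fetches a handful of pages, extracts the hosts they embed resources from, and reports hosts that recur across unrelated sites, i.e. the candidate trackers. The URLs and the function name are illustrative assumptions.

```python
# Hypothetical sketch, not Cookiecensus itself: map which third-party
# hosts a set of pages embed resources from. Hosts recurring across
# many sites are candidate trackers (the parties in a position to set
# third-party cookies).
import re
import urllib.request
from urllib.parse import urlparse
from collections import defaultdict

SRC = re.compile(r'(?:src|href)=["\'](https?://[^"\']+)["\']', re.I)

def third_party_hosts(pages):
    trackers = defaultdict(set)  # embedded host -> sites embedding it
    for page in pages:
        site = urlparse(page).hostname
        try:
            html = urllib.request.urlopen(page, timeout=10).read()
        except OSError:
            continue  # unreachable site: skip it
        for url in SRC.findall(html.decode('utf-8', 'replace')):
            host = urlparse(url).hostname
            if host and host != site:  # resource served by a third party
                trackers[host].add(site)
    return trackers

if __name__ == '__main__':
    # Hosts appearing under several sites sketch the 'network of
    # syndicated sites' described above.
    for host, sites in sorted(third_party_hosts(
            ['http://example.com', 'http://example.org']).items()):
        print(host, '<-', ', '.join(sorted(sites)))
```

A fuller crawler would also record the Set-Cookie headers of those embedded resources; but the syndication map alone already shows how a single advertising host can follow a user across otherwise unconnected websites.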


TSILA HASSINE
License: Free Art License
EN

Shmoogle and Tracer
What is Shmoogle? Shmoogle is a Google randomizer. In one click, the Google hierarchy crumbles. Results that are usually exiled to pages beyond user attention get their ‘15 seconds of PageRank fame'. While also being a useful tool for internet research, Shmoogle is a comment, a constant reminder that the Google order is not necessarily ‘the good order', and that sometimes chaos is more revealing than order. While Google serves its users information ready for immediate consumption, Shmoogle forces its users to scroll down and make their own choices. If Google is a search engine, then Shmoogle is a research engine.
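
The principle is simple enough to state in a few lines of code. The following is a sketch of the idea only, assuming nothing about Shmoogle's actual implementation: take a rank-ordered result list and return it shuffled, so that position on the page no longer encodes PageRank authority.

```python
# Sketch of the Shmoogle principle (not its actual implementation):
# de-rank an ordered result list so position no longer signals rank.
import random

def shmoogle(ranked_results, seed=None):
    """Return the results in random order instead of rank order."""
    shuffled = list(ranked_results)
    random.Random(seed).shuffle(shuffled)
    return shuffled

# A stand-in for a Google result page: result 40 now has the same
# chance of being read first as result 1.
for hit in shmoogle(['result %d' % i for i in range(1, 41)]):
    print(hit)
```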


In Image Tracer, order is important. Image Tracer is a collaboration between artist group De Geuzen and myself. Tracer was born
out of our mutual interest in the traces images leave behind them on
their networked paths. In Tracer images and data accumulate into
layers as the query is repeated over time. Boundaries between image
and data are blurred further as the image is deliberately reduced to
thumbnail size, and emphasis is placed on the image's context, the
neighbouring images, and the metadata related to that image. Image Tracer builds up an archive of juxtaposed snapshots of the web.
As these layers accumulate, patterns and processes reveal themselves,
and trace a historiography in the making.
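
Tracer's accumulation principle can be sketched as a data structure: each run of the query appends a timestamped layer, and reading the layers back side by side is what turns snapshots into a historiography. The format and names below are illustrative assumptions, not the tool's own.

```python
# Hypothetical sketch of Image Tracer's layering: append one
# timestamped snapshot of a query's results per run, then read the
# accumulated layers back for comparison over time.
import json, time

def record_layer(query, results, log_path='tracer_log.jsonl'):
    """Append a layer: the query, when it ran, and its results
    (thumbnail URL plus the surrounding metadata/context)."""
    with open(log_path, 'a') as log:
        log.write(json.dumps({
            'query': query,
            'taken_at': time.strftime('%Y-%m-%dT%H:%M:%S'),
            'results': results,  # e.g. [{'thumb': url, 'source': page}]
        }) + '\n')

def layers(query, log_path='tracer_log.jsonl'):
    """Return all recorded layers for a query, oldest first."""
    with open(log_path) as log:
        snapshots = [json.loads(line) for line in log]
    return [snap for snap in snapshots if snap['query'] == query]
```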


Around us, magnetic fields resonate unseen waves
EN, NL, FR

In computer terminology many words refer to chimerical images, such as bots, demons and ghosts. Dr. Konstantin Raudive, a Latvian psychologist, and the Swedish film producer Friedrich Jürgenson went a step further and explored the territory of Electronic Voice Phenomena (EVP): speech or speech-like sounds that can be heard on electronic recordings but that were not present at the time the recording was made. Some believe these could be of paranormal origin.
For this part of the V/J10 programme, we chose a metaphorical approach, working with bodiless entities and hidden processes, finding inspiration in The Embassy of Elgaland-Vargaland, the semi-fictional kingdoms consisting of all Border Territories (Geographical, Mental & Digital). These kingdoms were founded by Leif Elgren and CM von Hausswolff. Elgren stated that: “All dead people are inhabitants of the country Elgaland-Vargaland, unless they stated that they did not want to be an inhabitant”.


JUSSI PARIKKA
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Insects, Affects and Imagining New Sensoriums


A Media Archaeological Rewiring
from Geniuses to Animals
An insect media artist or a media archaeologist imagining a potential weird medium might end up with something that sounds quite mundane to us humans. For the insect probe head, the question of what it feels like to perceive with two eyes and ears and move with two legs would be a novel one, instead of the multiple legs and compound eyes that it has to use to manoeuvre through space. The uncanny formations often used in science fiction to describe something radically inhuman (like the killing-machine insects of the Alien movies) differ from the human being in their anatomy, behaviour and morals. The human brain might be a much more efficient problem solver, the human hands are quite handy tool-making metatools, and the human body could be seen as an original form of any model of technics, as Ernst Kapp already suggested by the end of the 19th century. But still, such realisations do not take away the fascination that emerges from the question of what it would be like to move, perceive and think differently; of what a becoming-animal entails.
I am of course taking my cue here from the philosopher Manuel DeLanda, who in his 1991 book War in the Age of Intelligent Machines asked what the history of warfare would look like from the viewpoint of a future robot historian. An exercise perhaps in creative imagination, DeLanda's question also served other ends relating to the physics of self-organization. My point is not to discuss DeLanda, or the history of war machines, but I want to pick an idea from this kind of approach, an idea that could be integrated into media archaeological considerations concerning actual or imaginary media. As already said, imagining alternative worlds is not the endpoint of this exercise

in ‘insect media', but a way to dip into an alternative understanding of media and technology, where such general categories as ‘humans' and ‘machines' are merely the endpoints of intensive flows, capacities, tendencies and functions. Such a stance takes much of its force from Gilles Deleuze's philosophical ontology of abstract materialism, which focuses primarily on a Spinozian ontology of intensities, capacities and functions. In this sense, the human being is not a distinct being in the world with secondary qualities, but a “capacity to signify, exchange, and communicate”, as Claire Colebrook has pointed out in her article ‘The Sense of Space' (Postmodern Culture). This opens up a new agenda focused not on ‘beings' and their tools, but on the capacities and tendencies that construct and create beings, in a move which emphasizes Deleuze's interest in the pre-Kantian worlds of the baroque. In addition, this move includes a multiplication of the subjectivities and objects of the world, a certain autonomy of the material world beyond the privileged observer. As everybody who has done gardening knows: there is a world teeming with life outside the human sphere, with every bush and tree being a whole society in itself.
To put it briefly, still following Colebrook's recent writing on the concept of affect, what Deleuze found in the baroque worlds of windowless monads was a capacity of perception that does not stem from a universalising idea of perception in general. Man, or any general condition of perception, is not the primary privileged position of perception; rather, perceptions and creations of space and temporality are multiplied in the numerous monadic worlds, a distributed perception of a kind that, according to Deleuze, later found resonance in the philosophy of A. N. Whitehead. For Whitehead, the perceiving subject is more akin to a ‘superject', a second-order construction from the sum of its perceptions. It is the world perceived that makes up superjects, and based on the variations of perceptions, also alternative worlds. Baroque worlds, argues Deleuze in his book Le Pli from 1988, are characterised by the primacy of variation and perspectivism, which is a much more radical notion than the relativist idea of different subjects having different perspectives on the world. Instead, “the subject will be what comes to the point of view”, and “the point of view is not what varies with the subject, at least in the first instance; it is, to

the contrary, the condition in which an eventual subject apprehends a variation (metamorphosis)...”.
Now why this focus on philosophy, this short excursion that merely sketches some themes around variation and imagination? What I am after is an idea of how to smuggle certain ideas of variation, modulation and perception into considerations of media culture, media archaeology and potentially also imaginary media, where imaginary media become less a matter of a Lacanian mirror phase looking for utopian communication offering unity, and more a deterritorialising way of understanding the distributed ontology of the world and media technologies. Variation and imagination become something other than the imaginations of a point of view; quite the contrary, imagination and variation give rise to points of view, which opens up a whole new agenda of a past paradoxically not determined and, even further, of a future open to variation. This would mean taking into account perceptions unheard of, unfelt, unthought-of, but still real in their intensive potentiality: a becoming-other of the sensorium, so to speak. Hence, imagination becomes not a human characteristic but an epistemological tool that interfaces the analytics of media theory and history with the world of animals and novel affects.
Imaginary media and the variations at the heart of media cultural modes of seeing and hearing have been discussed in various recent books. The most obvious one is The Book of Imaginary Media, edited by Eric Kluitenberg. According to the introduction, all media consist of a real and an imagined part, a functional coupling of material characteristics and discursive dreams which fabricate the crucial features of modern communication, tied intimately to utopian ideals. Imaginary media – or actual media imagined beyond their real capacities – have been dreamed up to compensate for insufficient communication, a realisation that Kluitenberg elaborates with the argument that “central to the archaeology of imaginary media in the end are not the machines, but the human aspirations that more often than not are left unresolved by the machines...”. Powers of imagination are then based in the human beings doing the imagining, in the human powers able to transcend the actual and factual ways of perception and to


grasp the unseen, unheard and unthought-of media creations. Variation remains connected to the principle of the central point where variation is perceived.
Talking of the primacy of variation, we are easily reminded of Siegfried Zielinski's application of the idea of ‘variantology' as an ‘anarchaeology of media', a task dedicated to the primacy of variation, resisting the homogenising drive of commercialised media spheres. Excavating the dreams of past geniuses, from Empedocles to Athanasius Kircher's cosmic machines and communication networks to Ernst Florens Friedrich Chladni's visualisation of sound, Zielinski has been underlining the creative potential in the exercise of imagining media. In this context, he gives a threefold definition of the term ‘imaginary media' in his chapter in the Book of Imaginary Media:
• Untimely media/apparatus/machines: “Media devised and designed either much too late or much too early...”
• Conceptual media/apparatus/machines: “Artefacts that were only ever sketched as models... but never actually built.”
• Impossible media/apparatus/machines: “Imaginary media in the true sense, by which I mean hermetic and hermeneutic machines... they cannot actually be built, and whose implied meanings nonetheless have an impact on the factual world of media.”
A bit reminiscent of the baroque idea, variation is primary, claims Zielinski. Whereas capitalist-oriented consumer media culture is working towards a psychopathia medialis of homogenised media technological environments, variantology is committed to promoting heterogeneity, finding the dynamic moments of the media archaeological past, and excavating radical experiments that push the limits of what can be seen, heard and thought. Variantology is then implicitly suggested as a mode of ontogenesis, of bringing forth, of modulation and change – an active mode of creation instead of distanced contemplation.
Indeed, the aim of promoting diversity is a most welcome one, but I would like to propose a slight adjustment to this task, something that I engage with under the banner of ‘insect media'. Whereas Zielinski and much of the existing media archaeological research still

starts off from the human world of male inventor-geniuses, I propose a slightly more distributed look at the media archaeology of affects, capacities, modes of perception and movement, which are primarily not attached to a specific substance (animal, technology), but since the 19th century at least refer to a certain passage, a vector from animals to technology and vice versa. Here, a mode of baroque thought, a thought tuned in terms of variations, becomes unravelled with the help of an animality that is not to be seen as a metaphor but as a metamorphosis, as ‘teachings' in weird perceptions, novel ways of moving, new ways of sensing, opening up to the world of sensations and contracting them. Instead of looking for variations through the inventions of people, we can turn to the ‘storehouses of invention' of, for example, insects, which from the 19th century on were introduced as an alien form of media in themselves. Next I will elaborate how we can use these tiny animals as philosophical and media archaeological tools to address media and technology as intensities that signal weird sensory experiences.
Novel Sensoriums

During the latter half of the 19th century, insects were seen as uncanny but powerful forms of media in themselves, capable of weird sensory and kinaesthetic experiences. Examples range from popular newspaper discourse to scientific measurements and such early best-sellers as An Introduction to Entomology; or, Elements of the Natural History of Insects: Comprising an Account of Noxious and Useful Insects, of Their Metamorphoses, Hybernation, Instinct (1815–1826) by William Kirby and William Spence.
Since the 19th century, insect and animal affects are found not only in biology but also in art, technology and popular culture. In this sense, the 19th-century interest in insects offers a valuable perspective on the intertwining of biology (entomology), technology and art, where the basics of perception are radically detached from human-centred models and moved towards the animal kingdom. In addition, this science-technology-art trio presents a challenge to rethink the forces which form what we habitually refer to as ‘media' as modes of perception. By expanding our notions of ‘media' from the technological

apparatuses to the more comprehensive assemblages that connect biological, technological, social and aesthetic issues, we are also able to bring forth novel contexts for the contemporary analysis and design of media systems. In a way, then, the concept of the ‘insect' functions here as a displacing and deterritorialising force that questions where, and under what kind of conditions, we approach media technologies. This is perhaps an approach that moves beyond a focus on technology per se, but still does not remain blind to the material forces of the world. It presents an alternative to the ‘substance-approaches' that start from a stability or a ground like ‘technology' or ‘humans'. It is my claim that Deleuzian biophilosophy, which has taken elements from Spinozian ontology, von Uexküll's ethology, Whitehead's ideas, as well as Simondon's notions of individuation, is able to approach the world as media in itself: a contracting of forces and an analysis of them in terms of their affects, movements, speeds and slownesses. These affects are the primary defining capacities of an entity, instead of a substance or a class it belongs to, as Deleuze explains in his short book Spinoza: Practical Philosophy. From this perspective we can adopt a novel media archaeological rewiring that looks at media history not as one of inventors, geniuses and solid technologies, but as a field of affects, interactions and modes of sensation and perception.
Examples from 19th-century popular discourse are illustrative. In 1897, the New York Times addressed spiders as ‘builders, engineers and weavers', and also as ‘the original inventors of a system of telegraphy'. Spiders' webs offer themselves as ingenious communication systems which do not merely signal according to a binary setting (something has hit the web / has not hit the web) but transmit information regarding the “general character and weight of any object touching it (...)”. Or take, for example, the book Beautés et merveilles de la nature et des arts by Eliçagaray from the 18th century, which lists both technological and animal wonders, for example bees and ants, electricity and architectural constructions, as marvels of artifice and nature.
Similar accounts abound from the mid-19th century on. Insects sense, move, build, communicate and even create art in various ways that raised wonder and awe, for example in U.S. popular culture. An apt

example of the 19th-century insect mania is the New York Times story (May 29, 1880) about the ‘cricket mania' of a certain young lady who collected and trained crickets as musical instruments:
200 crickets in a wirework-house, filled with ferns and shells, which she called a ‘fernery'. The constant rubbing of the wings of these insects, producing the sounds so familiar to thousands everywhere, seemed to be the finest music to her ears. She admitted at once that she had a mania for capturing crickets.
Besides such entertainments, and in a much earlier framework, the classic of modern entomology, the aforementioned An Introduction to Entomology by Kirby and Spence, already implicitly presented throughout its four-volume best-seller the idea of a primitive technics of nature: insect technics that were immanent to their surroundings.
Kirby and Spence's take probably attracted the attention it did because of its catchy language, but also because of what could be called its ethological touch. Insects were approached as living and interacting entities that are intimately coupled with their environment. Insects intertwine with human lives (“direct and indirect injuries caused by insects, injuries to our living vegetable property but also direct and indirect benefits derived from insects”), but also engage in ingenious building projects, stratagems, sexual behaviour and other expressive modes of motion, perception and sensation. Instead of offering a taxonomic account of the interrelations between insect species, their forms, growth or, for example, structural anatomy, An Introduction to Entomology (vol. 1) is traversed by a curiosity-cabinet kind of touch on the ethnographics of insects. Here, insects are, for example, war machines, like the horse-fly (Tabanus L.): “Wonderful and various are the weapons that enable them to enforce their demand. What would you think of any large animal that should come to attack you with a tremendous apparatus of knives and lancets issuing from its mouth?”
From Kirby and Spence to later entomologists and other writers, insects' powers of building continuously attracted the early entomological gaze. The buildings of nature were described as more fabulous than

the pyramids of Egypt or the aqueducts of Rome. Suddenly, in this weird parallel world, such minuscule and admittedly small-brained entities as termites were pictured as akin to the ancient monarchies and empires of Western civilization. The Victorian appreciation of ancient civilization could also incorporate animal kingdoms and their buildings of monarchic measurements. Perhaps the parallel was not to be taken literally, but in any case it expressed a curious interest in microcosmic worlds. A recurring trope was that of ‘insect geometrics', which seemed, with an accuracy paralleled only in mathematics, to follow and fold nature's resources into micro-versions of the emerging urban culture. To quote Kirby and Spence's An Introduction to Entomology, vol. 2:
No thinking man ever witnesses the complexness and yet regularity and efficiency of a great establishment, such as the Bank of England or the Post Office, without marvelling that even human reason can put together, with so little friction and such slight deviations from correctness, machines whose wheels are composed not of wood and iron, but of fickle mortals of a thousand different inclinations, powers, and capacities. But if such establishments be surprising even with reason for their prime mover, how much more so is a hive of bees whose proceedings are guided by their instincts alone!
Whereas the imperialist powers of Europe headed for overseas conquests, the mentality of exposition and of mapping new terrains also turned towards fields other than the geographical. The Seeing Eye – a key figure of hierarchical modern power – could also be a non-human eye, as with the fly, which according to Steven Connor can be seen as the recurring figure of a “radically alien mode of entomological vision”, with its huge eyes consisting of 4,000 sensors. Hence, it is fitting that in 1898 the idea of “photographing through a fly's eye” was suggested as a mode of experimental vision, able also to catch Queen Victoria with “the most infinitesimal lens known to science”, that of a dragonfly.


Jean-Jacques Lecercle explains how the Victorian enthusiasm for entomology and insect worlds is related to the general discourse of natural history that, as a genre, labelled the century. Through the themes of ‘exploration' and ‘taxonomy', Lecercle claims, Alice in Wonderland can be read as a key novel of the era in its evaluation and classification of various life worlds beyond the human. Like Alice in the 1865 novel, new landscapes and exotic species are offered for armchair exploration of worlds that are not merely extensive but are also opened up by an intensive gaze into microcosms. Uncanny phenomenal worlds are what tie together the entomological quest, Darwinian-inspired biological accounts of curious species, and Alice's adventures into imaginative worlds of twisting logic. In taxonomic terms, the entomologist is surrounded by a new cult of private and public archiving. New modes of visualizing and representing insect life produce a new phase of taxonomy, which becomes a public craze instead of merely a scientific tool. Again, the wonder worlds of Alice or of Edward Lear, the Victorian nonsense poet, are the ideal point of reference for the 19th-century natural historian and entomologist, as Lecercle writes:
And it is part of a craze for discovering and classifying new species. Its advantage over natural history is that it can invent those species (like the Snap-dragon-fly) in the imaginative sense, whereas natural history can invent them only in the archaeological sense, that is discover what already exists. Nonsense is the entomologist's dream come true, or the Linnaean classification gone mad, because gone creative (...)
For Alice, the feeling of not being herself and “being so many different sizes in a day is very confusing”, which of course is something incomprehensible to the Caterpillar she encounters. It is not queer for the Caterpillar, whose mode of being is defined by metamorphosis and the various perception/action modulations it brings about. It is only the suddenness of her becoming-insect that dizzies Alice. A couple of years later, in The Population of an Old Pear-Tree, or Stories of Insect Life (1870), an everyday meadow is disclosed as a vivacious microcosm in itself. The harmonious scene, “like a great

amphitheatre”, is filled with life that easily escapes the (human) eye. Like Alice, the protagonist wandering in the meadow is “lulled and benumbed by dreamy sensations”, which however suddenly transport him into new perceptions and bodily affects. What is revealed to our boy hero in this educational novel, fashioned in the style of travel literature (connecting it to the colonialist contexts of its age), is a world teeming with sounds, movements, sensations and insect beings (huge spiders, cruel mole-crickets, energetic bees) that are beyond the human form (despite the constant tension in such narratives, as educational and moralising tales, of anthropomorphizing affective qualities into human characteristics). True to entomological classification, a big part is reserved for the structural-anatomical differences of insect life, but the affect-life of how insects relate to their surroundings is also under scrutiny.
As precursors of ethology, such natural historical quests (whether archaeological, entomological or imaginative) expressed an appreciation of phenomenal worlds differing from that of the human, with its two hands, two eyes and two feet. In a way, this entailed a kind of extended Kantianism, interested not only in the conditions of possibility of experience, but in the emergence of alternative potentials on the immanent level of a life that functions through a technics of nature. Curiously, the inspiration of new phenomenal worlds was connected to the emergence of new technologies of movement, sensation and communication (all challenging the Kantian apperception of Man as the historically constant basis of knowledge and perception). Nature was gradually becoming the “new storehouse of invention” (New York Times, August 4, 1901) that was to entice inventors into perfecting their developments. What I argue is that this theme can also be read as an expression of a shift in the understanding of technology: a shift that marked the rise of the modern discourse concerning media technologies at the end of the 19th century, and that has usually been attributed to an anthropological and ethnological turn in the understanding of technology. I also address this theme in another text of mine, ‘Insect Technics'. For several writers, such as Ernst Kapp, who became one of the predecessors of later theories of media as ‘extensions of man', it was the human body that served as a storehouse

of potential media. However, at the same time, another undercurrent proposed to think of technologies, inventions and solutions to the problems posed by life as stemming from a quite different class of bodies, namely insects.
So beyond Kant, we move on to a baroque world, not as a period of art, but as a mode of folding and enveloping new ways of perception and movement. The early years and decades of technical media were characterized by a new imaginary of communication, from the work of inventors such as Nikola Tesla to the various modes of, for example, spiritualism recently analyzed in the artworks of Zoe Beloff. However, one can radicalize the viewpoint even further and take an animal turn: look not for alien but for animal and insect ways of sensing the world. Naturally, this is exactly what is being proposed in a variety of media art pieces and exhibitions. Insects have made their appearance, for example, in Toshio Iwai's Music Insects (1990), Sarah Peebles' electroacoustic Insect Grooves as an example of imaginary soundscapes, David Dunn's acoustic ecology pieces with insect sounds, the Sci-Art: Bio-Robotic Choreography project (2001, with Stelarc as one of the participants), and Laura Beloff's Spinne (2002), a networked spider installation that works according to web spider/ant/crawler technology.
Here we are dealing not just with representing the insect, but with engaging the animal affects, indistinguishable from those of the technological, as in Stelarc's work, where the experimentation with new bodily realities is a form of becoming-insect of the technological human body. Imagining by doing is a way to engage directly with the affects of the becoming-animal of media, where the work of sound and body artists doubles the media archaeological analysis of historical strata. In other words, one should not reside on the level of intriguing representations of imagined ways of communication, or imagined apparatuses that never existed, but realize the overabundance of real sensations and perceptions to contract, to fold: the neomaterialist view towards imagined media.


Literature
Ernest van Bruyssel, The population of an old pear-tree; or, Stories of insect life. (New York: Macmillan and co., 1870).
Lewis Carroll, Alice's Adventures in Wonderland and Through the Looking Glass. Edited with an Introduction and Notes by Roger Lancelyn Green. (Oxford: Oxford University Press, 1998).
Claire Colebrook, ‘The Sense of Space. On the Specificity of Affect in Deleuze and Guattari.' In: Postmodern Culture, vol. 15, issue 1, 2004.
Steven Connor, Fly. (London: Reaktion Books, 2006).
Manuel DeLanda, War in the Age of Intelligent Machines. (New York: Zone Books, 1991).
Gilles Deleuze, Spinoza: Practical Philosophy. Transl. Robert Hurley. (San Francisco: City Lights, 1988).
Gilles Deleuze, The Fold. Transl. Tom Conley. (Minneapolis: University of Minnesota Press, 1993).
Ernst Kapp, Grundlinien einer Philosophie der Technik: Zur Entstehungsgeschichte der Kultur aus neuen Gesichtspunkten. (Braunschweig: Druck und Verlag von George Westermann, 1877).
William Kirby & William Spence, An Introduction to Entomology, or Elements of the Natural History of Insects. Volumes 1 and 2. Unabridged facsimile of the 1843 edition. (London: Elibron, 2005).
Eric Kluitenberg (ed.), Book of Imaginary Media. Excavating the Dream of the Ultimate Communication Medium. (Rotterdam: NAi Publishers, 2006).
Jean-Jacques Lecercle, Philosophy of Nonsense: The Intuitions of Victorian Nonsense Literature. (London: Routledge, 1994).
Jussi Parikka, ‘Insect Technics: Intensities of Animal Bodies.' In: (Un)Easy Alliance - Thinking the Environment with Deleuze/Guattari, edited by Bernd Herzogenrath. (Newcastle: Cambridge Scholars Press, forthcoming 2008).
Siegfried Zielinski, ‘Modelling Media for Ignatius Loyola. A Case Study on Athanasius Kircher's World of Apparatus between the Imaginary and the Real.' In: Book of Imaginary Media, edited by Kluitenberg. (Rotterdam: NAi, 2006).


PIERRE BERTHET
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Extended speakers
& Concert with various extended objects
We invited the Belgian artist Pierre Berthet to create an installation for V/J10 that explores the resonance of EVP voices. He made a netting of thin metal wires which he suspended from the ceiling of the haunted house in the La Bellone courtyard.
Through these metal wires, loudspeakers without membranes were connected to a network of resonating cans. Sine tones and radio recordings were transmitted through the speakers, making the metal wires vibrate, which in turn caused the cans to resonate.




LEIF ELGREN, CM VON HAUSSWOLFF
License: Fully Restricted Copyright
EN

Elgaland-Vargaland
The Embassy of the Kingdoms of Elgaland-Vargaland (KREV)
The Kingdoms were proclaimed in 1992 and consist of all ‘Border Territories': geographical, mental and digital. Elgaland-Vargaland is the largest – and most populous – realm on Earth, incorporating all boundaries between other nations as well as ‘Digital Territory' and other states of existence. Every time you travel somewhere, and every time you enter another form of being, such as the dream state, you visit Elgaland-Vargaland, the kingdom founded by Leif Elgren and CM von Hausswolff.
During the Venice Biennale, Elgren stated that all dead people are inhabitants of the country Elgaland-Vargaland unless they had declared that they did not want to be inhabitants.
Since V/J10, the Elgaland-Vargaland Embassy permanently resides in La Bellone.




CM VON Hausswolff, GUY-MARC HINANT
License: Creative Commons Attribution-NonCommercial-ShareAlike

For more information on EVP, see: http://en.wikipedia.org/wiki/Electronic_voice_phenomenon#_note-fontana1
EN

Ghost Machinery
During V/J10 we showed an audiovisual installation entitled Ghost Machinery, with drawings by Dominique Goblet, EVP sounds by Carl Michael von Hausswolff, and images by Guy-Marc Hinant, based on Dr. Stempnick's Electronic Voice Phenomena recordings.
EVP has been studied since the 1950s, primarily by paranormal researchers, who have concluded that the most likely explanation for the phenomena is that they are produced by the spirits of the deceased. In 1959, Attila von Szalay first claimed to have recorded the ‘voices of the dead', which led to the experiments of Friedrich Jürgenson. The 1970s brought increased interest and research, including the work of Konstantin Raudive. In 1980, William O'Neil, backed by industrialist George Meek, built a ‘Spiricom' device, which was said to facilitate very clear communication between this world and the spirit world.
Investigation of EVP continues today through the work of many experimenters, including Sarah Estep and Alexander McRae. In addition to spirits, paranormal researchers have claimed that EVP could be due to psychic echoes from the past, psychokinesis unconsciously produced by living people, or the thoughts of aliens or nature spirits. Paranormal investigators have used EVP in various ways, including as a tool in attempts to contact the souls of dead loved ones and in ghost hunting. Organizations dedicated to EVP include the American Association of Electronic Voice Phenomena, the International Ghost Hunters Society, as well as the skeptical Rorschach Audio project.


Read Feel Feed Real


EN

The electromagnetic fields of ordinary objects acted as source material for an audio performance; surveillance cameras and legislation were the ingredients for a science-fiction film; video streams were annotated live with the help of IRC chats...
A mobile video laboratory was set up during the festival to test how to bring together scripting, annotation, data readings and recordings in digital archives. Operating somewhere between surveillance and observation, the Open Source video team mixed hands-on Icecast streaming workshops with experiments looking at the way movements are regulated through motion control, and vice versa.

MANU LUKSCH, MUKUL PATEL
License: Creative Commons Attribution - NonCommercial - ShareAlike license
figure 94: CCTV sculpture in a park in London
EN

Faceless: Chasing the Data Shadow
Stranger than fiction
Remote-controlled UAVs (Unmanned Aerial Vehicles) scan the city for anti-social behaviour. Talking cameras scold people for littering the streets (in children's voices). Biometric data is extracted from CCTV images to identify pedestrians by their face or gait. A housing project's surveillance cameras stream images onto the local cable channel, enabling the community to monitor itself.

figure 95: Poster in London

These are not projections of the science fiction film that this text discusses, but techniques that are in use today in Merseyside 1, Middlesbrough 2, Newham and Shoreditch 3 in the UK. In terms of both density and sophistication, the UK leads the world in the deployment of surveillance technologies. With an estimated 4.2 million CCTV cameras in place, its inhabitants are the most watched in the world. 4 Many London buses have five or more cameras inside, plus several outside, including one recording cars that drive in bus lanes.

1 “Police spy in the sky fuels ‘Big Brother fears'”, Philip Johnston, Telegraph, 23/05/2007, http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2007/05/22/ndrone22.xml. The Guardian has reported that the MoD rents out an RAF-staffed spy plane for public surveillance, carrying reconnaissance equipment able to monitor telephone conversations on the ground. It can also be used for automatic number plate recognition: “Cheshire police recently revealed they were using the Islander [aircraft] to identify people speeding, driving when using mobile phones, overtaking on double white lines, or driving erratically.”
2 “‘Talking' CCTV scolds offenders”, BBC News, 4 April 2007, http://news.bbc.co.uk/2/hi/uk_news/england/6524495.stm
3 “If the face fits, you're nicked”, Nick Huber, Independent, Monday, 1 April 2002, http://www.independent.co.uk/news/business/analysis-and-features/if-the-face-fits-youre-nicked-656092.html. “In 2001 the Newham system was linked to a central control room operated by the London Metropolitan Police Force. In April 2001 the existing CCTV system in Birmingham city centre was upgraded to smart CCTV. People are routinely scanned by both systems and have their faces checked against the police databases.” Centre for Computing and Social Responsibility, http://www.ccsr.cse.dmu.ac.uk/resources/general/ethicol/Ecv12no1.html
But CCTV images of our bodies are only one of many traces of data that we leave in our wake, voluntarily and involuntarily. Vehicles are tracked using Automatic Number Plate Recognition systems, our movements are revealed via location-aware devices (such as cell phones), the trails of our online activities are recorded by Internet Service Providers, our conversations are overheard by the international communications surveillance system Echelon, shopping habits are monitored through store loyalty cards, individual purchases are located using RFID (Radio Frequency Identification) tags, and our meal preferences are collected as part of PNR (flight passenger) data. 5 Our digital selves are many-dimensional, alert, unforgetting.

4 A Report on the Surveillance Society, for the Information Commissioner by the Surveillance Studies Network, September 2006, p. 19. Available from http://www.ico.gov.uk
5 ‘e-Borders' is a £1.2bn passenger-screening programme to be introduced in 2009 and to be complete by 2014. The single border agency, combining immigration, customs and visa checks, includes a £650m contract with the consortium Trusted Borders for a passenger-screening IT system: anyone entering or leaving Britain is to give 53 pieces of information in advance of travel. This information, taken when a travel ticket is bought, will be shared among police, customs, immigration and the security services for at least 24 hours before a journey is due to take place. Trusted Borders consists of US military contractor Raytheon Systems, who will work with Accenture, Detica, Serco, QinetiQ, Steria, Capgemini, and Daon. Ministers are also said to be considering the creation of a list of ‘disruptive' passengers. It is expected to cost travel companies £20 million a year to compile the information. These costs will be passed on to customers via ticket prices, and the Government is considering introducing its own charge on travellers to recoup costs. A pilot of the e-Borders technology, known as Project Semaphore, has already screened 29 million passengers.
Similarly, the arms manufacturer Lockheed Martin, the biggest defence contractor in the U.S., which undertakes intelligence work as well as contributing to the Trident programme in the UK, is bidding to run the UK 2011 Census. New questions in the 2011 Census will include information about income and place of birth, as well as existing questions about languages spoken in the household and many other personal details. The Canadian Federal Government granted Lockheed Martin a $43.3 million deal to conduct its 2006 Census. Public outcry against it resulted in only civil servants handling the actual data, and a new government task force being set up to monitor privacy during the Census.
http://censusalert.org.uk/
http://www.vivelecanada.ca/staticpages/index.php/20060423184107361


Increasingly, these data traces are arrayed and administered in
networked structures of global reach. It is not necessary to posit a
totalitarian conspiracy behind this accumulation – data mining is an
exigency of both market efficiency and bureaucratic rationality. Much
has been written on the surveillance society and the society of control,
and it is not the object here to construct a general critique of data
collection, retention and analysis. However, it should be recognised
that, in the name of efficiency and rationality – and, of course, security – an ever-increasing amount of data is being shared (also sold,
lost and leaked 6) between the keepers of such seemingly unconnected
records as medical histories, shopping habits, and border crossings.
6 Sales: “Personal details of all 44 million adults living in Britain could be sold to
private companies as part of government attempts to arrest spiralling costs for the new
national identity card scheme, set to get the go-ahead this week. [...] ministers have
opened talks with private firms to pass on personal details of UK citizens for an initial
cost of £750 each.”
“Ministers plan to sell your ID card details to raise cash”, Francis Elliott, Andy McSmith and Sophie Goodchild, Independent, Sunday 26 June 2005
http://www.independent.co.uk/news/uk/politics/ministers-plan-to-sell-your-id-card-details-to-raise-cash-496602.html
Losses: In January 2008, hundreds of documents with passport photocopies, bank
statements and benefit claims details from the Department of Work and Pensions were
found on a road near Exeter airport, following their loss from a TNT courier vehicle.
There were also documents relating to home loans and mortgage interest, and details
of national insurance numbers, addresses and dates of birth.
In November 2007, HM Revenue and Customs (HMRC) posted, unrecorded and unregistered via TNT, computer discs containing personal information on 25 million people
from families claiming child benefit, including the bank details of parents and the dates
of birth and national insurance numbers of children. The discs were then lost.
Also in November, HMRC admitted a CD containing the personal details of thousands
of Standard Life pension holders had gone missing, leaving them at heightened risk
of identity theft. The CD, which contained data relating to 15,000 Standard Life
pensions customers including their names, National Insurance numbers and pension
plan reference numbers, was lost in transit from the Revenue office in Newcastle to the
company's headquarters in Edinburgh by ‘an external courier'.
Thefts: In November 2007, the MoD acknowledged the theft of a laptop computer containing the personal details of 600,000 Royal Navy, Royal Marines, and RAF recruits
and of people who had expressed interest in joining, which contained, among other
information, passport, and national insurance numbers and bank details.
In October 2007, a laptop holding sensitive information was stolen from the boot of
an HMRC car. A staff member had been using the PC for a routine audit of tax
information from several investment firms. HMRC refused to comment on how many
individuals may be at risk, or how many financial institutions have had their data
stolen as well. The BBC suggests the computer held data on around 400 customers with
high value individual savings accounts (ISAs), at each of five different companies –
including Standard Life and Liontrust. (In May, Standard Life sent around 300 policy
documents to the wrong people.)


Legal frameworks exist that are intended to safeguard a conception of privacy
by limiting data transfers to appropriate parties. Such laws, and in
particular the UK Data Protection Act (DPA, 1998) 7, are the subject
of investigation of the film Faceless.
From Act to Manifesto
“I wish to apply, under the Data Protection Act,
for any and all CCTV images of my person held
within your system. I was present at [place] from
approximately [time] onwards on [date].” 8
For several years, ambientTV.NET conducted a series of exercises
to visualise the data traces that we leave behind, to render them
into experience and to dramatise them, to watch those who watch
us. These experiments, scrutinising the boundary between public
and private in post-9/11 daily life, were run under the title ‘the Spy
School'. In 2002, the Spy School carried out an exercise to test the
reach of the UK Data Protection Act as it applies to CCTV image
data.
The Data Protection Act 1998 seeks to strike a balance between
the rights of individuals and the sometimes competing interests
of those with legitimate reasons for using personal information.
The DPA gives individuals certain rights regarding information
held about them. It places obligations on those who process information (data controllers) while giving rights to those who are
the subject of that data (data subjects). Personal information
covers both facts and opinions about the individual. 9

7 The full text of the DPA (1998) is at http://www.opsi.gov.uk/ACTS/acts1998/19980029.htm
9 Data Protection Act Fact Sheet available from the UK Information Commissioner's Office, http://www.ico.gov.uk


The original DPA (1984) was devised to ‘permit and regulate'
access to computerised personal data such as health and financial
records. A later EU directive broadened the scope of data protection
and the remit of the DPA (1998) extended to cover, amongst other
data, CCTV recordings. In addition to the DPA, CCTV operators
‘must' comply with other laws related to human rights, privacy, and
procedures for criminal investigations, as specified in the CCTV Code
of Practice (http://www.ico.gov.uk).
As the first subject access request letters were successful in delivering CCTV recordings for the Spy School, it then became pertinent
to investigate how robust the legal framework was. The Manifesto for
CCTV Filmmakers was drawn up, permitting the use only of recordings obtained under the DPA. Art would be used to probe the law.

figure 92: Still from Faceless, 2007
figure 94: Multiple, conflicting timecode stamps

A legal readymade
Vague spectres of menace caught on time-coded surveillance
cameras justify an entire network of peeping vulture lenses. A
web of indifferent watching devices, sweeping every street, every
building, to eliminate the possibility of a past tense, the freedom
to forget. There can be no highlights, no special moments: a
discreet tyranny of now has been established. Real time in its
most pedantic form. 10
Faceless is a CCTV science fiction fairy tale set in London, the city
with the greatest density of surveillance cameras on earth. The film
is made under the constraints of the Manifesto – images are obtained
from existing CCTV systems by the director/protagonist exercising
her/his rights as a surveilled person under the DPA. Obviously the
protagonist has to be present in every frame. To comply with privacy
legislation, CCTV operators are obliged to render other people in
the recordings unidentifiable – typically by erasing their faces, hence
the faceless world depicted in the film. The scenario of Faceless thus
derives from the legal properties of CCTV images.
10 Iain Sinclair, Lights Out for the Territory, Granta, London, 1998, p. 91.


“RealTime orients the life of every citizen. Eating, resting, going
to work, getting married – every act is tied to RealTime. And every
act leaves a trace of data – a footprint in the snow of noise...” 11
The film is set in an eerily familiar city, where the reformed RealTime calendar has dispensed with the past and the future, freeing
citizens from guilt and regret, anxiety and fear. Without memory or
anticipation, faces have become vestigial – the population is literally
faceless. Unimaginable happiness abounds – until a woman recovers
her face...
There was no traditional shooting script: the plot evolved during
the four-year-long process of obtaining images. Scenes were planned
in particular locations, but the CCTV recordings were not always
obtainable, so the story had to be continually rewritten.
Faceless treats the CCTV image as an example of a legal readymade (‘objet trouvé'). The medium, in the sense of raw materials
that are transformed into artwork, is not adequately described as
simply video or even captured light. More accurately, the medium
comprises images that exist contingent on particular social and legal
circumstances – essentially, images with a legal superstructure. Faceless interrogates the laws that govern the video surveillance of society
and the codes of communication that articulate their operation, and
in both its mode of coming into being and its plot, develops a specific
critique.
Reclaiming the data body
Through putting the DPA into practice and observing the consequences over a long exposure, close-up, subtle developments of the
law were made visible and its strengths and lacunae revealed.
“I can confirm there are no such recordings of
yourself from that date, our recording system was
not working at that time.” (11/2003)

11 Faceless, 2007.


Many data requests had negative outcomes because either the surveillance camera, or the recorder, or the entire CCTV system in question
was not operational. Such a situation constitutes an illegal use of
CCTV: the law demands that operators “comply with the DPA by
making sure [...] equipment works properly.” 12
In some instances, the non-functionality of the system was only
revealed to its operators when a subject access request was made. In
the case below, the CCTV system had been installed two years prior
to the request.
“Upon receipt of your letter [...] enclosing the
required 10£ fee, I have been sourcing a company
who would edit these tapes to preserve the privacy of other individuals who had not consented
to disclosure. [...] I was informed [...] that all
tapes on site were blank. [.. W]hen the engineer
was called he confirmed that the machine had not
been working since its installation.
Unfortunately there is nothing further that can be
done regarding the tapes, and I can only apologise
for all the inconvenience you have been caused.”
(11/2003)
Technical failures on this scale were common. Gross human errors
were also readily admitted to:

12 CCTV Systems and the Data Protection Act 1998, available from http://www.ico.gov.uk


“As I had advised you in my previous letter, a request was made to remove the tape and for it not
to be destroyed. Unhappily this request was not
carried out and the tape was wiped according with
the standard tape retention policy employed by
[deleted]. Please accept my apologies for this and
assurance that steps have been taken to ensure a
similar mistake does not happen again.” (10/2003)

figure 98: The Rotain Test, devised by the UK Home Office Police Scientific Development Branch, measures surveillance camera performance.

Some responses, such as the following, were just mysterious (data
request made after spending an hour below several cameras installed
in a train carriage).
“We have carried out a careful review of all relevant tapes and we confirm that we have no images of
you in our control.” (06/2005)
Could such a denial simply be an excuse not to comply with the costly
demands of the DPA?
“Many older cameras deliver image quality so poor
that faces are unrecognisable. In such cases the
operator fails in the obligation to run CCTV for
the declared purposes.
You will note that yourself and a colleague's faces
look quite indistinct in the tape, but the picture you sent to us shows you wearing a similar
fur coat, and our main identification had been made
through this and your description of the location.”
(07/2002)


To release data on the basis of such weak identification compounds
the failure.
Much confusion is caused by the obligation to protect the privacy
of third parties in the images. Several data controllers claimed that
this relieved them of their duty to release images:
“[... W]e are not able to supply you with the images you requested because to do so would involve
disclosure of information and images relating to
other persons who can be identified from the tape
and we are not in a position to obtain their consent to disclosure of the images. Further, it is
simply not possible for us to eradicate the other
images. I would refer you to section 7 of the Data
Protection Act 1998 and in particular Section 7
(4).” (11/2003)
Even though the section referred to states that it is:
“not to be construed as excusing a data controller
from communicating so much of the information
sought by the request as can be communicated without disclosing the identity of the other individual concerned, whether by the omission of names or
other identifying particulars or otherwise.”
Where video is concerned, anonymisation of third parties is an expensive, labour-intensive procedure – one common technique is to occlude
each head with a black oval. Data controllers may only charge the
statutory maximum of £10 per request, though not all seemed to be
aware of this:


“It was our understanding that a charge for production of the tape should be borne by the person
making the enquiry, of course we will now be checking into that for clarification. Meanwhile please
accept the enclosed video tape with compliments of
[deleted], with no charge to yourself.” (07/2002)

figure 90: Off with their heads!

Visually provocative and symbolically charged as the occluded heads
are, they do not necessarily guarantee anonymity. The erasure of a
face may be insufficient if the third party is known to the person requesting images. Only one data controller undeniably (and elegantly)
met the demands of third party privacy, by masking everything but
the data subject, who was framed in a keyhole. (This was an uncommented second offering; the first tape sent was unprocessed.) One
CCTV operator discovered a useful loophole in the DPA:
“I should point out that we reserve the right, in
accordance with Section 8(2) of the Data Protection
Act, not to provide you with copies of the information requested if to do so would take disproportionate effort.” (12/2004)
What counts as ‘disproportionate effort'? The gold standard was set
by an institution whose approach was almost baroque – they delivered
hard copies of each of the several hundred relevant frames from the
time-lapse camera, with third parties' heads cut out, apparently with
nail scissors.
Two documents had (accidentally?) slipped in between the printouts – one a letter from a junior employee tendering her resignation
(was it connected with the beheading job?), and the other an ironic
memo:


“And the good news – I enclose the £10 fee to be
passed to the branch sundry income account.” (Head
of Security, internal communication 09/2003)
From 2004, the process of obtaining images became much more difficult.
“It is clear from your letter that you are aware
of the provisions of the Data Protection Act and
that being the case I am sure you are aware of
the principles in the recent Court of Appeal decision in the case of Durant vs. Financial Services Authority. It is my view that the footage you
have requested is not personal data and therefore
[deleted] will not be releasing to you the footage
which you have requested.” (12/2004)
Under Common Law, judgements set precedents. The decision in
the case Durant vs. Financial Services Authority (2003) redefined
‘personal data'; since then, simply featuring in raw video data does
not give a data subject the right to obtain copies of the recording.
Only if something of a biographical nature is revealed does the subject
retain the right.


“Having considered the matter carefully, we do not
believe that the information we hold has the necessary relevance or proximity to you. Accordingly
we do not believe that we are obligated to provide
you with a copy pursuant to the Data Protection Act
1988. In particular, we would remark that the video
is not biographical of you in any significant way.”
(11/2004)
Further, with the introduction of cameras that pan and zoom, being
filmed as part of a crowd by a static camera is no longer grounds for
a data request.
“[T]he Information Commissioners office has indicated that this would not constitute your personal
data as the system has been set up to monitor the
area and not one individual.” (09/2005)
As awareness of the importance of data rights grows, so the actual
provision of those rights diminishes:


figure 89: Still from Faceless, 2007

"I draw your attention to CCTV systems and the Data
Protection Act 1998 (DPA) Guidance Note on when the
Act applies. Under the guidance notes our CCTV system is no longer covered by the DPA [because] we:
• only have a couple of cameras
• cannot move them remotely
• just record on video whatever the cameras pick
up
• only give the recorded images to the police to
investigate an incident on our premises"
(05/2004)
Data retention periods (which data controllers define themselves)
also constitute a hazard to the CCTV filmmaker:
“Thank you for your letter dated 9 November addressed to our Newcastle store, who have passed
it to me for reply. Unfortunately, your letter was
delayed in the post to me and only received this
week. [...] There was nothing on the tapes that you
requested that caused the store to retain the tape
beyond the normal retention period and therefore
CCTV footage from 28 October and 2 November is no
longer available.” (12/2004)
Amidst this sorry litany of malfunctioning equipment, erased tapes,
lost letters and sheer evasiveness, one CCTV operator did produce
reasonable justification for not being able to deliver images:


“We are not in a position to advise whether or not
we collected any images of you at [deleted]. The
tapes for the requested period at [deleted] had
been passed to the police before your request was
received in order to assist their investigations
into various activities at [deleted] during the
carnival.” (10/2003)

figure 91: Still from Faceless, 2007

In the shadow of the shadow
There is debate about the efficacy, value for money, quality of
implementation, political legitimacy, and cultural impact of CCTV
systems in the UK. While CCTV has been presented as being vital in solving some high profile cases (e.g. the 1999 London nail
bomber, or the 1993 murder of James Bulger), at other times it has
been strangely, publicly, impotent (e.g. the 2005 police killing of Jean
Charles de Menezes). The prime promulgators of CCTV may have
lost some faith: during the 1990s the UK Home Office spent 78% of
its crime prevention budget on installing CCTV, but in 2005, an evaluation report by the same office concluded that, “the CCTV schemes
that have been assessed had little overall effect on crime levels.” 13
An earlier, 1992, evaluation reported CCTV's broadly positive
public reception due to its assumed effectiveness in crime control,
acknowledging “public acceptance is based on limited and partly inaccurate knowledge of the functions and capabilities of CCTV systems
in public places.” 14
By the 2005 assessment, support for CCTV still “remained high in
the majority of cases” but public support was seen to decrease after
implementation by as much as 20%. This “was found not to be the
reflection of increased concern about privacy and civil liberties, as
this remained at a low rate following the installation of the cameras,”
13 Gill, M. and Spriggs, A., Assessing the impact of CCTV. London: Home Office Research, Development and Statistics Directorate, 2005, pp. 60-61. www.homeoffice.gov.uk/rds/pdfs05/hors292.pdf
14 http://www.homeoffice.gov.uk/rds/prgpdfs/fcpu35.pdf


but “that support for CCTV was reduced because the public became
more realistic about its capabilities” to lower crime.
Concerns, however, have begun to be voiced about function creep
and the rising costs of such systems, prompted, for example, by the
disclosure that the cameras policing London's Congestion Charge remain switched on outside charging hours and that the Met are to
have live access to them, having been exempted from parts of the
Data Protection Act to do so. 15 As such realities of CCTV's daily
operation become more widely known, existing acceptance may be
somewhat tempered.
Physical bodies leave data traces: shadows of presence, conversation, movement. Networked databases incorporate these traces into
data bodies, whose behaviour and risk are priorities for analysis and
commodification, by business and by government. The securing of
a data body is supposedly necessary to secure the human body, either preventatively or as a forensic tool. But if the former cannot
be assured, as is the case, what grounds are there for trust in the
hollow promise of the latter? The all-seeing eye of the panopticon is
not complete, yet. Regardless, could its one-way gaze ever assure an
enabling conception of security?

15 Surveillance State Function Creep – London Congestion Charge “real-time bulk data” to be automatically handed over to the Metropolitan Police etc. http://p10.hostingprod.com/@spyblog.org.uk/blog/2007/07/surveillance_state_function_creep_london_congestion_charge_realtime_bulk_data.html


MICHAEL MURTAUGH

figure 113: Start broadcasting yourself!

License: Free Art License
EN

Active Archives
or: What's wrong with the YouTube documentary?
As someone who has shot video and programmed web-based interfaces to video over the past decade, it has been exciting to see how
distributing video via the Internet has become increasingly popularized, thanks in large part to video sharing sites like YouTube. At the
same time, I continue to design and write software in search of new
forms of collaborative and ‘evolving' documentaries; and for myself,
and others around me, I feel a lack of interest, even an aversion, to posting
videos on YouTube. This essay has two threads: (1) I revisit an
earlier essay describing the ‘Evolving Documentary' model to get at
the roots of my enthusiasm for working with video online, and (2) I
examine why I find YouTube problematic, and more a reflection of
television than the possibilities that the web offers.
In 1996, I co-authored an essay with Glorianna Davenport, then
my teacher and director of the Interactive Cinema group at the MIT
Media Lab, called Automatist storyteller systems and the shifting
sands of story. 1 In it, we described a model for supporting ‘Evolving
Documentaries', or an “approach to documentary storytelling that
celebrates electronic narrative as a process in which the author(s), a
networked presentation system, and the audience actively collaborate
in the co-construction of meaning.” In this paper, Glorianna included
a section entitled ‘What's wrong with the Television Documentary?'
The main points of this argument were as follows:

figure 114: Join the largest worldwide video-sharing community!
1 http://www.research.ibm.com/journal/sj/363/davenport.html


1.
[... T]elevision consumes the viewer. Sitting passively in front
of a TV screen, you may appreciate an hour-long documentary;
you may even find the story of interest; however, your ability to
learn from the program is less than what it might be if you were
actively engaged with it, able to control its shape and probe its
contents.
Here, it is crucial to understand what is meant by the word ‘active'.
In a naive comparison between the activities of watching television
and surfing the web, one might say that the latter is inherently more
active in the sense that the process is ‘driven' by the choices of the
user; in the early days of the web it became popular to refer to this
split as ‘lean back vs. lean forward' media. Of course, if one means
to talk about cognitive activity, this is clearly misleading as aimlessly surfing the net can be achieved at near comatose levels of brain
function (as any late night surfer can attest to) and watching a particularly sharp television program can be incredibly engaging, even
life changing. Glorianna would often describe her frustration with
traditional documentary by observing the vast difference between her
own sense of engagement with a story gained through the process of
shooting and editing, versus the experience of an audience member
from simply viewing the end result. Thus ‘active' here relates to the
act of authoring and the construction of meaning. Rather than talking about leaning forward or backward, a more useful split might be
between reading and writing. Rather than being a question of bad
versus good access, the issue becomes about two interconnected cognitive processes, both hopefully ‘active' and involving thought. An
ideal platform for online documentary would be one that facilitates a
fluid movement between moments of reflection (reading) and of construction (writing).


2.
Television severely limits the ways in which an author can
‘grow' a story. A story must be composed into a fixed, unchanging form before the audience can see and react to it: there is no
obvious way to connect viewers to the process of story construction. Similarly, the medium offers no intrinsic, immediately
available way to interconnect the larger community of viewers
who wish to engage in debate about a particular story.
Part of the promise of crossing video with computation is the potential to combine the computer's ability to construct models and
run simulations with the random access possibilities of digitized media. Instead of editing a story down into a fixed form or ‘final cut',
one can program a ‘storytelling system' that can act as an ‘editor in
software'. Thus the system can maintain a dynamic representation
of the context of a particular telling, on which to base (or support a
viewer in making) editing decisions ‘on the fly'. The ‘Evolving Documentary' was intended to support complex stories that would develop
over time, and which could best be told from a variety of points of
view.
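To make the idea of an ‘editor in software' a little more concrete, here is a minimal sketch, in Python, of a selection loop that keeps a dynamic representation of the telling's context; the clip names, tags and selection rule are invented for illustration and are not the actual Interactive Cinema system.

# A toy 'storytelling system' in the spirit of the Evolving Documentary
# model described above; all clip names and tags are hypothetical.
clips = [
    {'id': 'intro',   'tags': {'overview'},          'seen': False},
    {'id': 'worker1', 'tags': {'labour', 'harbour'}, 'seen': False},
    {'id': 'harbour', 'tags': {'harbour'},           'seen': False},
]

def next_clip(context):
    """Pick the unseen clip sharing the most tags with the telling so far."""
    candidates = [c for c in clips if not c['seen']]
    if not candidates:
        return None
    best = max(candidates, key=lambda c: len(c['tags'] & context))
    best['seen'] = True
    context |= best['tags']   # the context of the telling evolves as clips play
    return best['id']

context = {'harbour'}         # the viewer's (or author's) starting interest
while True:
    clip = next_clip(context)
    if clip is None:
        break
    print('play', clip)       # plays worker1, then harbour, then intro

However crude, the sketch captures the point: editing decisions are made ‘on the fly' against a context that the playback itself keeps rewriting, rather than being fixed in a final cut.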
3.
Like published books and movies, television is designed for
unidirectional, one-to-many transmission to a mass audience,
without variation or personalization of presentation. The remote-control unit and the VCR (videocassette recorder) - currently the only devices that allow the viewer any degree of independent control over the play-out of television - are considered
anathema by commercial broadcasters. Grazing, time-shifting,
and ‘commercial zapping' run contrary to the desire of the industry for a demographically correct audience that passively
absorbs the programming - and the intrusive commercial messages - that the broadcasters offer.

Adding a decentralized means of distribution and feedback such
as the Internet provides the final piece of the puzzle in creating a
compelling new medium for the evolving documentary. No longer
would footage have to be excluded for reasons of reaching a ‘broad'
or average audience. An ideal storytelling system would be one that
could connect an individual viewer to whatever material was most
personally relevant. The Internet is a unique ‘mass media' in its
potential support for enabling access to non-mainstream, individually
relevant and personal subject matter.
What's wrong with the YouTube documentary?
YouTube has massively popularized the sharing and consumption
of video online. That said, most of the core concerns raised in the
arguments about television are still relevant to YouTube when
considered as a platform for online collaborative documentary.
Clips are primarily ‘view-only'
Already in its name, ‘YouTube' consciously invokes the television
set, thus inviting visitors to ‘lean back' and watch. The YouTube
interface functions primarily as a showcase of static monolithic elements. Clips are presented as fixed and finished, to be commented
upon, rated, and possibly bookmarked, but no more. The clip is
‘atomic' in the sense that it's not possible to make selections within a
clip, to export images or sound, or even to link to a particular starting
point. Without special plugins, the site doesn't even allow downloading of the clip. While users are encouraged ‘to embed' YouTube content in other websites (by cutting and pasting special HTML codes
that refer back to the YouTube site), the resulting video plays using
the YouTube player, complete with ‘related' links back into the service. It is in fact a violation of the YouTube terms of use to attempt
to display videos from the service in any other way.


The format of the clip is fixed and uniform for all kinds
of content
Technically, YouTube places some rather arbitrary limits on the
format of clips: all clips must contain an image and a sound track
and may not be longer than 10 minutes. Furthermore, all
clips are treated equally: there is no notion of a ‘lecture' versus a
‘slideshow', versus a ‘music video', together with a sense that these
different kinds of material might need to be handled differently. Each
clip is compressed in a uniform way, meaning at the moment into a
flash format video file of fixed data rate and screen size.
Clips have no history
Despite these limitations, users of YouTube have found workarounds
to, for instance, download clips to then rework them into derived clips.
Although the derived works are often placed back again on YouTube,
the system itself has no means of representing this kind of relationship.
(There is a mechanism for posting video responses to other clips, but
this kind of general purpose solution seems not to be understood or
used to track this kind of ‘derived' relationship.) The system is unable to model or otherwise make available the ‘history' of a particular
piece of media. Contrast this with a system like Wikipedia, where the
full history of an article, with a record of what was changed, by whom,
when, and even ‘meta-level' discussions about the changes (including
possible disagreement) is explicitly facilitated.
Weak or ‘flat' narrative structure
YouTube's primary model for narrative is a broad (and somewhat
obscure) sense of ‘relatedness' (based on user-defined tags) modulated
by popularity. As with many ‘social networking' and media sharing
sites, YouTube relies on ‘positive feedback' popularity mechanisms,
such as view counts, ‘star' ratings and favorites, to create ranked lists
of clips. Entry points like ‘Videos being watched right now', ‘Most
Viewed', ‘Top Favorites', only close the loop of featuring what's already popular to begin with. In addition, YouTube's commercial


model of enabling special paid levels of membership leads to ambiguous selection criteria, complicated by language as in the ‘Promoted
Videos' and ‘Featured Videos' of YouTube's front page (promoting
what? featured by whom?).
The ‘editing logic' threading the user through the various clips is
flat, in that a clip is shown the same way regardless of what has been
viewed before it. Thus YouTube makes no visible use of a particular viewing history (though the fact that this information is stored
has been brought to the attention of the public via the ongoing Viacom lawsuit, http://news.bbc.co.uk/2/hi/technology/7506948.stm).
In this way it's difficult to get a sense of being in a particular ‘story
arc' or thread when moving from clip to clip in YouTube as in a sense
each click and each clip restarts the narrative experience.
No licenses for sharing / reuse
The lack of a download feature in YouTube could be said to protect the interests of those who wish to assert a claim of copyright.
However, YouTube ignores and thus obscures the question of license
altogether. One can find for instance the early films of Hitchcock,
now part of the public domain, in 10 minute chunks on YouTube;
despite this status (not indicated on the site), these clips are, like all
YouTube clips, unavailable for any kind of manipulation. This approach, and the limitations it places on the use of YouTube material,
highlights the fact that YouTube is primarily focused on getting users
to consume YouTube material, framed in YouTube's media player, on
YouTube's terms.
Traditional models for (software) authorship
While YouTube is built using open source software (Python and
ffmpeg for instance), the source code of the system itself is closed,
leaving little room for negotiation about how the software of the
site itself operates. This is a pity on a variety of levels. Free and
open source software is inextricably bound to the web, not only in
terms of providing much of the underlying software (like the Apache
web server), but also in reverse, as the possibilities for collaborative development that the web provides have catalyzed the process of

open source development. Software designed to support collaborative
work on code, like Subversion and other CVS's (concurrent versioning systems), and platforms for tracking and discussing software (like
TRAC), provide much richer models of use and relationship to work
than those which YouTube offer for video production.
Broadcasting over coherence
From its slogan (‘Broadcast yourself') to the language the service
uses around joining and uploading videos (see images), YouTube falls
very much into a traditional model of commercial broadcast television. In this model sharing means getting others to watch your clips,
the more eyeballs the better.
The desire for broadness and the building of a ‘worldwide' community united only by a desire to ‘broadcast one's self' means creating
coherence is not a top priority. YouTube comments, for instance,
seem to suffer from this lack of coherence and context. Given no
particular focus, comments seem doomed to be similarly ungrounded
and broad. Indeed, comments in YouTube often seem to take on
more the character of public toilets than of public broadcasting, replete with the kind of sexism, racism, and homophobia that more or
less anonymous ‘blank wall' access seems to encourage.
A problematic space for ‘sharing'
The combination of all these aspects makes YouTube for many a
problematic space for ‘sharing' - particularly when the material is of
a personal or particular nature. While on the one hand appearing
to pose an alternative platform to television, YouTube unfortunately
transposes many of that form's limitations and conventions onto the
web.
Looking to the future, what still remains challenging is figuring
out how to fuse all those aspects that make the Internet so compelling
as a medium and enable them in the realm of online video: the net's
decentralized nature, the possibilities for participatory/collaborative
production, the ability to draw on diverse sources of knowledge (from
‘amateur' and home-based, to ‘expert'). How can the successful examples of collaborative text-based projects like Wikipedia inspire new

forms of collaborative video online, in a way that escapes the
‘heaviness' and inertia of traditional forms of film/video? This fusion
can and needs to take place on a variety of levels, from the concept
of what a documentary is and can be, to the production tools and
content management systems media makers use, to a legal status of
media that reflects an understanding that culture is something which
is shared, down to the technical details of the formats and codecs
carrying the media in a way that facilitates sharing, instead of complicating it.


EN
NL
FR

Mutual Motions


Whether we operate a computer with the help of a command line interface, or by using buttons, switches and
clicks... the exact location of interaction often serves as
a conduit for mutual knowledge - machines learn about bodies and bodies learn about machines. Dialogues happen
at different levels and in various forms: code, hardware,
interface, language, gestures, circuits.
Those conversations are sometimes gentle in tone - ubiquitous requests almost go unnoticed - and other times
they take us by surprise because of their authoritative
and demanding nature: “Put That There”. How can we
think about such feedback loops in productive ways?
How are interactions translated into software, and how
does software result in interaction? Could the practice of
using and producing free software help us find a middle
ground between technophobia and technofetishism? Can
we imagine ourselves and our realities differently, when we
try to re-design interfaces in a collaborative environment?
Would a different idea about ‘user' change our approach
to ‘use' as well?


7 “Classic puff pastry begins with a basic dough called a détrempe (pronounced day-trahmp) that is rolled out and wrapped around a slab of butter. The dough is then repeatedly rolled, folded, and turned.” Molly Stevens, A Shortcut to Flaky Puff Pastry, 2008. http://www.taunton.com/finecooking/articles/how-to/rough-puff-pastry.aspx


figure XI

figure XIII

ADRIAN MACKENZIE
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Centres of envelopment and intensive movement
in digital signal processing

figure 115: Adrian Mackenzie at V/J10

Abstract
The paper broadly concerns algorithmic processes commonly found
in wireless networks, video and audio compression. The problem it
addresses is how to account for the convoluted nature of digital
signal processing (DSP). Why is signal processing so complex and relatively inaccessible? The paper argues that we can only understand
what is at stake in these labyrinthine calculations by switching focus away from abstract understandings of calculation to the dynamic
re-configuration of space and movement occurring in signal processing. The paper works through one detailed example of this reconfigured
movement in order to illustrate how digital signal processing enables
different experiences of proximity, intimacy, co-location and distance.
It explores how wireless signal processing algorithms envelop heterogeneous spaces in the form of hidden states and logistical networks.
Importantly, it suggests that the ongoing dynamism of signal processing could be understood in terms of intensive movement produced by
a centre of envelopment. Centres of envelopment generate extensive
changes, but they also change the nature of change itself.
From sets to signals: digital signal processing
In new media art, in new media theory and in various forms of
media activism, there has been so much work that seizes on the possibilities of using digital technologies to design interactions, sound,
image, text, and movement that challenge dominant forms of experience, habit and selfhood. In various ways, the processes of branding,
commodification, consumption, control and surveillance associated

with contemporary media have been critically interrogated and challenged.
However, there are some domains of contemporary technological
and media culture that are really hard to work with. They may
be incredibly important, they may be an intimate part of everyday
life, yet remain relatively intractable. They resist contestation, and
engagement with them may even seem pointless. This is because they may
contain intractable materials, or be organised in such complicated
ways that they are hard to change.
This paper concerns one such domain, digital signal processing
(DSP). I am not saying that new media has not engaged with DSP. Of
course it has, especially in video art and sound art, but there is little
work that helps us make sense of how the sensations, textures, and
movements associated with DSP come to be taken for granted, come
to appear as normal, and everyday, or how they could be contested.
A promotional video from Intel for the UltraMobilePC 1 promotes
change in relation to mobile media. Intel, because it makes semiconductors, is highly invested in digital signal processing in various forms.
In any case, video itself is a prime example of contemporary DSP at
work. Two aspects of this promotional video for the UMPC, the UltraMobile PC, relate to digital signal processing. There is much signal
processing here. It connects individuals' eyes, mouths and ears
to screens that display information services of various kinds. There
is also much signal processing in the wireless network infrastructures
that connect all these gadgets to each other and to various information services (maps, calendars, news feeds). In just this example,
sound, video, speech recognition, fibre, wireless and satellite, imaging
technologies in medicine all rely on DSP. We could say a good portion
of our experience is DSP-based.
This paper is an attempt to develop a theory of digital signal processing, a theory that could be used to talk about ways of contesting,
critiquing, or making alternatives. The theory under development
here relies a lot on two notions, ‘intensive movement' and ‘centre
of envelopment', that Deleuze proposed in Difference and Repetition.

figure 117: A promotional video from Intel for the UltraMobilePC
1 http://youtube.com/watch?v=GFS2TiK3AI


However, I want to keep the philosophy in the background as much as
possible. I basically want to argue that we need to ask: why does so
much have to be enveloped or interiorised in wireless or audiovisual
DSP?
How does DSP differ from other algorithmic processes?
What can we say about DSP? Firstly, influenced by recent software
studies-based approaches (Fuller, Chun, Galloway, Manovich), I think
it is worth comparing the kinds of algorithmic processes that take
place in DSP with those found in new media more generally. Although
it is an incredibly broad generalisation, I think it is safe to say that
DSP does not belong to the set-based algorithms and data-structures
that form the basis of much interest in new media interactivity or
design.
DSP differs from set-based code. If we think of social software such
as flickr, Google, or Amazon, if we think of basic information infrastructures such as relational databases or networks, if we think of
communication protocols or search engines, all of these systems rely
on listing, enumerating, and sorting data. The practices of listing,
indexing, addressing, enumerating and sorting, all concern sets. Understood in a fairly abstract way, this is what much software and code
does: it makes and changes sets. Even areas that might seem quite
remote from set-making, such as the 3D-projective geometry used in
computer game graphics are often reduced algorithmically to complicated set-theoretical operations on shapes (polygons). Even many
graphic forms are created and manipulated using set operations.
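As a toy illustration of this set-making (the tag values here are invented), Python's built-in set type makes such operations one-liners:

# two hypothetical clips, each described by a set of tags
a = {'wedding', 'london', '2007'}
b = {'london', 'cctv'}

print(a | b)   # union: every tag attached to either clip
print(a & b)   # intersection: {'london'} -- a crude measure of 'relatedness'
print(a - b)   # difference: tags unique to the first clip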
The elementary constructs of most programming languages reflect
this interest in set-making. For instance, networks or, in computer
science terms, graphs, are visually represented using lines and
boxes. But in terms of code, they are presented as edge lists or
‘adjacency lists', like this: 2
graph = {'A': ['B', 'C'],
         'B': ['C', 'D'],
         'C': ['D'],
         'D': ['C'],
         'E': ['F'],
         'F': ['C']}

2 http://www.python.org/doc/essays/graphs/

A graph or network can be seen as a list of lists. This kind of
representation of relations in code is very neat and nice. It means that
something like the structure of the internet, as a hybrid of physical
and logical relations, can be recorded, stored, sorted and re-ordered
in code. Importantly, it is highly open to modification and change.
Social software, or Web 2.0, as exemplified in websites like Facebook or
YouTube, can also be understood as massive deployments of set theory
in the form of code. Their sociality is very much dependent on set-making and set-changing operations, both in the composition of the
user interfaces and in the underlying databases that constantly
seek to attach new relations to data, to link identities and attributes.
In terms of activism and artwork, relations that can be expressed in
the form of sets and operations on sets are highly manipulable. They
can be learned relatively easily, and they are not too difficult to work
with. For instance, scripts that crawl or scrape websites have been
widely used in new media art and activism.
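The Python essay cited above makes the same point with a small path-finding routine over exactly this kind of adjacency list; a lightly commented version along those lines shows how readily such structures are traversed:

def find_path(graph, start, end, path=[]):
    # walk the adjacency list from start towards end,
    # carrying the nodes visited so far in 'path'
    path = path + [start]
    if start == end:
        return path
    for node in graph.get(start, []):
        if node not in path:              # avoid going around in circles
            newpath = find_path(graph, node, end, path)
            if newpath:
                return newpath
    return None

print(find_path(graph, 'A', 'D'))         # ['A', 'B', 'C', 'D']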
By contrast, DSP code is not based on set-making. It relies on
a different ordering of the world that lies closer to streams of signals that come from systems such as sensors, transducers, cameras,
and that propagate via radio or cable. Indeed, although it is very
widely used, DSP is not usually taught as part of computer science or software engineering. The textbooks in these areas often do
not mention DSP. The distinction between DSP and other forms of
computation is clearly defined in a textbook of DSP:
Digital Signal Processing is distinguished from other areas in
computer science by the unique type of data it uses: signals.
In most cases, these signals originate as sensory data from the
real world: seismic vibrations, visual images, sound waves, etc.
DSP is the mathematics, the algorithms, and the techniques


used to manipulate these signals after they have been converted
into a digital form. (Smith, 2004)
While it draws on some of the logical and set-based operations
found in code in general, DSP code deals with signals that usually involve some kind of sensory data – vibrations, waves, electromagnetic
radiation, etc. These signals often involve forms of rapid movement,
rhythms, patterns or fluctuations. Sometimes these movements are
embodied in physical senses, such as the movements of air involved in
hearing, or the flux of light involved in seeing. Because they are often
irregular movements, they cannot be easily captured in the forms of
movement idealised in classical mechanics – translation, rotation, etc.
Think for instance of a typical photograph of a city street. Although
there are some regular geometrical forms, the way in which light is
reflected, the way shadows form, is very difficult to describe geometrically. It is much easier, as we will see, to think of an image as a
signal that distributes light and colour in space. Once an image or
sound can be seen as a signal, it can undergo digital signal processing.
What distinguishes DSP from other algorithmic processes is its
reliance on transforms rather than functions. This is a key difference.
The ‘transform' deals with many values at once. This is important
because it means it can deal with things that are temporal or spatial,
such as sounds, images, or signals in short. This brings algorithms
much closer to sensation, and to what bodies feel. While there is
codification going on, since the signal has to be treated digitally as
discrete numerical values, it is less reducible to the sequence of steps or
operations that characterise set-theoretical coding. Here for instance
is an important section of the code used in MPEG video encoding in
the free software ffmpeg package:

figure 116: The simplest mpeg encoder

/**
 * @file mpegvideo.c
 * The simplest mpeg encoder (well, it was the simplest!).
 */
...
/* for jpeg fast DCT */
#define CONST_BITS 14

static const uint16_t aanscales[64] = {
  /* precomputed values scaled up by 14 bits */
  16384, 22725, 21407, 19266, 16384, 12873,  8867,  4520,
  22725, 31521, 29692, 26722, 22725, 17855, 12299,  6270,
  21407, 29692, 27969, 25172, 21407, 16819, 11585,  5906,
  19266, 26722, 25172, 22654, 19266, 15137, 10426,  5315,
  16384, 22725, 21407, 19266, 16384, 12873,  8867,  4520,
  12873, 17855, 16819, 15137, 12873, 10114,  6967,  3552,
   8867, 12299, 11585, 10426,  8867,  6967,  4799,  2446,
   4520,  6270,  5906,  5315,  4520,  3552,  2446,  1247
};
...
for (i = 0; i < 64; i++) {
    const int j = dsp->idct_permutation[i];
    qmat[qscale][i] = (int)((UINT64_C(1) << (QMAT_SHIFT + 14)) /
                            (aanscales[i] * qscale * quant_matrix[j]));
}
I don't think we need to understand this code in detail. There is
only one thing I want to point out: the list of ‘precomputed' numerical values is used for ‘jpeg fast DCT'. This is a typical
piece of DSP-type code. It refers to the way in which video frames are
encoded using Fast Fourier Transforms. The key point here is that
these values have been carefully worked out in advance to scale different colour and luminosity components of the image differently. The
transform, DCT (Discrete Cosine Transform), is applied to chunks of
sensation – video frames – to make them into something that can be
manipulated, stored, changed in size or shape, and circulated. Notice

that the code here is quite opaque in comparison to the graph data
structures discussed previously. This opacity reflects the sheer number of operations that have to be compressed into code in order for
digital signal processing to work.
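To see what it means that a transform ‘deals with many values at once', here is a minimal numpy sketch of the DCT family of transforms that the aanscales table above serves; the 8-sample block and its values are invented for illustration.

import numpy as np

def dct2(x):
    """Naive DCT-II: each output coefficient X[k] mixes *all* of the
    input samples x[n] -- a transform, not a pointwise function."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.cos(np.pi / N * (n + 0.5) * k))
                     for k in range(N)])

block = np.array([8.0, 8.0, 8.0, 8.0, 0.0, 0.0, 0.0, 0.0])  # a crude 'edge'
coeffs = dct2(block)   # the energy gathers in a few low-frequency coefficients

Because every coefficient depends on the whole block, production code precomputes and scales such cosine tables (as in aanscales) rather than spelling the sums out, which is one source of the opacity just described.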
Working with DSP: architecture and geography
So we can perhaps see from the two code examples above that there
is something different about DSP in comparison to the set-based
processing. DSP seems highly numerical and quantified, while the
set-based code is symbolic and logical. What is at stake in this difference? I would argue that it is something coming into the code from
outside, something that is difficult to read in the code itself because
it is so opaque and convoluted. Why is DSP code hard to understand
and also hard to write?
You will remember that I said at the outset that there are some
facets of technological cultures that resist appropriation or intervention. I think the mathematics of DSP is one of those facets. If I just
started explaining some of the mathematical models that have been
built into the contemporary world, I think it would be shoring up
or reinforcing a certain resistance to change associated with DSP, at
least in its main mathematical formalisations. I do think the mathematical models are worth engaging with, partly because they look
so different from the set-based operations found in much code today.
The mathematical models can tell us why DSP is difficult to intervene
in at a low level.
However, I don't think it is the mathematics as such that makes
digital signal processing hard to grapple with. The mathematics is an
architectural response to a geographical problem, a problem of where
code can go and be in the world. I would argue that it is the relation
between the architecture and geography of digital signal processing
itself that we should grapple with. It has something to do with the
immersion in everyday life, the proximity to sensation, the shifting
multi-sensory patterning of sociality, the movements of bodies across
variable distances, and the effervescent sense of impending change
that animates the convoluted architecture of DSP.


We could think of the situations in which DSP is commonly found.
For instance, in the background of the scenes in the daily lives of
businessmen shown in Intel's UMPC video, lie wireless infrastructures
and networks. Audiovisual media and wireless networks both use
signal processing, but for different reasons. Although they seem quite
disparate from each other in terms of how we embody them, they
actually sometimes use the same DSP algorithms. (In other work, I
have discussed video codecs.) 3
3 The case of video codecs
In the foreground of the UMPC vision stand images, video images in particular, and
to a lesser extent, sounds. They form a congested mass, created by media and information networks. People in electronic media cultures constantly encounter images in
circulation. Millions of images flash across TV, cinema and computer screens. DVDs
shower down on us. The internet is loaded down with video at the moment (Google
Video, YouTube.com, Yahoo video, etc.). A powerful media-technological imagining of
video moving everywhere, every which way, has taken root.
The growth of video material culture is associated with a key dynamic: the proliferation
of software and hardware codecs. Codecs generate linear transforms of images and
sound. Transformed images move through communication networks much more quickly
than uncompressed audiovisual materials. Without codecs, an hour of raw digital video
would need 165 CD-ROMs or take roughly 24 hours to move across a standard computer
network (10Mbit/sec ethernet). Instead of 165 CDs, we take a single DVD on which a
film has been encoded by a codec. We play it on a DVD player that also has a codec,
usually implemented in hardware. Instead of 32Mbyte/sec, between 1-10 MByte/sec
streams from the DVD into the player and then onto the television screen.
The economic and technical value of codecs can hardly be overstated. DVD, the transmission formats for satellite and cable digital television (DVB and ATSC), HDTV
as well as many internet streaming formats such as RealMedia and Windows Media,
third generation mobile phones and voice-over-IP (VoIP), all depend on video and audio codecs. They form a primary technical component of contemporary audiovisual
culture.
Physically, codecs take many forms, in software and hardware. Today, codecs nestle in
set-top boxes, mobile phones, video cameras and webcams, personal computers, media
players and other gizmos. Codecs perform encoding and decoding on a digital data
stream or signal, mainly in the interest of finding what is different in a signal and what
is mere repetition. They scale, reorder, decompose and reconstitute perceptible images
and sounds. They only move the differences that matter through information networks
and electronic media. This performance of difference and repetition of video comes at
a cost. Enormous complication must be compressed in the codec itself.
Much is at stake in this logistics from the perspective of cultural studies of technology
and media. On the one hand, codecs analyse, compress and transmit images that
fascinate, bore, fixate, horrify and entertain billions of spectators. Many of these
videos are repetitive or cliched. There are many re-runs of old television series or
Hollywood classics. YouTube.com, a video upload site, offers 13,500 wedding videos.
Yet the spatio-temporal dynamics of these images matters deeply. They open new
patterns of circulation. To understand that circulation matters deeply, we could think
of something we don't want to see, for instance, the execution of many hostages (Daniel
Pearl, Nick Berg, and others) in Jihadist videos since 2002. Islamist and ‘shock-site' web
servers streamed these videos across the internet using the low-bitrate Windows Media
Video codec, a proprietary variant of the industry-standard MPEG-4. The shock of
such events – the sight of a beheading, the sight of a journalist pleading for her life –
depends on its circulation through online and broadcast media. A video beheading lies
at the outer limit of the ordinary visual pleasures and excitations attached to video
cultures. Would that beheading, a corporeal event that takes video material culture to
its limits, occur without codecs and networked media?
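The bandwidth figures in this footnote can be replayed in a few lines of Python; the 700 MByte CD-ROM capacity is an assumption on my part.

raw_rate = 32.0                     # raw digital video, MByte/sec (figure above)
hour = raw_rate * 3600              # one hour of footage: 115,200 MByte

cds = hour / 700                    # ~165 CD-ROMs of 700 MByte each
transfer_hours = (hour * 8 / 10) / 3600   # over 10 Mbit/sec ethernet: ~25.6 hours

print(round(cds), round(transfer_hours, 1))  # 165 25.6 -- of the order of the 'roughly 24 hours' above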


While images are visible, wireless signals are relatively hard to
sense. So they are a ‘hard case' to analyse. We know they surround
us, but we hardly have any sensation of them. A tightly packed
labyrinth of digital signal processing lies between antenna and what
reaches the business travellers' eyes and ears. Much of what they
look at and listen to has passed through wireless chipsets. The chipsets,
produced by Broadcom, Intel, Texas Instruments, Motorola, Airgo or
Pico, are tiny (1 cm) fragments that support highly convoluted and
concatenated paths on nanometre scales. In wireless networks such
as Wi-Fi, Bluetooth, and 3G mobile phones with their billions of
miniaturised chipsets, we encounter a vast proliferation of relations.
What is at stake in these convoluted, compressed packages of relationality, these densely patterned architectures dedicated to wireless
communication?
Take for instance the picoChip, a latest-generation wireless digital
signal processing chip, designed by a ‘fabless' semiconductor company,
picoChip Designs Ltd, in Bath, UK. The product brief describes the
chip as:
[t]he architecture of choice for next-generation wireless. Expressly designed to address the new air-interfaces, picoChip's
multi-core DSP is the most powerful baseband processor on
the market. Ideally suited to WiMAX, HSPA, UMTS-LTE,
802.16m, 802.20 and others, the picoArray delivers ten-times
better MIPS/$ than legacy approaches. Crucially, the picoArray is easy to program, with a robust development environment
and fast learning curve. (PicoChip, 2007)
Written for electronics engineers, the key points here are that the
chip is designed for wireless communication or ‘air-interface', that

163

163

163

164

164

its purpose is to receive and transmit information wirelessly, and
that it accommodates a variety of wireless communication standards
(WiMAX, HSPA, 802.16m, etc). In this context, much of the terminology of performance and low cost is familiar. The chip combines computing performance and value for money (“ten times better
MIPS/$ – Million Instructions Per Second/$”) as a ‘baseband processor'. That means that it could find its way into many different versions of hardware being produced for applications that range from
large-scale wireless information infrastructures to small consumer
electronics applications. Only the last point is surprising in its
emphasis: “[c]rucially, the picoArray is easy to program, with a robust development environment and fast learning curve.” Why should
ease of programming be important?
And why should so many processors be needed for wireless
signal processing?
The architecture of the picoChip stands on shifting ground. We
are witnessing, as Nigel Thrift writes, “a major change in the geography of calculation. Whereas ‘computing' used to consist of centres
of calculation located at definite sites, now, through the medium of
wireless, it is changing its shape” (Thrift, 2004, 182). The picoChip's
architecture is a response to these changing geographies of calculation:
calculation is no longer carried out at definite sites, but at almost any
site. The architecture of the picoChip
is typical in the ways that it seeks to make a constant re-shaping
of computation possible, normal, affordable, accessible and programmable. This is particularly evident in the parallel character of its
architecture. Digital signal processing requires massive parallelisation: more chips everywhere, and chips that do more in parallel. The
advanced architecture of the picoChip is typical of the shape of things
more generally:
[t]he picoArray™ is a tiled processor architecture in which hundreds of processors are connected together using a deterministic
interconnect. The level of parallelism is relatively fine grained
with each processor having a small amount of local memory.
... Multiple picoArray™ devices may be connected together to
form systems containing thousands of processors using on-chip
peripherals which effectively extend the on-chip bus structure.
(Panesar, et al., 2006, 324)
The array of processors shown, then, is a partial representation, an
armature for a much more extensive diffusion of processors in wireless
digital signal processing: in wireless base stations, 3G phones, mobile
computing, local area networks, municipal, community and domestic
Wi-fi networks, in femtocells, picocells, in backhaul, last-mile or first-mile infrastructures.

figure 118 Typical contemporary wireless infrastructure DSP chip architecture (picoChip PC202)

Architectures and intensive movement
It is as if the picoChip is a miniaturised version of the urban geography that contains the many gadgets, devices, and wireless and wired
infrastructures. However, this proliferation of processors is more than
a diffusion of the same. The interconnection between these arrays of
processors is not just extensive, as if space were blanketed by an ever
finer and wider grid of points occupied by processors at work shaping
signals. As we will see, the interconnection between processors in DSP
seeks to potentialise an intensive movement. It tries to accommodate
a change in the nature of movement. Since all movement is change,
intensive movement is a change in change. When intensive movement
occurs, there is always a change in kind, a qualitative change.
Intensive movements always respond to a relational problem. The
crux of the relational problem of wirelessness is this: how can many
things (signals, messages, flows of information) occupy the same space
at the same time, yet all be individualised and separate? The flow of
information and messages promises something highly individualised
(we saw this in the UMPC video from Intel). In terms of this individualising change, the movement of images, messages and data, and the
movement of people, have become linked in very specific ways today.
The greater the degree of individualisation, the more dense becomes
the mobility of people and the signals they transmit and receive. And
as people mobilise, they drag personalised flows of communication on
the move with them. Hence flows of information multiply massively,
and networks must proliferate around those flows. The networks need
to become more dense, and imbricate lived spaces more closely in response to individual mobility.
This poses many problems for the architecture of communication infrastructure. The infrastructural problems of putting networks everywhere are increasingly, albeit only partially, solved by packing radio-frequency waves with more and more intricately modulated signal
patterns. This is the core response of DSP to the changing geography
of calculation, and to the changing media embodiments associated
with it. To be clear on this: were it not for digital signal processing,
the problems of interference, of unrelated communications mixing together, would be potentially insoluble. The very possibility of mobile
devices and mobility depends on ways of increasing the sheer density
of wireless transmissions. Radio spectrum becomes an increasingly
valuable, tightly controlled resource. For any one individual communication, not much space or time can be available. And even when
there is space, it may be noisy and packed with other people and
things trying to communicate. Different kinds of wireless signals are
constantly added to the mix. Signals may have to work their way
through crowds of other signals to reach a desired receiver. Communication does not take place in open, uncluttered space. It takes
place in messy configurations of buildings, things and people, which
obstruct waves and bounce signals around. The same signal may
be received many times through different echoes (‘multipath echo'). Because of the presence of crowds of other signals, and the limited spectrum available for any one transmission, wirelessness needs
to be very careful in its selection of paths if experience is to stream
rather than just buzz. The problem for wireless communication is to
micro-differentiate many paths and to allow them to interweave and
entwine with each other without coming into relation.
So the changing architectures of code and computation associated
with DSP in wireless networks do more, I would argue, than fit in
with the changing geography of computing. They belong to a more intensive, enveloped, and enveloping set of movements. To begin addressing this dynamic, we might say that wireless DSP is the armature
of a centre of envelopment. This is a concept that Gilles Deleuze
proposes late in Difference and Repetition. ‘Centres of envelopment'
are a way of understanding how extensive movements arise from intensive movement. Such centres crop up in ‘complex systems' when
differences come into relation:
to the extent that every phenomenon finds its reason in a difference of intensity which frames it, as though this constituted
the boundaries between which it flashes, we claim that complex
systems increasingly tend to interiorise their constitutive differences: the centres of envelopment carry out this interiorisation
of the individuating factors. (Deleuze, 2001, 256)
Much of what I have been describing as the intensive movement
that folds spaces and times inside DSP can be understood in terms
of an interiorisation of constitutive differences. An intensive movement always entails a change in the nature of change. In this case,
a difference in intensity arises when many signals need to co-habit
the same place and moment. The problem is: how can many signals
move simultaneously without colliding, without interfering with each
other? How can many signals pass by each other without needing
more space? These problems induce the compression and folding of
spaces inside wireless processing, the folding that we might understand as a ‘centre of envelopment' in action.
The Fast Fourier Transform: transformations between time and space
I have been arguing that the complications of the mathematics
and the convoluted nature of the code or hardware used in DSP,
stem from an intensive movement or constitutive difference that is
interiorised. We can trace this interiorisation in the DSP used in
wireless networks. I do not have time to show how this happens
in detail, but hopefully one example of DSP that occurs in both
video codecs and wireless networks will illustrate how this happens
in practice.
Late in the encoding process, and much earlier in the decoding
process in contemporary wireless networks, a fairly generic computational algorithm comes into action: the Fast Fourier Transform
(FFT). In some ways, it is not surprising to find the FFT in wireless networks or in digital video. Dating from the mid-1960s, FFTs
have long been used to analyse electrical signals in many scientific
and engineering settings. The FFT provides the component frequencies of
a time-varying signal or waveform. Hence, in ‘spectral analysis', the
FFT can show the spectrum of frequencies present in a signal.
The notion of the Fourier transform is mathematical and has been
known since the early 19th century: it is an operation that takes
an arbitrary waveform and turns it into a set of periodic waves (sinusoids) of different frequencies and amplitudes. Some of these sinusoids
make more important contributions to the overall shape of the waveform
than others. Added together again, these sine or cosine waves should
exactly re-constitute the original signal. Crucially, a Fourier transform can turn something that varies over time (a signal) into a set of
simple components (sine or cosine waves) that do not vary over time.
Put more technically, it switches between ‘time' and ‘frequency' domains. Something that changes in time, a signal, becomes a set of
distinct components that can be handled separately. 4
In a way, this analysis of a complex signal into simple static component signals means that DSP does use the set-based approaches I
described earlier. Once a complex signal, such as an image, has been
analysed into a set of static components, we can imagine code that

4 Humanities and social science work on the Fast Fourier Transform is hard to find, even
though the FFT is the common mathematical basis of contemporary digital image,
video and sound compression, and hence of many digital multimedia (in JPEG, MPEG
files, in DVDs). In the early 1990s, Friedrich Kittler wrote an article that discussed
it (Kittler, 1993). His key point was largely to show that there is no realtime
in digital signal processing. The FFT works by defining a sliding window of time for
a signal. It treats a complicated signal as a set of blocks that it lifts out of the time
domain and transforms into the frequency domain. The FFT effectively plots an event
in time as a graph in space. The experience of realtime is epiphenomenal. In terms of
the FFT, a signal is always partly in the future or the past. Although Kittler was not
referring to the use of the FFT in wireless networks, the same point applies – there is no
realtime communication. However, while this point about the impossibility of realtime
calculation was important to make during the 1990s, it seems well-established now.

would select the most important or relevant components. This is precisely what happens in video and sound codecs such as MPEG and
MP3.
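The selection of important components can be sketched in a few lines of Python with numpy. The sketch is illustrative rather than any codec's actual procedure: a noisy signal is moved into the frequency domain, only the strongest components are kept (the choice of 8 is arbitrary; real codecs use perceptual models rather than raw magnitude), and an approximation is reconstituted from them.

    import numpy as np

    # Sample a one-second signal: two steady tones buried in noise.
    t = np.linspace(0.0, 1.0, 1024, endpoint=False)
    signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
    signal += 0.2 * np.random.randn(t.size)

    # The FFT moves the signal from the time domain to the frequency domain.
    spectrum = np.fft.rfft(signal)

    # Keep only the 8 strongest frequency components; zero the rest.
    keep = np.argsort(np.abs(spectrum))[-8:]
    compressed = np.zeros_like(spectrum)
    compressed[keep] = spectrum[keep]

    # The inverse FFT reconstitutes an approximation from what was kept.
    approx = np.fft.irfft(compressed, n=signal.size)
    error = np.sqrt(np.mean((signal - approx) ** 2))
    print(f"RMS error keeping 8 of {spectrum.size} components: {error:.3f}")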
The FFT treats sounds and images as complicated superimpositions of waveforms. The envelope of a signal becomes something that
contains many simple signals. It is interesting that wireless networks
tend to use this process in reverse. They deliberately take a well-separated and discrete set of signals – a digital datastream – and turn it
into a single complex signal. In contrast to the normal uses of the FFT in
separating important from insignificant parts of a signal, in wireless
networks, and in many other communications settings, the FFT is used to
put signals together in such a way as to contain them in a single envelope. The FFT is found in many wireless computation algorithms
because it allows many different digital signals to be put together on
a single wave and then extracted from it again.
Why would this superimposition of many signals onto a single complex waveform be desirable? Would it not increase the possibilities of
confusion or interference between signals? In some ways the FFT is
used to slow everything down rather than speed it up. Rather than
simply spatialising a duration, the FFT as used in wireless networks
defines a different way of inhabiting the crowded, noisy space of electromagnetic radiation. Wireless transmitters are better at inhabiting
crowded signal spectrum when they don't try to separate themselves
off from each other, but actually take the presence of other transmitters into account. How does the FFT allow many transmitters to
inhabit the same spectrum, and even use the same frequencies?
The name of this technique is OFDM (Orthogonal Frequency Division Multiplexing). OFDM spreads a single data stream coming
from a single device across a large number of sub-carrier signals (52
in IEEE 802.11a/g). It splits the data stream into dozens of separate signals of slightly different frequency that together evenly use
the whole available radio spectrum. This is done in such a way that
many different transmitters can be transmitting at the same time,
on the same frequency, without interfering with each other. The advantage of spreading a single high speed data stream across many
signals (wideband) is that each individual signal can carry data at a
much slower rate. Because the data is split into 52 different signals,
each signal can be much slower (roughly 1/50th of the rate). That means each bit of data
can be spaced apart more in time. This has great advantages in urban
environments where there are many obstacles to signals, and signals
can reflect and echo often. In this context, the slower the data is
transmitted, the better.
At the transmitter, an inverse FFT (IFFT) is used to re-combine
the 50 signals into one signal. That is, it takes the 50 or so different
sub-carriers produced by OFDM, each of which has a single slightly
different, but carefully chosen frequency, and combines them into one
complex signal that has a wide spectrum. That is, it fills the available
spectrum quite evenly because it contains many different frequency
components. The waveform that results from the IFFT looks like
'white noise': it has no remarkable or outstanding tendency whatsoever, except to a receiver synchronised to exactly the right carrier
frequency. At the receiver, this complex signal is transformed, using the FFT, back into a set of 50 separate data streams, which are then
reconstituted into a single high speed stream.
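The round trip can be sketched with numpy. This toy omits everything a real 802.11a/g transceiver needs (cyclic prefix, pilot carriers, channel equalisation, noise), and its bin layout is simplified for illustration, but it shows the IFFT/FFT symmetry just described.

    import numpy as np

    N_SUB = 52        # sub-carriers, as in IEEE 802.11a/g
    FFT_SIZE = 64     # the 802.11a/g FFT size; unused bins stay empty

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, N_SUB)

    # Map each bit to a BPSK symbol (+1/-1) on its own sub-carrier.
    carriers = np.zeros(FFT_SIZE, dtype=complex)
    carriers[1:N_SUB + 1] = 2 * bits - 1   # leave bin 0 (DC) empty

    # Transmitter: the inverse FFT combines all sub-carriers into one
    # complex, noise-like time-domain waveform.
    waveform = np.fft.ifft(carriers)

    # Receiver: the FFT separates the waveform back into sub-carriers.
    recovered = np.fft.fft(waveform)
    decoded = (recovered[1:N_SUB + 1].real > 0).astype(int)

    assert np.array_equal(bits, decoded)
    print("all", N_SUB, "sub-carrier bits recovered")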
Even if we cannot come to grips with the techniques of transformation used in DSP in any great detail, I hope that one point stands
out. The transformation involves changes in kind. Data does not
simply move through space. It changes in kind in order to move
through space, a space whose geography is understood as too full of
potential relations.
Conclusion
A few points in conclusion:
a. The spectrum of different wireless-audiovisual devices competing
to do more or less the same thing is a reproduction of the
same. Extensive movements associated with wireless networks and
digital video occur in various forms: firstly in the constant enveloping of spaces by wireless signals, and secondly in the dense
population of wireless spectrum by competing, overlapping signals, vying for market share in highly visible, well-advertised campaigns to dominate spectrum while at the same time allowing for
the presence of many others.
b. Actually, in various ways, wirelessness puts the very primacy of
extension as space-making in question. Signals seem to be able to
occupy the same space at the same time, something that should
not happen in space as usually understood. We can understand
this by re-conceptualising movement as intensive. Intensive movement occurs in multiple ways. Here I have emphasised the constant folding inwards or interiorisation of heterogeneous movements via algorithms used in digital signal processing. Intensive
movement ensues when a centre of envelopment begins to
interiorise differences. While these interiorised spaces are computationally intensive (as exemplified by the picoChip's massive
processing power), the spaces they generate are not perceived as
calculated, precise or rigid. Wirelessness is a relatively invisible,
messy, amorphous, shifting set of depths and distances that lacks
the visible form and organisation of other entities produced by
centres of calculation (for instance, the shape of a CAD-designed
building or car). However, similar processes occur around sound
and images through DSP. In fact, different layers of DSP are increasingly coupled in wireless media devices.
c. Where does this leave the centre of envelopment? The cost of
this freeing up of movement, of mobility, seems to me to be an
interiorisation of constitutive differences, not just in DSP code
but in the perceptual fields and embodiment of the mobile user.
The irony of DSP is that it uses code to quantify sensations
or physical movements that lie at the fringes of representation
or awareness. We can't see DSP as such, but it supports our
seeing and moving. It brings code quite close to the body. It
can work with audio and images in ways that bring them much
closer to us. The proliferation of mobile devices such as MP3 players and
digital cameras is one consequence of that. Yet the price DSP
pays for this proximity to sensation, to sounds, movement, and
others, is the envelopment I have been describing. DSP acts as
a centre of envelopment, as something that tends to interiorise
intensive movements, the changing nature of change, the intensive
movements that give rise to it.
d. This brings us back to the UMPC video: it shows two individuals.
Their relation can never, it seems, get very far. The provision
of images, sound and wireless connectivity has come so far that
they hardly need to encounter each other at all. There is something
intensely monadological here: DSP is heavily engaged in furnishing the interior walls of the monad, and with orienting the monad
in relation to other monads, but making sure that nothing much
need pass between them. So much has already been pre-processed
between them that nothing much need happen between them. They already
have a complete perception of their relation to the other.
e. On a final constructive note, it seems that there is room for contestation here. The question is how to introduce the set-based
code processes that have proven productive in other areas into
the domain of DSP. What would that look like? How would it be
sensed? What could it do to our sensations of video or wireless
media?

References
Deleuze, Gilles. Difference and Repetition. Translated by Paul Patton. Athlone Contemporary European Thinkers. (London; New York: Continuum, 2001).
Panesar, Gajinder, Daniel Towner, Andrew Duller, Alan Gray, and Will Robbins. ‘Deterministic Parallel Processing', International Journal of Parallel Programming 34, no. 4 (2006): 323-41.
PicoChip. ‘Advanced Wireless Technologies', (2007). http://www.picochip.com/solutions/advanced_wireless_technologies
PicoChip. ‘PC202 Integrated Baseband Processor Product Brief', (2007). http://www.picochip.com/downloads/03989ce88cdbebf5165e2f095a1cb1c8/PC202_product_brief.pdf
Smith, Steven W. The Scientist and Engineer's Guide to Digital Signal Processing. (California Technical Publishing, 2004).
Thrift, Nigel. ‘Remembering the Technological Unconscious by Foregrounding Knowledges of Position', Environment & Planning D: Society & Space 22, no. 1 (2004): 175-91.

ELPUEBLODECHINA A.K.A.
ALEJANDRA MARIA PEREZ NUNEZ
License: ??
EN

El Curanto
Curanto is a traditional method of cooking in the ground by the
people of Chiloé, in the south of Chile. This technique is practiced
throughout the world under different names. What follows is a summary of the ELEMENTS and steps enunciated and executed during el
curanto, which was performed in the centre of Brussels during V/J10.

Recipe

For making a curanto you need to take the following steps and arrange the following ELEMENTS:

? OVEN, a hole in the ground filled with fire resistant STONES.

This image is repeated in many different cultures. It might be an ancient way of cooking. What does this underground cooking imply? Most of all, it takes a lot of TIME.

? Find a way to get a good deal at the market to get fresh MUSSELS for x people. It helps to have a CHARISMATIC WOMAN do it for you.

figure A a slow cooking OVEN (Free Libre Open Source Curanto in the center of Bruxelles)

? A BRIGHT WOMAN FRIEND to find out about BELGIAN PORPHYRY and tell you about the mining carrière in Quenast (Hainaut).

? A CAMERA WOMAN to hand you a MARBLE STONE to put inside the OVEN.

figure B a TERRAIN VAGUE in the centre of Brussels and a NEIGHBOUR willing to let you in.

? WENDY or some other MULTITASKING WOMAN who is extremely PATIENT and HUMORISTIC and who helps you to focus and takes pictures.

? FEMKE and PETER or some ECCENTRIC COUPLE that TRUSTS the carrier of the performance and will tell their STORY about TRAVELING MUSSELS.

figure C A HOLE in the ground, 1.5 m deep, 1 m diameter. (It makes me think of a hole in my head.)

A hole in the ground reminds me of the unknown. FOOD cooked inside the ground relates to ideas, creativity and GIFT. It helps to have GUILLAUME or a strong and positive MAN to help you dig the hole. A second PERSON would be of great help, especially if, while digging, he would talk about taxonomies of immaterial labour.

Mussels eaten in the centre of Brussels are grown in Ireland and immersed in Dutch seawater and are then officially called Dutch. After 2 days in Dutch water, they are ready to be exported to Brussels and become Belgian mussels that are in fact Dutch-Irish.

figure D Original curanto STONES are round fire resistant stones. I couldn't find them in Brussels. The only round and granite stones were very expensive design ones. In Chile you just dig a hole anywhere and find them. The only fire resistant rock in Brussels was the STREET itself.

figure E A good BUCKET to scoop the rain out of your newly dug HOLE

? Square shaped rocks collected randomly throughout the city by means of appropriation. Streets are made of a type of granite rock, which might be Belgian porphyry. Note that there is a message on one of the stones we picked up in the centre. It reads 'watch your head'.

figure F A tent to protect your FIRE from random RAIN

figure G LAIA or some psychonaut, hierophant friend. Should be someone who is able to transmit confidence to the execution of el curanto and who will keep you company while you are appropriating stones in Brussels.

? A good BOUILLON made of cheap white wine and concentrated bio vegetables and spices is one of the secrets.

figure H You need to find MOAM or some Palestinian fellow to help you keep the fire burning.

figure I GIRL that will randomly come to the place with her MOTHER and speak in Spanish to the carrier of the performance. She will play the flute, give the OVEN some orders to cook well and sing improvised SONGS. She and some other children will play around by digging holes and making their own CURANTO.

figure J A big FIRE to heat up the wet cold ground of Brussels

figure K RED HOT COAL

figure L Using some cabbage leaves to cover the RED HOT COAL to place the FOOD on top of

figure M A SACK CLOTH to cover the food and to retain STEAM for cooking.

figure N DIDIER or some PANIC COOK MAN who is happy to SHARE his expert knowledge and willing to join in the performance.
? HOLE

? MUSSELS

? ONIONS, GESTURES and SPECULATIONS. (figure O)

? While reading VALIS, the carrier of the performance will become reverend TIMOTHY ARCHER and read about TIME (something that has mainly been forgotten in Palestine).

figure P el curanto is to be made together with PEOPLE and for EVERYONE.

? WOOD found in a dismantled house. It helps to find a ride to transport it.

? SPICES, rosemary and bay leaf.

? MICHAEL or some DEDICATED friend that will assist with the execution of the performance and keep the pictures of it afterwards for months.

figure Q You can eat from the shell by using your hands or a little WOODEN SPOON. If you want to eat later, take the mussels out of their shell, add OLIVE OIL, make a spread and keep it cold in a jar. Find QUEER couples to savour it with BREAD while talking about SEX.
? FIRE

? RED HOT COAL

? FOOD

? NOISE from the cooking MUSSELS. It helps to use 'hot' PIEZO MICROPHONES.

Here TIME turns into space. “Time can be overcome”, Mircea Eliade wrote. That's what it's all about.

The great mystery of Eleusis, of the Orphics, of the early Christians, of Sarapis, of the Greco-Roman mystery religions, of Hermes Trismegistos, of the Renaissance Hermetic alchemists, of the Rose Cross Brotherhood, of Apollonius of Tyana, of Simon Magus, of Asklepios, of Paracelsus, of Bruno, consists of the abolition of time. The techniques are there. Dante discusses them in the Comedy. It has to do with the loss of amnesia; when forgetfulness is lost, true memory spreads out backward and forward, into the past and into the future, and also, oddly, into alternate universes; it is orthogonal as well as linear. 1

1 Philip K. Dick, Valis (1972)

ALICE CHAUCHAT, FRÉDÉRIC GIES
License: Attribution-Noncommercial-No Derivative Work
EN

Praticable
Praticable is a collaborative research project between several artists
(currently: Alice Chauchat, Frédéric de Carlo, Frédéric Gies, Isabelle
Schad and Odile Seitz).
Praticable proposes itself as a horizontal work structure, which
brings research, creation, transmission and production structure into
relation with each other. This structure is the basis for the creation
of a variety of performances by either one or several of the project's
participants. In one way or another, these performances start from
the exploration of body practices, leading to a questioning of their representation. More concretely, Praticable takes the form of collective
periods of research and shared physical practices, both of which are
the basis for various creations. These periods of research can either
be independent of the different creation projects or integrated within
them.
During Jonctions/Verbindingen 10, Alice Chauchat and Frédéric
Gies gave a workshop for participants dealing with different ‘body
practices'. On the basis of Body-Mind Centering (BMC) techniques,
the body as a locus of knowledge production was made tangible. The
notation of the Dance performance with which Frédéric Gies concluded the day is reproduced in this book and published under an
open license.

figure 120 Workshop for participants with different body practices at V/J10
figure 121 The body as a locus of knowledge production was made tangible
figure 122
figure 123

Dance (Notation)
20 sec.
31. INTERCELLULAR FLUID
Initiate movement in your intercellular fluid. Start slowly and
then put more and more energy
and speed in your movement, using intercellular fluid as a pump
to make you jump.

20 sec.
32. VENOUS BLOOD
Initiate movement in your venous
blood, rising and falling and following its waves.

20 sec.
33. VENOUS BLOOD
Initiate movement in your venous blood, slowing down progressively.

Less than 5 sec.
34. TRANSITION
Make visible in your movement a
transition from venous blood to
cerebrospinal fluid. Finish in the
same posture you chose to start
PART 3.

1 min.
35. EACH FLUID
Go through each fluid quality you
have moved with since the beginning of PART 3. The 1st one has
to be cerebrospinal fluid. After
this one, the order is free.

61. ALL GLANDS
Stand up slowly, building your
vertical axis from coccygeal body
to pineal gland. Use this time to
bound with earth through your
feet, as if you were growing roots.

INSTRUMENTAL (during the voice echo)
Down, down, down in your heart
find, find, find the secret
62. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
SMALL PERIMETER
Turn, turn, turn your head around
63. MAMILLARY BODIES
Turn and turn your head around,
initiating this movement in
mamillary bodies. Let your head
drive the rest of your body into
turns.

Baby we can do it
We can do it alright
64. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
Do you believe in love at first sight
It's an illusion, I don't care
Do you believe I can make you feel better
Too much confusion, come on over here
65. HEART BODY
Keep on dancing as if you were
dancing in a club and initiate
movements in your heart body,
connecting with your forearms
and hands.

License: Attribution-Noncommercial-No Derivative Work

Mutual Motions Video Library
To be browsed, a vision to be displaced

figure 126

figure 125

Wearing the video library, performer Isabelle Bats presents a selection of films related to the themes of V/J10. As a living memory, the
discs and media players in the video library are embedded in a dress
designed by artists collective De Geuzen. Isabelle embodies an accessible interface between you (the viewer) and the videos. This human
interface allows for a mutual relationship: viewing the films influences
the experience of other parts of the program, and the situation and
context in which you watch the films play a role in experiencing and
interpreting the videos. A physical exchange between existing imagery, real-time interpretation, experiences and context emerges as
a result.
The V/J10 video library collects excerpts of performance and dance
video art, and (documentary) film, which reflect upon our complex
body–technique relations. Searching for the indicating, probing, disturbing or subverting gesture(s) in the endless feedback loop between
technology, tools, data and bodies, we collected historical as well as
contemporary material for this temporary archive.

Modern Times or the Assembly Line
Reflects the body in work environments, which are structured by
technology, ranging from the pre-industrial manual work with analogue
tools, to the assembly line, to postmodern surveillance configurations.
24 Portraits
Excerpt from a series of documentary portraits by Alain Cavalier, FR, 1988-1991.
24 Portraits is a series of short documentaries paying tribute to women's manual work. The intriguing and sensitive portraits of 24 women working in different trades reveal the intimacy of bodies and their working tools.

Humain, trop humain
Quotes from a documentary by Louis
Malle, FR, 1972.
A documentary filmed at the Citroën
car factory in Rennes and at the 1972
Paris auto show, documenting the monotonous daily routines of working the
assembly lines, the close interaction
between bodies and machines.

Performing the Border
Video essay by Ursula Biemann, CH,
1999, 45 min.
“Performing the Border is a video
essay set in the Mexican-U.S. border town Ciudad Juarez, where the
U.S. industries assemble their electronic and digital equipment, located
right across from El Paso, Texas. The
video discusses the sexualization of
the border region through labour division, prostitution, the expression of
female desires in the entertainment industry, and sexual violence in the public sphere. The border is presented
as a metaphor for marginalization and
the artificial maintenance of subjective boundaries at a moment when
the distinctions between body and machine, between reproduction and production, between female and male,
have become more fluid than ever.”
(Ursula Biemann)
http://www.geobodies.org

Maquilapolis (city of factories)
A film by Vicky Funari and Sergio
De La Torre, Mexico/U.S.A., 2006, 68
min.

Carmen works the graveyard shift in
one of Tijuana's maquiladoras, the
multinationally-owned factories that
came to Mexico for its cheap labour.
After making television components
all night, Carmen comes home to a
shack she built out of recycled garage
doors, in a neighbourhood with no
sewage lines or electricity. She suffers
from kidney damage and lead poisoning from her years of exposure to toxic
chemicals. She earns six dollars a day.
But Carmen is not a victim. She is a
dynamic young woman, busy making
a life for herself and her children.
As Carmen and a million other
maquiladora workers produce televisions, electrical cables, toys, clothes,
batteries and IV tubes, they weave
the very fabric of life for consumer nations. They also confront labour violations, environmental devastation and
urban chaos – life on the frontier of
the global economy. In Maquilapolis Carmen and her colleague Lourdes reach beyond the daily struggle for
survival to organize for change: Carmen takes a major television manufacturer to task for violating her labour
rights, Lourdes pressures the government to clean up a toxic waste dump
left behind by a departing factory.
As they work for change, the world
changes too: a global economic crisis
and the availability of cheaper labour
in China begin to pull the factories
away from Tijuana, leaving Carmen,
Lourdes and their colleagues with an
uncertain future.
A co-production of the Independent
Television Service (ITVS), project of
Creative Capital.
http://www.maquilapolis.com

Practices of everyday life
Everyday life as the place of a performative encounter between bodies
and tools, from the U.S.A. of the 70s to contemporary South Africa.

Saute ma ville
Chantal Akerman, B, 1968, 13 min.
A girl returns home happily. She locks herself up in her kitchen and messes up the domestic world. In her first film, Chantal Akerman explores a scattered form of being, where the relationship with the controlled human world literally explodes. Abolition of oneself, explosion of oneself.

Semiotics of the Kitchen
Video by Martha Rosler, U.S.A., 1975, 05:30 min.
Semiotics of the Kitchen adopts the form of a parodic cooking demonstration in which, Rosler states, “An anti-Julia Child replaces the domesticated ‘meaning' of tools with a lexicon of rage and frustration.” In this performance-based work, a static camera is focused on a woman in a kitchen. On a counter before her are a variety of utensils, each of which she picks up, names and proceeds to demonstrate, but with gestures that depart from the normal uses of the tool. In an ironic grammatology of sound and gesture, the woman and her implements enter and transgress the familiar system of everyday kitchen meanings – the securely understood signs of domestic industry and food production erupt into anger and violence. In this alphabet of kitchen implements, Rosler states that, “When the woman speaks, she names her own oppression.”
“I was concerned with something like the notion of ‘language speaking the subject', and with the transformation of the woman herself into a sign in a system of signs that represent a system of food production, a system of harnessed subjectivity.” (Martha Rosler)

Choreography
Video installation preview by Anke Schäfer, NL/South Africa, 13:07 min (loop), 2007.
Choreography reflects on the notion ‘Armed Response' as an inner state of mind. The split screen projection shows the movements of two women commuting to their work. On the one side, the German-South African Edda Holl, who lives in the rich Northern suburbs of Johannesburg. Her search for a safe journey is characterized by electronic security systems, remote controls, panic buttons, her constant cautiousness, the reassuring glances in the tinted car windows. On the other side, you see the African-South African Gloria Fumba, who lives in Soweto and whose security techniques are very basic: clutching her handbag to her body, the way she queues for the bus, avoiding going home alone when it's dark. A classical continuity

editing, as seen in fiction films, suggests at first a narrative storyline, but is soon interrupted by moments of pause. These pauses represent the desires of both women to break with the safety mechanism that motivates their daily movements.
http://www.livemovie.org

Television
Ximena Cuevas, Mexico, 1999, 2 min.
“The vacuum cleaner becomes the device of the feminist ‘liberation', or the monster that devours us.” (Insite 2000 program, San Diego Museum of Art)

Perform the script, write the score
Considers dance and performance as knowledge systems where movement and data interact. With excerpts of performance documents,
interviews and (dance) films. But also the script, the code, as a system
of perversion, as an explorative space for the circulation of bodies.
William Forsythe's works
Choreography can be understood as
writing moving bodies into space, a
complex act of inscription, which is
situated on the borderline between
creating and remembering, future and
past. Movement is prescribed and is
passing at the same time. It can be
inscribed into the visceral body memory through constant repetition, but
it is also always undone:
As Laurie Anderson says: “You're walking. And you don't always realize it, but you're always falling. With each step you fall forward slightly. And then catch yourself from falling. Over and over, you're falling. And then catching yourself from falling.” (Quoted after Gabriele Brandstetter, ReMembering
the Body)
William Forsythe, for instance, considers classical ballet as a historical
form of a knowledge system loaded

with ideologies about society, the self,
the body, rather than a fixed set
of rules, which simply can be implemented. An arabesque is a platonic ideal for him, a prescription,
but it can't be danced: “There is
no arabesque, there is only everyone's arabesque.” His choreography
is concerned with remembering and
forgetting: referencing classical ballet, creating a geometrical alphabet,
which expands the classical form, and
searching for the moment of forgetfulness, where new movement can arise.
Over the years, he and his company
developed an understanding of dance
as a complex system of processing information with some analogies to computer programming.

Chance favours the prepared mind
Educational dance film, produced by Vlaams Theaterinstituut, Ministerie van Onderwijs dienst Media and Informatie, dir. Anne Quirynen, 1990, 25 min.
Chance favours the prepared mind features discussions and demonstrations by William Forsythe and four Frankfurt Ballet Dancers about their understanding of movement and their working methods: “Dance is like writing or drawing, some sort of inscription.” (William Forsythe)

The way of the weed
Experimental dance film featuring
William Forsythe, Thomas McManus
and dancers of the Frankfurt Ballet,
An-Marie Lambrechts, Peter Missotten and Anne Quirynen, soundtrack:
Peter Vermeersch, 1997, 83 min.
In this experimental dance film, investigator Thomas is dropped in a desert
in 7079, not only to investigate the
growth movements of the plant life
there, but also the life's work of the
obscure scientist William F. (William
Forsythe), who has achieved numerous insights and discoveries on the
growth and movement of plants. This
knowledge is stored in the enormous
data bank of an underground laboratory. It is Thomas's task to hack into
his computer and check the professor's secret discoveries. His research
leads him into the catacombs of a
complex building, where he finds people stored in cupboards in a comatose
state. They are loaded with professor F.'s knowledge of vegetation. He
puts the ‘people-plants' into a large
transparent pool of water and notices
that in the water the ‘samples' come
to life again... A complex reflection
on (body) memory, (digital) archives
and movement as repetition and interference.

Rehearsal Last Supper
Video installation preview by Anke
Schäfer, NL/South Africa, 16:40 min.
(loop), 2007.
The work Rehearsal Last Supper combines a kind of ‘Three Stooges' physical, slapstick-style comedy with far more serious subject matter such
as abuse, gender violence, and the
general breakdown of family relationships. It's a South African and mixed
couple re-enactment of a similar scene
that Bruce Nauman realized in the 70s
with a white, middle-aged man and
woman.
The experience, the ‘Gestalt' of the
experienced violence, the frustration
and the unwilling or even forced internalization are felt to the core of the
voice and the body. Humour can help
to express the suppressed and to use
your pain as power.
Actors: Nat Ramabulana, Tarryn Lee,
Megan Reeks, Raymond Ngomane
(from Wits University Drama department), Kekeletso Matlabe, Lebogang
Inno, Thabang Kwebu, Paul Noko
(from Market Theatre Laboratory).
http://www.livemovie.org

Nest Of Tens
Miranda July, U.S.A., 1999, 27 min.
Nest Of Tens comprises four alternating stories, which reveal mundane yet personal methods of control.
These systems are derived from intuitive sources. Children and a retarded
adult operate control panels made out
of paper, lists, monsters, and their
own bodies.
“A young boy, home alone, performing

a bizarre ritual with a baby; an uneasy, aborted sexual flirtation between
a teenage babysitter and an older man;
an airport lounge encounter between a
businesswoman (played by July) and a
young girl. Linked by a lecturer enumerating phobias in a quasi-academic
seminar, these three perverse, unnerving scenarios involving children and
adults provide authentic glimpses into
the queasy strangeness that lies behind the everyday.” (New York Video
Festival, 2000)

In the field of players
Jeanne Van Heeswijk & Marten Winters, 2004, NL
Duration: 25.01.2004 – 31.01.2004
Location: TENT.Rotterdam
Participants: 106 through casting, 260
visitors of TENT.
Together with artist Marten Winters,
Van Heeswijk developed a ‘game:set'.
In cooperation with graphic designer
Roger Teeuwen, they marked out a
set of lines and fields on the ground.
Just like in a sporting venue, these
lines had no meaning until used by the
players. The relationship between the
players was revealed by the rules of the
game.
Designer Arienne Boelens created special game cards that were handed out
during the festival by the performance
artists Bliss. Both Bliss and the cards
turned up all over the festival, showing
up at every hot spot or special event.
Through these game cards people were

invited to fulfil the various roles of
the game – like ‘Round Miss' (the
girl who walks around the ring holding up a numbered card at the start
of each round at boxing matches),
‘40-plus male in (high) cultural position', ‘Teen girl with star ambitions',
‘Vital 65-plus'. But even ‘Whisperer',
and ‘Audience' were specific roles.

Writing Desire
Video essay by Ursula Biemann, CH,
2000, 25 min.

Writing Desire is a video essay on
the new dream screen of the Internet, and its impact on the global circulation of women's bodies from the
‘Third World' to the ‘First World'. Although underage Philippine ‘pen
pals' and post-Soviet mail-order brides
have been part of the transnational
exchange of sex in the post-colonial
and post-Cold War marketplace of desire before the digital age, the Internet has accelerated these transactions.
The video provides the viewers with
a thoughtful meditation on the obvious political, economic and gender inequalities of these exchanges by simulating the gaze of the Internet shopper
looking for the imagined docile, traditional, pre-feminist, but Web-savvy
mate.
http://www.geobodies.org


INÈS RABADAN
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN

Does the repetition of a gesture irrevocably
lead to madness?

figure 127 Screening Modern Times at V/J10

A personal introduction to Modern Times
(Charles Chaplin, 1936)
figure 128

One of the most memorable moments of Modern Times is the one
where the tramp goes mad after having spent the whole day screwing
bolts on the assembly line. He is free: neither husband, nor worker,
nor follower of some kind of movement, nor even politically engaged.
His gestures are burlesque responses to the adversity in his life, or
just plain ‘exuberant'. Through the interaction with the machine, however, he completely goes off the rails and ends up in prison.
Inès Rabadan made two short films in which a female protagonist
is confined by the fast-paced work of the assembly line. Tragically
and mercilessly, the machine changes the woman and reduces her to
a mechanical gesture – a gesture in which she sometimes takes pride,
precisely in order not to lose her sanity. Or else, she really goes mad,
ruined by the machine, eventually managing to free herself.

figure 129

figure 130


MICHAEL TERRY
License: Free Art License
EN

Data analysis as a discourse

figure 131 Michael Terry in between LGM sessions

An interview with Michael Terry
Michael Terry is a computer scientist working at the Human Computer Interaction Lab of the University of Waterloo, Canada. His
main research focus is on improving usability in open source software, and ingimp is the first result of that work.
In a Skype conversation that was live broadcast in La Bellone during Verbindingen/Jonctions 10, we spoke about ingimp, a clone of the
popular image manipulation programme Gimp, but with an important difference. Ingimp allows users to record data about their usage
to a central database, and subsequently makes this data available
to anyone.
At the Libre Graphics Meeting 2008 in Wroclaw, just before Michael
Terry presents ingimp to an audience of Gimp developers and users,
Ivan Monroy Lopez and Femke Snelting meet up with Michael Terry
again to talk more about the project and about the way he thinks
data analysis could be done as a form of discourse.

figure 132 Interview at Wroclaw

Femke Snelting (FS) Maybe we could start this face-to-face conversation with a description of the ingimp project you are developing
and – what I am particularly interested in – why you chose to work
on usability for Gimp?
Michael Terry (MT) So the project is ‘ingimp', which is an instrumented version of Gimp, it collects information about how the
software is used in practice. The idea is you download it, you install
it, and then with the exception of an additional start up screen, you
use it just like regular Gimp. So, our goal is to be as unobtrusive as
possible to make it really easy to get going with it, and then to just
forget about it. We want to get it into the hands of as many people
as possible, so that we can understand how the software is actually
used in practice. There are plenty of forums where people can express
their opinions about how Gimp should be designed, or what's wrong
with it, there are plenty of bug reports that have been filed, there
are plenty of usability issues that have been identified, but what we
really lack is some information about how people actually apply this
tool on a day to day basis. What we want to do is elevate discussion
above just anecdote and gut feelings, and to say, well, there is this
group of people who appear to be using it in this way, these are the
characteristics of their environment, these are the sets of tools they
work with, these are the types of images they work with and so on,
so that we have some real data to ground discussions about how the
software is actually used by people.
You asked me now why Gimp? I actually used Gimp extensively
for my PhD work. I had these little cousins come down and hang
out with me in my apartment after school, and I would set them up
with Gimp, and quite often they would start off with one picture,
they would create a sphere, a blue sphere, and then they played with
filters until they got something really different. I would turn to them
looking at what they had been doing for the past twenty minutes,
and would be completely amazed at the results they were getting
just by fooling around with it. And so I thought, this application
has lots and lots of power; I'd like to use that power to prototype
new types of interface mechanisms. So I created JGimp, which is
a Java-based extension for the 1.0 Gimp series that I can use as a
back-end for prototyping novel user interfaces. I think that it is a
great application, there is a lot of power to it, and I had already an
investment in its code base, so it made sense to use that as a platform
for testing out ideas of open instrumentation.
FS: What is special about ingimp, is the fact that the data you
collect, is equally free to use, run, study and distribute, as the software
you are studying. Could you describe how that works?

MT: Every bit of data we collect, we make available: you can go to
the website, you can download every log file that we have collected.
The intent really is for us to build tools and infrastructure so that the
community itself can sustain this analysis, can sustain this form of
usability. We don't want to create a situation where we are creating
new dependencies on people, or where we are imposing new tasks on
existing project members. We want to create tools that follow the
same ethos as open source development, where anyone can look at
the source code, where anyone can make contributions, from filing
a bug to doing something as simple as writing a patch, where they
don't even have to have access to the source code repository, to make
valuable contributions. So importantly, we want to have a really low
barrier to participation. At the same time, we want to increase the
signal-to-noise ratio. Yesterday I talked with Peter Sikking, an information architect working for Gimp, and he and I both had this
experience where we work with user interfaces, and since everybody
uses an interface, everybody feels they are an expert, so there can be
a lot of noise. So, not only did we want to create an open environment for collecting this data, and analysing it, but we also wanted to
increase the chance that we are making valuable contributions, and
that the community itself can make valuable contributions. Like I
said, there is enough opinion out there. What we really need to do
is to better understand how the software is being used. So, we have
made a point from the start to try to be as open as possible with
everything, so that anyone can really contribute to the project.
FS: Ingimp has been running for a year now. What are you finding?
MT: I have started analysing the data, and I think one of the things
that we realised early on is that it is a very rich data set; we have lots
and lots of data. So, after a year we've had over 800 installations, and
we've collected about 5000 log files, representing over half a million
commands, representing thousands of hours of the application being
used. And one of the things you have to realise is that when you have
a data set of that size, there are so many different ways to look at it
that my particular perspective might not be enough. Even if you sit
someone down, and you have him or her use the software for twenty
minutes, and you videotape it, then you can spend hours analysing
just those twenty minutes of videotape. And so, I think that one of
the things we realised is that we have to open up the process so that
anyone could easily participate. We have the log files available, but
they really didn't have an infrastructure for analysing them. So, we
created this new piece of software called ‘Stats Jam', an extension
to MediaWiki, which allows anyone to go to the website and embed
SQL-queries against the ingimp data set and then visualise those
results within the Wiki text. So, I'll be announcing that today and
demonstrating that, but I have been using that tool now for a week
to complement the existing data analysis we have done.
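[By way of illustration only: the interview does not describe the ingimp database schema, so the table and column names below are hypothetical. A Stats Jam style query, run here from Python against a local copy of the parsed logs, might look like this.]

    import sqlite3

    # Hypothetical layout: assumes the XML log files have already been
    # parsed into a local SQLite database with a `commands` table.
    conn = sqlite3.connect("ingimp_logs.db")
    query = """
        SELECT command_name, COUNT(*) AS uses
        FROM commands
        GROUP BY command_name
        ORDER BY uses DESC
        LIMIT 10;
    """
    for name, uses in conn.execute(query):
        print(f"{name:30s} {uses}")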
One of the first things that we realized is that we have over 800
installations, but then you have to ask, how many of those are really serious users? A lot of people probably just were curious, they
downloaded it and installed it, found that it didn't really do much
for them and so maybe they don't use it anymore. So, the first thing
we had to do is figure out which data points should we really pay
attention to. We decided that a person should have used ingimp on
two different occasions, preferably at least a day apart, where they'd
saved an image on both of the instances. We used that as an indication of what a serious user is. So with that filter in place, the ‘800
installations' drops down to about 200 people. So we had about 200
people using ingimp; and looking at the data, this represents about
800 hours of use, about 4000 log files, and again still about half a
million commands. So, it's still a very significant group of people.
200 people are still a lot, and that's a lot of data, representing about
11000 images they have been working on – there's just a lot.
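[As a rough sketch of the filter just described: the session record structure below is invented for illustration, and it implements the strict reading of ‘two occasions, at least a day apart, saving an image both times'.]

    from datetime import timedelta

    def is_serious(sessions):
        # `sessions` is a list of (start_datetime, saved_image) pairs;
        # keep only sessions that saved an image, and require the first
        # and last of them to be at least a day apart.
        saving = sorted(start for start, saved in sessions if saved)
        if len(saving) < 2:
            return False
        return saving[-1] - saving[0] >= timedelta(days=1)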
From that group, what we found is that use of ingimp is really
short and versatile. So, most sessions are about fifteen minutes or
less, on average. There are outliers, there are some people who use it
for longer periods of time, but really it boils down to them using it for
about fifteen minutes, and they are applying fewer than a hundred
operations when they are working on the image. I should probably
be looking at my data analysis as I say this, but they are very quick,
short, versatile sessions, and when they use it, they use less than 10
different tools, or they apply less than 10 different commands.
What else did we find? We found that the two most popular monitor resolutions are 1280 by 1024, and 1024 by 768. So, those represent
collectively 60 % of the resolutions, and really 1280 by 1024 represents
pretty much the maximum for most people, although you have some
higher resolutions. So one of the things that's always contentious
about Gimp, is its window management scheme and the fact that it
has multiple windows, right? And some people say, well you know,
this works fine if you have two monitors, because you can throw out
the tools on one monitor and then your images are on another monitor. Well, about 10 to 15 % of ingimp users have two monitors, so
that design decision is not working out for most of the people, if that
is the best way to work. These are things I think that people have
been aware of, it's just now we have some actual concrete numbers
where you can turn to and say: now this is how people are using it.
There is a wide range of tasks that people are performing with the
tool, but they are really short, quick tasks.
FS: Every time you start up ingimp, a screen comes up asking
you to describe what you are planning to do and I am interested in
the kind of language users invent to describe this, even when they
sometimes don't know exactly what it is they are going to do. So
inventing language for possible actions with the software has in a
way become a creative process that is now shared between interface
designer, developer and user. If you look at the ‘activity tags' you
are collecting, do you find a new vocabulary developing?
MT: I think there are 300 to 600 different activity tags that people
register within that group of ‘significant users'. I didn't have time to
look at all of them, but it is interesting to see how people are using
that as a medium for communicating to us. Some people will say,
“Just testing out, ignore this!” Or, people are trying to do things like
insert HTML code, to do like a cross-site scripting attack, because,
you have all the data on the website, so they will try to play with
that. Some people are very sparse and they say ‘image manipulation'
or ‘graphic design' or something like that, but then some people are
much more verbose, and they give more of a plan, “This is what I
expect to be doing.” So, I think it has been interesting to see how
people have adopted that and what's nice about it, is that it adds a
really nice human element to all this empirical data.
Ivan Monroy Lopez (IM): I wanted to ask you about the data;
without getting too technical, could you explain how these data are
structured, what do the log files look like?
MT: So the log files are all in XML, and generally we compress
them, because they can get rather large. And the reason that they
are rather large is that we are very verbose in our logging. We want
to be completely transparent with respect to everything, so that if
you have some doubts or if you have some questions about what kind
of data has been collected, you should be able to look at the log file,
and figure out a lot about what that data is. That's how we designed
the XML log files, and it was really driven by privacy concerns and
by the desire to be transparent and open. On the server side we take
that log file and we parse it out, and then we throw it into a database,
so that we can query the data set.
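
As an illustration of that pipeline – a minimal sketch under assumed element, attribute and column names, not ingimp's real server code – decompressing a log, parsing the XML and loading the events into a database could look like this:

    import gzip
    import sqlite3
    import xml.etree.ElementTree as ET

    conn = sqlite3.connect("ingimp.db")
    conn.execute("""CREATE TABLE IF NOT EXISTS events
                    (log_id TEXT, installation_id TEXT,
                     command TEXT, time TEXT)""")

    # The logs are verbose XML and compressed, so decompress first.
    with gzip.open("session-log.xml.gz", "rb") as f:
        root = ET.parse(f).getroot()

    log_id = root.get("id", "unknown")             # assumed attribute
    install = root.get("installation", "unknown")  # assumed attribute
    for event in root.iter("command"):             # assumed element name
        conn.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
                     (log_id, install, event.get("name"), event.get("time")))
    conn.commit()
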
FS: Now we are talking about privacy. . . I was impressed by the
work you have done on this; the project is unusually clear about why
certain things are logged, and other things not; mainly to prevent
the possibility of ‘playing back' actions so that one could identify
individual users from the data set. So, while I understand there are
privacy issues at stake I was wondering... what if you could look at the
collected data as a kind of scripting for use, as writing a choreography
that might be replayed later?
MT: Yes, we have been fairly conservative with the type of information that we collect, because this really is the first instance where
anyone has captured such rich data about how people are using software on a day-to-day basis, and then made all that data publicly
available. When a company does this, they will keep the data internally, so you don't have this risk of someone outside figuring something out about a user that wasn't intended to be discovered. We
have to deal with that risk, because we are trying to go about this
in a very open and transparent way, which means that people may
be able to subject our data to analysis or data mining techniques
that we haven't thought of, and extract information that we didn't
intend to be recording in our file, but which is still there. So there are
fairly sophisticated techniques where you can do things like look at
audio recordings of typing and the timings between keystrokes, and
then work backwards with the sounds made to figure out the keys
that people are likely pressing. So, just with keyboard audio and
keystroke timings alone, you often have enough information to be
able to reconstruct what people are actually typing. So we are always
sort of wary about how much information is in there.
While it might be nice to be able to do something like record people's actions and then share that script, I don't think that that is
really a good use of ingimp. That said, I think it is interesting to
ask: could we characterize people's use enough, so that we can start
clustering groups of people together and then providing a forum for
these people to meet and learn from one another? That's something
we haven't worked out. I think we have enough work cut out for us
right now just to characterize how the community is using it.
FS: It was not meant as a feature request, but as a way to imagine
how usability research could flip around and also become productive
work.
MT: Yes, totally. I think one of the things that we found when
bringing people in to test the basic usability of the ingimp software and
the ingimp website, is that people like looking at what commands other
people are using, what the most frequently used commands are; and
part of the reason that they like that, is because of what it teaches
them about the application. So they might see a command they were
unaware of. So we have toyed with the idea of then providing not
only the command name, but also a link from that command name
to the documentation – I didn't have time to implement it, but
certainly there are possibilities like that, you can imagine.
FS: Maybe another group can figure something out like that? That's
the beauty of opening up your software plus data set of course.
Well, just a bit more on what is logged and what not... Maybe you
could explain where and why you put the limit, and what kind of use
you might miss out on as a result?
MT: I think it is important to keep in mind that whatever instrument you use to study people, you are going to have some kind of
bias, you are going to get some information at the cost of other information. So if you do a videotaped observation of a user and you
just set up a camera, then you are not going to find details about
the monitor maybe, or maybe you are not really seeing what their
hands are doing. No matter what instrument you use, you are always
getting a particular slice.
I think you have to work backwards and ask what kind of things
do you want to learn. And so the data that we collect right now, was
really driven by what people have done in the past in the area of instrumentation, but also by us bringing people into the lab, observing
them as they are using the application, and noticing particular behaviours and saying, hey, that seems to be interesting, so what kind of
data could we collect to help us identify those kind of phenomena, or
that kind of performance, or that kind of activity? So again, the data
that we were collecting was driven by watching people, and figuring
out what information will help us to identify these types of activities.
As I've said, this is really the first project that is doing this, and
we really need to make sure we don't poison the well. So if it happens that we collect some bit of information, that then someone can
later say, “Oh my gosh, here is the person's file system, here are the
names they are using for the files” or whatever, then it's going to
make the normal user population wary of downloading this type of
instrumented application. The thing that concerns me most about
open source developers jumping into this domain is that they might
not be thinking about how you could potentially impact privacy.
IM: I don't know, I don't want to get paranoid. But if you are
doing it, then there is a possibility someone else will do it in a less
considerate way.
MT: I think it is only a matter of time before people start doing
this, because there are a lot of grumblings about, “We should be
doing instrumentation, someone just needs to sit down and do it.”
Now there is an extension out for Firefox that will collect this kind
of data as well, so you know. . .
IM: Maybe users could talk with each other, and if they are aware
that this type of monitoring could happen, then that would add a
different social dimension. . .
MT: It could. I think it is a matter of awareness, really. We have a
lengthy consent agreement that details the type of information we are
collecting and the ways your privacy could be impacted, but people
don't read it.
FS: So concretely... what information are you recording, and what
information are you not recording?
MT: We record every command name that is applied to a document,
to an image. Where your privacy is at risk with that, is that if you
write a custom script, then that custom script's name is going to be
inserted into a log file. And so if you are working for example for Lucas
or DreamWorks or something like that, or ILM, in some Hollywood
movie studio and you are using ingimp and you are writing scripts,
then you could have a script like ‘fixing Shrek's beard', and then that
is getting put into the log file and then people are going to know that
the studio uses ingimp.
We collect command names, we collect things like what windows
are on the screen, their positions, their sizes, and we take hashes of
layer names and file names. We take a string and then we create a
hash code for it, and we also collect information about how long this
string is, how many alphabetical characters, how many numbers; things like
that, to get a sense of whether people are using the same files, the
same layer names time and time again, and so on. But this is an
instance where our first pass at this actually left open the possibility
of people taking those hashes and then reconstructing the original
strings from that. Because we have the hash code, we have the length
of the string – all you have to do is generate all possible strings of
that length, take the hash codes and figure out which hashes match.
And so we had to go back and create a new scheme for recording this
type of information where we create a hash and we create a random
number, we pair those up on the client machine but we only log the
random number. So, from log to log then, we can track if people
use the same image names, but we have no idea of what the original
string was.
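
The difference between the two schemes is easy to see in code. In the first pass, logging the hash together with the exact string length invites brute force: enumerate every possible string of that length, hash each one, and match. The revised scheme keeps the hash-to-token table private on the client and logs only a random token, so recurring tokens show that the same name was reused without revealing anything about the name. A rough sketch (hypothetical file and function names, not the actual ingimp implementation):

    import hashlib
    import json
    import os
    import secrets

    MAP_FILE = "client_token_map.json"  # stays on the client, never uploaded

    def token_for(name: str) -> str:
        # Load the private hash->token table kept on the client machine.
        mapping = json.load(open(MAP_FILE)) if os.path.exists(MAP_FILE) else {}
        digest = hashlib.sha256(name.encode()).hexdigest()
        if digest not in mapping:
            # The token is random, so it carries no information about 'name'.
            mapping[digest] = secrets.token_hex(8)
            with open(MAP_FILE, "w") as f:
                json.dump(mapping, f)
        return mapping[digest]

    # The log records token_for("some layer name") - never the hash and
    # never the length - so identical names still correlate across logs.
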
There are these little ‘gotchas' like that, that I don't think most
people are aware of, and this is why I get really concerned about
instrumentation efforts right now, because there isn't this body of
experience of what kind of data should we collect, and what shouldn't
we collect.
FS: As we are talking about this, I am already more aware of what
data I would allow to be collected. Do you think by opening up this
data set and the transparent process of collecting and not collecting,
this will help educate users about these kinds of risks?
MT: It might, but honestly I think probably the thing that will
educate people the most is if there was a really large privacy error
that got a lot of news, because then people would become more
aware of it. Right now – and this is not to say that we want
that to happen with ingimp – but when we bring people in and we ask
them about privacy, “Are you concerned about privacy?” and they
say “No”, and we say “Why?” Well, they inherently trust us, but the
fact is that open source also lends a certain amount of trust to it,
because they expect that since it is open source, the community will
in some sense police it and identify potential flaws with it.
FS: Is that happening? Are you in dialogue with the open source
community about this?
MT: No, I think probably five to ten people have looked at the
ingimp code – realistically speaking I don't think a lot of people looked
at it. Some of the Gimp developers took a gander at it to see “How
could we put this upstream?” But I don't want it upstream, because
I want it to always be an opt-in, so that it can't be turned on by
mistake.
FS: You mean you have to download ingimp and use it as a separate
program? It functions in the same way as Gimp, but it makes the
fact that it is a different tool very clear.
MT: Right. You are more aware, because you are making that
choice to download that, compared to the regular version. There is
this awareness about that.
We have this lengthy text-based consent agreement that talks about
the data we collect, but less than two percent of the population reads
license agreements. And, most of our users are actually non-native
English speakers, so there are all these things that are working against
us. So, for the past year we have really been focussing on privacy, not
only in terms of how we collect the data, but how we make people
aware of what the software does.
We have been developing wordless diagrams to illustrate how the
software functions, so that we don't have to worry about localisation
errors as much. And so we have these illustrations that show someone
downloading ingimp, starting it up, a graph appears, there is a little
icon of a mouse and a keyboard on the graph, and they type and you
see the keyboard bar go up, and then at the end when they close the
application, you see the data being sent to a web server. And then
we show snapshots of them doing different things in the software, and
then show a corresponding graph change. So, we developed these by
bringing in both native and non-native speakers, having them look at
the diagrams and then tell us what they meant. We had to go through
about fifteen people and continual redesign until most people could
understand and tell us what they meant, without giving them any
help or prompts. So, this is an ongoing research effort, to come up
with techniques that not only work for ingimp, but also for other
instrumentation efforts, so that people can become more aware of the
implications.
FS: Can you say something about how this type of research relates
to classic usability research and in particular to the usability work
that is happening in Gimp?
MT: Instrumentation is not new, commercial software companies
and researchers have been doing instrumentation for at least ten years,
probably ten to twenty years. So, the idea is not new, but what is
new – in terms of the research aspects of this – is how do we do this
in a way where we can make all the data open? The fact that you
make the data open, really impacts your decision about the type of
data you collect and how you are representing it. And you need to
really inform people about what the software does.
But I think your question is... how does it impact Gimp's
usability process? Not at all, right now. But that is because we have
intentionally been staying off to the side, until we got to the point
where we had an infrastructure, where the entire community could
really participate in the data analysis. We really want this to
be a self-sustaining infrastructure; we don't want to create a
system where you have to rely on just one other person for this to
work.
IM: What approach did you take in order to make this project
self-sustainable?
MT: Collecting data is not hard. The challenge is to understand
the data, and I don't want to create a situation where the community
is relying on only one person to do that kind of analysis, because this
is dangerous for a number of reasons. First of all, you are creating
a dependency on an external party, and that party might have other
obligations and commitments, and might have to leave at some point.
If that is the case, then you need to be able to pass the baton to
someone else, even if that could take a considerable amount of time
and so on.
You also don't want this external dependency because, given
the richness of the data, you really need to have multiple people
looking at it, and trying to understand and analyse it. So how are
we addressing this? It is through the Stats Jam extension to
MediaWiki that I will introduce today. Our hope is that this type
of tool will lower the barrier for the entire community to participate
in the data analysis process, whether they are simply commenting on
the analysis we made or taking the existing analysis, tweaking it to
their own needs, or doing something brand new.
In talking with members of the Gimp project here at the Libre
Graphics Meeting, they started asking questions like, “So how many
people are doing this, how many people are doing this and how many
this?” They'll ask me while we are sitting in a café, and I will be able
to pop the database open and say, “A certain number of people have
done this”, or, “No one has actually used this tool at all.”
The danger is that this data is very rich and nuanced, and you
can't really reduce these kinds of questions to an answer of “N people
do this”, you have to understand the larger context. You have to
understand why they are doing it, why they are not doing it. So, the
data helps to answer some questions, but it generates new questions.
They give you some understanding of how the people are using it,
but then it generates new questions of, “Why is this the case?” Is this
because these are just the people using ingimp, or is this some more
widespread phenomenon?
They asked me yesterday how many people are using this colour
picker tool – I can't remember the exact name – so I looked and there
was no record of it being used at all in my data set. So I asked them
when this came out, and they said, “Well it has been there at
least since 2.4.” And then you look at my data set, and you notice
that most of my users are in the 2.2 series, so that could be part of
the reason. Another reason could be that they just don't know that
it is there, they don't know how to use it and so on. So, I can answer
the question, but then you have to sort of dig a bit deeper.
FS: You mean you can't say that because it is not used, it doesn't
deserve any attention?
MT: Yes, you just can't jump to conclusions like that, which is
again why we want to have this community website, which shows the
reasoning behind the analysis: here are the steps we had to go through
to get this result, so you can understand what that means, what the
context means – because if you don't have that context, then it's sort
of meaningless. It's like asking, “What are the most frequently used
commands?” This is something that people like to ask about. Well
really, how do you interpret that? Is it the numbers of times it has
been used across all log files? Is it the number of people that have
used it? Is it the number of log files where it has been used at least
once? There are lots and lots of ways in which you can interpret
this question. So, you really need to approach this data analysis as
a discourse, where you are saying: here are my assumptions, here is
how I am getting to this conclusion, and this is what it means for
this particular group of people. So again, I think it is dangerous if
one person does that and you come to rely on that one person. We
really want to have lots of people looking at it, and considering it,
and thinking about the implications.
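
How slippery that question is becomes clear if you formalise the three readings as queries over the kind of events table sketched earlier (column names are again assumptions, with installation_id standing in for a person). Each query can rank the commands quite differently:

    import sqlite3

    conn = sqlite3.connect("ingimp.db")
    queries = {
        # 1. Raw count of applications across all log files.
        "total uses":
            "SELECT command, COUNT(*) FROM events GROUP BY command",
        # 2. Number of distinct people who ever used the command.
        "distinct users":
            "SELECT command, COUNT(DISTINCT installation_id) "
            "FROM events GROUP BY command",
        # 3. Number of log files where the command appears at least once.
        "log files touched":
            "SELECT command, COUNT(DISTINCT log_id) "
            "FROM events GROUP BY command",
    }
    for label, query in queries.items():
        top = conn.execute(query + " ORDER BY 2 DESC LIMIT 5").fetchall()
        print(label, top)
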
FS: Do you expect that this will impact the kind of interfaces that
can be done for Gimp?
MT: I don't necessarily think it is going to impact interface design,
I see it really as a sort of reality check: this is how communities are
using the software and now you can take that information and ask,
do we want to better support these people or do we. . . For example
in my data set, most people are working on relatively small images
for short periods of time, the images typically have one or two layers,
so they are not really complex images. So regarding your question,
one of the things you can ask is, should we be creating a simple tool
to meet these people's needs? All the people are just doing cropping
and resizing, fairly common operations, so should we create a tool
that strips away the rest of the stuff? Or, should we figure out why
people are not using any other functionality, and then try to improve
the usability of that?
There are so many ways to use data – I don't really know how
it is going to be used, but I know it doesn't drive design. Design
happens from a really good understanding of the users, the types of
tasks they perform, the range of possible interface designs that are
out there, lots of prototyping, evaluating those prototypes and so on.
Our data set really is a small potential part of that process. You can
say, well, according to this data set, it doesn't look like many people
are using this feature, let's not focus too much on that, let's focus on
these other features or conversely, let's figure out why they are not
using them. . . Or you might even look at things like how big their
monitor resolutions are, and say, well, given the size of the monitor
resolution, maybe this particular design idea is not feasible. But I
think it is going to complement the existing practices, in the best
case.
FS: And do you see a difference in how interface design is done in
free software projects, and in proprietary software?
MT: Well, I have been mostly involved in the research community,
so I don't have a lot of exposure to design projects. I mean, in my
community we are always trying to look at generating new knowledge,
and not necessarily at how to get a product out the door. So, the
goals or objectives are certainly different.

I think one of the dangers in your question is that you sort of
lump a lot of different projects and project styles into one category
of ‘open source'. ‘Open source' ranges from volunteer-driven projects
to corporate projects, where they are actually trying to make money
out of it. There is a huge diversity of projects that are out there;
there is a wide diversity of styles, there is as much diversity in the
open source world as there is in the proprietary world.
One thing you can probably say is that projects that are
completely volunteer-driven, like Gimp, are resource-strapped.
There is more work than they can possibly tackle with the number of
resources they have. That makes it very challenging to do interface
design; I mean, when you look at interface code, it makes up 50 or 75
% of a code base. That is not insignificant, it is very difficult to hack,
and you need to have lots of time and manpower to be able to do
significant things. And that's probably one of the biggest differences
you see for the volunteer-driven projects: it is really a labour of
love for these people and so very often the new things interest them,
whereas with a commercial software company developers are going to
have to do things sometimes they don't like, because that is what is
going to sell the product.


SADIE PLANT
License: Creative Commons Attribution-NonCommercial-ShareAlike
Interwoven with her own thoughts and experiences, Sadie Plant gave a situated report on the Mutual
Motions track, and responded to the issues discussed during the weekend.

figure 146: Sadie Plant reports at V/J10

A Situated Report
I have to begin with many thanks to Femke and Laurence, because
it really has been a great pleasure for me to have been here this weekend. It's nearly five years since I came to an event like this, believe
it or not, and I really cannot say enough how much I have enjoyed it,
and how stimulating I have found it. So yes, a big thank you to both
for getting me here. And as you say, it's ten years since I wrote Zeros
+ Ones, and you are marking ten years of this festival too, so it's an
interesting moment to think about a lot of the issues that have come
up over the weekend. This is a more or less spontaneous report, very
much an ‘open performance', to use Simon Yuill's words, and not to
be taken as any kind of definitive account of what has happened this
weekend. But still I hope it can bring a few of the many and varied strands of this event together, not to form a true conclusion, but
perhaps to provide some kind of digestif after a wonderful meal.
I thought I should begin as Femke very wisely began, with the
theme of cooking. Femke gave us a recipe at the beginning of the
weekend, really a kind of recipe for the whole event, with cooking as
an example of the fact that there are many models, many activities,
many things that we do in our everyday lives, which might inform
and expand our ideas about technologies and how we work with them.
So, I too will begin with this idea of cooking, which is as Femke
said a very magical, transformative experience. Femke's clip from
the Catherine Deneuve film was a really lovely instance of the kind
of deep elemental, magical chemistry which goes on in cooking. It is
this that makes it such an instructive and interesting candidate for a
model to illuminate the work of programming, which itself obviously
has this same kind of potential to bring something into effect in a very
direct and immediate sense. And cooking is also the work behind the
scene, the often forgotten work, again a little bit like programming,
that results in something which – again like a lot of technology – can
operate on many different scales. Cooking is in one sense the most
basic kind of activity, a simple matter of survival, but it can also
work on a gourmet level too, where it becomes the most refined – and
well paid – kind of work. It can be the most detailed, fiddly, sort of
decorative work; it can be the most backbreaking, heavy industrial
work – bread making for example as well. So it really covers the whole
panoply of these extremes.
If we think about a recipe, and ask ourselves about the machine that
the recipe requires, it's obviously running on an incredibly complex
assemblage: you have the kitchen, you have all the ingredients, you
have machines for cooling things, machines for heating things, you
have the person doing the cooking, the tools in question. We really
are talking here about a complex process, and not just an end result.
The process is also, again, a very ‘open' activity. Simon Yuill defined
an ‘open performance' as a partial composition completed in the
performance.
Cooking is always about experimentation and the kitchen really is
a kind of lab. The instructions may be exact, the conditions may be
more or less precise but the results are never the same twice. There
are just too many variables, too many contingencies involved. Of
course like any experimental work, it can go completely wrong, it
often does go wrong: sometimes it really is all about process, and
not about eating at all! But as Simon again said today, quoting Sun
Ra: there are no real mistakes, there are no truly wrong things. This
was certainly the case with the fantastic cooking process that we
had throughout the whole day yesterday, which ended with us eating
these fantastic mussels, which I am sure elpueblodechina thought in
fact were not as they should have been. But only she knew what
she was aiming at: for the people who ate them they were delicious,
their flavour enhanced by the whole experience of their production.
elpueblodechina's meal made us ask: what does it mean for something
to go wrong? She was using a cooking technique which has come out
of generations and generations of errors, mistakes, probings, fallings
back, not just simply a continuous kind of story of progress, success,
and forward movement. So the mistakes are clearly always a very big
part of how things work in life, in any context in life, but especially
of course in the context of programming and working with software
and working with technologies, which we often still tend to assume
are incredibly reliable, logical systems, but in fact are full of glitches
and errors. As thinkers and activists resistant to and critical of mainstream methods and cultures, we need to keep encouraging this.
I have for a long time been interested in textiles, and I can't resist mentioning the fact that the word ‘recipe' was the old word for
knitting patterns: people didn't talk about knitting patterns, but
‘recipes' for knitting. This brings us to another interesting junction
with another set of very basic, repetitive kinds of domestic and often
overlooked activities, which are nevertheless absolutely basic to human existence. Just as we all eat food, so we all wear clothes. As with
cooking, the production of textiles again has this same kind of sense
of being very basic to our survival, very elemental in that sense, but
it can also function at a high level of detailed, refined activity as well.
With a piece of knitting it is difficult to see the ways in which a single
thread becomes looped into a continuous textile. But if you look at a
woven pattern, the program that has led to the pattern is right there
in front of you, as you see the textile itself. This makes weaving a
very nice, basic and early example of how this kind of immediacy can
be brought into operation. What you look at in a piece of woven cloth
is not just a representation of something that can happen somewhere
else, but the actual instructions for producing and reproducing that
piece of woven cloth as well. So that's the kind of deep intuitive connection that it has with computer programming, as well as the more
linear historical connections of which I have often spoken.
There are some other nice connections between textiles, cooking
and programming as well. Several times yesterday there was a lot
of talk about both experts and amateurs, and developers and users.
These are divisions which constantly, and often perhaps with good
reason, reassert themselves, and often carry gendered connotations
too. In the realm of cooking, you have the chef on the one hand,
who is often male and enjoys the high status of the inventive, creative expert, and the cook on the other, who is more likely to be
female and works under quite a different rubric. In reality, it might
be said that the distinction is far from precise: the very practice of
using computers, of cooking, of knitting, is almost inevitably one of
constantly contributing to their development, because they are all relatively open systems and they all evolve through people's constant,
repetitive use of them. So it is ultimately very difficult to distinguish
between the user and the developer, or the expert and the amateur.
The experiment, the research, the development is always happening
in the kitchen, in the bedroom, on the bus, using your mobile or
using your computer. Fernand Braudel speaks about these kinds of ‘micro-histories', this sense of repetitive activity, which is done in many
trades and many lines, and that really is the deep unconscious history
of human activity. And arguably that's where the most interesting
developments happen, albeit in a very unsung, unseen, often almost
hidden way. It is this kind of deep collectivity, this profound sense of
micro-collaboration, which has often been tapped into this weekend.
Still, of course, the social and conceptual divisions persist, and
still, just as we have our celebrity chefs, so we have our celebrity
programmers and dominant corporate software developers. And just
as we have our forgotten and overlooked cooks, so we have people who
are dismissed, or even dismiss themselves, as ‘just computer users'.
The technological realities are such that people are often forced into
this role, with programmes that really are so fixed and closed that
almost nothing remains for the user to contribute. The structural
and social divisions remain, and are reproduced on gendered lines as
well.
In the 1940s, computer programming was considered to be extremely menial, and not at all a glamorous or powerful activity.
Then of course, the business of dealing with the software was strictly
women's work, and it was with the hardware of the system that the
most powerful activity lay. That was where the real solid development was done, and that was where the men were working, with what
were then the real nuts and bolts of the machines. Now of course, it
has all turned around. It is women who are building the chips and
putting the hardware – such as it is these days – together, while the
male expertise has shifted to the writing of software. In only half a
century, the evolution of the technology has shifted the whole notion
of where the power lies. No doubt – and not least through weekends
like this – the story will keep moving on.
But as the world of computing does move more and more into
software and leave the hardware behind, it is accompanied by the
perceived danger that the technology and, by extension, the cultures
around it, tend to become more and more disembodied and intangible.
This has long been seen as a danger because it tends to reinforce what
have historically, in the Western world at least, been some of the more
oppressive tendencies to affect women and all the other bodies that
haven't quite fitted the philosophical ideal. Both the Platonic and
Christian traditions have tended to dismiss or repress the body,
and with it all the kind of messy, gritty, tangible stuff of culture,
as transient, difficult, and flawed. And what has been elevated is of
course the much more formal, idealist, disembodied kind of activities
and processes. This is a site of continual struggle, and I guess part of
the purpose of a weekend like this is to keep working away, re-injecting
some sense of materiality, of physicality, of the body, of geography,
into what are always in danger of becoming much more formal and
disembodied worlds. What Femke and Laurence have striven to remind us this weekend is that however elevated and removed our work
appears to be from the matter of bodies and physical techniques,
we remain bodies, complex material processes, working in a complex
material world.
Once again, there still tends to be something of a gendered divide.
The dance workshop organised this morning by Alice Chauchat and
Frédéric Gies was an inspiring but also difficult experience for many
of us, unused as we are to using our bodies in such literally physical
and public ways. It was not until we came out of the workshop into
a space which was suddenly mixed in terms of gender, that I realised
that the participants in the workshop had been almost exclusively
female. It was only the women who had gone to this kind of more
physical, embodied, and indeed personally challenging part of the
weekend. But we all need to continually re-engage with this sense
of the body, all this messiness and grittiness, which it is in many
vested interests to constantly cleanse from the world. We have to
make ourselves deal with all the embarrassment, the awkwardness,
and the problematic side of this more tangible and physical world.
For that reason it has been fantastic that we have had such strong
input from people involved in dance and physical movement, people
working with bodies and the real sense of space. Sabine Prokhoris
and Simon Hecquet made us think about what it means to transcribe
the movements of the body; Séverine Dusollier and Valérie Laure
Benabou got us to question the legal status of such movements too.
And what we have gained from all of this is this sense that we are all
always working with our bodies, we are always using our bodies, with
more or less awareness and talent, of course, whether we are dancing
or baking or knitting or slumped over our keyboards. In some ways we
shouldn't even need to say it, but the fact that we do need to remind
ourselves of our embodiment shows just how easy it is for us to forget
our physicality. This morning's dance workshop really showed some
of the virtues of being able to turn off one's self-consciousness, to
dismiss the constantly controlling part of one's self and to function
on a different, slightly more automatic level. Or perhaps one might
say just to prioritise a level of bodily activity, of bodily awareness,
of a sense of spatiality that is so easy to forget in our very cerebral
society.
What Frédéric and Alice showed us was not simply about using the
body, but rather how to overcome the old dualism of thinking of the
body as a kind of servant of the mind. Perhaps this is how we should
think about our relationships to our technologies as well, not just to
see them as our servants, and ourselves as the authors or subjects of
the activity, but rather to perceive the interactivity, the sense of an
interplay, not between two dualistic things, the body and the mind, or
the agent and the tool, the producer and the user, but to try and see
much more of a continuum of different levels and different kinds and
different speeds of material activity, some very big and clunky, others at extremely complex micro-levels. During the dance workshop,
Frédéric talked about all the synaptic connections that are happening as one moves one's body, in order to instil in us this awareness
of ourselves as physical, material, thinking machines, assemblages of
many different kinds of activity. And again, I think this idea of bringing together dance, food, software, and brainpower, to see ourselves
operating at all these different levels, has been extremely rewarding.
Femke asked a question of Sabine and Simon yesterday, which perhaps never quite got answered, but expressed something about how
as people living in this increasingly wireless world, we are now carrying more and more technical devices, just as I am now holding this
microphone, and how these additional machines might be changing
our awarenesses of ourselves. Again it came up this morning in the
workshop when we were asked to imagine that we might have different parts of our bodies, another head, or our feet may have mirrors
in them, or in one brilliant example that we might have magnets,
so that we were forced to have parts of our bodies drawn together
in unlikely combinations, just to imagine a different kind of sense of
self that you get from that experience, or a different way of moving
through space. But in many ways, because of our technologies now,
we don't need to imagine such shifts: we are most of us now carrying
some kind of telecommunicating device, for example, and while we
are not physically attached to our machines – not yet anyway – we
are at least emotionally attached to them. Often they are very much
with us and part of us: the mobile phone in your pocket is to hand,
it is almost a part of us. And I too am very interested in how that
has changed not only our more intellectual conceptions of ourselves,
but also our physical selves. The fact that I am holding this thing
[the microphone] obviously does change my body, its capacities, and
its awareness of itself. We are all aware of this to some extent: everyone knows that if you put on very formal clothes, for example, you
behave in different ways, your body and your whole experience of its
movement and spatiality changes. Living in a very conservative part
of Pakistan a few years ago, where I had to really be completely covered up and just show my eyes, gave me an acute sense of this kind
of change: I had to sit, stand, walk and turn to look at things in an
entirely new set of ways. In a less dramatic but equally affective way,
wirelessness obviously introduces a new sense of our bodies, of what
we can do with our bodies, of what we carry with us on our bodies,
and consequently of who we are and how we interact with our environment. And in this sense wirelessness has also brought the body
back into play, rescuing us from what only ten years ago seemed to
be the very real dangers of a more formal and disembodied sense of a
virtual world, which was then imagined as some kind of ‘other place'
, a notion of cyberspace, up there somehow, in an almost heavenly
conception. Wirelessness has made it possible for computer devices to
operate in an actual, geographical environment: they can now come
with us. We can almost start to talk more realistically about a much
more interesting notion of the cyborg, rather than some big clunky
thing trailing wires. It really can start to function as a more interesting idea, and I am very interested in the political and philosophical
implications of this development as well, in that it does reintroduce the body to what was, as I say, in danger of becoming a very
abstract and formal kind of cyberspace. It brings us back into
touch with ourselves and our geographies.
The interaction between actual space and virtual space has been
another theme of this weekend; this ability to translate, to move between different kinds of spaces, to move from the analogue to the
digital, to negotiate the interface between bodies and machines. Yesterday we heard from Adrian Mackenzie about digital signal processing, the possibility of moving between that real sort of analogue world
of human experience and the coding necessary to computing. Sabine
and Simon talked about the possibilities of translating movement into
dance, and this also has come up several times today, and also with
Simon's work in relation to music and notation. Simon and Sabine
made the point that with the transcription and reading of a dance,
one is offered – rather as with a recipe – the same ingredients, the
same list of instructions, but once again as with cooking, you will
never get the same dance, or you will never get the same food as a
consequence. They were interested in the idea of notation, not to
preserve or to conserve, but rather to be able to send food or dance
off into the future, to make it possible in the future. And Simon
referred to these fantastic diagrams from The Scratch Orchestra, as
an entirely different way of conceiving and perceiving music, not as
a score, a notation in this prescriptive, conserving sense of the word,
but as the opportunity to take something forward into the future.
And to do so not by writing down the sounds, or trying to capture
the sounds, but rather as a way of describing the actions necessary
to produce those sounds, is almost to conceive the production of music as a kind of dance, and again to emphasise its embodiment and
physicality.
This sense of performance brings into play the idea of ‘play' itself,
whether ‘playing' a musical instrument, ‘playing' a musical score, or
‘playing' the body in an effort to dance. I think in some dance traditions one speaks about ‘playing the body'; in Tai Chi it is certainly
said that one plays the body, as though it was an instrument. And
when I think about what I have been doing for the last five years,
it's involved having children, it's involved learning languages, it's involved doing lots of cooking, and lots of playing, funnily enough. And
what has been lovely for me about this weekend is that all of these
things have been discussed, but they haven't been just discussed, they
have actually been done as well. So we have not only thought about
cooking, but cooking has happened, not only with the mussels, but
also with the fantastic food that has been provided all weekend. We
haven't just thought about dancing, but dancing has actually been
done. We haven't just thought about translating, but with great
thanks to the translators – who I think have often had a very difficult job – translating has also happened as well. And in all of these
cases we have seen how what might so easily have been a purely theoretical discussion has itself been translated into real bodily activity:
they have all been, literally, brought into play. And this term ‘play',
which spans a kind of mathematical play of numbers, in relation to
software and programming, and also the world of music and dance,
has enormous potential for us all: Simon talked about ‘playing free'
as an alternative term to ‘improvisation', and this notion of ‘playing
free' might well prove very useful in relation to all these questions of
making music, using the body, and even playing the system in terms
of subverting or hacking into the mainstream cultural and technical
programs with which we are presented.

This weekend was inspired by several desires and impulses to which
I feel very sympathetic, and which remain very urgent in all our debates about technology. As we have seen, one of the most important
of those desires is to reinsert the body into what is always in danger of becoming a disembodied realm of computing and technology.
And to reinsert that body not as a kind of Chaplinesque cog in the
wheel that we saw when Inès Rabadán introduced Modern Times last
night, but as something more problematic, something more complex
and more interesting. And also not to do so nostalgically, with some
idea of some kind of lost natural activity that we need to regain, or to
reassert, or to reintroduce. There is no true body, there is no natural
body, that we can recapture from some mythical past and bring back
into play. At the same time we need to find a way of moving forward,
and inserting our senses of bodies and physicality into the future, to
insist that there is something lively and responsive and messy and
awkward always at work in what could have the tendency otherwise
to be a world of closed systems and dead loops.
One of the ways of doing this is to constantly problematise both
individualised conceptions of the body and orthodox notions of communities and groups. Michael Terry's presentation about ingimp, developed in order to imagine the community of people who are using
his image manipulation software, raised some very problematic issues
about the notion of community, which were also brought up again by
Simon today, with his ideas about collaboration and collectivity, and
what exactly it means to come together and try to escape an individualised notion of one's own work. Femke's point to Michael exemplified
the ways in which the notion of community has some real dangers:
Michael or his team had done the representations of the community
themselves – so if people told them they were graphic artists, they
had found their own kind of symbols for what a graphic artist would
look like –, and when Femke suggested that people – especially if
they were graphic artists – might be capable of producing their own
representations and giving their own way of imagining themselves,
Michael's response was to the effect that people might then come up
with what he and his team would consider to be ‘undesirable images'
of themselves. And this of course is the age old problem with the idea
of a community: an open, democratic grouping is great when you're
in it and you all agree what's desirable, but what happens to all the
people that don't quite fit the picture? How open can one afford to
be? We need some broader, different senses of how to come together
which, as Alice and Frédéric discussed, are ways of collaborating
without becoming a new fixed totality. If we go back to the practices
of cooking, weaving, knitting, and dancing, these long histories of
very everyday activities that people have performed for generation
after generation, in every culture in the world – it is at this level that
we can see a kind of collective activity, which is way beyond anything
one might call a ‘community' in the self-conscious sense of the term.
And it's also way beyond any simple notion of a distributed collection of individuals: it is perhaps somewhere at the junction of these
modes, an in-between way of working which has come together in its
own unconscious ways over long periods of time.
This weekend has provided a rich menu of questions and themes to
feed in and out of the writing and use of software, as well as all our
other ways of dealing with our machines, ourselves, and each other.
To keep the body and all its flows and complexities in play, in a lively
and productive sense; to keep all the interruptive possibilities alive;
to stop things closing down; to keep or to foster the sense of collectivity in a highly individualised and totalising world; to find new
ways – constantly find new ways – of collaborating and distributing
information: these are all crucial and ongoing struggles in which we
must all remain continually engaged. And I notice even now that I
used this term ‘to keep', as though there was something to conserve
and preserve, as though the point of making the recipes and writing
the programs is to preserve something. But the ‘keeping' in question
here is much more a matter of ‘keeping on', of constantly inventing
and producing without, as Simon said earlier, leaving ourselves too
vulnerable to all the new kinds of exploitation, the new kinds of territorialisation, which are always waiting around the corner to capture
even the most fluid and radical moves we make. This whole weekend
has been an energising reminder, a stimulating and inspiriting call to
keep problematising things, to keep inventing and to keep reinventing, to keep on keeping on. And I thank you very much for giving me
the chance to be here and share it all. Thank you.
A quick postscript. After this ‘spontaneous report' was made,
the audience moved upstairs to watch a performance by the dancer
Frédéric Gies, who had co-hosted the morning's workshop. I found
the energy, the vulnerability, and the emotion with which he danced
quite overwhelming. The Madonna track – Hung Up (Time Goes by
so Slowly) – to which he danced ran through my head for the whole
train journey back to Birmingham, and when I got home and checked
out the Madonna video on YouTube I was even more moved to see
what a beautiful commentary and continuation of her choreography
Frédéric had achieved. This really was an example not only of playing
the body, the music, and the culture, but also of effecting the kind of
‘free play' and ‘open performance', which had resonated through the
whole weekend and inspired us all to keep our work and ourselves in
motion. So here's an extra thank you to Frédéric Gies. Madonna will
never sound the same to me.

Biographies
Valérie Laure Benabou
http://www.juriscom.net/minicv/vlb

Valérie Laure Benabou is an associate
professor at the University of Versailles-Saint Quentin and teaches at
the Ecole des Mines. She is a member of the Centre d'Etude et de
Recherche en Droit de l'Immatériel
(CERDI), and of the Editorial Board
of Propriétés Intellectuelles. She also
teaches civil law at the University
of Barcelona and taught international
commercial law at the Law University
in Phnom Penh, Cambodia. She was a
member of the Commission de réflexion du Conseil d'Etat sur Internet et
les réseaux numériques, co-ordinated
by Ms Falque-Pierrotin, which produced the Rapport du Conseil d'Etat
(La Documentation française, 1998).
She is the author of a number of works
and articles, including ‘La directive
droit d'auteur, droits voisins et société
de l'information: valse à trois temps
avec l'acquis communautaire', in Europe, No. 8-9, September 2001, p.
3, and in Communication Commerce
Electronique, October 2001, p. 8, and
‘Vie privée sur Internet: le traçage', in
Les libertés individuelles à l'épreuve
des NTIC, PUL, 2001, p. 89.

Pierre Berthet
http://pierre.berthet.be/

Sound artist. Studied percussion with André Van Belle and Georges-Elie Octors, improvisation with Garrett List, composition with Frederic Rzewski, and music theory with Henri Pousseur. Designs and builds sound objects and installations (composed of steel, plastic, water, magnetic fields etc.). Presents them in exhibitions and solo or duo performances with Brigida Romano (CD Continuum asorbus on the Sub Rosa label) or Frédéric Le Junter (CD Berthet Le Junter on the Vandœuvres label). Collaborated with 13th tribe (CD Ping pong anthropology). Played percussion in Arnold Dreyblatt's Orchestra of excited strings (CD Animal magnetism, label Tzadik; CD The sound of one string, label Table of the elements).

Alice Chauchat
http://www.theselection.net/dance/

Member of the Praticable collective. Alice Chauchat was born in 1977 in Saint-Etienne (France) and lives in Paris. She studied at the Conservatoire National Supérieur de Lyon and P.A.R.T.S in Brussels. She is a founding member of the collective B.D.C. With other members such as Tom Plischke, Martin Nachbar and Hendrik Laevens she created Events for Television, Affects and (Re)sort, between 1999 and 2001. In 2001 she presented her first solo Quotation marks me. In 2003 she collaborated with Vera Knolle (A Number of Classics in the Age of Performance). In 2004 she made J'aime, together with Anne Juren, and CRYSTALLL, a collaboration with Alix Eynaudi. She also takes part in other people's projects, such as Projet, initiated by Xavier Le Roy.

Michel Cleempoel
http://www.michelcleempoel.be/

Graduated from the National Superior Art School La Cambre in Brussels.
Author of numerous digital art works
and exhibitions. Worked in collaboration with Nicolas Malevé:
http://www.deshabillez-vous.be

De Geuzen
http://www.geuzen.org/

Femke Snelting, Renée Turner and
Riek Sijbring form the art and design
collective De Geuzen (a foundation for
multi-visual research). De Geuzen develop various strategies on- and offline to explore their interests in female identity, critical resistance, representation and narrative archives.

Séverine Dusollier
http://www.fundp.ac.be/universite/personnes/page_view/01003580/

Doctor in Law, Professor at the University of Namur (Belgium), Head of the Department of Intellectual Property Rights at the Research Center for Computer and Law of the University of Namur, and Project Leader of Creative Commons Belgium, Namur.

Leif Elggren

Leif Elggren (born 1950 in Linköping, Sweden) is a Swedish artist who lives and works in Stockholm. Active since the late 1970s, Leif Elggren has become one of the most constantly surprising conceptual artists working in the combined worlds of audio and visual art. A writer, visual artist, stage performer and composer, he has many albums to his credit, solo and with the Sons of God, on labels such as Ash International,
Touch, Radium and his own Firework Edition. His music, often conceived as the soundtrack to a visual
installation or experimental stage performance, usually presents carefully
selected sound sources over a long
stretch of time and can range from
mesmerising quiet electronics to harsh
noise. His wide-ranging and prolific
body of art often involves dreams and
subtle absurdities, social hierarchies
turned upside-down, hidden actions
and events taking on the quality of
icons.
Together with artist Carl Michael
von Hausswolff, he is a founder of
the Kingdoms of Elgaland-Vargaland
(KREV), where he enjoys the title of
King.

elpueblodechina

elpueblodechina a.k.a. Alejandra Perez Nuñez is a sound artist and performer working with open source
tools, electronic wiring and essay writing. In collaborative projects with
Barcelona-based group Redactiva, she
works on psychogeography and social science fiction projects, developing narratives related to the mapping of collective imagination. She received an MA in Media Design at the
Piet Zwart Institute in 2005, and has
worked with the organization V2_ in
Rotterdam. She is currently based in
Valparaíso, Chile, where she is developing a practice related to appropriation, civil society and self-mediation
through electronic media.



Born in Bari (Italy) in 1980, he graduated in May 2005 in Communication Sciences at the University of Rome La Sapienza, with a dissertation thesis on software as a cultural and social artefact. His educational background is mostly theoretical: Humanities and Media Studies. More recently, he has been focussing on programming and the development of web-based applications, mostly using open source technologies. In 2007 he received an M.A. in Media Design at the Piet Zwart Institute in Rotterdam.
His areas of interest are: social software, actor network theory, digital archives, knowledge management, machine readability, semantic web, data mining, information visualization, profiling, privacy, ubiquitous computing, locative media.


Frédéric Gies

After studying ballet and contemporary dance, Frédéric Gies worked with various choreographers such as Daniel Larrieu, Bernard Glandier, Jean-François Duroure, Olivia Grandville and Christophe Haleb. In 1995, he created a duet in collaboration with Odile Seitz (Because I love). In 1998 he started working with Frédéric De Carlo. Together they have created various performances such as Le principal défaut (CND, Paris), Le principal défaut-solo (Tipi de Beaubourg, Paris), En corps (CND, Paris), Post porn traffic (Macba, Barcelona), In bed with Rebecca (Vooruit, Ghent), (don't) Show it! (Scène nationale, Dieppe) and Second hand vintage collector (sometimes we like to mix it up!) (Ausland, Berlin). In 2004 he danced in The better you look, the more you see, amazons (1st version in Tanzfabrik, 2nd in Ausland, Berlin) and The bitch is back under pressure (reloaded) (Basso, Berlin). As a member of the Praticable collective, he created Dance and The breast piece, in collaboration with Alice Chauchat. He also collaborated on Still Lives (Good Work: Anderson/ Gies/ Pelmus/ Pocheron/ Schad).

Dominique Goblet
http://www.dominique-goblet.be/

Visual artist. She shows her work in galleries and publishes her stories in magazines and books. In all cases, what she tries to pursue is an art of the multi-faceted narrative. Her exhibitions of paintings – from frame to frame and in the whole space of the gallery – can be 'read' as fragmented stories. Her comic books question the deep or thin relations between human beings. As an author, she has taken part in almost all of the Frigobox series published by Fréon (Brussels) and in several issues of Lapin magazine, published by L'Association (Paris). A silent comic book of hers was published in the gigantic Comix 2000 (L'Association). At the beginning of 2002, a second book was published by the same publisher: Souvenir d'une journée parfaite (Memories of a Perfect Day), a complex story that combines autobiographical facts and fictions.

Tsila Hassine
http://www.missdata.org/

Tsila Hassine is a media artist / designer. Her interests lie in the hidden potentialities withheld in the electronic data mines. In her practice she endeavours to extrude undercurrents of information and traces of processes that are not easily discerned through regular consumption of mass networked media. This she accomplishes through the repetitive misuse of available platforms. She completed a BSc in Mathematics and Computer Science and spent 2003 at the New Media department of the HGK Zürich. In 2004 she joined the Piet Zwart Institute in Rotterdam, where she pursued an MA in Media Design, graduating in June 2006 with the Google randomizer Shmoogle. She is currently a researcher at the Design department of the Jan van Eyck Academie.

Simon Hecquet

Dancer and choreographer. Educated in classical and contemporary dance, Hecquet has worked with many different dance companies, specialised in contemporary as well as baroque dance. During this time, he also studied different notation systems for describing movement, after which he wrote scores for several dance pieces from the contemporary choreographic repertory. Together with the Quatuor Knust, among others, he contributed to projects that restaged important dance pieces of the 20th century. With Sabine Prokhoris he made a film, Ceci n'est pas une danse chorale (2004), and a book, Fabriques de la Danse (PUF, 2007). He teaches transcription systems for movement, among other places at the department of Dance of the Université de Paris VIII.


Guy Marc Hinant

Guy Marc Hinant is the maker of films such as The Garden is full of Metal (1996), Éléments d'un Merzbau oublié (1999), The Pleasure of Regrets – a Portrait of Léo Kupper (2003), Luc Ferrari face to his Tautology (2006) and I never promised you a rose garden – a portrait of David Toop through his records collection (2008), all developed together with Dominique Lohlé. He is the curator of the An Anthology of Noise and Electronic Music CD series, and manages the Sub Rosa label. He writes fragmented fictions and notes on aesthetics (some of his texts have been published by Editions de l'Heure, Luna Park, Leonardo Music Journal etc.).

Dmytri Kleiner
http://www.telekommunisten.net/

Dmytri Kleiner is a USSR-born Canadian software developer and cultural producer. In his work, he investigates the intersections of art, technology and political economy. He is a founder of Telekommunisten, an anarchist technology collective, and lives in Berlin with his wife Franziska and his daughter Henriette.


Bettina Knaup

Cultural producer and curator with a background in theatre and film studies, political science and gender studies. She is interested in the interface of live arts, politics and knowledge production, and has curated and/or produced transnational projects such as the public arts and science program 'open space' of the International Women's University (Hannover, 1998-2000) and the transdisciplinary performing arts laboratory IN TRANSIT (Berlin, House of World Cultures, 2002-2003). Between 2001 and 2004, she co-curated and co-directed the international festival of contemporary arts CITY OF WOMEN (Ljubljana). After directing the new European platform for cultural exchange LabforCulture during its launch phase (Amsterdam, 2004-06), Knaup works again as an independent curator based in Berlin.


Christophe Lazaro

Christophe Lazaro is a scientific collaborator at the Law department of the Facultés Notre-Dame de la Paix, Namur, and a researcher at the Research Centre for Computer and Law. His interest in legal matters is complemented by socio-anthropological research on virtual communities (the free software community), the human/artefact relationship (prostheses, implants, RFID chips), transhumanism and posthumanism.

Manu Luksch

Manu Luksch, founder of ambientTV.NET, is a filmmaker who works outside the frame. The 'moving image', and in particular the evolution of film in the digital or networked age, has been a core theme of her works. Characteristic is the blurring of boundaries between linear and hypertextual narrative, directed work and multiple authorship, and post-produced and self-generative pieces. Expanding the idea of the viewing environment is also of importance; recent works have been shown on electronic billboards in public space.


Nicolas Malevé

Since 1998 multimedia artist Nicolas Malevé has been an active member of the organization Constant. As such, he has taken part in organizing various activities connected with alternatives to copyright, such as 'Copy.cult'. He has recently been working on signal processing, looking at how artists, activists, development projects, and community groups are making alternate or competing communication infrastructures.

MéTAmorphoZ

Born in September 2001, represented here by Valérie Cordy and Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary association that creates installations, spectacles and transdisciplinary performances mixing artistic experiments and digital practices.

Michael Murtaugh
http://automatist.org/

Freelance developer of (tools for) online documentaries and other forms of digital archives. He works and lives in the Netherlands and online at automatist.org. He teaches at the MA Media Design program at the Piet Zwart Institute in Rotterdam.

Julien Ottavi
http://www.noiser.org/

Ottavi is the founder, artistic programmer, audio computer researcher (networks and audio research) and sound artist of the experimental music organization Apo33. Founded in 1997, Apo33 is a collective of artists, musicians, sound artists, philosophers and computer scientists who aim to promote new types of music and sound practices that do not receive large media coverage. The purpose of Apo33 is to create the conditions for the development of all kinds of music and sound practices that contribute to the advancement of sound creation, including electronic music, concrete music, contemporary written music, sound poetry, sound art and other practices which as yet have no name. Apo33 refers to all of these practices as 'Audio Art'.


Jussi Parikka

Jussi Parikka teaches and writes on the cultural theory and history of new media. He has a PhD in Cultural History from the University of Turku, Finland, and is Senior Lecturer in Media Studies at Anglia Ruskin University, Cambridge, UK. Parikka has published a book on 'cultural theory in the age of digital machines' (Koneoppi, in Finnish), and his Digital Contagions: A Media Archaeology of Computer Viruses has been published by Peter Lang, New York, in the Digital Formations series (2007). Parikka is currently working on a book on 'Insect Media', which focuses on the media-theoretical and historical interconnections of biology and technology.


Sadie Plant

Sadie Plant is the author of The Most
Radical Gesture, Zeros and Ones,
and Writing on Drugs.
She has
taught in the Department of Cultural
Studies, University of Birmingham,
and the Department of Philosophy,
University of Warwick. For the last
ten years she has been working independently and living in Birmingham,
where she is involved with the Ikon
Gallery, Stan's Cafe Theatre Company, and the Birmingham Institute
of Art and Design.





Praticable

Praticable proposes itself as a horizontal work structure which brings research, creation, transmission and production into relation. This structure is the basis for the creation of many performances, signed by one or more participants in the project. These performances are grounded, in one way or another, in the exploration of body practices as an approach to representation. Concretely, Praticable takes the form of periods of common research on physical practices, which are the soil for the various creations. The creation periods are part of the research periods. Thus, each specific project implies the involvement of all participants in the practice, the research and the elaboration of the practice from which the piece will ensue.


Sabine Prokhoris

Psychoanalyst and author of, among others, Witch's Kitchen: Freud, Faust, and the Transference (Cornell University Press, 1995), and co-author with Simon Hecquet of Fabriques de la Danse (PUF, 2007). She is also active in contemporary dance, as a critic and a choreographer. In 2004 she made the film Ceci n'est pas une danse chorale together with Simon Hecquet.



Inès Rabadan

After obtaining a master's degree in Philosophy and Letters, Inès Rabadan studied film at the IAD. Her short films (Vacance, Surveiller les Tortues, Maintenant, Si j'avais dix doigts, Le jour du soleil) were shown at some sixty festivals. Surveiller les tortues and Maintenant won awards at the festivals of Clermont, Vendôme, Chicago, Aix, Grenoble, Brest and Namur. Occasionally she supervises screenwriting workshops. Her first feature film, Belhorizon, was selected for the festivals of Montréal, Namur, Créteil, Buenos Aires, Santiago de Chile, Santo Domingo and Mannheim-Heidelberg. At the end of 2006, it was released in Belgium, France and Switzerland.


Antoinette Rouvroy

Antoinette Rouvroy is a researcher at the Law department of the Facultés Notre-Dame de la Paix in Namur, and at the Research Centre for Computer and Law. Her domains of expertise range from the rights and ethics of biotechnologies, philosophy of law and 'critical legal studies' to interdisciplinary questions related to privacy and non-discrimination, science and technology studies, and law and language.

Femke Snelting

Femke Snelting is a member of the art and design collective De Geuzen and of the experimental design agency OSP.


Michael Terry
http://www.ingimp.org/

Computer Scientist, University of Waterloo, Canada.

Carl Michael von Hausswolff

Von Hausswolff was born in 1956 in Linköping, Sweden. He lives and works in Stockholm. Since the end of the 70s, von Hausswolff has been working as a composer using the tape recorder as his main instrument, and as a conceptual visual artist working with performance art, light and sound installations and photography. His audio compositions from 1979 to 1992, constructed almost exclusively from basic material taken from earlier audiovisual installations and performance works, essentially consist of complex macromal drones with a surface of aesthetic elegance and beauty. In later works, von Hausswolff retained the aesthetic elegance and the drone, and added a purely isolationistic sonic condition to composing.


Marc Wathieu
http://www.erg.be/sdr/blog/

Marc Wathieu teaches at Erg (digital arts) and HEAJ (visual communication). He is a digital artist (he works with the Brussels-based collective LAB[au]) and sound designer. He is also an official representative of the Robots Trade Union to the human institutions. During V/J10 he presented the Robots Trade Union's charter and ambitions.


Peter Westenberg

Peter Westenberg is an artist and film and video maker, and a member of Constant. His projects evolve from an interest in social cartography, urban anomalies and the relationships between locative identity and cultural …

Brian Wyrick

Brian Wyrick is an artist, filmmaker and web developer working in Berlin and Chicago. He is also co-founder of Group 312 films, a Chicago-based film group.


Simon Yuill
http://www.spring-alpha.org/

Artist and programmer based in Glasgow, Scotland. He is a developer in the spring_alpha and Social Versioning System (SVS) projects. He has helped to set up and run a number of hacklabs and free media labs in Scotland, including the Chateau Institute of Technology (ChIT) and the Electron Club, as well as the Glasgow branch of OpenLab. He has written on aspects of Free Software and cultural praxis, and has contributed to publications such as Software Studies (MIT Press, 2008), the FLOSS Manuals and the Digital Artists Handbook project (GOTO10 and Folly).


License Register

?? 65, 174

a
Attribution-Noncommercial-No Derivative Work 181, 188

c
Copyright Presses Universitaires de France, 2007 188
Creative Commons Attribution-NonCommercial-ShareAlike 58, 71, 73, 81, 93, 98, 104, 155, 215, 254, 275

d
Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use encouraged. Attribution optional. 47

f
Free Art License 38, 70, 75, 131, 143, 217
Fully Restricted Copyright 95

g
GNU FDL 119

t
The text is under a GPL. The images are a little trickier as none of them belong to me. The images from ap and David Griffiths can be GPL as well; the Scratch Orchestra images (the graphic music scores) were always published 'without copyright' so I guess are public domain. The photograph of the Scratch Orchestra performance can be GPL or public domain and should be credited to Stefan Szczelkun. The other images, Sun Ra, Black Arts Group and Lester Bowie, would need to mention 'contact the photographers'. Sorry the images are complicated but they largely come from a time before copyleft was widespread. 233


The Making Of

This publication was produced with a set of digital tools that are rarely used outside the world of scientific publishing: TeX, LaTeX and ConTeXt. As early as the summer of 2008, when most contributions and translations to Tracks in electronic fields were reaching their final stage, we started discussing at OSP 1 how we could design and produce a book in a way that responded to the theme of the festival itself. OSP is a design collective working with Free Software, and our relation to the software we design with is deliberately particular. At the core of our design practice is the ongoing investigation of the intimate connection between form, content and technology. What follows is a report of an experiment that stretched out over a little more than a year.

1 Open Source Publishing, http://ospublish.constantvzw.org
For the production of previous books, OSP used Scribus, an Open Source Desktop Publishing tool which resembles its proprietary counterparts PageMaker, InDesign or QuarkXPress. In this type of software, each single page is virtually present as a 'canvas' that has the same proportions as a physical page, and each of these 'pages' can be individually altered by adding or manipulating the virtual objects on it. Templates or 'master pages' allow the automatic placement of repeated elements such as page numbers and text blocks, but as in a paper-based design workflow, each single page can be treated as an autonomous unit that can be moved, duplicated and, when necessary, removed. Scribus would certainly have been fit for this job, though the rapidly developing project is currently at a stage where the production of books of more than 40 pages can become tedious. Users are advised to split up such documents into multiple sections, which means that in order to keep continuity between pages, design decisions are best made beforehand. As a result, the design workflow is rendered less flexible than you would expect from state-of-the-art creative software.

In previous projects, Scribus' rigid workflow challenged us to relocate our creative energy to another territory: that of computation. We experimented with its powerful Python scripting API to create 500 unique books. In another project, we transformed a text block over a sequence of pages with the help of a fairy-tale script. But for Tracks in electronic fields we dreamed of something else.
Pierre Huyghebaert takes on the responsibility for the design of the book. He had been using various generations of layout software since the early 1990s, and had gathered an extensive body of knowledge about their potential and limitations. More than once he brought up the desire to try out a legendary typesetting system called TeX, a sublime typographic engine that allegedly implemented the work of grandmaster Jan Tschichold 2 with mathematical precision.
TeX is a computer language designed by Donald Knuth in the 1970s, specifically for typesetting mathematical and other scientific material. Powerful algorithms automate widow and orphan control and can handle intelligent image placement. It is renowned for being extremely stable, for running on many different kinds of computers and for being virtually bug-free. In the academic tradition of free knowledge exchange, Knuth decided to make TeX available 'for no monetary fee', and modifications of or experimentations with the source code are encouraged. In typical self-referential style, the near perfection of its software design is expressed in a version number which is converging to π 3.

2 In Die neue Typographie (1928), Jan Tschichold formulated the classic canon of modernist book design.
3 The value of π (3.141592653589793...) is the ratio of any circle's circumference to its diameter, and its decimal representation never repeats. The current version number of TeX is 3.141592.
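To give a concrete sense of what bare TeX looks like, here is a minimal sketch of our own (not a file from this book's production); saved as hello.tex and compiled on the command line with 'tex hello.tex', it produces hello.dvi:

\centerline{\bf Hello from plain \TeX}
\medskip
Paragraphs, boxes and glue are the primitives here; pages only
emerge when the compiler decides where to break them.
\bye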
For OSP, TeX represents the potential of doing design differently. Through shifting our software habits, we try to change our way of working too. But Scribus, like the kinds of proprietary software it is modeled on, has a 'productionalist' view of design built into it 4, which
is undeniably seeping through in the way we use it. An exotic Free Software tool like TeX, rooted firmly in an academic context rather than in commercial design, might help us to re-imagine the familiar skill of putting type on a page. By making this kind of 'domain shift' 5 we hope to discover another experience of making, and to find a more constructive relation between software, content and form. So when Pierre suggests that this V/J10 publication is possibly the right occasion to try, we respond with enthusiasm.

4 "A DTP program is the equivalent of a final assembly in an industrial process." Christoph Schäfer, Gregory Pittman et al., The Official Scribus Manual. FLES Books, 2009.
5 See: Richard Sennett, The Craftsman. Allen Lane (Penguin Press), 2008.
By the end of 2008, Pierre starts carving out a path through the dense forest of manuals, advice and tips-and-tricks with the help of Ivan Monroy Lopez. Ivan is trained as a mathematician and more or less familiar with the exotic culture of TeX. They decide to use the popular macro package LaTeX 6 to interface with TeX, find out about the tongue-in-cheek concept of 'badness' (depending on the tension put on hyphenated paragraphs, compiling a .tex document produces a 'badness' for each block on a scale from 0 to 10,000), and encounter a long history of wonderful but often incoherent layers of development that envelop the mysterious lasagna beauty of TeX's typographic algorithms.

6 LaTeX is a high-level markup language that was first developed by Leslie Lamport in 1985. Lamport is a computer scientist also known for his work on distributed systems and multithreading algorithms.
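As an illustration of how badness surfaces in practice (again a minimal sketch of our own, not taken from the book's sources), compiling the following file with pdflatex prints an 'Underfull \hbox (badness ...)' message for every line whose interword glue had to stretch:

\documentclass{article}
\hbadness=0   % report the badness of every box, not only the bad ones
\setlength{\textwidth}{4cm} % a narrow measure makes loose, high-badness lines likely
\begin{document}
Badness runs from 0 (perfect) to 10000 (unacceptable); TeX uses it to
choose between candidate ways of breaking a paragraph into lines.
\end{document}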
Laying out a publication in LaTeX is an entirely different experience from working with canvas-based software. First of all, design decisions are executed through the application of markup, which vaguely recalls working with CSS or HTML. The actual design is only complete after 'compiling' the document, and this is where the TeX magic happens. The software passes several times over a marked-up .tex file, incrementally deciding where to hyphenate a word and where to place a paragraph or image. In principle, the concept of a page only applies after compilation is complete. Design work therefore radically shifts from the act of absolute placement to the co-managing of a flow. All elements remain relatively placed until the last pass is complete, and while error messages, warnings and hyphenation decisions scroll by on the command line, the sensation of elasticity is almost tangible. And
indeed, when the acceptable 'stretch' for the placement of a paragraph is exceeded, words literally break out of the grid (see the example on page 34).
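The flow-based way of working described above can be seen in miniature in a document like the following (our own sketch): the author only declares logical structure, and the pages appear once 'pdflatex book.tex' has run:

\documentclass{book}
\begin{document}
\chapter{Tracks in electr(on)ic fields}
\section{The Making Of}
Markup such as \emph{emphasis} or a \footnote{A note.} declares what
something is, not where it sits; line breaks, page breaks and the
placement of this footnote are all decided during compilation.
\end{document}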
When I join Pierre to continue the work in January 2009, the book is still far from finished. By now, we can produce those typical academic-style documents with ease, but we still have not managed to use our own fonts 7. Flipping back and forth in the many manuals and handbooks that exist, we enjoy discovering a new culture. Though we occasionally cringe at the paternalist humour that seems to have infected every corner of the TeX community, and which is clearly inspired by the witticisms of the founding father, Donald Knuth himself, we experience how the lightweight, flexible document structure of TeX allows for a less hierarchical and non-linear workflow, making it easier to collaborate on a project. It is an exhilarating experience to produce a layout in dialogue with a tool, and the design process takes on an almost rhythmical quality, iterative and incremental. It also starts to dawn on us that souplesse comes with a price.

7 "Installing fonts in LaTeX has the name of being a very hard task to accomplish. But it is nothing more than following instructions. However, the problem is that, first, the proper instructions have to be found and, second, the instructions then have to be read and understood." http://www.ntg.nl/maps/29/13.pdf
“Users only need to learn a few easy-to-understand commands that specify the logical structure of a document”, promises The Not So Short Introduction to LaTeX. “They almost never need to tinker with the actual layout of the document.” This explains why using LaTeX stops being easy to understand once you attempt to expand its strict model of 'book', 'article' or 'thesis': the 'users' that LaTeX addresses are not designers and editors like us. At this point, we doubt whether to give up or push through, and decide to set ourselves a limit of a week in which we should be able to tick off a minimal number of items from a list of essential design elements. Custom page sizes and headers, working with URLs... each requires a separate 'package' that may or may not be compatible with another one. At the end of the week, just when we start to regain confidence in the usability of LaTeX for our purpose, our document breaks beyond repair when we try to use a custom paper size and custom headers at the same time.
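The essay does not name the packages OSP actually fought with, but geometry (custom paper sizes) and fancyhdr (custom headers) are the usual suspects, so a preamble of roughly this shape is a plausible reconstruction of the breaking point, not a record of it:

\documentclass{book}
% each design decision lives in its own package...
\usepackage[paperwidth=17cm,paperheight=24cm,margin=2cm]{geometry}
\usepackage{fancyhdr} % ...and packages may or may not cooperate
\usepackage{url}
\pagestyle{fancy}
\fancyhead[LE,RO]{\thepage}  % page numbers in the outer corners
\fancyhead[RE,LO]{Tracks in electr(on)ic fields}
\begin{document}
See \url{http://ospublish.constantvzw.org} for OSP's own notes.
\end{document}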

In February, more than six months into the process, we briefly consider switching to OpenOffice instead (which we had never tried for such a large publication) or going back to Scribus (which would mean, for Pierre, learning a new tool). Then we remember ConTeXt, a relatively young 'macro package' that uses the TeX engine as well. "While LaTeX insulates the writer from typographical details, ConTeXt takes a complementary approach by providing structured interfaces for handling typography, including extensive support for colors, backgrounds, hyperlinks, presentations, figure-text integration, and conditional compilation." 8 This is what we have been looking for.
ConTeXt was developed in the 1990s by a Dutch company specialised in 'Advanced Document Engineering'. They needed to produce complex educational materials and workplace manuals, and came up with their own interface to TeX. "The development was purely driven by demand and configurability, and this meant that we could optimize most workflows that involved text editing." 9

8 Interview with Hans Hagen, http://www.tug.org/interviews/interview-files/hans-hagen.html
9 Ibid.
However frustrating it is to re-learn yet another type of markup (even if both are based on the same TeX language, most LaTeX commands do not work in ConTeXt and vice versa), many of the things that we could only achieve by means of a 'hack' in LaTeX are built in and readily available in ConTeXt. With the help of the very active ConTeXt mailing list we find a way to finally use our own fonts, and while plenty of questions, bugs and dark areas remain, it feels like we are close to producing the kind of multilingual, multi-format, multi-layered publication we imagine Tracks in Electr(on)ic fields to be.
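For comparison, this is what those 'structured interfaces' look like in ConTeXt (our own sketch, not a setup file from this book's sources): what needed a package in LaTeX is a built-in setup command here, and the file compiles with 'context file.tex':

\setuppapersize[A5][A5]   % custom paper size, no extra package
\setupbodyfont[palatino,9pt]
\setupcolors[state=start]
\setuphead[section][style=bold,color=darkred]
\starttext
\section{The Making Of}
Colors, backgrounds and headers are configured through the same
family of setup commands.
\stoptext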
However, Pierre and I are working on different versions of Ubuntu, respectively on a Mac and on a PC, and we soon discover that our installations of ConTeXt produce different results. We can't find a solution in the nerve-wrackingly incomplete, fragmented though extensive documentation of ConTeXt, and by June 2009 we still have not managed to print the book. As time passes, we find it increasingly
difficult to allocate concentrated time for learning, and it is a humbling experience that acquiring some sort of fluency seems to pull us in all directions. The stretched-out nature of the process also feeds our insecurity: Maybe we should have tried this other package too? Have we read that manual correctly? Have we read the right manual? Did we really understand those instructions? If we were computer scientists ourselves, would we know what to do? Paradoxically, the more we invest in this process, mentally and physically, the harder it is to let go. Are we refusing to see the limits of this tool, or, even scarier, our own limitations? Can we accept that the experience we'd hoped for is a lot more banal than the sublime results we secretly expected? A fellow Constant member suggests in desperation: "You can't just make a book, can you?"
In July, Pierre decides to pay for a consult with the developers of ConTeXt themselves, to once and for all solve some of the issues we continue to struggle with. We drive up expectantly to the headquarters of Pragma in Hasselt (NL) and discuss our problems, seated in the recently redecorated rooms of a former bank building. Hans Hagen himself reinstalls MkIV (the latest in ConTeXt) on Pierre's machine, while his colleague Ton Otten tours me through samples of the colorful publications produced by Pragma. In the afternoon, Hans gathers up some code examples that could help us place thumbnail images, and before we know it we are on our way south again. Our visit confirms the impression we had formed from the awkwardly written manuals and peculiar syntax: that ConTeXt is in essence a one-man mission. It is hard to imagine that a tool written to solve the particular problems of a certain document engineer will ever grow into the kind of tool that we desire as well.
In August, as I type up this report, the book is more or less ready to go to print. Although it looks 'handsome' according to some, due to unexpected bugs and time constraints we have had to let go of some of the features we hoped to implement. Looking at it now, just before going to print, it has certainly not turned out to be the kind of eye-opening typographic experience we dreamt of, and sadly we will never know whether that is due to our own limited understanding of TeX, LaTeX and ConTeXt, to the inherent limits of those tools
themselves, or to the crude decision to finally force through a layout in two weeks. Probably it is a mix of all of the above; it is first of all a relief that the publication finally exists. Looking back at the process, I am reminded of the wise words of Joseph Weizenbaum, who observed that "Only rarely, if indeed ever, are a tool and an altogether original job it is to do, invented together" 10.
While this book nearly crumbled under the weight of the projections it had to carry, I often thought that outside academic publishing, the power of TeX is much like a Fata Morgana. Mesmerizing and always out of reach, TeX continues to represent a promise of an alternative technological landscape that keeps our dream of changing software habits alive.

10 Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation. MIT Press, 1976.

Femke Snelting (OSP), August 2009

Colophon

Tracks in electr(on)ic fields is a publication of Constant, Association for Art and Media, Brussels.
Translations: Steven Tallon, Anne Smolar, Yves Poliart, Emma Sidgwick
Copy editing: Emma Sidgwick, Femke Snelting, Wendy Van Wynsberghe
English editing and translations: Sophie Burm
Design: Pierre Huyghebaert, Femke Snelting (OSP)
Photos, unless otherwise noted: Constant (Peter Westenberg). Figures 5-9: Marc Wathieu; figures 31-96: Constant (Christina Clar, video stills); figures 102-104: Leif Elggren, CM von Hausswolff; figures 107-116: Manu Luksch; figures A-Q: elpueblodechina; figures 151 + 152: Pierre Huyghebaert; figure 155: Cornelius Cardew; figures 160-162: Scratch Orchestra; figures 153 + 154: Michael E. Emrick (courtesy of Ben Looker); figures 156-157 + 159: photographer unknown; figure 158: David Griffiths; pages 19, 25, 35, 77 and 139: public domain or unknown.
This book was produced in ConTeXt, based on the TeX typesetting engine, and other Free Software (OpenOffice, Gimp, Inkscape). For a written account of the production process see The Making Of on page 323.
Printing: Drukkerij Geers Offset, Gent

Copyright © 2009, Constant.
Copyleft: this book is free. You can distribute and modify it according to the
terms of the Free Art Licence. You can find an example of this licence on the
site ‘Copyleft Attitude' http://www.artlibre.org
This book can be downloaded from: http://www.constantvzw.org/verlag. Sources
are available from http://osp.constantvzw.org/sources/vj10

Figure 148: De Vlaamse Minister van Cultuur, Jeugd, Sport en Brussel (the Flemish Minister of Culture, Youth, Sport and Brussels)
Figure 149: De Vlaamse Gemeenschapscommissie (the Flemish Community Commission)