Adema
Scanners, collectors and aggregators. On the underground movement of (pirated) theory text sharing
2009
# Scanners, collectors and aggregators. On the ‘underground movement’ of
(pirated) theory text sharing
_“But as I say, let’s play a game of science fiction and imagine for a moment:
what would it be like if it were possible to have an academic equivalent to
the peer-to-peer file sharing practices associated with Napster, eMule, and
BitTorrent, something dealing with written texts rather than music? What would
the consequences be for the way in which scholarly research is conceived,
communicated, acquired, exchanged, practiced, and understood?”_
Gary Hall – [Digitize this
book!](http://www.upress.umn.edu/Books/H/hall_digitize.html) (2008)
UbuWeb was founded in 1996 by poet [Kenneth
Goldsmith](http://en.wikipedia.org/wiki/Kenneth_Goldsmith "Kenneth Goldsmith")
and has developed from ‘a repository for visual, concrete and (later) sound
poetry’ into a site that ‘embraced all forms of the avant-garde and beyond. Its
parameters continue to expand in all directions.’ As
[Wikipedia](http://en.wikipedia.org/wiki/UbuWeb) states, Ubu is non-commercial
and operates on a gift economy. All the same - by forming an amazing resource
and repository for the avant-garde movement, and by offering and hosting these
works on its platform, Ubu is violating copyright laws. As they state however:
‘ _should something return to print, we will remove it from our site
immediately. Also, should an artist find their material posted on UbuWeb
without permission and wants it removed, please let us know. However, most of
the time, we find artists are thrilled to find their work cared for and
displayed in a sympathetic context. As always, we welcome more work from
existing artists on site_.’
Where in the more affluent and popular media realms of blockbuster movies and
pop music the [Piratebay](http://thepiratebay.org/) and other download sites
(or p2p networks) like [Mininova](http://www.mininova.org/) are being sued and
charged with copyright infringement, the powers that be seem to turn a
blind eye when it comes to Ubu and the many other online resource sites that offer
digital versions of hard-to-come-by materials ranging from books to
documentaries.
This has not always been the case, however: in 2002 [Sebastian
Lütgert](http://www.wizards-of-
os.org/archiv/wos_3/sprecher/l_p/sebastian_luetgert.html) from Berlin/New York
was sued by the "Hamburger Stiftung zur Förderung von Wissenschaft und Kultur"
for putting online two downloadable texts from Theodor W. Adorno on his
website [textz.com](http://www.medienkunstnetz.de/artist/textz-
com/biography/), an underground archive for Literature. According to
[this](http://de.indymedia.org/2004/03/76975.shtml) Indymedia interview with
Lütgert, textz.com was referred to as ‘the Napster for books’, offering about
700 titles focusing, as Lütgert states, on _‘theory, novels, science fiction,
Situationists, cinema, the French, Douglas Adams, critical theory, net critique,
etc.’_
The interview becomes even more interesting when Lütgert remarks that one can
still easily download both Adorno texts without much ado if one wants to. This
leads to the bigger question of the real reasons underlying the charge against
textz.com; why was textz.com sued? As Lütgert says in the interview: “_You can
do that anyway_ [referring to the still available Adorno texts]_. But there has
long been a clear difference between open availability and the underground. You
cannot stop the free distribution of content, but they seem to want to prevent
it from happening too openly and too matter-of-factly. That is what bothers
them._”
But how can something be truly underground in an online environment whilst
still trying to spread or disseminate texts as widely as possible? This seems
to be the paradox of many - not quite legal and/or copyright protected -
resource sharing and collecting communities and platforms nowadays. However,
multiple scenarios are available to evade this dilemma: being frankly open
about the ‘status’ of the content on offer, as Ubu is, or using little
‘tricks’ like a simple website registration, classifying oneself as a reading
group, or relieving oneself of responsibility by stating that one is only
aggregating sources from elsewhere (linking) rather than hosting the content on
one’s own website or blog. One can also state that the offered texts or multimedia
files form a special issue or collection of resources, emphasizing their
educational and not-for-profit value.
Most of the ‘underground’ text and content sharing communities seem to follow
the concept of (the inevitability of) ‘[information wants to be
free](https://openreflections.wordpress.com/tag/information-wants-to-be-
free/)’, especially on the Internet. As Lütgert states: “_And above all they
have no idea about Walter Benjamin, who faced and recognised the same problem
of the reproducibility of works of all kinds at the beginning of the last
century: the masses have the right to reappropriate all of it. They have the
right to copy, and the right to be copied. In any case it is a rather
uncomfortable situation that his estate is now administered by such a
bureaucrat._ _A: Do you think it is at all legitimate to "own" intellectual
content? Or to be its proprietor?_ _S: It is *impossible*. "Intellectual"
anything keeps on spreading. Reemtsma's ancestors would never have come down
from the trees or crawled out of the mud if "intellectual" anything had not
spread.”_
What seems to be increasingly obvious, as the interview also states, is that
one can find virtually all the ebooks and texts one needs via p2p networks and
other file sharing communities (the true
[Darknet](http://en.wikipedia.org/wiki/Darknet_\(file_sharing\)) in a way) –
more and more people are offering (and asking for!) selections of texts and
books (including the ones by Adorno) on openly available websites and blogs,
or are scanning them and offering them for (educational) use on their own
domains. Although the Internet is mostly known for the dissemination of pirated
movies and music, copyright protected textual content has (of course) always
been spread too. But with the rise of ‘born digital’ text content, with the
help of massive digitization efforts like Google Books (and the accompanying
Google Books [download tools](http://www.codeplex.com/GoogleBookDownloader)),
and with the appearance of better (and cheaper) scanning equipment, the
movement of ‘openly’ spreading (pirated) texts (whether or not focusing on
education and ‘fair use’) seems to be growing fast.
The direct harm (to both producers and their publishers) of the free online
availability of in-copyright texts is also perhaps less clear than it is with,
for instance, music and films. Many feel that texts and books will still
preferably be read in print, making the free online availability of a text
little more than a marketing tool for sales of the printed version. Once
discovered, those truly interested will find and buy the print book. More than
with music and film, moreover, sharing information is felt to be essential, as
a cultural good and right, to prevent censorship and to improve society.

This is one of the reasons the [Open
Access](http://en.wikipedia.org/wiki/Open_access_\(publishing\)) movement for
scientific research was initiated. But where the number of people and
institutions supportive of this movement is gradually growing (especially
where it concerns articles and journals in the sciences), the spread of Open
Access (or even of digital availability) for monographs in the Humanities and
Social Sciences (which make up the majority of the resources on offer in the
underground text sharing communities) has only just started.
This has led to a situation in which some have decided that change is not
coming fast enough. Instead of waiting for this utopian Open Access future to
gradually come about, they are actively spreading, copying, scanning and
pirating scholarly texts/monographs online. Although these efforts are often
accompanied by lengthy disclaimers about why they are violating copyright (to
make the content more widely accessible, for one), many state that they will
take down the content if asked. Following the
[copyleft](http://en.wikipedia.org/wiki/Copyleft) movement, what has thus in a
way arisen is a more ‘progressive’ or radical branch of the Open Access
movement. The people who spread these texts deem it inevitable that they will
be online eventually; they are just speeding up the process. As Lütgert states:
‘_The desire of an ever larger part of the population for 100 percent of the
information is irreversible. At worst it can be slowed down, but it cannot be
stopped._’
Still we have not yet answered the question of why publishers (and their
pirated authors) are not more upset about these kinds of websites and
platforms. It is not simply a question of them not being aware that this kind
of textual dissemination is occurring. As mentioned before, the harm to
producers (scholars) and their publishers (in the Humanities and Social
Sciences mainly not-for-profit university presses) is less clear. First of all,
their main customers are libraries (compare this to the software business
model: free for the consumer, companies pay), who are still buying the legal
content and mostly follow the policy of buying either print or both print and
ebook, so there are no lost sales there for the publishers. Beyond that, it is
not certain that the piracy is harming sales. Unlike in literary publishing,
the authors (academics) are already paid and do not lose money (except perhaps
a little in royalties) from the online availability. Perhaps some publishers
also see the Open Access movement as something that will inevitably grow, and
thus don’t see the need to step up or organise a collaborative effort against
scholarly text piracy (most of the presses also lack the scale to initiate
this).
While there has been more of an upsurge in worries about _[textbook
piracy](http://bookseller-association.blogspot.com/2008/07/textbook-piracy.html)_
(since this is of course an area where individual consumers – students – do
directly buy the material) and about websites like
[Scribd](http://www.scribd.com/), this mostly has to do with the fact that
these kinds of platforms also host non-scholarly content and actively promote
the uploading of texts (where many of the text ‘sharing’ platforms merely
offer downloading facilities). In the case of Scribd, the size of the platform
(or the amount of content available on it) has also caused concerns and much
[media coverage](http://labnol.blogspot.com/2007/04/scribd-youtube-for-pirated-ebooks-but.html).
All of this gives a lot of potential power to text sharing communities, and I
guess they know this. Only authors might be directly upset (especially famous
ones earning a lot in royalties on their work) or, as in the case of Lütgert,
their beneficiaries, who still see a lot of money coming directly from
individual customers.
Still, it is not only the lack of fear of possible retaliation that is
feeding the upsurge of text sharing communities. There is a strong ideological
commitment to the inherent good of these developments, and a moral and
political striving towards institutional and societal change when it comes to
knowledge production and dissemination.
As Adrian Johns states in his
[article](http://www.culturemachine.net/index.php/cm/article/view/345/348)
_Piracy as a business force_, ‘today’s pirate philosophy is a moral
philosophy through and through’. As Jonas Andersson
[states](http://www.culturemachine.net/index.php/cm/article/view/346/359), the
idea of piracy has mostly lost its negative connotations in these communities
and is seen as a positive development, where these movements ‘have begun to
appear less as a reactive force (i.e. ‘breaking the rules’) and more as a
proactive one (‘setting the rules’). Rather than complain about the
conservatism of established forms of distribution they simply create new,
alternative ones.’ Although Andersson states this kind of activism is mostly
_occasional_, it can be seen expressed clearly in the texts accompanying the
text sharing sites and blogs. However, copyright is perhaps not so much _an
issue_ on most of these sites (though it is on some of them) as it is something
that seems to be simply ignored for the larger good of aggregating and sharing
resources on the web. As is stated clearly for instance in an
[interview](http://blog.sfmoma.org/2009/08/four-dialogues-2-on-aaaarg/) with
Sean Dockray, who maintains AAAARG:
_" The project wasn’t about criticizing institutions, copyright, authority,
and so on. It was simply about sharing knowledge. This wasn’t as general as it
sounds; I mean literally the sharing of knowledge between various individuals
and groups that I was in correspondence with at the time but who weren’t
necessarily in correspondence with each other."_
Back to Lütgert. The files from textz.com have been saved and are still
[accessible](http://web.archive.org/web/20031208043421/textz.gnutenberg.net/index.php3?enhanced_version=http://textz.com/index.php3)
via [The Internet Archive Wayback
Machine](http://web.archive.org/collections/web.html). In the case of
textz.com, these files contain ‘typed out text’, so no scanned content or
PDFs. Textz.com (or rather its shadow or mirror) offers an amazing
collection of texts, including artists’ statements/manifestos and screenplays
by, for instance, David Lynch.
The text sharing community has evolved and now has many players. Two other
large members of this kind of ‘pirate theory base network’ (although – and I
have to make that clear! – they offer many (and even mostly) legal and
out-of-copyright texts), still active today, are
[Monoskop/Burundi](http://burundi.sk/monoskop/log/) and
[AAAARG.ORG](http://a.aaaarg.org/). These platforms all seem to disseminate
(often even at a titular level) similar content, focusing mostly on Continental
Philosophy and Critical Theory, Cultural Studies and Literary Theory, the
Frankfurt School, Sociology/Social Theory, Psychology, Anthropology and
Ethnography, Media Art and Studies, Music Theory, and critical and avant-garde
writers like Kafka, Beckett, Burroughs, Joyce, Baudrillard, etc.
[Monoskop](http://www.burundi.sk/monoskop/index.php/Main_Page) is, as they
state, a collaborative research wiki on the social history of media art, or a
‘living archive of writings on art, culture and media technology’. At the
sitemap of their log, or under the categories section, you can browse their
resources by genre: book, journal, e-zine, report, pamphlet, etc. As I found
[here](http://www.slovakia.culturalprofiles.net/?id=7958), Burundi originated
in 2003 as a (Slovakian) media lab working between the arts, science and
technology, which spread out into a European city-based cultural network. They
even functioned as a press, publishing the Anthology of New Media Literature
(in Slovak) in 2006, and they hosted media events and curated festivals.
Burundi dissolved in June 2005, although the
[Monoskop](http://www.slovakia.culturalprofiles.net/?id=7964) research wiki on
media art has continued to run since then.
As is stated on their website, AAAARG is a conversation platform, or
alternatively, a school, reading group or journal, maintained by Los Angeles
artist [Sean Dockray](http://www.design.ucla.edu/people/faculty.php?ID=64
"Sean Dockray"). In the true spirit of Critical Theory, its aim is to ‘develop
critical discourse outside of an institutional framework’. Or, put even more
beautifully, it operates in the spaces in between: ‘_But rather than
thinking of it like a new building, imagine scaffolding that attaches onto
existing buildings and creates new architectures between them_.’ To be able to
access the texts and resources that are being ‘discussed’ at AAAARG, you need
to register, after which you will be able to browse the
[library](http://a.aaaarg.org/library). From this library, you can download
resources, but you can also upload content. You can subscribe to their
[feed](http://aaaarg.org/feed) (RSS/XML) and, [like
Monoskop](http://twitter.com/monoskop), AAAARG.ORG also maintains a [Twitter
account](http://twitter.com/aaaarg) on which updates are posted. The most
interesting part, though, is the ‘extra’ functions the platform offers: after
you have made an account, you can make your own collections, aggregations or
issues out of the texts in the library or the texts you add. This offers an
alternative (thematically ordered) way into the texts archived on the site.
You also have the possibility to make comments or start a discussion on the
texts. See for instance their elaborate [discussion
lists](http://a.aaaarg.org/discussions). The AAAARG community thus serves as
both a sharing and a feedback community, and in this way operates in a true p2p
fashion, the way p2p seems to have originally been intended. The difference is
that AAAARG is not based on a distributed network of computers but on a single
platform, to which registered users are able to upload files (which is not the
case on Monoskop, for instance – only downloading there).
Via
[mercerunionhall](http://mercerunionhall.blogspot.com/2009/06/aaaargorg.html),
I found the image below, which depicts AAAARG.ORG's article index
organised as a visual map, showing the connections between the different
texts. This map was created and posted by AAAARG user john, according to
mercerunionhall.

Where AAAARG.ORG focuses again on the text itself – typed out versions of
books – Monoskop works with more modern forms of textual distribution:
scanned versions or full ebooks/PDFs with all the possibilities they offer,
taking a lot of content from Google Books or (Open Access) publishers’
websites. Monoskop also links back to the publishers’ websites or Google
Books for information about the books or texts (which again indicates that the
publishers must be aware of these activities). To download a text, however,
Monoskop links to [Sharebee](http://www.sharebee.com/), keeping the actual
text and the real downloading activity away from its own platform.
Another part of this text sharing network consists of platforms offering
documentaries and lectures (i.e. multimedia content) online. One example of the
latter is the [Discourse Notebook Archive](http://www.discoursenotebook.com/),
which describes itself as an effort whose main goal is ‘to make
available lectures in contemporary continental philosophy’ and which is
maintained by Todd Kesselman, a PhD student at The New School for Social
Research. Here you can find lectures by Badiou, Kristeva and Zizek (both audio
and video) and lectures aggregated from the European Graduate School. Kesselman
also links to resources on the web dealing with contemporary continental
philosophy.
Society of Control is a website maintained by [Stephan
Dillemuth](http://www.kopenhagen.dk/fileadmin/oldsite/interviews/solmennesker.htm),
an artist living and working in Munich, Germany, offering, amongst other
things, an overview of his work and scientific research. According to
[this](http://www2.khib.no/~hovedfag/akademiet_05/tekster/interview.html)
interview conducted by Kristian Ø Dahl and Marit Flåtter, his work is a
response to the increased influence of the neo-liberal world order on
education, creating a culture industry that is more often than not driven by
commercial interests. He asks the question ‘How can dissidence grow in the
blind spots of the ‘society of control’ and articulate itself?’ His website,
the [Society of Control](http://www.societyofcontrol.com/disclaimer1.htm) is,
as he states, ‘an independent organization whose profits are entirely devoted
to research into truth and meaning.’
Society of Control has a [library
section](http://www.societyofcontrol.com/library/) which contains works by
some of the biggest thinkers of the twentieth century: Baudrillard, Adorno,
Debord, Bourdieu, Deleuze, Habermas, Sloterdijk and so on, and much more, a lot
of it in German, and all ‘typed out’ texts. The library section offers a
direct search function, a category function and an a–z browse function.
Dillemuth states that he offers this material under fair use, focusing on
not-for-profit use, freedom of information, the maintenance of freedom of
speech, and making information accessible to all:
_“The Societyofcontrol website site contains information gathered from many
different sources. We see the internet as public domain necessary for the free
flow and exchange of information. However, some of these materials contained
in this site may be claimed to be copyrighted by various unknown persons. They
will be removed at the copyright holder's request within a reasonable period
of time upon receipt of such a request at the email address below. It is not
the intent of the Societyofcontrol to have violated or infringed upon any
copyrights.”_
Important in this respect is that he puts the responsibility for
reading/using/downloading the texts on his site on the viewers, not on
himself: _“Anyone reading or looking at
copyright material from this site does so at his/her own peril, we disclaim
any participation or liability in such actions.”_
Fark Yaraları = [Scars of Différance](http://farkyaralari.blogspot.com/) and
[Multitude of blogs](http://multitudeofblogs.blogspot.com/) are maintained by
the same author, Renc-u-ana, a philosophy and sociology student from Istanbul.
The first is his personal blog (which also contains many links to downloadable
texts), focused on ‘creating an e-library for a Heideggerian philosophy and
Bourdieuan sociology’; on it he writes that ‘market-created inequalities must
be overthrown in order to close knowledge gap.’ The second site has a clear
aggregating function, with the aim ‘to give united feedback for e-book
publishing sites so that tracing and finding may become easier’, along with a
call for similar blogs or websites offering free ebook content. The blog is
accompanied by a nice picture of a woman warning readers to keep quiet,
paradoxically appropriate to the context. Here again, a statement from the
host on possible copyright infringement: _‘None of the PDFs are my own
productions. I've collected them from web (e-mule, avax, libreremo, socialist
bros, cross-x, gigapedia..) What I did was thematizing._’ The same goes for
[pdflibrary](http://pdflibrary.wordpress.com/) (which seems to be by the
same author), offering texts from Derrida, Benjamin, Deleuze and the like:
_‘None of the PDFs you find here are productions of this blog. They are
collected from different places in the web (e-mule, avax, libreremo, all
socialist bros, cross-x, …). The only work done here is thematizing and
tagging.’_
Our student from Istanbul lists many text sharing sites on [Multitude of
blogs](http://multitudeofblogs.blogspot.com/), including
[Inishark](http://danetch.blogspot.com/) (amongst others, Badiou, Zizek and
Derrida),
[Revelation](http://revelation-online.blogspot.com/2009/02/keeping-ten-commandments.html)
(a lot of history and bible study), [Museum of
accidents](http://museumofaccidents.blogspot.com/) (many resources relating,
again, to critical theory, political theory and continental philosophy) and
[Makeworlds](http://makeworlds.net/) (initiated at the [make world
festival](http://www.makeworlds.org/1/index.html) in 2001).
[Mariborchan](http://mariborchan.wordpress.com/) is mainly a Zizek resource
site (also Badiou and Lacan) and offers, in addition to ebooks, video and audio
(lectures and documentaries) and text files, all via links to file sharing
platforms.
What is clear is that the text sharing network described above (and I am sure
there are many more related to other fields and subjects) is also formed and
maintained by the fact that the blogs and resource sites link to each other in
their blogrolls. This is what, in the end, makes up the network of text
sharing, further enhanced by RSS feeds and Twitter accounts that maintain
direct communication streams with the rest of the community. That there has
not been one major platform or aggregation site linking them together and
uploading all the texts is logical if we take into account the text sharing
history described before, and can thus be seen as a clear tactic: it is fear –
fear of what happened to textz.com, fear of the issue of scale, and fear of no
longer operating at the borders, on the outside or at the fringes – because a
larger scale means they might really get noticed. The idea of secrecy and
exclusivity that underlies the idea of the underground is very practically
combined with the idea that in this way the texts are available in a multitude
of places and thus cannot be withdrawn or disappear so easily. This is the
paradox of the underground: staying small means not being noticed (widely), but
it also means being able to exist for probably an extended period of time.
Becoming (too) big means reaching more people and spreading the texts further
into society, but it also probably means being noticed as a threat, as a
‘network of text piracy’. The true strategy is to retain this balance of openly
dispersed subversiveness.
Update 25 November 2009: Another interesting resource site came to my
attention recently: [Bedeutung](http://www.bedeutung.co.uk/index.php),
a philosophical and artistic initiative consisting of three projects –
[Bedeutung
Magazine](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=1&Itemid=3),
[Bedeutung
Collective](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=67&Itemid=4)
and [Bedeutung Blog](http://bedeutung.wordpress.com/) – which hosts a
[library](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=85&Itemid=45)
section linking to freely downloadable online e-books, articles, audio
recordings and videos.
### 17 comments on “Scanners, collectors and aggregators. On the ‘underground movement’ of (pirated) theory text sharing”
1. Pingback: [Humanism at the fringe « Snarkmarket](http://snarkmarket.com/2009/3428)
2. Pingback: [Scanners, collectors and aggregators. On the 'underground movement' of (pirated) theory text sharing « Mariborchan](http://mariborchan.wordpress.com/2009/09/20/scanners-collectors-and-aggregators-on-the-underground-movement-of-pirated-theory-text-sharing/)
hi there, I'm the owner of the Scars of Différance blog, I'm grateful for your
reading which nurtures self-reflexivity.
text-sharers phylum is a Tardean phenomenon, it works through imitation and
differences differentiate styles and archives. my question was inherited from
aby warburg who is perhaps the first kantian librarian (not books, but the
nomenclatura of books must be thought!), I shape up a library where books
speak to each other, each time fragmentary.
you are right about the "fear", that's why I don't reupload books that are
deleted from mediafire. blog is one of the ways, for ex there are e-mail
groups where chain-sharings happen and there are forums where people ask each
other from different parts of the world, to scan a book that can't be found in
their library/country. I understand publishers' qualms (I also work in a
turkish publishing house and make translations). but they miss a point, it was
the very movement which made book a medium that de-posits "book" (in the
Blanchotian sense): these blogs do indeed a very important service, they save
books from the databanks. I'm not going to make an easy rider argument and
decry technology. what I mean is this: these books are the very bricks which
make up resistance -they are not compost-, it is a sharing "partage" and these
fragmentary impartations (the act in which 'we' emancipate books from the
proper names they bear: author, editor, publisher, queen,…) make words blare.
our work: to disenfranchise.
to get larger, to expand: these are too ambitious terms, one must learn to
stay small, remain finite. a blog can not supplant the non-place of the
friendships we make up around books.
the epigraph at the top of my blog reads: "what/who exorbitates mutates into
its opposite" from a Turkish poet Cahit Zarifoğlu. and this logic is what
generates the slithering of the word. we must save books from its own ends.
Thanks for the link, good article, agree with the contents, especially like
the part 'Could, for instance, the considerable resources that might be
allocated to protecting, policing and, ultimately, sanctioning online file-
sharing not be used for rendering it less financially damaging for the
creative sector?'
I like this kind of pragmatic reasoning, and I know more people do.
By the way, checked Bedeutung, great journal, and love your
[library](http://www.bedeutung.co.uk/index.php?option=com_content&view=article&id=86&Itemid=46)
section! Will add it to the main article.
10. Pingback: [Mariborchan » Scanners, collectors and aggregators. On the 'underground movement' of (pirated) theory text sharing](http://mariborchan.com/scanners-collectors-and-aggregators-on-the-underground-movement-of-pirated-theory-text-sharing/)
This is Nick, the author of the JJPS project; thanks for the tweet! I actually
came across this blog post while doing background research for the project and
looking for discussions about AAAARG; found out about a lot of projects that I
didn't already know about. One thing that I haven't been able to articulate
very well is that I think there's an interesting relationship between, say,
Kenneth Goldsmith's own poetry and his founding of Ubu Web; a collation and
reconfiguration of the detritus of culture (forgotten works of the avant-
gardes locked up behind pay walls of their own, or daily minutiae destined to
be forgotten), which is something that I was trying to do, in a more
circumscribed space, in JJPS Radio. But the question of distribution of
digital works is something I find fascinating, as there are all sorts of
avenues that we could be investigating but we are not. The issue, as it often
is, is one of technical ability, and that's why one of the future directions
of JJPS is to make some of the techniques I used easier to use. Those who want
to can always look into the code, which is of course freely available, but
that cannot and should not be a prerequisite.
Hi Nick, thanks for your comment. I love the JJPS and it would be great if the
technology you mention would be easily re-usable. What I find fascinating is
how you use another medium (radio) to translate/re-mediate and in a way also
unlock textual material. I see you also have an Open Access and a Cut-up hour.
I am very much interested in using different media to communicate scholarly
research and even more in remixing and re-mediating textual scholarship. I
think your project(s) is a very valuable exploration of these themes while at
the same time being a (performative) critique of the current system. I am in
awe.
14. Pingback: [Text-sharing "in the paradise of too many books" – SLOTHROP](http://slothrop.com/2012/11/16/text-sharing-in-the-paradise-of-too-many-books/)
Interesting topic, but also odd in some respects. Not translating the German
quotes is very unthoughtful and maybe even arrogant. If you are interested in
open access, accessibility needs to be your top priority. I can read German,
but many of my friends (and most of the world) can't. It takes a little effort
to just fix this, but you can do it.
Adema
The Ethics of Emergent Creativity: Can We Move Beyond Writing as Human Enterprise, Commodity and Innovation?
2019
# 3\. The Ethics of Emergent Creativity: Can We Move Beyond Writing as Human
Enterprise, Commodity and Innovation?
In 2013, the Authors’ Licensing & Collecting Society
(ALCS)[1](ch3.xhtml#footnote-152) commissioned a survey of its members to
explore writers’ earnings and contractual issues in the UK. The survey, the
results of which were published in the summary booklet ‘What Are Words Worth
Now?’, was carried out by Queen Mary, University of London. Almost 2,500
writers — from literary authors to academics and screenwriters — responded.
‘What Are Words Worth Now?’ summarises the findings of a larger study titled
‘The Business Of Being An Author: A Survey Of Authors’ Earnings And
Contracts’, carried out by Johanna Gibson, Phillip Johnson and Gaetano Dimita
and published in April 2015 by Queen Mary University of
London.[2](ch3.xhtml#footnote-151) The ALCS press release that accompanies the
study states that this ‘shocking’ new research into authors’ earnings finds a
‘dramatic fall, both in incomes, and the number of those working full-time as
writers’.[3](ch3.xhtml#footnote-150) Indeed, two of the main findings of the
study are that, first of all, the income of a professional author (which the
research defines as those who dedicate the majority of their time to writing)
has dropped 29% between 2005 and 2013, from £12,330 (£15,450 in real terms) to
just £11,000. Furthermore, the research found that in 2005 40% of professional
authors earned their incomes solely from writing, whereas in 2013 this figure
had dropped to just 11.5%.[4](ch3.xhtml#footnote-149)
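(A brief arithmetical check, on the assumption that the 29% decline is measured against the inflation-adjusted, real-terms 2005 figure of £15,450 rather than the nominal £12,330:

$$
\frac{15{,}450 - 11{,}000}{15{,}450} \approx 0.288 \approx 29\%,
\qquad
\frac{12{,}330 - 11{,}000}{12{,}330} \approx 0.108 \approx 11\%\ \text{(nominal)}.
$$

On this reading the headline figure refers to the real-terms drop in income.)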
It seems that one of the primary reasons for the ALCS to conduct this survey
was to collect ‘accurate, independent data’ on writers’ earnings and
contractual issues, in order for the ALCS to ‘make the case for authors’
rights’ — at least, that is what the ALCS Chief Executive Owen Atkinson writes
in the introduction accompanying the survey, which was sent out to all ALCS
members.[5](ch3.xhtml#footnote-148) Yet although this research was conducted
independently and the researchers did not draw conclusions based on the data
collected — in the form of policy recommendations for example — the ALCS did
frame the data and findings in a very specific way, as I will outline in what
follows; this framing includes both the introduction to the survey and the
press release that accompanies the survey’s findings. Yet to some extent this
framing, as I will argue, is already apparent in the methodology used to
produce the data underlying the research report.
First of all, let me provide an example of how the research findings have been
framed in a specific way. Chief Executive Atkinson mentions in his
introduction to the survey that the ALCS ‘exists to ensure that writers are
treated fairly and remunerated appropriately’. He continues that the ALCS
commissioned the survey to collect ‘accurate, independent data,’ in order to
‘make the case for writers’ rights’.[6](ch3.xhtml#footnote-147) Now this focus
on rights in combination with remuneration is all the more noteworthy if we
look at an earlier ALCS funded report from 2007, ‘Authors’ Earnings from
Copyright and Non-Copyright Sources: a Survey of 25,000 British and German
Writers’. This report is based on the findings of a 2006 writers’ survey,
which the 2013 survey updates. The 2007 report argues conclusively that
current copyright law has empirically failed to ensure that authors receive
appropriate reward or remuneration for the use of their
work.[7](ch3.xhtml#footnote-146) The data from the subsequent 2013 survey show
an even bleaker picture as regards the earnings of writers. Yet Atkinson
argues in the press release accompanying the findings of the 2013 survey that
‘if writers are to continue making their irreplaceable contribution to the UK
economy, they need to be paid fairly for their work. This means ensuring
clear, fair contracts with equitable terms and a copyright regime that support
creators and their ability to earn a living from their
creations’.[8](ch3.xhtml#footnote-145) Atkinson does not outline what this
copyright regime should be, nor does he draw attention to how this model could
be improved. More importantly, the fact that a copyright model is needed to
ensure fair pay stands uncontested for Atkinson and the ALCS — not surprising
perhaps, as protecting and promoting the rights of authors is the primary
mission of this member society. If there is any culprit to be held responsible
for the study’s ‘shocking’ findings, it is the elusive and further undefined
notion of ‘the digital’. According to Atkinson, digital technology is
increasingly challenging the mission of the ALCS to ensure fair remuneration
for writers, since it is ‘driving new markets and leading the copyright
debate’.[9](ch3.xhtml#footnote-144) The 2013 study is therefore, as Atkinson
states, ‘the first to capture the impact of the digital revolution on writers’
working lives’.[10](ch3.xhtml#footnote-143) This statement is all the more
striking if we take into consideration that none of the questions in the 2013
survey focus specifically on digital publishing.[11](ch3.xhtml#footnote-142)
It therefore seems that — despite earlier findings — the ALCS has already
decided in advance what ‘the digital’ is and that a copyright regime is the
only way to ensure fair remuneration for writers in a digital context.
## Creative Industries
This strong uncontested link between copyright and remuneration can be traced
back to various other aspects of the 2015 report and its release. For example,
the press release draws a strong connection between the findings of the report
and the development of the creative industries in the UK. Again, Atkinson
states in the press release:
These are concerning times for writers. This rapid decline in both author
incomes and in the numbers of those writing full-time could have serious
implications for the economic success of the creative industries in the
UK.[12](ch3.xhtml#footnote-141)
This connection to the creative industries — ‘which are now worth £71.4
billion per year to the UK economy’,[13](ch3.xhtml#footnote-140) Atkinson
points out — is not surprising where the discourse around creative industries
maintains a clear bond between intellectual property rights and creative
labour. As Geert Lovink and Ned Rossiter state in their MyCreativity Reader,
the creative industries consist of ‘the generation and exploitation of
intellectual property’.[14](ch3.xhtml#footnote-139) Here they refer to a
definition created as part of the UK Government’s Creative Industries Mapping
Document,[15](ch3.xhtml#footnote-138) which states that the creative
industries are ‘those industries which have their origin in individual
creativity, skill and talent and which have a potential for wealth and job
creation through the generation and exploitation of intellectual property’.
Lovink and Rossiter point out that the relationship between IP and creative
labour lies at the basis of the definition of the creative industries where,
as they argue, this model of creativity assumes people only create to produce
economic value. This is part of a larger trend Wendy Brown has described as
being quintessentially neoliberal, where ‘neoliberal rationality disseminates
the model of the market to all domains and activities’ — and this includes the
realm of politics and rights.[16](ch3.xhtml#footnote-137) In this sense the
economization of culture and the concept of creativity is something that has
become increasingly embedded and naturalised. The exploitation of intellectual
property stands at the basis of the creative industries model, in which
cultural value — which can be seen as intricate, complex and manifold —
becomes subordinated to the model of the market; it becomes economic
value.[17](ch3.xhtml#footnote-136)
This direct association of cultural value and creativity with economic value
is apparent in various other facets of the ALCS commissioned research and
report. Obviously, the title of the initial summary booklet, as a form of
wordplay, asks ‘What are words worth?’. It becomes clear from the context of
the survey that the ‘worth’ of words will only be measured in a monetary
sense, i.e. as economic value. Perhaps even more important to understand in
this context, however, is how this economic worth of words is measured and
determined by focusing on two fixed and predetermined entities in advance.
First of all, the study focuses on individual human agents of creativity (i.e.
creators contributing economic value): the value of writing is established by
collecting data and making measurements at the level of individual authorship,
addressing authors/writers as singular individuals throughout the survey.
Secondly, economic worth is further determined by focusing on the fixed and
stable creative objects authors produce, in other words the study establishes
from the outset a clear link between the worth and value of writing and
economic remuneration based on individual works of
writing.[18](ch3.xhtml#footnote-135) Therefore in this process of determining
the economic worth of words, ‘writers’ and/or ‘authors’ are described and
positioned in a certain way in this study (i.e. as the central agents and
originators of creative objects), as is the form their creativity takes in the
shape of quantifiable outputs or commodities. The value of both these units of
measurement (the creator and the creative objects) is then set off against
the growth of the creative industries in the press release.
The ALCS commissioned survey provides some important insights into how
authorship, cultural works and remuneration — and ultimately, creativity — are
currently valued, specifically in the context of the creative industries
discourse in the UK. What I have tried to point out — without wanting to
downplay the importance either of writers receiving fair remuneration for
their work or of issues related to the sustainability of creative processes —
is that the findings from this survey have both been extracted and
subsequently framed based on a very specific economic model of creativity (and
authorship). According to this model, writing and creativity are sustained
most clearly by an individual original creator (an author) who extracts value
from the work s/he creates and distributes, aided by an intellectual property
rights regime. As I will outline in more depth in what follows, the enduring
liberal and humanist presumptions that underlie this survey continuously
reinforce the links between the value of writing and established IP and
remuneration regimes, and support a vision in which authorship and creativity
are dependent on economic incentives and ownership of works. By working within
this framework and with these predetermined concepts of authorship and
creativity (and ‘the digital’) the ALCS is strongly committed to the upkeep of
a specific model and discourse of creativity connected to the creative
industries. The ALCS does not attempt to complicate this model, nor does it
search for alternatives even when, as the 2007 report already implies, the
existing IP model has empirically failed to support the remuneration of
writers appropriately.
I want to use this ALCS survey as a reference point to start problematising
existing constructions of creativity, authorship, ownership, and
sustainability in relation to the ethics of publishing. To explore what ‘words
are worth’ and to challenge the hegemonic liberal humanist model of creativity
— to which the ALCS adheres — I will examine a selection of theoretical and
practical publishing and writing alternatives, from relational and posthuman
authorship to radical open access and uncreative writing. These alternatives
do not deny the importance of fair remuneration and sustainability for the
creative process; however, they want to foreground and explore creative
relationalities that move beyond the individual author and her ownership of
creative objects as the only model to support creativity and cultural
exchange. By looking at alternatives while at the same time complicating the
values and assumptions underlying the dominant narrative for IP expansion, I
want to start imagining what more ethical, fair and emergent forms of
creativity might entail. Forms that take into consideration the various
distributed and entangled agencies involved in the creation of cultural
content — which are presently not being included in the ALCS survey on fair
remuneration, for example. As I will argue, a reconsideration of the liberal
and humanist model of creativity might actually create new possibilities to
consider the value of words, and with that perhaps new solutions to the
problems pointed out in the ALCS study.
## Relational and Distributed Authorship
One of the main critiques of the liberal humanist model of authorship concerns
how it privileges the author as the sole source and origin of creativity. Yet
the argument has been made, both from a historical perspective and in relation
to today’s networked digital environment, that authorship and creativity, and
with that the value and worth of that creativity, are heavily
distributed.[19](ch3.xhtml#footnote-134) Should we therefore think about how
we can distribute notions of authorship and creativity more ethically when
defining the worth and value of words too? Would this perhaps mean a more
thorough investigation of what and who the specific agencies involved in
creative production are? This seems all the more important given that, today,
‘the value of words’ is arguably connected not to (distributed) authors or
creative agencies, but to rights holders (or their intermediaries such as
agents).[20](ch3.xhtml#footnote-133) From this perspective, the problem with
the copyright model as it currently functions is that the creators of
copyright don’t necessarily end up benefiting from it — a point that was also
implied by the authors of the 2007 ALCS commissioned report. Copyright
benefits rights holders, and rights holders are not necessarily, and often not
at all, involved in the production of creative work.
Yet copyright and the work as object are knit tightly to the authorship
construct. In this respect, the above criticism notwithstanding, in a liberal
vision of creativity and ownership the typical unit remains either the author
or the work. This ‘solid and fundamental unit of the author and the work’ as
Foucault has qualified it, albeit challenged, still retains a privileged
position.[21](ch3.xhtml#footnote-132) As Mark Rose argues, authorship — as a
relatively recent cultural formation — can be directly connected to the
commodification of writing and to proprietorship. Even more, it developed in
tandem with the societal principle of possessive individualism, in which
individual property rights are protected by the social
order.[22](ch3.xhtml#footnote-131)
Some of the more interesting recent critiques of these constructs of
authorship and proprietorship have come from critical and feminist legal
studies, where scholars such as Carys Craig have started to question these
connections further. As Craig, Turcotte and Coombe argue, IP and copyright are
premised on liberal and neoliberal assumptions and constructs, such as
ownership, private rights, self-interest and
individualism.[23](ch3.xhtml#footnote-130) In this sense copyright,
authorship, the work as object, and related discourses around creativity
continuously re-establish and strengthen each other as part of a self-
sustaining system. We have seen this with the discourse around creative
industries, as part of which economic value comes to stand in for the creative
process itself, which, according to this narrative, can only be sustained
through an IP regime. Furthermore, from a feminist new materialist position,
the current discourse on creativity is very much a material expression of
creativity rather than merely its representation, where this discourse has
been classifying, constructing, and situating creativity (and with that,
authorship) within a neoliberal framework of creative industries.
Moving away from an individual construct of creativity therefore immediately
affects the question of the value of words. In our current copyright model
emphasis lies on the individual original author, but in a more distributed
vision the value of words and of creative production can be connected to a
broader context of creative agencies. Historically there has been a great
discursive shift from a valuing of imitation or derivation to a valuing of
originality in determining what counts as creativity or creative output.
Similar to Rose, Craig, Turcotte and Coombe argue that the individuality and
originality of authorship in its modern form established a simple route
towards individual ownership and the propertisation of creative achievement:
the original work is the author’s property, whereas the imitator or pirate is
a trespasser or thief. In this sense original authorship is
‘disproportionately valued against other forms of cultural expression and
creative play’, where copyright upholds, maintains and strengthens the binary
between imitator and creator — defined by Craig, Turcotte and Coombe as a
‘moral divide’.[24](ch3.xhtml#footnote-129) This also presupposes a notion of
creativity that sees individuals as autonomous, living in isolation from each
other, ignoring their relationality. Yet as Craig, Turcotte and Coombe argue,
‘the act of writing involves not origination, but rather the adaptation,
derivation, translation and recombination of “raw material” taken from
previously existing texts’.[25](ch3.xhtml#footnote-128) This position has also
been explored extensively from within remix studies and fan culture, where the
adaptation and remixing of cultural content stands at the basis of creativity
(what Lawrence Lessig has called Read/Write culture, opposed to Read/Only
culture).[26](ch3.xhtml#footnote-127) From the perspective of access to
culture — instead of ownership of cultural goods or objects — one could also
argue that culture’s value would increase when we are able to distribute it
freely and, with that, to adapt and remix it to create new cultural content,
and hence new cultural and social value. This within a context in which, as
Craig, Turcotte and Coombe point out, ‘the continuous expansion of intellectual
property rights has produced legal regimes that restrict access and downstream
use of information resources far beyond what is required to encourage their
creation’.[27](ch3.xhtml#footnote-126)
To move beyond Enlightenment ideals of individuation, detachment and unity of
author and work, which determine the author-owner in the copyright model,
Craig puts forward a post-structuralist vision of relational authorship. This
sees the individual as socially situated and constituted — based also on
feminist scholarship into the socially situated self — where authorship in
this vision is situated within the communities in which it exists, but also in
relation to the texts and discourses that constitute it. Here creativity takes
place from within a network of social relations and the social dimensions of
authorship are recognised, as connectivity goes hand in hand with individual
autonomy. Craig argues that copyright should not be defined out of clashing
rights and interests but should instead focus on the kinds of relationships
this right would structure; it should be understood in relational terms: ‘it
structures relationships between authors and users, allocating powers and
responsibilities amongst members of cultural communities, and establishing the
rules of communication and exchange’.[28](ch3.xhtml#footnote-125) Cultural
value is then defined within these relationships.
## Open Access and the Ethics of Care
Craig, Turcotte and Coombe draw a clear connection between relational
authorship, feminism and (the ideals of) the open access movement, where as
they state, ‘rather than adhering to the individuated form of authorship that
intellectual property laws presuppose, open access initiatives take into
account varying forms of collaboration, creativity and
development’.[29](ch3.xhtml#footnote-124) Yet as I and others have argued
elsewhere,[30](ch3.xhtml#footnote-123) open access or open access publishing
is not a solid ideological block or model; it is made up of disparate groups,
visions and ethics. In this sense there is nothing intrinsically political or
democratic about open access; its practitioners can just as well be
seen to support and encourage open access in connection with the neoliberal
knowledge economy, with possessive individualism — even with CC licenses,
which can be seen as strengthening individualism —[31](ch3.xhtml#footnote-122)
and with the unity of author and work.[32](ch3.xhtml#footnote-121)
Nevertheless, there are those within the loosely defined and connected
‘radical open access community’ who do envision their publishing outlook and
relationship towards copyright, openness and authorship within and as part of
a relational ethics of care.[33](ch3.xhtml#footnote-120) For example Mattering
Press, a scholar-led open access book publishing initiative founded in 2012
and launched in 2016, publishes in the field of Science and Technology Studies
(STS) and works with a production model based on cooperation and shared
scholarship. As part of its publishing politics, ethos and ideology, Mattering
Press is therefore keen to include various agencies involved in the production
of scholarship, including ‘authors, reviewers, editors, copy editors, proof
readers, typesetters, distributers, designers, web developers and
readers’.[34](ch3.xhtml#footnote-119) They work with two interrelated feminist
(new materialist) and STS concepts to structure and perform this ethos:
mattering[35](ch3.xhtml#footnote-118) and care.[36](ch3.xhtml#footnote-117)
Where it concerns mattering, Mattering Press is conscious of how their
experiment in knowledge production, being inherently situated, puts new
relationships and configurations into the world. What therefore matters for
them are not so much the ‘author’ or the ‘outcome’ (the object), but the
process and the relationships that make up publishing:
[…] the way academic texts are produced matters — both analytically and
politically. Dominant publishing practices work with assumptions about the
conditions of academic knowledge production that rarely reflect what goes on
in laboratories, field sites, university offices, libraries, and various
workshops and conferences. They tend to deal with almost complete manuscripts
and a small number of authors, who are greatly dependent on the politics of
the publishing industry.[37](ch3.xhtml#footnote-116)
For Mattering Press care is something that extends not only to authors but to
the many other actants involved in knowledge production, who often provide
free volunteer labour within a gift economy context. As Mattering Press
emphasises, the ethics of care ‘mark vital relations and practices whose value
cannot be calculated and thus often goes unacknowledged where logics of
calculation are dominant’.[38](ch3.xhtml#footnote-115) For Mattering Press,
care can help offset and engage with the calculative logic that permeates
academic publishing:
[…] the concept of care can help to engage with calculative logics, such as
those of costs, without granting them dominance. How do we calculate so that
calculations do not dominate our considerations? What would it be to care for
rather than to calculate the cost of a book? This is but one and arguably a
relatively conservative strategy for allowing other logics than those of
calculation to take centre stage in publishing.[39](ch3.xhtml#footnote-114)
This logic of care refers, in part, to making visible the ‘unseen others’ as
Joe Deville (one of Mattering Press’s editors) calls them, who exemplify the
plethora of hidden labour that goes unnoticed within this object and author-
focused (academic) publishing model. As Endre Danyi, another Mattering Press
editor, remarks, quoting Susan Leigh Star: ‘This is, in the end, a profoundly
political process, since so many forms of social control rely on the erasure
or silencing of various workers, on deleting their work from representations
of the work’.[40](ch3.xhtml#footnote-113)
## Posthuman Authorship
Authorship is also being reconsidered as a polyvocal and collaborative
endeavour by reflecting on the agentic role of technology in authoring
content. Within digital literature, hypertext and computer-generated poetry,
media studies scholars have explored the role played by technology and the
materiality of text in the creation process, where in many ways writing can be
seen as a shared act between reader, writer and computer. Lori Emerson
emphasises that machines, media or technology are not neutral in this respect,
which complicates the idea of human subjectivity. Emerson explores this
through the notion of ‘cyborg authorship’, which examines the relation between
machine and human with a focus on the potentiality of in-
betweenness.[41](ch3.xhtml#footnote-112) Dani Spinosa talks about
‘collaboration with an external force (the computer, MacProse, technology in
general)’.[42](ch3.xhtml#footnote-111) Extending from the author, the text
itself, and the reader as meaning-writer (and hence playing a part in the
author function), technology, she states, is a fourth term in this
collaborative meaning-making. As Spinosa argues, in computer-generated texts
the computer is more than a technological tool and becomes a co-producer,
where it can occur that ‘the poet herself merges with the machine in order to
place her own subjectivity in flux’.[43](ch3.xhtml#footnote-110) Emerson calls
this a ‘break from the model of the poet/writer as divinely inspired human
exemplar’, which is exemplified for her in hypertext, computer-generated
poetry, and digital poetry.[44](ch3.xhtml#footnote-109)
Yet in many ways, as Emerson and Spinosa also note, these forms of posthuman
authorship should be seen as part of a larger trend, what Rolf Hughes calls an
‘anti-authorship’ tradition focused on auto-poesis (self-making), generative
systems and automatic writing. As Hughes argues, we see this tradition in
print forms such as Oulipo and in Dada experiments and surrealist games
too.[45](ch3.xhtml#footnote-108) There are also connections here with broader
theories of distributed agency, especially where it concerns the influence of
the materiality of the text. Media theorists such as N.
Katherine Hayles and Johanna Drucker have extensively argued that the
materiality of the page is entangled with the intentionality of the author as
a further agency; Drucker conceptualises this through a focus on ‘conditional
texts’ and ‘performative materiality’ with respect to the agency of the
material medium (be it the printed page or the digital
screen).[46](ch3.xhtml#footnote-107)
Where, however, does the redistribution of value creation end in these
narratives? As Nick Montfort states with respect to the agency of technology,
‘should other important and inspirational mechanisms — my CD player, for
instance, and my bookshelves — get cut in on the action as
well?’[47](ch3.xhtml#footnote-106) These distributed forms of authorship do
not solve issues related to authorship or remuneration but further complicate
them. Nevertheless Montfort is interested in describing the processes involved
in these types of (posthuman) co-authorship, to explore the (previously
unexplored) relationships and processes involved in the authoring of texts
more clearly. As he states, this ‘can help us understand the role of the
different participants more fully’.[48](ch3.xhtml#footnote-105) In this
respect a focus on posthuman authorship and on the various distributed
agencies that play a part in creative processes is not only a means to disrupt
the hegemonic focus on a romantic model of single and original authorship; it
also fosters a sensibility to (machinic) co-authorship and to the different
agencies involved in the creation of art and in creativity itself. As Emerson
remarks: ‘we must be wary of granting a
(romantic) specialness to human intentionality — after all, the point of
dividing the responsibility for the creation of the poems between human and
machine is to disrupt the singularity of human identity, to force human
identity to intermingle with machine identity’.[49](ch3.xhtml#footnote-104)
## Emergent Creativity
This more relational notion of rights, together with a wider appreciation,
grounded in an ethics of care, of the various (posthuman) agencies involved in
creative processes, challenges the vision of the single, individualised and
original author/owner who stands at the basis of our copyright and IP regime —
a vision that, it is worth emphasising, can be seen as a historical (and
Western) anomaly, given that collaborative, anonymous and more polyvocal
models of authorship have long prevailed.[50](ch3.xhtml#footnote-103) The other
side of the Foucauldian double bind, i.e. the fixed cultural object that
functions as a commodity, has however been similarly critiqued from several
angles. As stated before, and as also apparent from the way the ALCS report
has been framed, currently our copyright and remuneration regime is based on
ownership of cultural objects. Yet as many have already made clear, this
regime and discourse is very much based on physical objects and on a print-
based context.[51](ch3.xhtml#footnote-102) As such the idea of ‘text’ (be it
print or digital) has not been sufficiently problematised as versioned,
processual and materially changing within an IP context. In other words, text
and works are mostly perceived as fixed and stable objects and commodities
instead of material and creative processes and entangled relationalities. As
Craig et al. state, ‘the copyright system is unfortunately employed to
reinforce the norms of the analog world’.[52](ch3.xhtml#footnote-101) In
contrast to a more relational perspective, the current copyright regime views
culture through a proprietary lens. And it is very much this discursive
positioning, or as Craig et al. argue ‘the language of “ownership,”
“property,” and “commodity”’, which ‘obfuscates the nature of copyright’s
subject matter, and cloaks the social and cultural conditions of its
production and the implications of its
protection’.[53](ch3.xhtml#footnote-100) How can we approach creativity in
context, as socially and culturally situated, and not as the free-standing,
stable product of a transcendent author, which is very much how it is being
positioned within an economic and copyright framework? This hegemonic
conception of creativity as property fails to acknowledge or take into
consideration the manifold, distributed, derivative and messy realities of
culture and creativity.
It is therefore important to put forward and promote another more emergent
vision of creativity, where creativity is seen as both processual and only
ever temporarily fixed, and where the work itself is seen as being the product
of a variety of (posthuman) agencies. Interestingly, someone who has written
very elaborately about a different form of creativity relevant to this context
is one of the authors of the ALCS-commissioned report, Johanna Gibson. Similar
to Craig, who focuses on the relationality of copyright, Gibson wants to pay
more attention to the networking of creativity, moving it beyond a focus on
traditional models of producers and consumers in exchange for a ‘many-to-many’
model of creativity. For Gibson, IP as a system aligns with a corporate model
of creativity, one which oversimplifies what it means to be creative and
measures it against economic parameters alone.[54](ch3.xhtml#footnote-099) In
many ways, in policy-driven visions, IP has come to stand in for the creative
process itself, Gibson argues, and is assimilated within corporate models of
innovation. It has thus become a synonym for creativity, as we have seen in
the creative industries discourse. As Gibson explains, this simplified model
of creativity is very much a ‘discursive strategy’ in which the creator is
mythologised and output comes in the form of commodified
objects.[55](ch3.xhtml#footnote-098) In this sense we need to re-appropriate
creativity as an inherently fluid and uncertain concept and practice.
Yet this mimicry of creativity by IP and innovation at the same time means
that any re-appropriation of creativity from the stance of access and reuse is
targeted as anti-IP and thus as standing outside of formal creativity. Other,
more emergent forms of creativity have trouble existing within this
self-defining and self-sustaining hegemonic system. This is similar to what
Craig remarked with respect to remixed, counterfeit, pirated and un-original
works, which are seen as standing outside the system. Gibson uses actor
network theory (ANT) as a framework to construct her network-based model of
creativity, where for her ANT allows for a vision that does not fix creativity
within a product, but focuses more on the material relationships and
interactions between users and producers. In this sense, she argues, a network
model allows for plural agencies to be attributed to creativity, including
those of users.[56](ch3.xhtml#footnote-097)
An interesting example of how the hegemonic object-based discourse of
creativity can be re-appropriated comes from the conceptual poet Kenneth
Goldsmith, who, in what could be seen as a direct response to this dominant
narrative, tries to emphasise that exactly what this discourse classifies as
‘uncreative’ should be seen as valuable in itself. Goldsmith points out that
appropriating is creative and that he uses it as a pedagogical method in his
classes on ‘Uncreative Writing’ (which he defines as ‘the art of managing
information and representing it as writing’[57](ch3.xhtml#footnote-096)). Here
‘uncreative writing’ is something to strive for and stealing, copying, and
patchwriting are elevated as important and valuable tools for writing. For
Goldsmith the digital environment has fostered new skills and notions of
writing beyond the print-based concepts of originality and authorship: next to
copying, editing, reusing and remixing texts, the management and manipulation
of information becomes an essential aspect of
creativity.[58](ch3.xhtml#footnote-095) Uncreative writing involves a
repurposing and appropriation of existing texts and works, which then become
materials or building blocks for further works. In this sense Goldsmith
critiques the idea of texts or works as being fixed when asking, ‘if artefacts
are always in flux, when is a historical work determined to be
“finished”?’[59](ch3.xhtml#footnote-094) At the same time, he argues, our
identities are also in flux and ever shifting, turning creative writing into a
post-identity literature.[60](ch3.xhtml#footnote-093) Machines play important
roles in uncreative writing, as active agents in the ‘managing of
information’, which is then again represented as writing, and is seen by
Goldsmith as a bridge between human-centred writing and full-blown
‘robopoetics’ (literature written by machines, for machines). Yet Goldsmith is
keen to emphasise that these forms of uncreative writing are not beholden to
the digital medium, and that pre-digital examples are plentiful in conceptual
literature and poetry. He points out — again by a discursive re-appropriation
of what creativity is or can be — that sampling, remixing and appropriation
have been the norm in other artistic and creative media for decades. The
literary world lags behind in this respect: despite the experiments by
modernist writers, it continues neatly to delineate the avant-garde from more
general forms of writing. Yet, as Goldsmith argues, the digital has started to
disrupt this distinction again, moving beyond ‘analogue’ notions of writing
and fuelling the idea that there might be alternative notions of writing:
those currently perceived as uncreative.[61](ch3.xhtml#footnote-092)
## Conclusion
There are two addenda to the argument I have outlined above that I would like
to include here. First of all, I want to complicate and further
critique some of the preconceptions still inherent in the relational and
networked copyright models as put forward by Craig et al. and Gibson. Both are
in many ways reformist and ‘responsive’ models. Gibson, for example, does not
want to do away with IP rights; rather, she wants them to develop and adapt to mirror
society more accurately according to a networked model of creativity. For her,
the law is out of tune with its public, and she wants to promote a more
inclusive networked (copy) rights model.[62](ch3.xhtml#footnote-091) For Craig
too, relationalities are established and structured by rights first and
foremost. Yet from a posthuman perspective we need to be conscious of how the
other actants involved in creativity would fall outside such a humanist and
subjective rights model.[63](ch3.xhtml#footnote-090) From texts and
technologies themselves to the wider environmental context and to other
nonhuman entities and objects: in what sense will a copyright model be able to
extend such a network beyond an individualised liberal humanist human subject?
What do these models exclude in this respect and in what sense are they still
limited by their adherence to a rights model that continues to rely on
humanist nodes in a networked or relational model? As Anna Munster has argued
in a talk about the case of the monkey selfie, copyright is based on a logic
of exclusion that does not line up with the assemblages of agentic processes
that make up creativity and creative expression.[64](ch3.xhtml#footnote-089)
How can we appreciate the relational and processual aspects of identity, which
both Craig and Gibson seem to want to promote, if we hold on to an inherently
humanist concept of subjectification, rights and creativity?
Secondly, I want to highlight that we need to remain cautious of a movement
away from copyright and the copyright industries to a context of free culture
in which free content — and the often free labour it is based upon — ends up
servicing the content industries (e.g. Facebook, Google, Amazon). We must be
wary when access or the narrative around (open) access becomes dominated by
access to or for big business, benefitting the creative industries and the
knowledge economy. The danger of updating and adapting IP law to fit a
changing digital context and to new technologies, of making it more inclusive
in this sense — which is something both Craig and Gibson want to do as part of
their reformative models — is that this tends to be based on a very simplified
and deterministic vision of technology, as something requiring access and an
open market to foster innovation. As Sarah Kember argues, this technocratic
rationale, which is what unites pro- and anti-copyright activists in this
sense, essentially de-politicises the debate around IP; it is still a question
of determining the value of creativity through an economic perspective, based
on a calculative lobby.[65](ch3.xhtml#footnote-088) The challenge here is to
redefine the discourse in such a way that our focus moves away from a dominant
market vision, and — as Gibson and Craig have also tried to do — to emphasise
a non-calculative ethics of relations, processes and care instead.
I would like to return at this point to the ALCS report and the way its
results have been framed within a creative industries discourse.
Notwithstanding the fact that fair remuneration and incentives for literary
production and creativity in general are of the utmost importance, what I have
tried to argue here is that the ‘solution’ proposed by the ALCS does not do
justice to the complexities of creativity. When discussing remuneration of
authors, the ALCS seems to prefer a simple solution in which copyright is seen
as a given, the digital is pointed out as a generalised scapegoat, and
binaries between print and digital are maintained and strengthened.
Furthermore, fair remuneration is encapsulated by the ALCS within an economic
calculative logic and rhetoric, sustained by and connected to a creative
industries discourse, which continuously recreates the idea that creativity
and innovation are one. Instead I have tried to put forward various
alternative visions and practices, from radical open access to posthuman
authorship and uncreative writing, based on vital relationships and on an
ethics of care and responsibility. These alternatives highlight distributed
and relational authorship and/or showcase a sensibility that embraces
posthuman agencies and processual publishing as part of a more complex,
emergent vision of creativity, open to different ideas of what creativity is
and can become. In this vision creativity is thus seen as relational, fluid
and processual and only ever temporarily fixed as part of our ethical decision
making: a decision-making process that is contingent on the contexts and
relationships with which we find ourselves entangled. This involves asking
questions about what writing is and does, and how creativity expands beyond
our established, static, or given concepts, which include copyright and a
focus on the author as a ‘homo economicus’, writing as inherently an
enterprise, and culture as commodified. As I have argued, the value of words,
indeed the economic worth and sustainability of words and of the ‘creative
industries’, can and should be defined within a different narrative. Opening
up from the hegemonic creative industries discourse and the way we perform it
through our writing practices might therefore enable us to explore extended
relationalities of emergent creativity, open-ended publishing processes, and a
feminist ethics of care and responsibility.
This contribution has showcased examples of experimental, hybrid and posthuman
writing and publishing practices that are intervening in this established
discourse on creativity. How, through them, can we start to performatively
explore a new discourse and reconfigure the relationships that underlie our
writing processes? How can the worth of writing be reflected in different
ways?
## Works Cited
(2014) ‘New Research into Authors’ Earnings Released’, Authors’ Licensing and
Collecting Society,
Us/News/News/What-are-words-worth-now-not-much.aspx>
Abrahamsson, Sebastian, Uli Beisel, Endre Danyi, Joe Deville, Julien McHardy,
and Michaela Spencer (2013) ‘Mattering Press: New Forms of Care for STS
Books’, The EASST Review 32.4,
volume-32-4-december-2013/mattering-press-new-forms-of-care-for-sts-books/>
Adema, Janneke (2017) ‘Cut-Up’, in Eduardo Navas (ed.), Keywords in Remix
Studies (New York and London: Routledge), pp. 104–14,
— (2014) ‘Embracing Messiness’, LSE Impact of Social Sciences,
adema-pdsc14/>
— (2015) ‘Knowledge Production Beyond The Book? Performing the Scholarly
Monograph in Contemporary Digital Culture’ (PhD dissertation, Coventry
University),
f4c62c77ac86/1/ademacomb.pdf>
— (2014) ‘Open Access’, in Critical Keywords for the Digital Humanities
(Lueneburg: Centre for Digital Cultures (CDC)),
— and Gary Hall (2013) ‘The Political Nature of the Book: On Artists’ Books
and Radical Open Access’, New Formations 78.1, 138–56,
— and Samuel Moore (2018) ‘Collectivity and Collaboration: Imagining New Forms
of Communality to Create Resilience in Scholar-Led Publishing’, Insights 31.3,
ALCS, Press Release (8 July 2014) ‘What Are Words Worth Now? Not Enough’,
Barad, Karen (2007) Meeting the Universe Halfway: Quantum Physics and the
Entanglement of Matter and Meaning (Durham, N.C., and London: Duke University
Press).
Boon, Marcus (2010) In Praise of Copying (Cambridge, MA: Harvard University
Press).
Brown, Wendy (2015) Undoing the Demos: Neoliberalism’s Stealth Revolution
(Cambridge, MA: MIT Press).
Chartier, Roger (1994) The Order of Books: Readers, Authors, and Libraries in
Europe Between the 14th and 18th Centuries, 1st ed. (Stanford, CA: Stanford
University Press).
Craig, Carys J. (2011) Copyright, Communication and Culture: Towards a
Relational Theory of Copyright Law (Cheltenham, UK, and Northampton, MA:
Edward Elgar Publishing).
— Joseph F. Turcotte, and Rosemary J. Coombe (2011) ‘What’s Feminist About
Open Access? A Relational Approach to Copyright in the Academy’, Feminists@law
1.1,
Cramer, Florian (2013) Anti-Media: Ephemera on Speculative Arts (Rotterdam and
New York, NY: nai010 publishers).
Drucker, Johanna (2015) ‘Humanist Computing at the End of the Individual Voice
and the Authoritative Text’, in Patrik Svensson and David Theo Goldberg
(eds.), Between Humanities and the Digital (Cambridge, MA: MIT Press), pp.
83–94.
— (2014) ‘Distributed and Conditional Documents: Conceptualizing
Bibliographical Alterities’, MATLIT: Revista do Programa de Doutoramento em
Materialidades da Literatura 2.1, 11–29.
— (2013) ‘Performative Materiality and Theoretical Approaches to Interface’,
Digital Humanities Quarterly 7.1 [n.p.],
Ede, Lisa, and Andrea A. Lunsford (2001) ‘Collaboration and Concepts of
Authorship’, PMLA 116.2, 354–69.
Emerson, Lori (2008) ‘Materiality, Intentionality, and the Computer-Generated
Poem: Reading Walter Benn Michaels with Erin Mouré’s Pillage Laud’, ESC:
English Studies in Canada 34, 45–69.
— (2003) ‘Digital Poetry as Reflexive Embodiment’, in Markku Eskelinen, Raine
Koskimaa, Loss Pequeño Glazier and John Cayley (eds.), CyberText Yearbook
2002–2003, 88–106,
Foucault, Michel, ‘What Is an Author?’ (1998) in James D. Faubion (ed.),
Essential Works of Foucault, 1954–1984, Volume Two: Aesthetics, Method, and
Epistemology (New York: The New Press).
Gibson, Johanna (2007) Creating Selves: Intellectual Property and the
Narration of Culture (Aldershot, England and Burlington, VT: Routledge).
— Phillip Johnson and Gaetano Dimita (2015) The Business of Being an Author: A
Survey of Author’s Earnings and Contracts (London: Queen Mary University of
London), [https://orca.cf.ac.uk/72431/1/Final Report - For Web
Publication.pdf](https://orca.cf.ac.uk/72431/1/Final%20Report%20-%20For%20Web%20Publication.pdf)
Goldsmith, Kenneth (2011) Uncreative Writing: Managing Language in the Digital
Age (New York: Columbia University Press).
Hall, Gary (2010) ‘Radical Open Access in the Humanities’ (presented at the
Research Without Borders, Columbia University),
humanities/>
— (2008) Digitize This Book!: The Politics of New Media, or Why We Need Open
Access Now (Minneapolis, MN: University of Minnesota Press).
Hayles, N. Katherine (2004) ‘Print Is Flat, Code Is Deep: The Importance of
Media-Specific Analysis’, Poetics Today 25.1, 67–90,
Hughes, Rolf (2005) ‘Orderly Disorder: Post-Human Creativity’, in Proceedings
of the Linköping Electronic Conference (Linköpings universitet: University
Electronic Press).
Jenkins, Henry, and Owen Gallagher (2008) ‘“What Is Remix Culture?”: An
Interview with Total Recut’s Owen Gallagher’, Confessions of an Aca-Fan,
Johns, Adrian (1998) The Nature of the Book: Print and Knowledge in the Making
(Chicago, IL: University of Chicago Press).
Kember, Sarah (2016) ‘Why Publish?’, Learned Publishing 29, 348–53,
— (2014) ‘Why Write?: Feminism, Publishing and the Politics of Communication’,
New Formations: A Journal of Culture/Theory/Politics 83.1, 99–116.
Kretschmer, M., and P. Hardwick (2007) Authors’ Earnings from Copyright and
Non-Copyright Sources: A Survey of 25,000 British and German Writers (Poole,
UK: CIPPM/ALCS Bournemouth University),
[https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ALCS-Full-
report.pdf](https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ACLS-
Full-report.pdf)
Lessig, Lawrence (2008) Remix: Making Art and Commerce Thrive in the Hybrid
Economy (New York: Penguin Press).
Lovink, Geert, and Ned Rossiter (eds.) (2007) MyCreativity Reader: A Critique
of Creative Industries (Amsterdam: Institute of Network Cultures),
McGann, Jerome J. (1992) A Critique of Modern Textual Criticism
(Charlottesville, VA: University of Virginia Press).
McHardy, Julien (2014) ‘Why Books Matter: There Is Value in What Cannot Be
Evaluated.’, Impact of Social Sciences [n.p.],
Mol, Annemarie (2008) The Logic of Care: Health and the Problem of Patient
Choice, 1st ed. (London and New York: Routledge).
Montfort, Nick (2003) ‘The Coding and Execution of the Author’, in Markku
Eskelinen, Raine Kosimaa, Loss Pequeño Glazier and John Cayley (eds.),
CyberText Yearbook 2002–2003, 2003, 201–17.
Moore, Samuel A. (2017) ‘A Genealogy of Open Access: Negotiations between
Openness and Access to Research’, Revue Française des Sciences de
l’information et de la Communication 11,
Munster, Anna (2016) ‘Techno-Animalities — the Case of the Monkey Selfie’
(presented at the Goldsmiths University, London),
Navas, Eduardo (2012) Remix Theory: The Aesthetics of Sampling (Vienna and New
York: Springer).
Parikka, Jussi, and Mercedes Bunz (11 July 2014) ‘A Mini-Interview: Mercedes
Bunz Explains Meson Press’, Machinology,
meson-press/>
Richards, Victoria (7 January 2016) ‘Monkey Selfie: Judge Rules Macaque Who
Took Grinning Photograph of Himself “Cannot Own Copyright”’, The Independent,
macaque-who-took-grinning-photograph-of-himself-cannot-own-
copyright-a6800471.html>
Robbins, Sarah (2003) ‘Distributed Authorship: A Feminist Case-Study Framework
for Studying Intellectual Property’, College English 66.2, 155–71,
Rose, Mark (1993) Authors and Owners: The Invention of Copyright (Cambridge,
MA: Harvard University Press).
Spinosa, Dani (14 May 2014) ‘“My Line (Article) Has Sighed”: Authorial
Subjectivity and Technology’, Generic Pronoun,
Star, Susan Leigh (1991) ‘The Sociology of the Invisible: The Primacy of Work
in the Writings of Anselm Strauss’, in Anselm Leonard Strauss and David R.
Maines (eds.), Social Organization and Social Process: Essays in Honor of
Anselm Strauss (New York: A. de Gruyter).
* * *
[1](ch3.xhtml#footnote-152-backlink) The Authors’ Licensing and Collecting
Society is a [British](https://en.wikipedia.org/wiki/United_Kingdom)
membership organisation for writers, established in 1977 with over 87,000
members, focused on protecting and promoting authors’ rights. ALCS collects
and pays out money due to members for secondary uses of their work (copying,
broadcasting, recording etc.).
[2](ch3.xhtml#footnote-151-backlink) This survey was an update of an earlier
survey conducted in 2006 by the Centre of Intellectual Property Policy and
Management (CIPPM) at Bournemouth University.
[3](ch3.xhtml#footnote-150-backlink) ‘New Research into Authors’ Earnings
Released’, Authors’ Licensing and Collecting Society, 2014,
Us/News/News/What-are-words-worth-now-not-much.aspx>
[4](ch3.xhtml#footnote-149-backlink) Johanna Gibson, Phillip Johnson, and
Gaetano Dimita, The Business of Being an Author: A Survey of Author’s Earnings
and Contracts (London: Queen Mary University of London, 2015), p. 9,
[https://orca.cf.ac.uk/72431/1/Final Report - For Web Publication.pdf
](https://orca.cf.ac.uk/72431/1/Final%20Report%20-%20For%20Web%20Publication.pdf)
[5](ch3.xhtml#footnote-148-backlink) ALCS, Press Release. What Are Words Worth
Now? Not Enough, 8 July 2014,
worth-now-not-enough>
[6](ch3.xhtml#footnote-147-backlink) Gibson, Johnson, and Dimita, The Business
of Being an Author, p. 35.
[7](ch3.xhtml#footnote-146-backlink) M. Kretschmer and P. Hardwick, Authors’
Earnings from Copyright and Non-Copyright Sources: A Survey of 25,000 British
and German Writers (Poole: CIPPM/ALCS Bournemouth University, 2007), p. 3,
[https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ALCS-Full-
report.pdf](https://microsites.bournemouth.ac.uk/cippm/files/2007/07/ACLS-
Full-report.pdf)
[8](ch3.xhtml#footnote-145-backlink) ALCS, Press Release, 8 July 2014, https://www.alcs.co.uk/news/what-are-words-worth-now-not-enough
[9](ch3.xhtml#footnote-144-backlink) Gibson, Johnson, and Dimita, The Business
of Being an Author, p. 35.
[10](ch3.xhtml#footnote-143-backlink) Ibid.
[11](ch3.xhtml#footnote-142-backlink) In the survey, three questions that
focus on various sources of remuneration do list digital publishing and/or
online uses as an option (questions 8, 11, and 15). Yet the data tables
provided in the appendix to the report do not provide the findings for
questions 11 and 15 nor do they differentiate according to type of media for
other tables related to remuneration. The only data table we find in the
report related to digital publishing is table 3.3, which lists ‘Earnings
ranked (1 to 7) in relation to categories of work’, where digital publishing
ranks third after books and magazines/periodicals, but before newspapers,
audio/audio-visual productions and theatre. This lack of focus on the effect
of digital publishing on writers’ incomes, for a survey that is ‘the first to
capture the impact of the digital revolution on writers’ working lives’, is
quite remarkable. Gibson, Johnson, and Dimita, The Business of Being an
Author, Appendix 2.
[12](ch3.xhtml#footnote-141-backlink) Ibid., p. 35.
[13](ch3.xhtml#footnote-140-backlink) Ibid.
[14](ch3.xhtml#footnote-139-backlink) Geert Lovink and Ned Rossiter (eds.),
MyCreativity Reader: A Critique of Creative Industries (Amsterdam: Institute
of Network Cultures, 2007), p. 14,
[16](ch3.xhtml#footnote-137-backlink) Wendy Brown, Undoing the Demos:
Neoliberalism’s Stealth Revolution (Cambridge, MA: MIT Press, 2015), p. 31.
[17](ch3.xhtml#footnote-136-backlink) Therefore Lovink and Rossiter make a
plea to, ‘redefine creative industries outside of IP generation’. Lovink and
Rossiter, MyCreativity Reader, p. 14.
[18](ch3.xhtml#footnote-135-backlink) Next to earnings made from writing more
in general, the survey on various occasions asks questions about earnings
arising from specific categories of works and related to the amount of works
exploited (published/broadcast) during certain periods. Gibson, Johnson, and
Dimita, The Business of Being an Author, Appendix 2.
[19](ch3.xhtml#footnote-134-backlink) Roger Chartier, The Order of Books:
Readers, Authors, and Libraries in Europe Between the 14th and 18th Centuries,
1st ed. (Stanford: Stanford University Press, 1994); Lisa Ede and Andrea A.
Lunsford, ‘Collaboration and Concepts of Authorship’, PMLA 116.2 (2001),
354–69; Adrian Johns, The Nature of the Book: Print and Knowledge in the
Making (Chicago, IL: University of Chicago Press, 1998); Jerome J. McGann, A
Critique of Modern Textual Criticism (Charlottesville, VA, University of
Virginia Press, 1992); Sarah Robbins, ‘Distributed Authorship: A Feminist
Case-Study Framework for Studying Intellectual Property’, College English 66.2
(2003), 155–71,
[20](ch3.xhtml#footnote-133-backlink) The ALCS survey addresses this problem,
of course, and tries to lobby on behalf of its authors for fair contracts with
publishers and intermediaries. That said, the survey findings show that only
42% of writers always retain their copyright. Gibson, Johnson, and Dimita, The
Business of Being an Author, p. 12.
[21](ch3.xhtml#footnote-132-backlink) Michel Foucault, ‘What Is an Author?’,
in James D. Faubion (ed.), Essential Works of Foucault, 1954–1984, Volume Two:
Aesthetics, Method, and Epistemology (New York: The New Press, 1998), p. 205.
[22](ch3.xhtml#footnote-131-backlink) Mark Rose, Authors and Owners: The
Invention of Copyright (Cambridge, MA: Harvard University Press, 1993).
[23](ch3.xhtml#footnote-130-backlink) Carys J. Craig, Joseph F. Turcotte, and
Rosemary J. Coombe, ‘What’s Feminist About Open Access? A Relational Approach
to Copyright in the Academy’, Feminists@law 1.1 (2011),
[24](ch3.xhtml#footnote-129-backlink) Ibid., p. 8.
[25](ch3.xhtml#footnote-128-backlink) Ibid., p. 9.
[26](ch3.xhtml#footnote-127-backlink) Lawrence Lessig, Remix: Making Art and
Commerce Thrive in the Hybrid Economy (New York: Penguin Press, 2008); Eduardo
Navas, Remix Theory: The Aesthetics of Sampling (Vienna and New York:
Springer, 2012); Henry Jenkins and Owen Gallagher, ‘“What Is Remix Culture?”:
An Interview with Total Recut’s Owen Gallagher’, Confessions of an Aca-Fan,
2008,
[27](ch3.xhtml#footnote-126-backlink) Craig, Turcotte, and Coombe, ‘What’s
Feminist About Open Access?’, p. 27.
[28](ch3.xhtml#footnote-125-backlink) Ibid., p. 14.
[29](ch3.xhtml#footnote-124-backlink) Ibid., p. 26.
[30](ch3.xhtml#footnote-123-backlink) Janneke Adema, ‘Open Access’, in
Critical Keywords for the Digital Humanities (Lueneburg: Centre for Digital
Cultures (CDC), 2014); Janneke Adema,
‘Embracing Messiness’, LSE Impact of Social Sciences, 2014,
adema-pdsc14/>; Gary Hall, Digitize This Book!: The Politics of New Media, or
Why We Need Open Access Now (Minneapolis, MN: University of Minnesota Press,
2008), p. 197; Sarah Kember, ‘Why Write?: Feminism, Publishing and the
Politics of Communication’, New Formations: A Journal of
Culture/Theory/Politics 83.1 (2014), 99–116; Samuel A. Moore, ‘A Genealogy of
Open Access: Negotiations between Openness and Access to Research’, Revue
Française des Sciences de l’information et de la Communication, 2017,
[31](ch3.xhtml#footnote-122-backlink) Florian Cramer, Anti-Media: Ephemera on
Speculative Arts (Rotterdam and New York: nai010 publishers, 2013).
[32](ch3.xhtml#footnote-121-backlink) Especially within humanities publishing
there is a reluctance to allow derivative uses of one’s work in an open access
setting.
[33](ch3.xhtml#footnote-120-backlink) In 2015 the Radical Open Access
Conference took place at Coventry University, which brought together a large
array of presses and publishing initiatives (often academic-led) in support of
an ‘alternative’ vision of open access and scholarly communication.
Participants in this conference subsequently formed the loosely allied Radical
Open Access Collective: [radicaloa.co.uk](https://radicaloa.co.uk/). As the
conference concept outlines, radical open access entails ‘a vision of open
access that is characterised by a spirit of on-going creative experimentation,
and a willingness to subject some of our most established scholarly
communication and publishing practices, together with the institutions that
sustain them (the library, publishing house etc.), to rigorous critique.
Included in the latter will be the asking of important questions about our
notions of authorship, authority, originality, quality, credibility,
sustainability, intellectual property, fixity and the book — questions that
lie at the heart of what scholarship is and what the university can be in the
21st century’. Janneke Adema and Gary Hall, ‘The Political Nature of the Book:
On Artists’ Books and Radical Open Access’, New Formations 78.1 (2013),
138–56; Janneke Adema and Samuel
Moore, ‘Collectivity and Collaboration: Imagining New Forms of Communality to
Create Resilience in Scholar-Led Publishing’, Insights 31.3 (2018); Gary Hall, ‘Radical Open Access in the
Humanities’ (presented at the Research Without Borders, Columbia University,
2010),
humanities/>; Janneke Adema, ‘Knowledge Production Beyond The Book? Performing
the Scholarly Monograph in Contemporary Digital Culture’ (PhD dissertation,
Coventry University, 2015),
f4c62c77ac86/1/ademacomb.pdf>
[34](ch3.xhtml#footnote-119-backlink) Julien McHardy, ‘Why Books Matter: There
Is Value in What Cannot Be Evaluated’, Impact of Social Sciences, 2014, n.p.,
http://blogs.lse.ac.uk/impactofsocialsciences/2014/09/30/why-books-matter/
[35](ch3.xhtml#footnote-118-backlink) Karen Barad, Meeting the Universe
Halfway: Quantum Physics and the Entanglement of Matter and Meaning (Durham,
N.C. and London: Duke University Press, 2007).
[36](ch3.xhtml#footnote-117-backlink) Annemarie Mol, The Logic of Care: Health
and the Problem of Patient Choice, 1st ed. (London and New York: Routledge,
2008).
[37](ch3.xhtml#footnote-116-backlink) Sebastian Abrahamsson and others,
‘Mattering Press: New Forms of Care for STS Books’, The EASST Review 32.4
(2013),
press-new-forms-of-care-for-sts-books/>
[40](ch3.xhtml#footnote-113-backlink) Susan Leigh Star, ‘The Sociology of the
Invisible: The Primacy of Work in the Writings of Anselm Strauss’, in Anselm
Leonard Strauss and David R. Maines (eds.), Social Organization and Social
Process: Essays in Honor of Anselm Strauss (New York: A. de Gruyter, 1991).
Mattering Press is not alone in exploring an ethics of care in relation to
(academic) publishing. Sarah Kember, director of Goldsmiths Press, is also
adamant in her desire to make the underlying processes of publishing (i.e.
peer review, citation practices) more transparent and accountable. See Sarah
Kember, ‘Why Publish?’, Learned Publishing 29 (2016), 348–53. Mercedes Bunz, one of the editors running
Meson Press, argues that a sociology of the invisible would incorporate
‘infrastructure work’, the work of accounting for, and literally crediting
everybody involved in producing a book: ‘A book isn’t just a product that
starts a dialogue between author and reader. It is accompanied by lots of
other academic conversations — peer review, co-authors, copy editors — and
these conversations deserve to be taken more serious’. Jussi Parikka and
Mercedes Bunz, ‘A Mini-Interview: Mercedes Bunz Explains Meson Press’,
Machinology, 2014,
mercedes-bunz-explains-meson-press/>. For Open Humanities Press, authorship is
collaborative and even often anonymous: for example, they are experimenting
with research published in wikis to further complicate the focus on single
authorship and a static marketable book object within academia (see their
living and liquid books series).
[41](ch3.xhtml#footnote-112-backlink) Lori Emerson, ‘Digital Poetry as
Reflexive Embodiment’, in Markku Eskelinen, Raine Koskimaa, Loss Pequeño
Glazier and John Cayley (eds.), CyberText Yearbook 2002–2003, 2003, 88–106,
[42](ch3.xhtml#footnote-111-backlink) Dani Spinosa, ‘“My Line (Article) Has
Sighed”: Authorial Subjectivity and Technology’, Generic Pronoun, 2014,
[43](ch3.xhtml#footnote-110-backlink) Spinosa, ‘My Line (Article) Has Sighed’.
[44](ch3.xhtml#footnote-109-backlink) Emerson, ‘Digital Poetry as Reflexive
Embodiment’, p. 89.
[45](ch3.xhtml#footnote-108-backlink) Rolf Hughes, ‘Orderly Disorder: Post-
Human Creativity’, in Proceedings of the Linköping Electronic Conference
(Linköpings universitet: University Electronic Press, 2005).
[46](ch3.xhtml#footnote-107-backlink) N. Katherine Hayles, ‘Print Is Flat,
Code Is Deep: The Importance of Media-Specific Analysis’, Poetics Today 25.1
(2004), 67–90; Johanna Drucker,
‘Performative Materiality and Theoretical Approaches to Interface’, Digital
Humanities Quarterly 7.1 (2013); Johanna
Drucker, ‘Distributed and Conditional Documents: Conceptualizing
Bibliographical Alterities’, MATLIT: Revista do Programa de Doutoramento em
Materialidades da Literatura 2.1 (2014), 11–29.
[47](ch3.xhtml#footnote-106-backlink) Nick Montfort, ‘The Coding and Execution
of the Author’, in Markku Eskelinen, Raine Kosimaa, Loss Pequeño Glazier and
John Cayley (eds.), CyberText Yearbook 2002–2003, 2003, 201–17 (p. 201),
[48](ch3.xhtml#footnote-105-backlink) Montfort, ‘The Coding and Execution of
the Author’, p. 202.
[49](ch3.xhtml#footnote-104-backlink) Lori Emerson, ‘Materiality,
Intentionality, and the Computer-Generated Poem: Reading Walter Benn Michaels
with Erin Mouré’s Pillage Laud’, ESC: English Studies in Canada 34
(2008), 66.
[50](ch3.xhtml#footnote-103-backlink) Marcus Boon, In Praise of Copying
(Cambridge, MA: Harvard University Press, 2010); Johanna Drucker, ‘Humanist
Computing at the End of the Individual Voice and the Authoritative Text’, in
Patrik Svensson and David Theo Goldberg (eds.), Between Humanities and the
Digital (Cambridge, MA: MIT Press, 2015), pp. 83–94.
[51](ch3.xhtml#footnote-102-backlink) We have to take into consideration here
that print-based cultural products were never fixed or static; the dominant
discourses constructed around them just perceive them to be so.
[52](ch3.xhtml#footnote-101-backlink) Craig, Turcotte, and Coombe, ‘What’s
Feminist About Open Access?’, p. 2.
[53](ch3.xhtml#footnote-100-backlink) Ibid.
[54](ch3.xhtml#footnote-099-backlink) Johanna Gibson, Creating Selves:
Intellectual Property and the Narration of Culture (Aldershot, UK, and
Burlington: Routledge, 2007), p. 7.
[55](ch3.xhtml#footnote-098-backlink) Gibson, Creating Selves, p. 7.
[56](ch3.xhtml#footnote-097-backlink) Ibid.
[57](ch3.xhtml#footnote-096-backlink) Kenneth Goldsmith, Uncreative Writing:
Managing Language in the Digital Age (New York: Columbia University Press,
2011), p. 227.
[58](ch3.xhtml#footnote-095-backlink) Ibid., p. 15.
[59](ch3.xhtml#footnote-094-backlink) Goldsmith, Uncreative Writing, p. 81.
[60](ch3.xhtml#footnote-093-backlink) Ibid.
[61](ch3.xhtml#footnote-092-backlink) It is worth emphasising that what
Goldsmith perceives as ‘uncreative’ notions of writing (including
appropriation, pastiche, and copying), have a prehistory that can be traced
back to antiquity (thanks go out to this chapter’s reviewer for pointing this
out). One example of this, which uses the method of cutting and pasting —
something I have outlined more in depth elsewhere — concerns the early modern
commonplace book. Commonplacing as ‘a method or approach to reading and
writing involved the gathering and repurposing of meaningful quotes, passages
or other clippings from published books by copying and/or pasting them into a
blank book.’ Janneke Adema, ‘Cut-Up’, in Eduardo Navas (ed.), Keywords in
Remix Studies (New York and London: Routledge, 2017), pp. 104–14,
[62](ch3.xhtml#footnote-091-backlink) Gibson, Creating Selves, p. 27.
[63](ch3.xhtml#footnote-090-backlink) For example, animals cannot own
copyright. See the case of Naruto, the macaque monkey that took a ‘selfie’
photograph of itself. Victoria Richards, ‘Monkey Selfie: Judge Rules Macaque
Who Took Grinning Photograph of Himself “Cannot Own Copyright”’, The
Independent, 7 January 2016,
/monkey-selfie-judge-rules-macaque-who-took-grinning-photograph-of-himself-
cannot-own-copyright-a6800471.html>
[64](ch3.xhtml#footnote-089-backlink) Anna Munster, ‘Techno-Animalities — the
Case of the Monkey Selfie’ (presented at the Goldsmiths University, London,
2016),
[65](ch3.xhtml#footnote-088-backlink) Sarah Kember, ‘Why Write?: Feminism,
Publishing and the Politics of Communication’, New Formations: A Journal of
Culture/Theory/Politics 83.1 (2014), 99–116.
Adema & Hall
The political nature of the book: on artists' books and radical open access
2013
The political nature of the book: on artists' books and radical open access
Adema, J. and Hall, G.
Original citation & hyperlink:
Adema, J. and Hall, G. (2013). The political nature of the book: on artists' books and radical
open access. New Formations, volume 78 (1): 138-156
http://dx.doi.org/10.3898/NewF.78.07.2013
This is an Open Access article distributed under the terms of the Creative Commons
Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits
unrestricted use, distribution, and reproduction in any medium, provided the original
work is properly cited.
Abstract
In this article we argue that the medium of the book can be a material and
conceptual means, both of criticising capitalism’s commodification of knowledge (for
example, in the form of the commercial incorporation of open access by feral and
predatory publishers), and of opening up a space for thinking about politics. The
book, then, is a political medium. As the history of the artist’s book shows, it can be
used to question, intervene in and disturb existing practices and institutions, and even
offer radical, counter-institutional alternatives. If the book’s potential to question and
disturb existing practices and institutions includes those associated with liberal
democracy and the neoliberal knowledge economy (as is apparent from some of the
more radical interventions occurring today under the name of open access), it also
includes politics and with it the very idea of democracy. In other words, the book is a
medium that can (and should) be ‘rethought to serve new ends’; a medium through
which politics itself can be rethought in an ongoing manner.
Janneke Adema is a PhD student at Coventry University, writing a dissertation on the
future of the scholarly monograph. She is the author of the OAPEN report Overview
of Open Access Models for eBooks in the Humanities and Social Sciences (2010) and
has published in The International Journal of Cultural Studies, New Media & Society,
New Review of Academic Librarianship; Krisis: Journal for Contemporary
Philosophy; Scholarly and Research Communication; and LOGOS; and co-edited a
living book on Symbiosis (Open Humanities Press, 2011). Her research can be
followed on www.openreflections.wordpress.com.
Gary Hall is Professor of Media and Performing Arts and Director of the Centre for
Disruptive Media at Coventry University, UK. He is author of Culture in Bits
(Continuum, 2002) and Digitize This Book! (Minnesota UP, 2008). His work has
appeared in numerous journals, including Angelaki, Cultural Studies, The Oxford
Literary Review, Parallax and Radical Philosophy. He is also founding co-editor of
the open access journal Culture Machine (http://www.culturemachine.net), and co-founder of Open Humanities Press (http://www.openhumanitiespress.org). More
details are available on his website http://www.garyhall.info.
THE POLITICAL NATURE OF THE BOOK: ON ARTISTS’ BOOKS AND
RADICAL OPEN ACCESS
Janneke Adema and Gary Hall
INTRODUCTION
The medium of the book plays a double role in art and academia, functioning not only
as a material object but also as a concept-laden metaphor. Since it is a medium
through which an alternative future for art, academia and even society can be enacted
and imagined, materially and conceptually, we can even go so far as to say that, in its
ontological instability with regard to what it is and what it conveys, the book serves a
political function. In short, the book can be ‘rethought to serve new ends’. 1 At the
same time, the medium of the book remains subject to a number of constraints: in
terms of its material form, structure, characteristics and dimensions; and also in terms
of the political economies, institutions and practices in which it is historically
embedded. Consequently, if it is to continue to be able to serve ‘new ends’ as a
medium through which politics itself can be rethought – although this is still a big if –
then the material and cultural constitution of the book needs to be continually
reviewed, reevaluated and reconceived. In order to explore critically this ‘political
nature of the book’, as we propose to think of it, along with many of the fundamental
ideas on which the book as both a concept and a material object is based, this essay
endeavours to demonstrate how developments undergone by the artist’s book in the
1960s and 1970s can help us to understand some of the changes the scholarly
monograph is experiencing now, at a time when its mode of production, distribution,
organisation and consumption is shifting from analogue to digital and from codex to
net. In what follows we will thus argue that a reading of the history of the artist’s
book can be generative for reimagining the future of the scholarly monograph, both
with respect to the latter’s potential form and materiality in the digital age, and with
respect to its relation to the economic system in which book production, distribution,
organisation and consumption takes place. Issues of access and experimentation are
crucial to any such future, we will suggest, if the critical potentiality of the book is to
remain open to new political, economic and intellectual contingencies.
1 Johanna Drucker, The Century of Artists’ Books, 2nd ed., Granary Books, New York, 2004, p49.
THE HISTORY OF THE ARTIST’S BOOK
With the rise to prominence of digital publishing today, the material conditions of
book production, distribution, organisation and consumption are undergoing a rapid
and potentially profound transformation. The academic world is one arena in which
digital publishing is having a particularly strong impact. Here, the transition from
print to digital, along with the rise of self-publishing (Blurb, Scribd) and the use of
social media and social networks (Facebook, Twitter, Academia.edu) to communicate
and share scholarly research, has led to the development of a whole host of
alternative publication and circulation systems for academic thought and knowledge.
Nowhere have such changes to the material conditions of the academic book been
rendered more powerfully apparent than in the emergence and continuing rise to
prominence of the open access movement. With its exploration of different ways of
publishing, circulating and consuming academic work (specifically, more open,
Gratis, Libre ways of doing so), and of different systems for governing, reviewing,
accrediting and legitimising that work, open access is frequently held as offering a
radical challenge to the more established academic publishing industry. Witness the
recent positioning in the mainstream media of the boycott of those publishers of
scholarly journals – Elsevier in particular – who charge extremely high subscription
prices and who refuse to allow authors to make their work freely available online on
an open access basis, in terms of an ‘Academic Spring’. Yet more potentially radical
still is the occupation of the new material conditions of academic book production,
distribution, organisation and consumption by those open access advocates who are
currently experimenting with the form and concept of the book, with a view to both
circumventing and placing in question the very print-based system of scholarly
communication – complete with its ideas of quality, stability and authority – on
which so much of the academic institution rests.
In the light of the above, our argument in this essay is that some of these more
potentially radical, experimental developments in open access book publishing can be
related on the level of political and cultural significance to transformations undergone
in a previous era by the artist’s book. As a consequence, the history of the latter can
help us to explore in more depth and detail than would otherwise be possible the
relation in open access between experimenting with the medium of the book on a
material and conceptual level on the one hand, and enacting political alternatives in a
broader sense on the other. Within the specific context of 1960s and 1970s
counterculture, the artist’s book was arguably able to fill a certain political void,
providing a means of democratising and subverting existing institutions by
distributing an increasingly cheap and accessible medium (the book), and in the
process using this medium in order to reimagine what art is and how it can be
accessed and viewed. While artists grasped and worked through that relation between
the political, conceptual and material aspects of the book several decades ago, thanks
to the emergence of open access online journals, archives, blogs, wikis and free text-sharing networks, one of the main places in which this relation is being explored today
is indeed in the realm of academic publishing. 2
In order to begin thinking through some of the developments in publishing that are
currently being delved into under the banner of open access, then, let us pause for a
moment to reflect on some of the general characteristics of those earlier experiments
with the medium of the book that were performed by artists. Listed below are six key
areas in which artists’ books can be said to offer guidance for academic publishing in
the digital age, not just on a pragmatic level but on a conceptual and political level
too.
2 The relation in academic publishing between the political, conceptual and material aspects
of the book has of course been investigated at certain points in the past, albeit to varying
degrees and extents. For one example, see the ‘Working Papers’ and other forms of stencilled
gray literature that were produced and distributed by the Birmingham Centre for
Contemporary Cultural Studies in the 1960s and 1970s, as discussed by Ted Striphas and
Mark Hayward in their contribution to this issue.
1) The Circumvention of Established Institutions
According to the art theorist Lucy Lippard, the main reason the book has proved to be
so attractive as an artistic medium has to do with the fact that artists’ books are
‘considered by many the easiest way out of the art world and into the hearth of a
broader audience.’ 3 Books certainly became an increasingly popular medium of
artistic expression in Europe and the United States in the 1960s and 1970s. This was
largely due to their perceived potential to subvert the (commercial, profit-driven)
gallery system and to politicise artistic practice – to briefly introduce some of the
different yet, as we can see, clearly related arguments that follow – with the book
becoming a ‘democratic multiple’ that breached the walls held to be separating so-called high and low culture. Many artist-led and artist-controlled initiatives, such as
US-based Franklin Furnace, Printed Matter and Something Else Press, were
established during this period to provide a forum for artists excluded from the
traditional institutions of the gallery and the museum. Artists’ books played an
extremely important part in the rise of these independent art structures and publishing
ventures. 4 Indeed, for many artists such books embodied the ideal of being able to
control all aspects of their work.
Yet this movement toward liberating themselves from the gallery system by
publishing and exhibiting in artists’ books was by no means an easy transition for
many artists to make. In particular, it required them to come to terms with the idea that
publishing their own work did not amount to mere vanity self-publishing. Moore
and Hendricks describe this state of affairs in terms of the power and potential of ‘the
page as an alternative space’. 5 From this perspective, producing, publishing and
distributing one’s own artist’s book was a sign of autonomy and independence; it was
nothing less than a way of being able to affect society directly. 6 The political potential
associated with the book by artists should therefore not be underestimated.
3 Lucy R. Lippard, ‘The Artist’s Book Goes Public’, in Joan Lyons (ed), Artists’ Books: a Critical Anthology and Sourcebook, Rochester, New York: Visual Studies Workshop Press, 1993, p45.
4 Joan Lyons, ‘Introduction’, in Lyons (ed), Artists’ Books, p7.
Accordingly, many artists created their own publishing imprints or worked together
with newly founded artist’s book publishers and printers (just as some academics are
today challenging the increasingly profit-driven publishing industry by establishing
not-for-profit, scholar-led, open access journals and presses). The main goal of these
independent (and often non-commercial) publisher-printer-artist collectives was to
make experimental, innovative work (rather than generate a profit), and to promote
ephemeral art works, which were often ignored by mainstream, mostly market-orientated institutions. 7 Artists’ books thus fitted in well with the mythology Johanna
Drucker describes as surrounding ‘activist artists’, and especially with the idea of the
book as a tool of independent activist thought. 8
5 Hendricks and Moore, ‘The Page as Alternative Space: 1950 to 1969’, in Lyons (ed), Artists’ Books, p87.
6 Pavel Büchler, ‘Books as Books’, in Jane Rolo and Ian Hunt (eds), Book Works: a Partial History and Sourcebook, London: Book Works, 1996.
7 Clive Phillpot, ‘Some Contemporary Artists and Their Books’, in Cornelia Lauf and Clive Phillpot (eds), Artist/Author: Contemporary Artists’ Books, New York, Distributed Art Publishers, 1998, pp128-9.
8 Drucker, The Century of Artists’ Books, pp7-8.
2) The Relationship with Conceptual and Processual Art
In the context of this history of the artist’s book, one particularly significant
conceptual challenge to the gallery system came with the use of the book as a
platform for exhibiting original work (itself an extension of André Malraux’s idea of
the museum without walls). Curator Seth Siegelaub was among the first to publish his
artists – as opposed to exhibiting them – thus becoming, according to Germano
Celant, ‘the first to allow complete operative and informative liberty to artists’. 9 The
Xerox Book and March 1-31, 1969, featuring work by Sol LeWitt, Robert Barry,
Douglas Huebler, Joseph Kosuth, Lawrence Weiner and other international artists, are
both examples of artists’ books where the book (or the catalogue) itself is the
exhibition. As Moore and Hendricks point out, this offered all kinds of benefits when
compared with traditional exhibitions: ‘This book is the exhibition, easily
transportable without the need for expensive physical space, insurance, endless
technical problems or other impediments. In this form it is relatively permanent and,
fifteen years later, is still being seen by the public.’ 10 Artists’ books thus served here
as an alternative space in themselves and at the same time functioned within a
network of alternative spaces, such as the above-mentioned Franklin Furnace
and Printed Matter. In addition to publishing and supporting artists’ books, such venues
offered a space for staging often highly politicised, critical, experimental and
performance art. 11 It is important to emphasise this aspect of artist book publishing, as
it shows that the book was used as a specific medium to exhibit works that could not
otherwise readily find a place within mainstream exhibition venues (a situation which,
as we will show, has been one of the main driving forces behind open access book
publishing). This focus on the book as a place for continual experimentation – be it on
the level of content or form – can thus be seen as underpinning what we are referring
to here as the ‘political nature of the book’ (playing on the title of Adrian Johns’
classic work of book history). 12
9. Germano Celant, Book as Artwork 1960-1972, New York, 6 Decades Books, 2011, p40.
10. Hendricks and Moore, ‘The Page as Alternative Space: 1950 to 1969’, p94.
11. Brian Wallis, ‘The Artist’s Book and Postmodernism’, in Cornelia Lauf and Clive Phillpot (eds), Artist/Author, 1998.
12. Adrian Johns, The Nature of the Book: Print and Knowledge in the Making, Chicago, University of Chicago Press, 1998.
3) The Use of Accessible Technologies
As is the case with the current changes to the scholarly monograph, the rise of artists’
books can be perceived to have been underpinned (though by no means determined)
by developments in technology, with the revolution in mimeograph and offset
printing helping to take artists’ books out of the realm of expensive and rare
commodities by providing direct access to quick and inexpensive printing
methods. 13 Due to its unique characteristics – low production costs, portability,
accessibility and endurance – the artist’s book was regarded as having the potential to
communicate with a wider audience beyond the traditional art world. In particular, it
was seen as having the power to break down the barriers between so-called high and
low culture, using the techniques of mass media to enable artists to argue for their
own, alternative goals, something that presented all kinds of political possibilities. 14 The artist’s book thus conveyed a high degree of artistic autonomy,
while also offering a far greater role to the reader or viewer, who was now able to
interact with the art object directly (eluding the intermediaries of the gallery and
museum system). Indeed, Lippard even went so far as to envision a future where
artists’ books would be readily available as part of mass consumer culture, at
‘supermarkets, drugstores and airports’. 15
4) The Politics of the Democratic Multiple
13. Hendricks and Moore, ‘The Page as Alternative Space’, pp94-95.
14. Joan Lyons, ‘Introduction’, in Lyons (ed), Artists’ Books, p7.
15. Lippard, ‘The Artist’s Book Goes Public’, p48; Lippard, ‘Conspicuous Consumption: New Artists’ Books’, in Lyons (ed), Artists’ Books, p100. Is there a contradiction here between a politics of artists’ books that is directed against commercial profit-driven galleries and institutions, but which nevertheless uses the tools of mass consumer culture to reach a wider audience (see also the critique Lippard offers in the next section)? And can a similar point be made with respect to the politics of some open access initiatives and their use of social media and (commercial, profit-driven) platforms such as Google Books and Amazon?
The idea of the book as a real democratic multiple came into being only after 1945, a development facilitated by a number of technological innovations,
including those detailed above. Yet the concept of the democratic multiple itself
developed in what was already a climate of political activism and social
consciousness. In this respect, the democratic multiple was part of both the overall
trend toward the dematerialization of art and the newly emergent emphasis on cultural
and artistic processes rather than ready-made objects. 16
Artists’ desire for
independence from established institutions and for the wider availability of their
works thus resonated with the democratising and anti-institutional potential of the
book as a medium. What is more, the book offered artists a space in which they were
able to experiment with the materiality of the medium itself and with the practices
that comprised it, and thus ultimately with the question of what constituted art and an
art object. This reflexivity of the book with regard to its own nature is one of the key
characteristics that make a book an artist’s book, and enable it to have political
potential in that it can be ‘rethought to serve new ends’. Much the same can be said
with respect to the relation between the book and scholarly communication: witness
the way reflection on the material nature of the book in the digital age has led to
questions being raised regarding how we structure scholarly communication and
practice scholarship more generally.
5) Conceptual Experimentation: Problematising the Concept and Form of the Book
Another key to understanding artists’ books and their history lies with the way the
radical change in printing technologies after World War II led to the reassessment of
the book form itself, and in particular, of the specific nature of the book’s materiality,
16. Drucker, The Century of Artists’ Books, p72.
of the very idea of the book, and of the notions and practices underlying the book’s
various uses.
When it came to reevaluating the materiality of the book, many experiments with
artists’ books tried to escape the linearity brought about by the codex form’s
(sequential) constraints, something which had long conditioned both writing and
reading practices. Undoubtedly, one of the most important theorists as far as
rethinking the materiality of the book in the period after 1945 is concerned is Ulises
Carrión. He defines the book as a specific set of conditions that should be (or need to
be) responded to. 17 Instead of seeing it as just a text, Carrión positions the book as an
object, a container and a sequence of spaces. For him, the codex is a form that needs
to be responded to in what he prefers to call ‘bookworks’. These are ‘books in which
the book form, as a coherent sequence of pages, determines conditions of reading that
are intrinsic to the work.’ 18 From this perspective, artists’ books interrogate the
structure and the meaning of the book’s form. 19
Yet the book is also a metaphor, a symbol and an icon to be responded to. 20 Indeed, it
is difficult to establish a precise definition or set of characteristics for artists’ books as
their very nature keeps changing. As Sowden and Bodman put it, ‘What a book is can
be challenged’. 21 Drucker, meanwhile, is at pains to point out that the book is open
for innovation, although the latter has its limits: ‘The convention of the book is both
its constrained meanings (as literacy, the law, text and so forth) and the space of new
17. James Langdon (ed), Book, Birmingham, Eastside Projects, 2010.
18. Ulises Carrión, ‘Bookworks Revisited’, in James Langdon (ed), Book, Birmingham, Eastside Projects, 2010.
19. Drucker, The Century of Artists’ Books, pp3-4.
20. Ibid., p360.
21. Tom Sowden and Sarah Bodman, A Manifesto for the Book, Impact Press, 2010, p9.
work (the blank page, the void, the empty place).’ Books here ‘mutate, expand,
transform’. Accordingly, Drucker regards the transformed book as an intervention,
something that reflects the inherent critique that book experiments embody with
respect to their own constitution.22 One way of examining reflexively the structures
that make up the book is precisely by disturbing those structures. In certain respects
the page can be thought of as being finite (e.g. physically, materially), but it can also
be understood to be infinite, not least as a result of being potentially different on each
respective viewing/reading. This allows the book to be perceived as a self-reflexive
medium that is extremely well-suited to formal experiments. At the same time, it
allows it to be positioned as a potentially political medium, in the sense that it can be
used to intervene in and disturb existing practices and institutions.
6) The Problematisation of Reading and Authorship
As part of their constitution, artists’ books can be said to have brought into question
certain notions and practices relating to the book that had previously been taken too
much for granted – and perhaps still are. For instance, Brian Wallis shows how, ‘in
place of the omnipotent author’, postmodern artists’ books ‘acknowledge a
collectivity of voices and active participation of the reader’. 23 Carrión, for one, was
very concerned with the thought that readers might consume books passively, while
being unaware of their specificity as a medium. 24 The relationship between the book
and reading, and the way in which the physical aspect of the book can change how we
read, was certainly an important topic for artists throughout this period. Many
experiments with artists’ books focused on the interaction between author, reader and
22. Drucker, The Century of Artists’ Books.
23. Lucy Lippard and John Chandler, ‘The Dematerialization of Art’, Art International, 12, 2 (1968).
24. Langdon, Book.
book, offering an alternative, and not necessarily linear, reading experience. 25 Such
readerly interventions often represented a critical engagement with ideas of the author
as original creative genius derived from the cultural tradition of European
Romanticism. Joan Lyons describes this potential of the artist’s book very clearly:
‘The best of the bookworks are multinotational. Within them, words, images, colors,
marks, and silences become plastic organisms that play across the pages in variable
linear sequence. Their importance lies in the formulation of a new perceptual
literature whose content alters the concept of authorship and challenges the reader to a
new discourse with the printed page.’ 26 Carrión thus writes about how in the books of
the new art, as he calls them, words no longer transmit an author’s intention. Instead,
authors can use other people’s words as an element of the book as a whole – so much
so that he positions plagiarism as lying at the very basis of creativity. As far as artists’
books are concerned, it is not the artist’s intention that is at stake, according to
Carrión, but rather the process of testing the meaning of language. It is the reader who
creates the meaning and understanding of a book for Carrión, through his or her
specific meaning-extraction. Every book requires a different reading and opens up
possibilities to the reader. 27
THE INHIBITIONS OF MEDIATIC CHANGE
We can thus see that the very ‘nature’ of the book is particularly well suited to
experimentation and to reading against the grain. As a medium, the book has the
25. This has been one of the focal points of the books published and commissioned by UK artist book publisher Book Works, for instance. Jane Rolo and Ian Hunt, ‘Introduction’, in Book Works: A Partial History and Sourcebook, op. cit.
26. Joan Lyons, ‘Introduction’, p7.
27. Ulises Carrión, ‘The New Art of Making Books’, in James Langdon (ed), Book, Birmingham, Eastside Projects, 2010.
potential to raise questions for some of the established practices and institutions
surrounding the production, distribution and consumption of printed matter. This
potential notwithstanding, it gradually became apparent (for some this realisation
occurred during the 1960s and 1970s, for others it only came about later) that the
ability of artists’ books to bring about institutional change in the art world, and to
question both the concept of the book and that of art as the singular aesthetic artefact
bolstered by institutional structures, was not particularly long-lasting. With respect to
the democratization of the artist’s book, for example, Lippard notes that, in losing its distance, the book also risked losing its critical function. Here, says
Lippard, the ‘danger is that, with an expanding audience and an increased popularity
with collectors, the artist’s book will fall back into its edition de luxe or coffee table
origin … transformed into glossy, pricey products.’ For Lippard there is a discrepancy
between the characteristics of the medium which had the potential to break down
walls, and the actual content and form of most artists’ books which was highly
experimental and avant-garde, and thus inaccessible to readers/consumers outside of
the art world. 28
PROCESSES OF INCORPORATION AND COMMERCIALISATION
Interestingly, Carrión was one of the sharpest critics of the idea that artists’ books
should be somehow able to subvert the gallery system. In his ‘Bookworks Revisited’,
he showed how the hope surrounding this supposedly revolutionary potential of the
book as a medium was based on a gross misunderstanding of the mechanisms
underlying the art world. In particular, Carrión attacked the idea that the artist’s book
28. Lippard, ‘The Artist’s Book Goes Public’, pp47-48.
could do without any intermediaries. Instead of circumventing the gallery system, he
saw book artists as merely adopting an alternative set of intermediaries, namely book
publishers and critics. 29
Ten years later Stewart Cauley updated Carrión’s criticisms, arguing that as an art
form and medium, the artist’s book had not been able to avoid market mechanisms
and the celebrity cult of the art system. In fact, by the end of the 1980s the field of
artists’ publications had lost most of its experimental impetus and had become
something of an institution itself, imitating the gallery and museum system it was
initially designed to subvert. 30 Those interested in artists’ books initially found it
difficult to set up an alternative system, as they had to manage without organized
distribution, review mechanisms or funding schemes. When they were eventually able
to do so in the 1970s, the resulting structures in many ways mirrored the very
institutions they were supposed to be criticizing and providing an alternative to.31
Cauley points the finger of blame at the book community itself, especially at the fact
that artists at the time focused more on the concept and structure of the book than on
using the book form to make any kind of critical political statement. The idea that
artists’ books were disconnected from mainstream institutional systems has also been
debunked as a myth. As Drucker makes clear, many artists’ books were developed in
cooperation with museums or galleries, where they were perceived not as subversive
artefacts but rather as low-cost tools for gathering additional publicity for those
institutions and their activities. 32
29. Carrión, ‘Bookworks Revisited’; Johanna Drucker, ‘Artists’ Books and the Cultural Status of the Book’, Journal of Communication, 44 (1994).
30. Stewart Cauley, ‘Bookworks for the ’90s’, Afterimage, 25, 6, May/June (1998).
31. Stefan Klima, Artists Books: A Critical Survey of the Literature, Granary Books, New York, 1998, pp54-60.
32. Drucker, The Century of Artists’ Books, p78.
Following Abigail Solomon-Godeau, this process of commercialisation and
incorporation – or, as she calls it, ‘the near-total assimilation’ of art practice
(Solomon-Godeau focuses specifically on postmodern photography) and critique into
the discourses it professed to challenge – can be positioned as part of a general
tendency in conceptual and postmodern ‘critical art practices’. It is a development that
can be connected to the changing art markets of the time and viewed in terms of a
broader social and cultural shift to Reaganomics. For Solomon-Godeau, however, the
problem lay not only in changes to the art market, but in critical art practices and art
critique too, which in many ways were not robust enough to keep on reinventing
themselves. Nonetheless, even if they have become incorporated into the art market
and the commodity system, Solomon-Godeau argues that it is still possible for art
practices and institutional critiques to develop some (new) forms of sustainable
challenge from within these systems. As far as she is concerned, ‘a position of
resistance can never be established once and for all, but must be perpetually
refashioned and renewed to address adequately those shifting conditions and
circumstances that are its ground.’ 33
THE PROMISE OF OPEN ACCESS
At first sight many of the changes that have occurred recently in the world of
academic book publishing seem to resemble those charted above with respect to the
artist’s book. As was the case with the publishing of artists’ books, digital publishing
has provided interested parties with an opportunity to counter the existing
33. Abigail Solomon-Godeau, ‘Living with Contradictions: Critical Practices in the Age of Supply-Side Aesthetics’, Social Text, 21 (1989).
(publishing) system and its institutions, to experiment with using contemporary and
emergent media to publish (in this case academic) books in new ways and forms, and
in the process to challenge established ideas of the printed codex book, together with
the material practices of production, distribution and consumption that surround it.
This has resulted in a new wave of scholar-led publishing initiatives in academia, both
formal (with scholars either becoming publishers themselves, or setting up cross-institutional publishing infrastructures with libraries, IT departments and research
groups) and informal (using self-publishing and social media platforms such as blogs
and wikis). 34 The phenomenon of open access book publishing can be located within
this broader context – a context which, it is worth noting, also includes the closing of
many book shops due to fierce rivalry from the large supermarkets at one end of the
market, and online e-book traders such as Amazon at the other; the fact that the major
high-street book chains are increasingly loath to take academic titles - not just
journals but books too; and the handing over (either in part or in whole) to for-profit
corporations of many publishing organisations designed to serve charitable aims and
the public good: scholarly associations, learned societies, university presses, non-profit and not-for-profit publishers.
From the early 1990s onwards, open access was pioneered and developed most
extensively in the science, technology, engineering and mathematics (STEM) fields,
where much of the attention was focused on the online self-archiving by scholars of
pre-publication (i.e. pre-print) versions of their research papers in central, subject or
institutionally-based repositories. This is known as the Green Road to open access, as
34. See, for example, Janneke Adema and Birgit Schmidt, ‘From Service Providers to Content Producers: New Opportunities For Libraries in Collaborative Open Access Book Publishing’, New Review of Academic Librarianship, 16 (2010).
distinct from the Gold Road, which refers to the publishing of articles in online, open
access journals. Of particular interest in this respect is the philosophy that lies behind
the rise of the open access movement, as it can be seen to share a number of
characteristics with the thinking behind artists’ books discussed earlier. The former
was primarily an initiative established by academic researchers, librarians, managers
and administrators, who had concluded that the traditional publishing system – thanks
in no small part to the rapid (and, as we shall see, ongoing) process of aggressive for-profit commercialisation it was experiencing – was no longer willing or able to meet
all of their communication needs. Accordingly, those behind this initiative wanted to
take advantage of the opportunities they saw as being presented by the new digital
publishing and distribution mechanisms to make research more widely and easily
available in a far faster, cheaper and more efficient manner than was offered by
conventional print-on-paper academic publishing. They had various motivations for
doing so. These include wanting to extend the circulation of research to all those who
were interested in it, rather than restricting access to merely those who could afford to
pay for it in the form of journal subscriptions, etc; 35 and a desire to promote the
emergence of a global information commons, and, through this, help to produce a
renewed democratic public sphere of the kind Jürgen Habermas propounds. From the
latter point of view (as distinct from the more radical democratic philosophy we
proceed to develop in what follows), open access was seen as working toward the
creation of a healthy liberal democracy, through its alleged breaking down of the
barriers between the academic community and the rest of society, and its perceived
consequent ability to supply the public with the information they need to make
knowledgeable decisions and actively contribute to political debate. Without doubt,
35. John Willinsky, The Access Principle: The Case for Open Access to Research and Scholarship, Cambridge, Mass., The MIT Press, 2009, p5.
though, another motivating factor behind the development of open access was a desire
on the part of some of those involved to enhance the transparency, accountability,
discoverability, usability, efficiency and (cost) effectivity not just of scholarship and
research but of higher education itself. From the latter perspective (and as can again
be distinguished from the radical open access philosophy advocated below), making
research available on an open access basis was regarded by many as a means of
promoting and stimulating the neoliberal knowledge economy both nationally and
internationally. Open access is supposed to achieve these goals by making it easier for
business and industry to capitalise on academic knowledge - companies can build new
businesses based on its use and exploitation, for example - thus increasing the impact
of higher education on society and helping the UK, Europe and the West (and North)
to be more competitive globally. 36
To date, the open access movement has progressed much further toward its goal of
making all journal articles available open access than it has toward making all
academic books available in this fashion. There are a number of reasons why this is
the case. First, since the open access movement was developed and promoted most
extensively in the STEMs, it has tended to concentrate on the most valued mode of
publication in those fields: the peer-reviewed journal article. Interestingly, the recent
36. Gary Hall, Digitize This Book! The Politics of New Media, or Why We Need Open Access Now, Minneapolis, University of Minnesota Press, 2008; Janneke Adema, Open Access Business Models for Books in the Humanities and Social Sciences: An Overview of Initiatives and Experiments, OAPEN Project Report, Amsterdam, 2010. David Willetts, the UK Science Minister, is currently promoting ‘author-pays’ open access for just these reasons. See David Willetts, ‘Public Access to Publicly-Funded Research’, BIS: Department for Business, Innovation and Skills, May 2, 2012: https://www.gov.uk/government/speeches/public-access-to-publicly-funded-research--2
arguments around the ‘Academic Spring’ and ‘feral’ publishers such as Informa plc
are no exception to this general rule. 37
Second, restrictions to making research available open access associated with
publishers’ copyright and licensing agreements can in most cases be legally
circumvented when it comes to journal articles. If all other options fail, authors can
self-archive a pre-refereed pre-print of their article in a central, subject or
institutionally-based repository such as PubMed Central. However, it is not so easy to
elude such restrictions when it comes to the publication of academic books. In the
latter case, since the author is often paid royalties in exchange for their text, copyright
tends to be transferred by the author to the publisher. The text remains the intellectual
property of the author, but the exclusive right to put copies of that text up for sale, or
give them away for free, then rests with the publisher. 38
Another reason the open access movement has focused on journal articles is because
of the expense involved in publishing books in this fashion, since one of the main
models of funding open access in the STEMs, author-side fees, 39 is not easily
transferable either to book publishing or to the Humanities and Social Sciences
(HSS). In contrast to the STEMs, the HSS feature a large number of disciplines in
which it is books (monographs in particular) published with esteemed international
37. David Harvie, Geoff Lightfoot, Simon Lilley and Kenneth Weir, ‘What Are We To Do With Feral Publishers?’, submitted for publication in Organization, and accessible through the Leicester Research Archive: http://hdl.handle.net/2381/9689.
38. See the Budapest Open Access Initiative, ‘Self-Archiving FAQ, written for the Budapest Open Access Initiative (BOAI)’, 2002-4: http://www.eprints.org/self-faq/.
39. Although ‘author-pays’ is often positioned as the main model of funding open access publication in the STEMs, a lot of research has disputed this fact. See, for example, Stuart Shieber, ‘What Percentage of Open-Access Journals Charge Publication Fees’, The Occasional Pamphlet on Scholarly Publishing, May 9, 2009: http://blogs.law.harvard.edu/pamphlet/2009/05/29/what-percentage-of-open-access-journals-charge-publication-fees/.
presses, rather than articles in high-ranking journals, that are considered as the most
significant and valued means of scholarly communication. Authors in many fields in
the HSS are simply not accustomed to paying to have their work published. What is
more, many authors associate doing so with vanity publishing. 40 They are also less
likely to acquire the grants from either funding bodies or their institutions that are
needed to cover the cost of publishing ‘author-pays’. That the HSS in many Western
countries receive only a fraction of the amount of government funding the STEMs do
only compounds the problem, 41 as does the fact that higher rejection rates in the HSS,
as compared to the STEMs, mean that any grants would have to be significantly
larger, as the time spent on reviewing articles, and hence the amount of human labour
used, makes it a much more intensive process. 42 And that is just to publish journal
articles. Publishing books on an author-pays basis would be more expensive still.
Yet even though the open access movement initially focused more on journal articles
than on monographs, things have begun to change in this respect in recent years.
Undoubtedly, one of the major factors behind this change has been the fact that the
40. Maria Bonn, ‘Free Exchange of Ideas: Experimenting with the Open Access Monograph’, College and Research Libraries News, 71, 8, September (2010), pp436-439: http://crln.acrl.org/content/71/8/436.full.
41. Patrick Alexander, director of the Pennsylvania State University Press, provides the following example: ‘Open Access STEM publishing is often funded with tax-payer dollars, with publication costs built into researchers’ grant request… the proposed NIH budget for 2013 is $31 billion. NSF’s request for 2013 is around $7.3 billion. Compare those amounts to the NEH ($154 million) and NEA ($154 million) and you can get a feel for why researchers in the arts and humanities face challenges in funding their publication costs.’ (Adeline Koh, ‘Is Open Access a Moral or a Business Issue? A Conversation with The Pennsylvania State University Press’, The Chronicle of Higher Education, July 10, 2012: http://chronicle.com/blogs/profhacker/is-open-access-a-moral-or-a-business-issue-a-conversation-with-the-pennsylvania-state-university-press/41267)
42. See Mary Waltham’s 2009 report for the National Humanities Alliance, ‘The Future of Scholarly Journals Publishing among Social Sciences and Humanities Associations’: http://www.nhalliance.org/research/scholarly_communication/index.shtml; and Peter Suber, ‘Promoting Open Access in the Humanities’, 2004: http://www.earlham.edu/~peters/writing/apa.htm. ‘On average, humanities journals have higher rejection rates (70-90%) than STEM journals (20-40%)’, Suber writes.
publication of books on an open access basis has been perceived as one possible
answer to the ‘monograph crisis’. This phrase refers to the way in which the already
feeble sustainability of the print monograph is being endangered even further by the
ever-declining sales of academic books. 43 It is a situation that has in turn been brought
about by ‘the so-called “serials crisis”, a term used to designate the vertiginous rise of
the subscription to STEM journals since the mid-80s which… strangled libraries and
led to fewer and fewer purchases of books/monographs.’ 44 This drop in library
demand for monographs has led many presses to produce smaller print runs; focus on
more commercial, marketable titles; or even move away from monographs to
concentrate on text books, readers, and reference works instead. In short, conventional
academic publishers are now having to make decisions about what to publish more on
the basis of the market and a given text’s potential value as a commodity, and less on
the basis of its quality as a piece of scholarship. This last factor is making it difficult
for early career academics to publish the kind of research-led monographs that are
often needed to acquire that all important first full-time position. This in turn means
the HSS is, in effect, allowing publishers to make decisions on its future and on who
gets to have a long-term career on an economic basis, according to the needs of the
market – or what they believe those needs to be. But it is also making it hard for
43. Greco and Wharton estimate that the average number of library purchases of monographs has dropped from 1500 in the 1970s to 200-300 at present. Thompson estimates that print runs and sales have declined from 2000-3000 in the 1970s to print runs of between 600-1000 and sales of between 400-500 nowadays. Albert N. Greco and Robert Michael Wharton, ‘Should University Presses Adopt an Open Access [electronic publishing] Business Model for all of their Scholarly Books?’, ELPUB. Open Scholarship: Authority, Community, and Sustainability in the Age of Web 2.0 – Proceedings of the 12th International Conference on Electronic Publishing held in Toronto, Canada 25-27 June 2008; John B. Thompson, Books in the Digital Age: The Transformation of Academic and Higher Education Publishing in Britain and the United States, Cambridge, Polity Press, 2005.
44. Jean Kempf, ‘Social Sciences and Humanities Publishing and the Digital “Revolution”’, unpublished manuscript, 2010: http://perso.univ-lyon2.fr/~jkempf/Digital_SHS_Publishing.pdf; Thompson, Books in the Digital Age, pp93-94.
authors in the HSS generally to publish monographs that are perceived as being
difficult, advanced, specialized, obscure, radical, experimental or avant-garde - a
situation reminiscent of the earlier state of affairs which led to the rise of artists’
books, with the latter emerging in the context of a perceived lack of exhibition space
for experimental and critical (conceptual) work within mainstream commercial
galleries.
Partly in response to this ‘monograph crisis’, a steadily increasing number of
initiatives have now been set up to enable authors in the HSS in particular to bring out
books open access – not just introductions, reference works and text books, but
research monographs and edited collections too. These initiatives include scholar-led
presses such as Open Humanities Press, re.press, and Open Book Publishers;
commercial presses such as Bloomsbury Academic; university presses, including
ANU E Press and Firenze University Press; and presses established by or working
with libraries, such as Athabasca University’s AU Press. 45
Yet important though the widespread aspiration amongst academics, librarians and
presses to find a solution to the monograph crisis has been, the reasons behind the
development of open access book publishing in the HSS are actually a lot more
diverse than is often suggested. For instance, to the previously detailed motivating
factors that inspired the rise of the open access movement can be added the desire,
shared by many scholars, to increase accessibility to (specialized) HSS research, with
a view to heightening its reputation, influence, impact and esteem. This is seen as
45. A list of publishers experimenting with business models for OA books is available at: http://oad.simmons.edu/oadwiki/Publishers_of_OA_books. See also Adema, Open Access Business Models.
being especially significant at a time when the UK government, to take just one
example, is emphasizing the importance of the STEMs while withdrawing support
and funding for the HSS. Many scholars in the HSS are thus now willing to stand up
against, and even offer a counter-institutional alternative to, the large, established,
profit-led, commercial firms that have come to dominate academic publishing – and,
in so doing, liberate the long-form argument from market constraints through the
ability to publish books that often lack a clear commercial market.
TWO STRATEGIES: ACCESSIBILITY AND EXPERIMENTATION
That said, all of these reasons and motivating factors behind the recent changes in
publishing models are still very much focused on making more scholarly research
more accessible. Yet for at least some of those involved in the creation and
dissemination of open access books, doing so also constitutes an important stage in
the development of what might be considered more ‘experimental’ forms of research
and publication; forms for which commercial and heavily print-based systems of
production and distribution have barely provided space. Such academic experiments
are thus perhaps capable of adopting a role akin to, if not the exact equivalent of, that
we identified artists’ books as having played in the countercultural context of the
1960s and 1970s: in terms of questioning the concept and material form of the book;
promoting alternative ways of reading and communicating via books; and
interrogating modern, romantic notions of authorship. We are thinking in particular of
projects that employ open peer-review procedures (such as Kathleen Fitzpatrick’s
Planned Obsolescence, which uses the CommentPress Wordpress plugin to enable
comments to appear alongside the main body of the text), wikis (e.g. Open
Humanities Press’ two series of Liquid and Living Books) and blogs (such as those
created using the Anthologize app developed at George Mason University). 46 These
enable varying degrees of what Peter Suber calls ‘author-side openness’ when it
comes to reviewing, editing, changing, updating and re-using content, including
creating derivative works. Such practices pose a conceptual challenge to some of the
more limited interpretations of open access (what has at times been dubbed ‘weak
open access’), 47 and can on occasion even constitute a radical test of the integrity and
identity of a given work, not least by enabling different versions to exist
simultaneously. In an academic context this raises questions of both a practical and
theoretical nature that have the potential to open up a space for reimagining what
counts as scholarship and research, and of how it can be responded to and accessed:
not just which version of a work is to be cited and preserved, and who is to have
ultimate responsibility for the text and its content; but also what an author, a text, and
a work actually is, and where any authority and stability that might be associated with
such concepts can now be said to reside.
It is interesting then that, although they can be positioned as constituting two of the
major driving forces behind the recent upsurge of interest in open access
book publishing, as ‘projects’, the at times more obviously or overtly ‘political’ (be it
liberal-democratic, neoliberal or otherwise) project of using digital media and the
Internet to create wider access to book-based research on the one hand, and
experimenting—as part of the more conceptual, experimental aspects of open access
book publishing—with the form of the book (a combination of which we identified as
46. See http://mediacommons.futureofthebook.org/mcpress/plannedobsolescence; http://liquidbooks.pbwiki.com/; http://www.livingbooksaboutlife.org/; http://anthologize.org/.
47. See Peter Suber, SPARC OA newsletter, issue 155, March 2, 2011: http://www.earlham.edu/~peters/fos/newsletter/03-02-11.htm
being essential components of the experimental and political potential of artists’
books) and the way our dominant system of scholarly communication currently
operates on the other, often seem to be rather disconnected. Again, a useful
comparison can be made to the situation described by Lippard, where more
(conceptually or materially) experimental artists’ books were seen as being less
accessible to a broader public and, in some cases, as going against the strategy of
democratic multiples, promoting exclusivity instead.
It is certainly the case that, in order to further the promotion of open access and
achieve higher rates of adoption and compliance among the academic community, a
number of strategic alliances have been forged between the various proponents of the
open access movement. Some of these alliances (those associated with Green open
access, for instance) have taken making the majority if not indeed all of the research
accessible online without a paywall (Gratis open access) 48 as their priority, perhaps
with the intention of moving on to the exploration of other possibilities, including
those concerned with experimenting with the form of the book, once critical mass has
been attained – but perhaps not. Hence Stevan Harnad’s insistence that ‘it’s time to
stop letting the best get in the way of the better: Let’s forget about Libre and Gold OA
until we have managed to mandate Green Gratis OA universally.’ 49 Although they
cannot be simply contrasted and opposed to the former (often featuring many of the
same participants), other strategic alliances have focused more on gaining the trust of
the academic community. Accordingly, they have prioritized allaying many of the
48. For an overview of the development of these terms, see: http://www.arl.org/sparc/publications/articles/gratisandlibre.shtml
49. Stevan Harnad, Open Access: Gratis and Libre, Open Access Archivangelism, Thursday, May 3, 2012.
anxieties with regard to open access publications – including concerns regarding their
quality, stability, authority, sustainability and status with regard to publishers’
copyright licenses and agreements – that have been generated as a result of the
transition toward the digital mode of reproduction and distribution. More often than
not, such alliances have endeavoured to do so by replicating in an online context
many of the scholarly practices associated with the world of print-on-paper
publishing. Witness the way in which the majority of open access book publishers
continue to employ more or less the same quality control procedures, preservation
structures and textual forms as their print counterparts: pre-publication peer review
conducted by scholars who have already established their reputations in the paper
world; preservation carried out by academic libraries; monographs consisting of
numbered pages and chapters arranged in a linear, sequential order and narrative, and
so on. As Sigi Jöttkandt puts it with regard to the strategy of Open Humanities Press
in this respect:
We’re intending OHP as a tangible demonstration to our still generally
sceptical colleagues in the humanities that there is no reason why OA
publishing cannot have the same professional standards as print. We aim to
show that OA is not only academically credible but is in fact being actively
advanced by leading figures in our fields, as evidenced by our editorial
advisory board. Our hope is that OHP will contribute to OA rapidly becoming
standard practice for scholarly publishing in the humanities. 50
50. Sigi Jöttkandt, 'No-fee OA Journals in the Humanities, Three Case Studies: A Presentation by Open Humanities Press', presented at the Berlin 5 Open Access Conference: From Practice to Impact: Consequences of Knowledge Dissemination, Padua, September 19, 2007: http://openhumanitiespress.org/Jottkandt-Berlin5.pdf
Relatively few open access publishers, however, have displayed much interest in
combining such an emphasis on achieving universal, free, online access to research
and/or the gaining of trust, with a rigorous critical exploration of the form of the book
itself. 51 And this despite the fact that the ability to re-use material is actually an
essential feature of what has become known as the Budapest-Bethesda-Berlin (BBB)
definition of open access, which is one of the major agreements underlying the
movement. 52 It therefore seems significant that, of the books presently available open
access, only a minority have a license where price and permission barriers to research
are removed, with the result that the research is available under both Gratis and Libre
(re-use) conditions. 53
REIMAGINING THE BOOK, OR RADICAL OPEN ACCESS
Admittedly, there are many in the open access community who regard the more
radical experiments conducted with and on books as highly detrimental to the
strategies of large-scale accessibility and trust respectively. From this perspective,
efforts designed to make open access material available for others to (re)use, copy,
51. Open Humanities Press (http://openhumanitiespress.org/) and Media Commons Press (http://mediacommons.futureofthebook.org/mcpress/) remain the most notable exceptions on the formal side of the publishing scale, the majority of experiments with the form of the book taking place in the informal sphere (e.g. blogbooks self-published by Anthologize, and crowd-sourced, ‘sprint’ generated books such as Dan Cohen and Tom Scheinfeldt’s Hacking the Academy: http://hackingtheacademy.org/).
52. See Peter Suber on the BBB definition here: http://www.earlham.edu/~peters/fos/newsletter/09-02-04.htm, where he also states that two of the three BBB component definitions (the Bethesda and Berlin statements) require removing barriers to derivative works.
53. An examination of the licenses used on two of the largest open access book publishing platforms or directories to date, the OAPEN (Open Access Publishing in Academic Networks) platform and the DOAB (Directory of Open Access Books), reveals that on the OAPEN platform (accessed May 6th 2012) 2 of the 966 books are licensed with a CC-BY license, and 153 with a CC-BY-NC license (which still restricts commercial re-use). On the DOAB (accessed May 6th 2012) 5 of the 778 books are licensed with a CC-BY license, 215 with CC-BY-NC.
reproduce and distribute in any medium, as well as make and distribute derivative
works, coupled with experiments with the form of the book, are seen as being very
much secondary objectives (and even by some as unnecessarily complicating and
diluting open access’s primary goal of making all of the research accessible online
without a paywall). 54 And, indeed, although in many of the more formal open access
definitions (including the important Bethesda and Berlin definitions of open access,
which require removing barriers to derivative works), the right to re-use and re-appropriate a scholarly work is acknowledged and recommended, in both theory and
practice a difference between ‘author-side openness’ and ‘reader-side openness’ tends
to be upheld—leaving not much space for the ‘readerly interventions’ that were so
important in opening up the kind of possibilities for ‘reading against the grain’ that
the artist’s book promoted, something we feel (open access) scholarly works should
also strive to encourage and support. 55 This is especially the case with regard to the
publication of books, where a more conservative vision frequently holds sway. For
instance, it is intriguing that in an era in which online texts are generally connected to
a network of other information, data and mobile media environments, the open access
book should for the most part still find itself presented as having definite limits and a
clear, distinct materiality.
But if the ability to re-use material is an essential feature of open access – as, let us
repeat, it is according to the Budapest-Bethesda-Berlin and many other influential
definitions of the term – then is working toward making all of the research accessible
54. See, for example, Stevan Harnad, Open Access: Gratis and Libre, Open Access Archivangelism, Thursday, May 3, 2012.
55. For more on author-side and reader-side openness respectively, see Peter Suber, SPARC OA newsletter: http://www.earlham.edu/~peters/fos/newsletter/03-02-11.htm
online on a Gratis basis and/or gaining the trust of the academic community the best
way for the open access movement (including open access book publishing) to
proceed, always and everywhere? If we do indeed wait until we have gained a critical
mass of open access content before taking advantage of the chance the shift from
analogue to digital creates, might it not by then be too late? Does this shift not offer
us the opportunity, through its loosening of much of the stability, authority, and
‘fixity’ of texts, to rethink scholarly publishing, and in the process raise the kind of
fundamental questions for our ideas of authorship, authority, legitimacy, originality,
permanence, copyright, and with them the text and the book, that we really should
have been raising all along? If we miss this opportunity, might we not find ourselves
in a similar situation to that many book artists and publishers have been in since the
1970s, namely, that of merely reiterating and reinforcing established structures and
practices?
Granted, following a Libre open access strategy may on occasion risk coming into
conflict with those more commonly accepted and approved open access strategies (i.e.
those concerned with achieving accessibility and the gaining of trust on a large-scale).
Nevertheless, should open access advocates on occasion not be more open to adopting
and promoting forms of open access that are designed to make material available for
others to (re)use, copy, reproduce, distribute, transmit, translate, modify, remix and
build upon? In particular, should they not be more open to doing so right here, right
now, before things begin to settle down and solidify again and we arrive at a situation
where we have succeeded merely in pushing the movement even further toward rather
weak, watered-down and commercial versions of open access?
CONCLUSION
We began by looking at how, in an art world context, the idea and form of the book
have been used to engage critically many of the established cultural institutions, along
with some of the underlying philosophies that inform them. Of particular interest in
this respect is the way in which, with the rise of offset printing and cheaper
production methods and printing techniques in the 1960s, there was a corresponding
increase in access to the means of production and distribution of books. This in turn
led to the emergence of new possibilities and roles that the book could be put to in an
art context, which included democratizing art and critiquing the status quo of the
gallery system. But these changes to the materiality and distribution of the codex
book in particular – as an artistic product as well as a medium – were integrally linked
with questions concerning the nature of both art and the book as such. Book artists
and theorists thus became more and more engaged in the conceptual and practical
exploration of the materiality of the book. In the end, however, the promise of
technological innovation which underpinned the changes with respect to the
production and distribution of artists’ books in the 1960s and 1970s was not enough
to generate any kind of sustainable (albeit repeatedly reviewed, refashioned and
renewed) challenge within the art world over the longer term.
The artist’s book of the 1960s and 1970s therefore clearly had the potential to bring
about a degree of transformation, yet it was unable to elude the cultural practices,
institutions and the market mechanisms that enveloped it for long (including those
developments in financialisation and the art market Solomon-Godeau connects to the
shift to Reaganomics). Consequently, instead of criticising or subverting the
established systems of publication and distribution, the artist’s book ended up being
largely integrated into them. 56 Throughout the course of this article we have argued
that its conceptual and material promise notwithstanding, there is a danger of
something similar happening to open access publishing today. Take the way open
access has increasingly come to be adopted by commercial publishers. If one of the
motivating factors behind at least some aspects of the open access movement – not
just the aforementioned open access book publishers in the HSS, but the likes of
PLoS, too – has been to stand up against, and even offer an alternative to, the large,
profit-led firms that have come to dominate the field of academic publishing, recent
years have seen many such commercial publishers experimenting with open access
themselves, even if such experiments have so far been confined largely to journals.57
Most commonly, this situation has resulted in the trialling of ‘author-side’ fees for the
open access publishing of journals, a strategy seen as protecting the interests of the
established publishers, and one which has recently found support in the Finch Report
from a group of representatives of the research, library and publishing communities
convened by David Willetts, the UK Science Minister. 58 But the idea that open access
56. That said, there is currently something of a revival of print, craft and artist's book publishing taking place in which the paperbound book is being re-imagined in offline environments. In this post-digital print culture, paper publishing is being used as a new form of avant-garde social networking that, thanks to its analog nature, is not so easily controlled by the digital data-gathering commercial hegemonies of Google, Amazon, Facebook et al. For more, see Alessandro Ludovico, Post-Digital Print - the Mutation of Publishing Since 1984, Onomatopee, 2012; and Florian Cramer, 'Post-Digital Writing', Electronic Book Review, December, 2012: http://electronicbookreview.com/thread/electropoetics/postal.
57. For more details, see Wilhelm Peekhaus, ‘The Enclosure and Alienation of Academic Publishing: Lessons for the Professoriate’, tripleC, 10(2), 2012: http://www.triple-c.at/index.php/tripleC/article/view/395
58. ‘Accessibility, Sustainability, Excellence: How to Expand Access to Research Publications, Report of the Working Group on Expanding Access to Published Research Findings’, June 18, 2012: http://www.researchinfonet.org/wp-content/uploads/2012/06/Finch-Group-report-FINAL-VERSION.pdf. For one overview of some of the problems that can be identified from an HSS perspective in the policy direction adopted by Finch and Willetts, see Lucinda Matthews-Jones, ‘Open Access and the Future of Academic Journals’, Journal of Victorian Culture Online, November 21, 2012: http://myblogs.informa.com/jvc/2012/11/21/open-access-and-the-future-of-academic-journals/
may represent a commercially viable publishing model has attracted a large amount of
so-called predatory publishers, too, 59 who (like Finch and Willetts) have propagated a
number of misleading and often quite mistaken accounts of open access. 60 The
question is thus raised as to whether the desire to offer a counter-institutional
alternative to the large, established, commercial firms is likely to become somewhat
marginalised and neutralised as a result of open access publishing being seen more
and more by such commercial publishers as just another means of generating a profit.
Will the economic as well as material practices transferred from the printing press
continue to inform and shape our communication systems? As Nick Knouf argues, to
raise this question, ‘is not to damn open access publishing by any means; rather, it is
to say that open access publishing, without a concurrent interrogation of the economic
underpinnings of the scholarly communication system, will only reform the situation
rather than provide a radical alternative.’ 61
With this idea of providing a radical challenge to the current scholarly communication
system in mind, and drawing once again on the brief history of artists’ books as
presented above, might it not be helpful to think of open access less as a project and
model to be implemented, and more as a process of continuous struggle and critical
resistance? Here an analogy can be drawn with the idea of democracy as a process. In
‘Historical Dilemmas of Democracy and Their Contemporary Relevance for
Citizenship’, the political philosopher Étienne Balibar develops an interesting analysis
of democracy based on a concept of the ‘democratisation of democracy’ he derives
59. For a list of predatory OA publishers see: http://scholarlyoa.com/publishers/. This list has increased from 23 predatory publishers in 2011, to 225 in 2012.
60. See the reference to the research of Peter Murray Rust in Sigi Jöttkandt, ‘No-fee OA Journals in the Humanities’.
61. Nicholas Knouf, ‘The JJPS Extension: Presenting Academic Performance Information’, Journal of Journal Performance Studies, 1 (2010).
from a reading of Hannah Arendt and Jacques Rancière. For Balibar, the problem
with much of the discourse surrounding democracy is that it perceives the latter as a
model that can be implemented in different contexts (in China or the Middle East, for
instance). He sees discourses of this kind as running two risks in particular. First of
all, in conceptualizing democracy as a model there is a danger of it becoming a
homogenizing force, masking differences and inequalities. Second, when positioned
as a model or a project, democracy also runs the risk of becoming a dominating force
– yet another political regime that takes control and power. According to Balibar, a
more interesting and radical notion of democracy involves focusing on the process of
the democratisation of democracy itself, thus turning democracy into a form of
continuous struggle (or struggles) – or, perhaps better, continuous critical self-reflection. Democracy here is not an established reality, then, nor is it a mere ideal; it
is rather a permanent struggle for democratisation. 62
Can open access be understood in similar terms: less as a homogeneous project
striving to become a dominating model or force, and more as an ongoing critical
struggle, or series of struggles? And can we perhaps attribute what some perceive as the failure of artists’ books to contribute significantly to such a critical struggle after the 1970s to the fact that ultimately they became (incorporated in) dominant institutional
settings themselves – a state of affairs brought about in part by their inability to
address issues of access, experimentation and self-reflexivity in an ongoing critical
manner?
62. Etienne Balibar, ‘Historical Dilemmas of Democracy and Their Contemporary Relevance for Citizenship’, Rethinking Marxism, 20 (2008).
Certainly, one of the advantages of conceptualizing open access as a process of
struggle rather than as a model to be implemented would be that doing so would
create more space for radically different, conflicting, even incommensurable positions
within the larger movement, including those that are concerned with experimenting
critically with the form of the book and the way our system of scholarly
communication currently operates. As we have shown, such radical differences are
often played down in the interests of strategy. To be sure, open access can experience
what Richard Poynder refers to as ‘bad tempered wrangles’ over relatively ‘minor
issues’ such as ‘metadata, copyright, and distributed versus central archives’. 63 Still,
much of the emphasis has been on the importance of trying to maintain a more or less
unified front (within certain limits, of course) in the face of criticisms from
publishers, governments, lobbyists and so forth, lest its opponents be provided with
further ammunition with which to attack the open access movement, and dilute or
misinterpret its message, or otherwise distract advocates from what they are all
supposed to agree are the main tasks at hand (e.g. achieving universal, free, online
access to research and/or the gaining of trust). Yet it is important not to see the
presence of such differences and conflicts within the open access movement in purely
negative terms – the way they are often perceived by those working in the liberal
tradition, with its ‘rationalist belief in the availability of a universal consensus based
on reason’. 64 (This emphasis on the ‘universal’ is also apparent in fantasies of having
not just universal open access, but one single, fully integrated and indexed global
archive.) In fact if, as we have seen, one of the impulses behind open access is to
make knowledge and research – and with it society – more open and democratic, it
63 Richard Poynder, ‘Time to Walk the Walk’, Open and Shut?, 17 March 2005: http://poynder.blogspot.com/2005/03/time-to-walk-talk.html.
64 Chantal Mouffe, On the Political, London, Routledge, 2005, p11.
can be argued that the existence of such dissensus will help achieve this ambition.
After all, and as we know from another political philosopher, Chantal Mouffe, far
from placing democracy at risk, a certain degree of conflict and antagonism actually
constitutes the very possibility of democracy. 65 It seems to us that such a critical, self-reflexive, processual, non-goal-oriented way of thinking about academic publishing
shares much with the mode of working of the artist - which is why we have argued
that open access today can draw productively on the kind of conceptual openness and
political energy that characterised experimentation with the medium of the book in
the art world of the 1960s and 1970s.
65 Mouffe, On the Political, p30.
Barok
Communing Texts
2014
Communing Texts
_A talk given on the second day of the conference_ [Off the
Press](http://digitalpublishingtoolkit.org/22-23-may-2014/program/) _held at
WORM, Rotterdam, on May 23, 2014. Also available in [PDF](/images/2/28/Barok_2014_Communing_Texts.pdf "Barok 2014 Communing Texts.pdf")._
I am going to talk about publishing in the humanities, including scanning
culture, and its unrealised potentials online. For this I will treat the
internet not only as a platform for storage and distribution but also as a
medium with its own specific means for reading and writing, and consider the
relevance of plain text and its various rendering formats, such as HTML, XML,
markdown, wikitext and TeX.
One of the main reasons why books today are downloaded and bookmarked but hardly read is that they may contain something relevant, yet they begin at the beginning and end at the end; or at least we are used to treating them in this way. E-book readers and browsers are equipped with full-text search functionality, but the search for "how does the internet change the way we read" yields nothing more interesting than a diversion of attention, even though there are dozens of books written on this issue. When one is insistent, one easily ends up with a folder of dozens of further books, stuck with the question of how to read them. There is a plethora of books online, yet it is indeed mostly machines that are reading them.
It is surely tempting to celebrate or to despise the age of artificial intelligence, flat ontology and the narrowing of the differences between humans and machines, and to write books as if only for machines, or to return to the analogue; but we may as well look back and reconsider the beauty of the simple linear reading of the age of print, not out of nostalgia but for what we can learn from it.
This perspective implies treating texts in their context, and particularly in the way they commute, how they are brought into relation with one another, into a community, by the mere act of writing, through a technique that has developed over time into what we have come to call _referencing_. While in the early days referring to a text was practised simply as a verbal description of the referred writing, over millennia it evolved into a technique with standardised practices and styles, and accordingly it gained _precision_. This precision is, however, nothing machinic, since referring to particular passages in other texts instead of to texts as wholes is an act of comradeship: it spares the reader time when locating the passage. It also makes apparent that it is through contexts that the web of printed books has been woven. But even though referencing in its precision has been meant to be very concrete, the advent of the web in particular made apparent that it is instead _virtual_. And, for the reader, laborious to follow. The web has shown and taught us that a reference from one document to another can be plastic. To follow a reference from a printed book the reader has to stand up, walk down the street to a library, pick up the referred volume, flip through its pages until the referred one is found and then follow the text until the passage most probably implied is identified, while on the web the reader, _ideally_, merely moves her finger a few millimetres. To click or tap; the difference between the long way and the short way is obviously the hyperlink. Of course, in the absence of the short way, even scholars follow a reference the long way only as an exception: an unwritten rule was established to write for readers who are already familiar with the literature in the respective field (which in turn reproduces the disciplinarity of reader and writer), while in the case of unfamiliarity with a referred passage the reader infers its content by interpreting the writer's interpretation of it. The beauty of reading across references was never fully realised. But now our question is: can we be so certain that this practice is still necessary today?
The web silently brought about a way to _implement_ the plasticity of this pointing, although it has not been realised as the legacy of referencing as we know it from print. Today, when linking a text while having a particular passage in mind, and even describing it in detail, the majority of links physically point merely to the beginning of the text. Hyperlinks link documents as wholes by default, and the use of anchors within texts has hardly been thought of as a _requirement_ for enabling precise linking.
If we look at popular online journalism and its use of hyperlinks within the body of the text, we may note that rarely can anyone afford to read all the linked articles, to say nothing of reports hundreds of pages long and the like; and if something is wrong, it will supposedly get corrected via the comments anyway. On the internet, the writer is meant to be in more immediate feedback with the reader. But readers are not always keen to comment, and they are not always allowed to. We may easily be driven to forget that quoting half of a sentence is never quoting a full sentence, and that if there were to be an entire quote, its source text would need to be quoted in its whole length. Think of the quote _information wants to be free_, which is rarely quoted with its wider context taken into account. Even factoids and numbers can be carbon-quoted, but if taken out of context their meaning can be shaped significantly. The reason for the aversion to following a reference may well be that we are usually pointed to begin reading another text from its beginning.
Yet this is exactly where the practices of linking, as on the web, and of referencing, as in scholarly work, may benefit from one another. The question is _how_ to bring them closer together.
An approach I am going to propose requires a conceptual leap to something we
have not been taught.
For centuries, the primary format of the text has been the page: a vessel, a medium, a frame containing text embedded between more or less explicit horizontal and vertical borders. Even before the materials of the page, such as papyrus and paper, appeared, the text was already contained in lines and columns, a structure which we have learnt to perceive as a grid. The idea of the grid allows us to view text as being structured in lines and pages, which are in turn at hand if something is to be referred to. Pages are counted as the distance from the beginning of the book, and lines as the distance from the beginning of the page. This is not surprising, because it is in accord with the inherent quality of the material medium -- a sheet of paper has a shape which in turn shapes the body of a text. This tradition goes back as far as antiquity and the bookroll, in which we indeed find textual grids.
[Figure: Papyrus of Plato's _Phaedrus_]
A crucial difference between print and digital is that text files -- whether HTML documents, markdown documents or database-driven texts -- did not inherit this quality. Their containers are simply not structured into pages, precisely because of the nature of their materiality as media. Files are written on memory drives in scattered chunks, beginning at point A and ending at point B of a drive, continuing from C until D, and so on. Where each of these chunks starts is ultimately independent of what it contains.
Forensic archaeologists would confirm that when a portion of a text survives, in the case of ASCII documents it is not a page here and a page there, or the first half of the book, but textual blocks from completely arbitrary places in the document.
This may sound unrelated to how we, humans, structure our writing in HTML documents, emails, Office documents, even computer code, but it is a reminder that we structure them for habitual (interfaces are rectangular) and cultural (human readability) reasons rather than out of a technical necessity stemming from the material properties of the medium. This distinction is apparent, for example, in HTML, XML, wikitext and TeX documents, whose content is both stored on the physical drive and treated, when rendered for reading interfaces, as a single flow of text; and the same goes for other texts when treated with the automatic line-break setting turned off. Line-breaks, spaces and everything else are merely numbers corresponding to symbols in a character set.
So how to address a section in this kind of document? An option offers itself -- the way computers do it, or rather the way we made them do it -- as the position of the beginning of the section in the array, in one long line. It would mean treating the text document not in its grid-like format but as a line, which merely adapts to the properties of its display when rendered. This is nicely implied in the animated logo of this event, and we know it from EPUBs, for example.
In the case of 'reference-linking' we can refer to a passage by including information about its beginning and its length, determined by character position within the text (in analogy to the _pp._ operator used for printed publications), as well as information about the version of the text (in printed texts served by the edition and date of publication). So what in printed text is given by the page information is here replaced by the character position range and the version. Such a reference-link is more precise, since it addresses a particular section of a particular version of a document regardless of how it is rendered on an interface.
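A minimal sketch, not part of the talk, of how such a reference-link might be resolved in practice follows. It assumes a reference consisting of a start position, a length and a version identifier, and it assumes the version is a SHA-256 hash of the plain text; any real implementation would first have to settle these conventions.

```python
# Hypothetical resolver for a character-position reference-link.
# Assumptions: the reference carries (start, length, version), and the
# version is a SHA-256 hash of the plain text of the document.
import hashlib

def resolve_reference(text, start, length, version):
    """Return the referred passage, or fail if the text version differs."""
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if digest != version:
        raise ValueError("document version differs from the referenced one")
    return text[start:start + length]

with open("document.txt", encoding="utf-8") as f:   # hypothetical file
    document = f.read()

version = hashlib.sha256(document.encode("utf-8")).hexdigest()
print(resolve_reference(document, start=120, length=60, version=version))
```

Such a link could then be serialised, for instance, as a URL fragment carrying the three values.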
It is a relatively simple idea and its implementation does not seem to be very hard, although I wonder why it has not been implemented already. I discussed it with several people yesterday, only to find out that there have indeed already been attempts in this direction. Adam Hyde pointed me to a proposal for _fuzzy anchors_ presented on the blog of the Hypothes.is initiative last year, which, in order to overcome the need for versioning, employs diff algorithms to locate the referred section, although it is too complicated to be explained in this setting.[1] Aaaarg has recently implemented in its PDF reader an option to generate URLs for a particular point in a scanned document, which is itself a great improvement, although it treats texts as images, thus being specific to a particular scan of a book, and the generated links are not public URLs.
Using the character position in references requires an agreement on how to count. There are at least two options. One is to include all of the source code in the positioning, which means measuring the distance from an anchor such as the beginning of the text, the beginning of the chapter, or the beginning of the paragraph. The second option is to make a distinction between operators and operands, and to count only in operands. Here there are further options as to where to draw the line between them. We can consider as operands only characters with phonetic properties -- letters, numbers and symbols -- stripping the text of the operators that are there to shape the sonic and visual rendering of the text, such as whitespace, commas, periods, HTML, markdown and other tags, so that we are left with the body of the text to count in. This would mean rendering operators unreferrable and counting as in _scriptio continua_.
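As a rough illustration (the choice of what counts as an operator here, namely markup tags, whitespace and punctuation, is my own simplification rather than a rule given in the talk), counting in operands only could look like this:

```python
# Counting character positions in operands only, i.e. as in scriptio continua.
# Operators are simplified here to markup tags, whitespace and punctuation.
import re

def to_operands(text):
    """Strip markup tags, then keep only letters and digits."""
    without_tags = re.sub(r"<[^>]+>", "", text)
    return "".join(ch for ch in without_tags if ch.isalnum())

def operand_position(text, index):
    """Operand-counted position of the character at `index` in the raw text."""
    return len(to_operands(text[:index]))

sample = "<p>Word and paragraph separators were reintroduced much later.</p>"
print(to_operands(sample)[:30])      # 'Wordandparagraphseparatorswere'
print(operand_position(sample, 11))  # 7: tags and spaces are not counted
```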
_Scriptio continua_ is a very old example of the linear, one-dimensional treatment of the text. Let's look again at the bookroll with Plato's writing. Even though it is 'designed' into grids, on a closer look it reveals the lack of any other structural elements -- there are no spaces, commas, periods or line-breaks; the text is merely one flow, one long line.
_Phaedrus_ was written in the fourth century BC (this copy comes from the second century AD). Word and paragraph separators were reintroduced much later, between the second and sixth centuries AD, when rolls were gradually transcribed into codices that were bound as pages and numbered (a dramatic change in publishing comparable to the digital changes of today).[2]
'Reference-linking' has not been prominent in discussions about sharing books online, and I only came to realise its significance during my preparations for this event. There is a tremendous amount of very old, recent and new texts online, but we haven't done much to open them up to contextual reading. In this, publishers of all 'grounds' are in it together.
We are equipped to treat the internet not only as a repository and a library but also to take into account its potentials for reading that have been hiding in front of our very eyes: to expand the notion of the hyperlink by taking into account techniques of referencing, and to expand the notion of referencing by realising the plasticity it has always been imagined to have. To mesh texts with public URLs is to enable the entanglement of referencing and hyperlinks. Here, open access gains further relevance and importance.
Dušan Barok
_Written May 21-23, 2014, in Vienna and Rotterdam. Revised May 28, 2014._
Notes
1. ↑ Proposals for paragraph-based hyperlinking can be traced back to the work of Douglas Engelbart, and today there are a number of related ideas, some of which have been implemented on a small scale: fuzzy anchoring, 1(http://hypothes.is/blog/fuzzy-anchoring/); purple numbers, 2(http://project.cim3.net/wiki/PMWX_White_Paper_2008); robust anchors, 3(http://github.com/hypothesis/h/wiki/robust-anchors); _Emphasis_, 4(http://open.blogs.nytimes.com/2011/01/11/emphasis-update-and-source); and others 5(http://en.wikipedia.org/wiki/Fragment_identifier#Proposals). The dependence on structural elements such as paragraphs is one of their shortcomings, making them unsuitable for texts with longer paragraphs (e.g. Adorno's _Aesthetic Theory_), visual poetry or computer code; another is the requirement to store anchors along with the text.
2. ↑ Works which happened not to be of interest at the time ceased to be copied and mostly disappeared. On the book roll and its gradual replacement by the codex see William A. Johnson, "The Ancient Book", in _The Oxford Handbook of Papyrology_ , ed. Roger S. Bagnall, Oxford, 2009, pp 256-281, 6(http://google.com/books?id=6GRcLuc124oC&pg=PA256).
Addendum (June 9)
Arie Altena wrote a [report from the
panel](http://digitalpublishingtoolkit.org/2014/05/off-the-press-report-day-
ii/) published on the website of Digital Publishing Toolkit initiative,
followed by another [summary of the
talk](http://digitalpublishingtoolkit.org/2014/05/dusan-barok-digital-imprint-
the-motion-of-publishing/) by Irina Enache.
The online repository Aaaaarg [has
introduced](http://twitter.com/aaaarg/status/474717492808413184) the
reference-link function in its document viewer, see [an
example](http://aaaaarg.fail/ref/60090008362c07ed5a312cda7d26ecb8#0.102).
_An unedited version of a talk given at the conference[Public
Library](http://www.wkv-stuttgart.de/en/program/2014/events/public-library/)
held at Württembergischer Kunstverein Stuttgart, 1 November 2014._
_Bracketed sequences are to be reformulated._
Poetics of Research
In this talk I'm going to attempt to identify [particular] cultural algorithms, ie. processes in which cultural practices and software meet. With them a sphere is implied in which algorithms gather to form bodies of practices and in which cultures gather around algorithms. I'm going to approach them through the perspective of my practice as a cultural worker, editor and artist, considering practice to be of the same rank as theory and poetics, and assuming that the theorization of practice can also lead to the identification of poetical devices.
The primary motivation for this talk is an attempt to figure out where we stand as operators, users [and communities] gathering around infrastructures containing a massive body of text (among other things), and what sort of things might be considered to make a difference [or to keep making a difference].
The talk mainly [considers] the role of text and the word in research, by way
of several figures.
A
A reference, list, scheme, table, index; those things that intervene in the flow of the narrative, illustrating the point, perhaps in a more economical way than the linear text would do. Yet they don't function as pictures; they are primarily texts, arranged in figures. Their forms have been standardised[normalised] over centuries, withstood the transition to the digital without any significant change, and are completely intuitive to the modern reader. Compared to the body of the text they are secondary, running parallel to it. Their function is, however, different from that of punctuation. They are there neither to shape the narrative nor to aid in structuring the argument into logical blocks. Nor is their function spatial, as in visual poems. Their positions within a document are determined according to the sequential order of the text, [standing as attachments], and they are there to clarify the nature of relations among elements of the subject-matter, or to establish relations with other documents. The [premise] of my talk is that these _textual figures_ also came to serve as the abstract[relational] models determining possible relations among documents as such, and in consequence [to structure conditions [of research]].
B
It can be said that research, as inquiry into a subject-matter, consists of discrete queries. A query, such as a question about what something is, what kinds, parts and properties it has, and so on, can be consulted in existing documents, or can generate new documents based on the collection of data [in] the field and through experiment, before proceeding to reasoning [arguments and deductions]. The formulation of a query is determined by the protocols providing access to documents, which means that there is a difference between collecting data outside the archive (the undocumented, ie. in the field and through experiment), consulting with a person--an archivist (expert, librarian, documentalist)--and consulting with a database storing documents. Phenomena such as the [deepening] of specialization and thoroughgoing digitization [have given] privilege to the database as [a|the] [fundamental] means for research. Obviously, this is a very recent [phenomenon]. Queries were once formulated in natural language; now, given the fact that databases are queried [using] the SQL language, their interfaces are mere extensions of it, and researchers pose their questions by manipulating dropdowns, checkboxes and input boxes mashed together on a flat screen run by software that in turn translates them into a long line of conditioned _SELECTs_ and _JOINs_ performed on tables of data.
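To make that last point concrete, here is a hypothetical sketch (the table and column names are invented, and no particular database is assumed) of how the dropdowns and input boxes of a search interface end up as a conditioned SELECT with a JOIN:

```python
# A hypothetical translation of search-form values into SQL.
# The schema (documents, authors, discipline, year) is invented for illustration.
def build_query(filters):
    sql = ("SELECT documents.title, authors.name "
           "FROM documents JOIN authors ON documents.author_id = authors.id")
    conditions, params = [], []
    if "discipline" in filters:              # e.g. a dropdown
        conditions.append("documents.discipline = ?")
        params.append(filters["discipline"])
    if "year" in filters:                    # e.g. an input box
        conditions.append("documents.year = ?")
        params.append(filters["year"])
    if conditions:
        sql += " WHERE " + " AND ".join(conditions)
    return sql, params

print(build_query({"discipline": "media theory", "year": 1958}))
```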
Specialization, digitization and networking have changed the language of questioning. Inquiry, once attached to flesh and paper, has been [entrusted] to the digital and networked. Researchers are querying the black box.
C
Searching in a collection of [amassed/assembled] [tangible] documents (ie. a bookshelf) is different from searching in a systematically structured repository (a library), and even more so from searching in a digital repository (a digital library). Not that they are mutually exclusive. One can devise structures and algorithms to search through a printed text, or read books in a library one by one. They are rather [models] [embodying] various [processes] associated with the query. These properties of the query might be called [the sequence], the structure and the index. If they are present in the ways of querying documents -- and we will return to this issue -- are they persistent within the inquiry as such? [wait]
D
This question itself is a rupture in the sequence. It makes a demand to depart from one narrative [a continuous flow of words] to another, to figure out, while remaining bound to it [it would be even more so with a so-called rhetorical question]. So there has been one sequence, or line, of the inquiry--about the kinds of the query and its properties. That sequence is itself a digression from within the sequence about what research is and the description of its parts (queries). We are thus returning to it and continuing with the question of whether the properties of the inquiry are the same as the properties of the query.
E
But isn't it true that every single utterance occurring in a sequence yields a query as well? Let's consider the word _utterance_. [wait] It can produce a number of associations, for example with how Foucault employs the notion of _énoncé_ in his _Archaeology of Knowledge_, giving a hard time to his English translators wondering whether _utterance_ or _statement_ is more appropriate, or whether they are interchangeable, and what impact each choice would have on his reception in the Anglophone world. Limiting ourselves to textual forms for now (and not translating his work but pursuing a different inquiry), let us say the utterance is a word [or a phrase or an idiom] in a sequence such as a sentence, a paragraph, or a document.
## (F) The structure
This distinction is as old as recorded Western thought, since both Plato and Aristotle differentiate between a word on its own ("the said", a thing said) and words in the company of other words. For example, Aristotle's _Categories_ [rests] on the [notion] of words on their own, and they are made the subject-matter of that inquiry. [For him], the ambiguity of connotation that words [produce] lies in their synonymity, understood differently from the moderns -- not as more words denoting a similar thing but rather as one word denoting various things. Categories were outlined as a device to differentiate among words according to the kinds of these things. Every word as such belonged to not less and not more than one of ten categories.
So it happens with the word _utterance_, as with any other word uttered in a sequence, that it poses a question, a query about which share of the spectrum of possibly denoted things might turn out to be the most appropriate in a given context. The more context, the more precise the share that comes to the fore. When taken out of context, ambiguity prevails as the spectrum unveils itself in its variety. Thus single words [as any other utterances] are questions, queries, in themselves, and by occurring in statements, in context, their [meanings] are singled out.
This process is _conditioned_ by what has been formalized as the techniques of
_regulating_ definitions of words.
### (G) The structure: words as words
P.Oxy.XX 2260 i: Oxyrhynchus papyrus XX, 2260, column i, with quotation from
Philitas, early 2nd c. CE. 1(http://163.1.169.40/cgi-
bin/library?e=q-000-00---0POxy--00-0-0--0prompt-10---4------0-1l--1-en-50---
20-about-2260--
00031-001-0-0utfZz-8-00&a=d&c=POxy&cl=search&d=HASH13af60895d5e9b50907367) 2(http://en.wikipedia.org/wiki/File:POxy.XX.2260.i-Philitas-
highlight.jpeg)
Ephraim Chambers, _Cyclopaedia, or an Universal Dictionary of Arts and
Sciences_ , 1728, p. 210. 3(http://digicoll.library.wisc.edu/cgi-
bin/HistSciTech/HistSciTech-
idx?type=turn&entity=HistSciTech.Cyclopaedia01.p0576&id=HistSciTech.Cyclopaedia01&isize=L)
Detail from the Liddell-Scott Greek-English Lexicon, c1843.
Dictionaries have had a long life. The ancient Greek scholar and poet Philitas
of Cos living in the 4th c. BCE wrote a vocabulary explaining the meanings of
rare Homeric and other literary words, words from local dialects, and
technical terms. The vocabulary, called _Disorderly Words_ (Átaktoi glôssai),
has been lost, with a few fragments quoted by later authors. One example is
that the word πέλλα (pélla) meant "wine cup" in the ancient Greek region of
Boeotia; contrasted to the same word meaning "milk pail" in Homer's _Iliad_.
Not much has changed in the way dictionaries constitute order. Selected archives of statements are queried to yield occurrences of particular words, various _criteria[indicators]_ are applied to filtering and sorting them, and in turn the spectrum of [denoted] things allocated in this way is structured into groups and subgroups, which are then given, according to another set of rules, shorter or longer names. These constitute the facets of [potential] meanings of a word.
So there are at least _four_ sets of conditions [structuring] dictionaries. One is required to delimit an archive[corpus of texts], one to select and give preference[weights] to occurrences of a word, another to cluster them, and yet another to abstract[generalize] the subject-matter of each of these clusters. Needless to say, this is a craft of the few, and these criteria are rarely disclosed, despite their impact on research and, more generally, their influence as conditions for the production[making] of so-called _common sense_. It doesn't take that much to reimagine what a dictionary is and what it could be, especially having large specialized corpora of texts at hand. These can also serve as aids in the production of new words and new meanings.
### (H) The structure: words as knowledge and the world
Boethius's rendering of a classification tree described in Porphyry's Isagoge
(3rd c.), [6th c.] 10th c. 4(http://www.e-codices.unifr.ch/en/sbe/0315/53/medium)
Ephraim Chambers, _Cyclopaedia, or an Universal Dictionary of Arts and
Sciences_ , London, 1728, p. II. 5(http://digicoll.library.wisc.edu/cgi-
bin/HistSciTech/HistSciTech-
idx?type=turn&entity=HistSciTech.Cyclopaedia01.p0015&id=HistSciTech.Cyclopaedia01&isize=L)
Système figuré des connaissances humaines, _Encyclopédie ou Dictionnaire
raisonné des sciences, des arts et des métiers_ , 1751. 6(http://encyclopedie.uchicago.edu/content/syst%C3%A8me-figur%C3%A9-des-
connaissances-humaines)
Another _formalized_ and [internalized] process at play when figuring out a word is its [containment]. A word is structured not only by way of the things it potentially denotes but also by the words it is potentially part of and those it contains.
The fuzz around the categorization of knowledge _and_ the world in Western thought can be traced back to Porphyry, if not further. In his introduction to Aristotle's _Categories_, this 3rd-century AD Neoplatonist began expanding the notions of genus and species into their hypothetical consequences. Aristotle's brief work outlines ten categories of 'things that are said' (legomena, λεγόμενα), namely substance (or substantive, {not the same as matter!}, οὐσία), quantity (ποσόν), qualification (ποιόν), a relation (πρός), where (ποῦ), when (πότε), being-in-a-position (κεῖσθαι), having (or state, condition, ἔχειν), doing (ποιεῖν), and being-affected (πάσχειν). In a different work, _Topics_, Aristotle outlines four kinds of subjects/materials indicated in propositions/problems from which arguments/deductions start. These are a definition (όρος), a genus (γένος), a property (ἴδιος), and an accident (συμβεβηϰόϛ). Porphyry does not explicitly refer to _Topics_, and says he omits speaking "about genera and species, as to whether they subsist (in the nature of things) or in mere conceptions only" 8(http://www.ccel.org/ccel/pearse/morefathers/files/porphyry_isagogue_02_translation.htm#C1), which means he avoids explicating whether he talks about kinds of concepts or kinds of things in the sensible world. However, the work sparked confusion, as the following passage [suggests]:
> "[I]n each category there are certain things most generic, and again, others
most special, and between the most generic and the most special, others which
are alike called both genera and species, but the most generic is that above
which there cannot be another superior genus, and the most special that below
which there cannot be another inferior species. Between the most generic and
the most special, there are others which are alike both genera and species,
referred, nevertheless, to different things, but what is stated may become
clear in one category. Substance indeed, is itself genus, under this is body,
under body animated body, under which is animal, under animal rational animal,
under which is man, under man Socrates, Plato, and men particularly." (Owen
1853, 9(http://www.ccel.org/ccel/pearse/morefathers/files/porphyry_isagogue_02_translation.htm#C2))
Porphyry took one of Aristotle's ten categories of the word, substance, and dissected it using one of his four rhetorical devices, genus. By employing Aristotle's categories, genera and species as means for logical operations, for dialectic, Porphyry's interpretation came to bear more resemblance to the perceived _structures_ of the world. So they began to bloom.
There were earlier examples, but Porphyry was the most influential in
injecting the _universalist_ version of classification [implying] the figure
of a tree into the [locus] of Aristotle's thought. Knowledge became
monotheistic.
Classification schemes [growing from one point] play a major role in untangling the format of the modern encyclopedia from that of the dictionary governed by the alphabet. Two of the most influential encyclopedias of the 18th century are cases in point. Although still keeping 'dictionary' in their titles, they are conceived not to represent words but knowledge. The [upper-most] genus of the body was set as the body of knowledge. The English _Cyclopaedia, or an Universal Dictionary of Arts and Sciences_ (1728) splits into two main branches: "natural and scientifical" and "artificial and technical"; these further split down into 47 classes in total, each carrying a structured list (on the following pages) of thematic articles, serving as a table of contents. The French _Encyclopedia: or a Systematic Dictionary of the Sciences, Arts, and Crafts_ (1751) [unwinds] from judgement ( _entendement_ ) and branches into memory as history, reason as philosophy, and imagination as poetry. The logic of containers was employed as an aid not only to deal with the enormous task of naming and not omitting anything from what is known, but also for the management of the labour of hundreds of writers and researchers, to create a mechanism for delegating work and distributing responsibilities. Flesh was also more present, in field research, with researchers attending workshops and sites of everyday life in order to annotate them.
The world came forward to outshine the word in other schemes. Darwin's tree of evolution and some of the modern document classification systems, such as Charles A. Cutter's _Expansive Classification_ (1882), set out to classify the world itself and set the field for what has come to be known as the authority lists structuring metadata in today's computing.
### The structure (summary)
Facetization of meaning and the branching of knowledge are both the domain of the unit of the utterance. While lexicographers[dictionarists] structure thought through multi-layered processes of abstraction of the written record, knowledge growers dissect it into hierarchies of [mutually] contained notions. One seeks to describe the word as a faceted list of small worlds, the other to describe the world as a structured list of words. One plays prime in the domain of epistemology, in what is known, controlling the vocabulary; the other in the domain of ontology, in what is, controlling reality. Every [word] has its given things, and every thing has its place, closer to or further from a single word.
The schism between classifying words and classifying the world implies that it is not possible to construct a universal classification scheme[system]. On top of that, any classification system of words is bound to the corpus of texts it operates upon, and any classification system of the world again operates with words which are bound to a vocabulary[lexicon] which is again bound to a corpus [of texts]. That doesn't mean it prevents people from trying. Classifications function as descriptors of, and 'inscriptors' upon, the world, imprinting their authority. They operate from [a locus of] their corpus[context]-specificity. The larger the corpus, the more power it has in shaping the world, as far as the word shapes it (yes, I do imply Google here, for which this is a domain to be potentially exploited).
## (J) The sequence
The structure-yielding query [of] the single word [narrows down, becomes more precise] with preceding and following words. Inquiry proceeds in a flow that establishes another kind[mode] of relationality, chaining words into a sequence. While the structuring property of the query sets words apart from one another, its sequential property establishes continuity and brings these units into an ordered set.
This is what is responsible for attaching the textual figures mentioned earlier (lists, schemes, tables) to the body of the text. Associations can also be stated explicitly, by indexing tables and then referring to them from a particular point in the text. The same goes for explicit associations made between blocks of the text by means of indexed paragraphs, chapters or pages. From this it follows that all utterances point to the following utterance by the nature of sequential order, and indexing provides the means for pointing elsewhere in the document as well.
A lot can be said about references to other texts. Here, to spare time, I
would refer you to a talk I gave a few months ago and which is online 10(http://monoskop.org/Talks/Communing_Texts).
This is still the realm of print. What happens with a document when it is digitized?
Digitization breaks a document into units, each of which is assigned a numbered position in the sequence of the document. From this perspective digitization can be viewed as a total indexation of the document. It is converted into units rendered for machine operations. Its sequentiality is made explicit by means of an underlying index.
Sequences and chains are orders of one dimension. Their one-dimensional
ordering allows addressability of each element and [random] access. [Jumps]
between [random] addresses are still sequential, processing elements one at a
time.
## (K) The index
Summa confessorum [1297-98], 1310. 7(http://www.bl.uk/onlinegallery/onlineex/illmanus/roymanucoll/j/011roy000008g11u00002000.html)
[The] sequencing not only weaves words into statements but also activates other temporalities, and _presents occurrences of words from past statements_. As now, when I say the word _utterance_, each time contexts surface in which I have used it earlier.
A long quote from Frederick G. Kilgour, _The Evolution of the Book_ , 1998, pp
76-77:
> "A century of invention of various types of indexes and reference tools
preceded the advent of the first subject index to a specific book, which
occurred in the last years of the thirteenth century. The first subject
indexes were "distinctions," collections of "various figurative or symbolic
meanings of a noun found in the scriptures" that "are the earliest of all
alphabetical tools aside from dictionaries." (Richard and Mary Rouse supply an
example: "Horse = Preacher. Job 39: 'Hast thou given the horse strength, or
encircled his neck with whinning?')
>
> [Concordance] By the end of the third decade of the thirteenth century Hugh
de Saint-Cher had produced the first word concordance. It was a simple word
index of the Bible, with every location of each word listed by [its position
in the Bible specified by book, chapter, and letter indicating part of the
chapter]. Hugh organized several dozen men, assigning to each man an initial
letter to search; for example, the man assigned M was to go through the entire
Bible, list each word beginning with M and give its location. As it was soon
perceived that this original reference work would be even more useful if words
were cited in context, a second concordance was produced, with each word in
lengthy context, but it proved to be unwieldy. [Soon] a third version was
produced, with words in contexts of four to seven words, the model for
biblical concordances ever since.
>
> [Subject index] The subject index, also an innovation of the thirteenth
century, evolved over the same period as did the concordance. Most of the
early topical indexes were designed for writing sermons; some were organized,
while others were apparently sequential without any arrangement. By midcentury
the entries were in alphabetical order, except for a few in some classified
arrangement. Until the end of the century these alphabetical reference works
indexed a small group of books. Finally John of Freiburg added an alphabetical
subject index to his own book, _Summa Confessorum_ (1297—1298). As the Rouses
have put it, 'By the end of the [13]th century the practical utility of the
subject index is taken for granted by the literate West, no longer solely as
an aid for preachers, but also in the disciplines of theology, philosophy, and
both kinds of law.'"
In one sense neither the subject-index nor the concordance are indexes; they are words or groups of words selected according to given criteria from the body of the text, each accompanied by a list of identifiers. These identifiers are the elements of an index, whether they represent a page, chapter, column, or other [kind of] block of text. Every identifier is a unique _address_.
The index is thus an ordering of a sequence by means of associating its elements with a set of symbols, where each element is given a unique combination of symbols. Different sizes of sets yield different numbers of variations. Symbol sets such as the alphabet, Arabic numerals, Roman numerals, and binary digits have different proportions between the length of a string of symbols and the number of possible variations it can contain. Thus two symbols of the English alphabet can store 26^2 various values, of Arabic numerals 10^2, of Roman numerals (with their seven basic symbols) 7^2, and of binary digits 2^2.
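The arithmetic behind these figures is simply the number of strings of a given length over a given symbol set; the following sketch only restates the examples above:

```python
# Number of distinct addresses expressible with two symbols from each set.
symbol_sets = {"English alphabet": 26, "Arabic numerals": 10,
               "Roman numeral symbols": 7, "binary digits": 2}
for name, k in symbol_sets.items():
    print(name, k ** 2)   # 676, 100, 49 and 4 respectively
```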
Indexation is segmentation, a breaking into segments. From as early as the 13th century, indexes such as that of sections have served as enablers of search. The more [detailed] the indexation, the more precise the search results it enables.
The subject-index and the concordance are tables of search results. There is a direct lineage from the 13th-century biblical concordances to the birth of computational linguistic analysis: both were initiated and realised by priests.
During the Second World War, the Jesuit Father Roberto Busa began to look for machines for the automation of the linguistic analysis of the 11-million-word Latin corpus of Thomas Aquinas and related authors. Working on his Ph.D. thesis on the concept of _praesens_ in Aquinas, he realised two things:
> "I realized first that a philological and lexicographical inquiry into the
verbal system of an author has t o precede and prepare for a doctrinal
interpretation of his works. Each writer expresses his conceptual system in
and through his verbal system, with the consequence that the reader who
masters this verbal system, using his own conceptual system, has to get an
insight into the writer's conceptual system. The reader should not simply
attach t o the words he reads the significance they have in his mind, but
should try t o find out what significance they had in the writer's mind.
Second, I realized that all functional or grammatical words (which in my mind
are not 'empty' at all but philosophically rich) manifest the deepest logic of
being which generates the basic structures of human discourse. It is .this
basic logic that allows the transfer from what the words mean today t o what
they meant to the writer.
>
> In the works of every philosopher there are two philosophies: the one which
he consciously intends to express and the one he actually uses to express it.
The structure of each sentence implies in itself some philosophical
assumptions and truths. In this light, one can legitimately criticize a
philosopher only when these two philosophies are in contradiction." 11(http://www.alice.id.tue.nl/references/busa-1980.pdf)
Collaborating with IBM in New York from 1949, the work, a concordance of all the words of Thomas Aquinas, was finally published in the 1970s in 56 printed volumes (a version has been online since 2005 12(http://www.corpusthomisticum.org/it/index.age)). Besides that, an electronic lexicon for the automatic lemmatization of Latin words was created by a team of ten priests within two years (in two phases: grouping all the forms of an inflected word under their lemma, and coding the morphological categories of each form and lemma), containing 150,000 forms 13(http://www.alice.id.tue.nl/references/busa-1980.pdf#page=4). Father Busa has been dubbed the father of humanities computing and recently also of digital humanities.
The subject-index has a crucial role in the printed book. It is the only means of search the book offers. The subjects composing an index can be selected according to a classification scheme (specific to a field of inquiry), for example as elements of a certain degree (with a given minimum number of subclasses).
Its role seemingly vanishes in digital text. But it can be easily transformed. Besides serving as a table of pre-searched results, the subject-index also gives a distinct idea of the content of the book. Two patterns give us a clue: the numbers of occurrences of selected words give subjects their weights, while words that seem specific to the book outweigh others even if they don't occur very often. A selection of these words then serves as a descriptor of the whole text, and can be thought of as a specific kind of 'tags'.
This process was formalized in a mathematical function in the 1970s, thanks to a formula by Karen Spärck Jones which she entitled 'inverse document frequency' (IDF), or in other words, 'term specificity'. It is derived from the proportion of texts in the corpus in which the word appears at least once to the total number of texts: the rarer the word across the corpus, the higher its weight. When multiplied by the frequency of the word _in_ the text (divided by the maximum frequency of any word in the text), we get _term frequency-inverse document frequency_ (tf-idf). In this way we can get an automated list of the subjects which are particular to a text when compared to a group of texts.
We have come to learn this through the practice of searching the web. It is a mechanism not dissimilar to the thought process involved in retrieving particular information online. And search engines have it built into their indexing algorithms as well.
There is a paper proposing attaching words generated by tf-idf to hyperlinks when referring to websites 14(http://bscit.berkeley.edu/cgi-bin/pl_dochome?query_src=&format=html&collection=Wilensky_papers&id=3&show_doc=yes). This would enable finding the referred content even after the link is dead. Hyperlinks in the references of the paper use this feature and it can easily be tested: 15(http://www.cs.berkeley.edu/~phelps/papers/dissertation-abstract.html?lexical-signature=notemarks+multivalent+semantically+franca+stylized).
There is another measure, cosine similarity, which takes tf-idf further and can be applied to clustering texts according to similarities in their specificity. This might be interesting as a feature for digital libraries, or even as a way of organising a library bottom-up into novel categories, from which new discourses could emerge. Or as an aid for researchers to sort through texts, or even for editors as an aid in producing interesting anthologies.
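A sketch of that further step, reusing the {word: weight} dictionaries produced by the tf-idf sketch above (the thresholds or clustering method a digital library would apply on top of this are left open):

```python
# Cosine similarity between two sparse word-weight vectors (dicts).
import math

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return 0.0 if norm_a == 0 or norm_b == 0 else dot / (norm_a * norm_b)

# weights = tfidf(texts)                  # from the previous sketch
# print(cosine(weights[0], weights[2]))   # both texts concern indexes/search
```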
## Final remarks
1
New disciplines emerge all the time - most recently, for example, cultural techniques, software studies, or media archaeology. It takes years, even decades, before they gain dedicated shelves in libraries or a category in interlibrary digital repositories. Not that it matters that much. They are not only sites of academic opportunity but, first of all, frameworks of new perspectives for looking at the world, new domains of knowledge. From the perspective of the researcher, partaking in a discipline involves negotiating its vocabulary, classifications, corpus, reference field, and specific terms[subjects]. Creating new fields involves all that, and more. Even when one goes against all disciplines.
2
Google can still surprise us.
3
Knowledge has been in the making for millennia. There are (abstract) mechanisms that have been established to govern its conditions. We now possess specialized corpora of texts which are interesting enough to serve as a ground on which to discuss and experiment with dictionaries, classifications, indexes, and tools for reference retrieval. These all belong to the poetic devices of knowledge-making.
4
Command-line example of tf-idf and concordance in 3 steps.
* 1\. Process the files text.1-5.txt and produce freq.1-5.txt with lists of (nonlemmatized) words (in respective texts), ordered by frequency:
> for i in {1..5}; do tr '[A-Z]' '[a-z]' < text.$i.txt | tr -c '[a-z]'
'[\012*]' | tr -d '[:punct:]' | sort | uniq -c | sort -k 1nr | sed '1,1d' >
temp.txt; max=$(awk -vvar=1 -F" " 'NR
* 2\. Process the files freq.1-5.txt and produce tfidf.1-5.txt containing a list of words (out of 500 most frequent in respective lists), ordered by weight (specificity for each text):
> for j in {1..5}; do rm freq.$j.txt.temp; lines=$(wc -l freq.$j.txt) && for i
in {1..500}; do word=$(awk -vline="$i" -vfield=2 -F" " 'NR==line {print
$field}' freq.$j.txt); tf=$(awk -vline="$i" -vfield=1 -F" " 'NR
* 3\. Process the files tfidf.1-5.txt and their source text, text.txt, and produce occ.txt with concordance of top 3 words from each of them:
> rm occ.txt && for j in {1..5}; do echo "$j" >> occ.txt; ptx -f -w 150
text.txt.$j > occ.$j.txt; for i in {1..3}; do word=$(awk -vline="$i" -vfield=1
-F" " 'NR==line {print $field}' tfidf.$j.txt); egrep -i
"[[:alpha:]] $word" occ.$j.txt >> occ.txt; done; done
Dušan Barok
_Written 23 October - 1 November 2014 in Bratislava and Stuttgart._
Barok
Techniques of Publishing
2014
Techniques of Publishing
Draft translation of a talk given at the seminar Informace mezi komoditou a komunitou [The Information Between Commodity and Community] held at Tranzitdisplay in Prague, Czech Republic, on May 6, 2014
My contribution has three parts. I will begin by sketching the current environment of publishing in general, move on to some of the specificities of publishing
in the humanities and art, and end with a brief introduction to the Monoskop
initiative I was asked to include in my talk.
I would like to thank Milos Vojtechovsky, Matej Strnad and CAS/FAMU for the invitation, and Tranzitdisplay for hosting this seminar. It offers itself as an opportunity for reflection at a decent distance from a previous presentation of Monoskop in Prague eight years ago, when I took part in a new media education workshop prepared by Miloš and Denisa Kera. Many things have changed since then, not only in new media but in the humanities in general, and I will try to articulate some of these changes from today's perspective, primarily the perspective of publishing.
I. The Environment of Publishing
One change, perhaps the most serious, and one which indeed relates to publishing in the humanities as well, is that a subject which just a year ago was treated as the paranoia of a bunch of so-called technological enthusiasts is today a fact with which the global public is well acquainted: we are all being surveilled. Virtually every utterance on the internet, or rather made by means of equipment connected to it through standard protocols, is recorded, in encrypted or unencrypted form, on the servers of information agencies, besides copies of a striking share of these data on the servers of private companies. We are only at the beginning of a civil mobilization towards a reversal of this situation, and the future is open, yet nothing so far suggests that there is any real alternative other than "to demand the impossible."
There are at least two certainties today: surveillance is a feature of every communication technology controlled by third parties, from post, telegraphy and telephony to the internet; and at the same time it is also a feature of ruling power in all the variants humankind has come to know. In this regard, democracy can also be understood as the involvement of its participants in deciding on the scale and use of the information collected in this way.
I mention this because it suggests that all publishing initiatives, from libraries through archives and publishing houses to schools, have their online activities, backends, shared documents and email communication recorded by public institutions – which intelligence agencies are, or at least ought to be.
In regard to publishing houses, it is notable that books and other publications today are printed from digital files and are delivered to print over email; thus it is not surprising to claim that a significant amount of electronically prepared publications is stored on servers in the public service. This means that besides being required to send a number of printed copies to their national libraries, publishers in fact send their electronic versions to information agencies as well. Obviously, the agencies couldn't care less about them, but that doesn't change anything about the likely fact that, whatever it means, the world's largest electronic repository of publications today is the server farms of the NSA.
Information agencies archive publications without approval, perhaps without awareness, and indeed despite the disapproval of their authors and publishers, as an "incidental" effect of their surveillance techniques. This situation is obviously radically different from the totalitarianism we got to know. Even though secret agencies in the Eastern Bloc blackmailed people into producing miserable literature as their agents, samizdat publications could at least theoretically escape their attention.
This is not the only difference. While captured samizdats were read by agents of flesh and blood, publications collected through internet surveillance are "read" by software agents. Both scan texts for "signals", ie. terms and phrases whose occurrences trigger interpretative mechanisms that control the operative components of their organizations.
Today, publishing is similarly political, and from the point of view of power a potentially subversive activity, as it was in communist Czechoslovakia. The difference is its scale, reach and technique.
One of the messages of the recent "revelations" is that while it is recommended to encrypt private communication, the internet is, for its users, also a medium of direct contact with power. SEO, or search engine optimization, is now as relevant a technique for websites as for books and other publications, since all of them are read by similar algorithms, and authors can read this situation as a political dimension of their work, as a challenge to transform and model these algorithms through their texts.
II. Techniques of research in the humanities literature
Compiling the bibliography
Through the circuitry we got to the audience, the readers. Today, they also include software and algorithms, such as those used for "reading" by information agencies and corporations, and others facilitating reading for the so-called ordinary reader, the reader searching for information online, but also for the "expert" reader, searching primarily in library systems.
Libraries, as we said, differ from information agencies in that they are funded by the public not to hide publications from it but to provide access to them. A telling paradox of the age is that, on the one hand, information agencies are storing almost all contemporary book production in its electronic version, while generally they couldn't care less about it since the "signal" information lies elsewhere, and, on the other hand, in order to provide electronic access, paid or direct, libraries have to costly scan even those publications that were prepared for print electronically.
A more remarkable difference is, of course, that libraries select and catalogue publications.
Their methods of selection are determined in the first place by their public institutional function as protector and projector of patriotic values, and this is reflected in their preference for domestic literature, ie. literature written in the official state languages. Their methods of cataloguing, on the other hand, are characterized by sorting by bibliographic records, particularly by categories of disciplines ordered in a tree structure of knowledge. This results in libraries shaping research, including academic research, towards a discursivity that is national and disciplinary, or focused on the oeuvre of a particular author.
Digitizing catalogue records and allowing readers to search library indexes by their structural items, ie. the author, publisher, place and year of publication, words in the title, and disciplines, does not at all reverse this tendency, but rather extends it to the web as well.
I do not intend to underestimate the value and benefits of library work, nor the importance of discipline-centered writing or of the recognition of an author's oeuvre. But consider an author working on an article who in the early phase of his research needs to prepare a bibliography on the activity of Fluxus in central Europe, or on the use of documentary film in education. Such research cuts through national boundaries and/or branches of disciplines, and he is left to travel not only to locate artefacts, protagonists and experts in the field but also to find literature, which in turn makes even the mere process of compiling a bibliography a relatively demanding and costly activity.
In this sense, the digitization of publications and archival material, providing free online access to them and enabling full-text search, in other words "open access", catalyzes research across political-geographical and disciplinary configurations. For while the index of a printed book contains only selected terms, and for the purposes of searching the index across several books the researcher has to have them all at hand, software-enabled search in digitized texts (with good OCR) works with an index of every single term in all of them.
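A toy version of that "index of every single term", an inverted index over a handful of digitized texts (the filenames are hypothetical), might look as follows; a real library system would add OCR, normalisation and ranking on top:

```python
# Build an inverted index: every term maps to the files and word positions
# where it occurs, so a term can be looked up across all texts at once.
import re
from collections import defaultdict

def build_index(filenames):
    index = defaultdict(list)
    for name in filenames:
        with open(name, encoding="utf-8") as f:
            for pos, word in enumerate(re.findall(r"\w+", f.read().lower())):
                index[word].append((name, pos))
    return index

index = build_index(["simondon.txt", "bogdanov.txt"])   # hypothetical files
print(index.get("individuation", []))   # every occurrence, in every text
```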
This kind of research also obviously benefits from online translation tools and multilingual case bibliographies online, as well as from second-hand bookstores and small specialized libraries that provide a corrective to public ones, and whose "open access" potential has so far been explored to only a very small extent, but which I won't discuss further here for lack of time.
Writing
The disciplinarity and patriotism are "embedded" in texts themselves, though I repeat that I do not say this in a pejorative way.
Bibliographic records in the bodies of texts, notes, attributions of sources and appended references can be read as formatted addresses of other texts, making apparent a kind of intertextual structure well known from hypertext documents. However, for the reader these references are still "virtual". When following a reference she is led back to a library, and if interested in more references, to more libraries. Instead, authors assume a certain general erudition of their readers, while following references to their very sources is perceived as an exception from the standard self-limitation to reading only the body of the text. Techniques of writing with a virtual bibliography thus affirm national-disciplinary discourses and form readers and authors proficient in the field of references set by the collections of local libraries and the so-called standard literature of the fields they became familiar with during their studies.
When, in this regime of writing, someone in the Czech Republic wants to refer to the work of Gilbert Simondon or Alexander Bogdanov, to give an example, the effect of his work will be minimal, since practically nothing by these authors has been translated into Czech. His closely reading colleague is left to order the books through a library and wait three to four weeks, or to order them from an online store, travel to find them, or search for them online. This applies, in the case of these authors, to readers in the vast majority of countries worldwide. And we can say with certainty that this is not only the case for Simondon and Bogdanov but for the vast majority of authors. Libraries, as nationally and pyramidally situated institutions, face real challenges with regard to the needs of free research.
This is, of course, merely one aspect of the techniques of writing.
Reading
Reading texts with “live” references and bibliographies on electronic devices is today not only imaginable but realisable. This way of reading allows one to follow references to other texts, to visual material, to other related texts by an author, but also to work with occurrences of words in the text, etc., bringing reading closer to textual analysis and other interesting levels. Due to time limits I am going to sketch only one example.
Linear reading means reading from the beginning of the text to its end, alongside ‘tree-like’ reading through the content structure of the document and through occurrences of indexed words. Techniques of close reading extend yet another aspect – ‘moving’ through bibliographic references in one document to particular pages or passages in another. They make the virtual reference tangible – texts are separated from one another by a mere click or tap.
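As a sketch of what makes such a reference ‘plastic’, one can imagine bibliographic entries stored as structured records that resolve to deep links into digitized copies; in the Python example below, the record and the URL are hypothetical, while the #page= fragment is a convention that common in-browser PDF viewers honour.

```python
# A sketch of resolving a bibliographic reference into a "live" deep link,
# so that the cited passage is a click or a tap away.
# The record and the URL are hypothetical examples.
reference = {
    "author": "Gilbert Simondon",
    "title": "Du mode d'existence des objets techniques",
    "digitized_copy": "https://example.org/scans/simondon_mode_d_existence.pdf",
    "page": 87,
}

def live_link(ref):
    # Most in-browser PDF viewers jump to the page given in the #page= fragment.
    return f'{ref["digitized_copy"]}#page={ref["page"]}'

print(live_link(reference))
```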
We are well familiar with a similar movement through content on the web – surfing, browsing, clicking through. This leads to an interesting parallel: standards of structuring and composing texts in the humanities have been evolving for centuries, incomparably longer than the decades of the web. From this stems one of the historical challenges the humanities face today: how to attune to the existence of the web and, most importantly, to the epistemological consequences of its irreversible social penetration. Uploading a PDF online is only a taste of the changes in how we gain and make knowledge, and how we know.
This applies both ways – what is at stake is not only making the production of the humanities “available” online, not only open access, but also the ways in which the humanities realise the electronic and technical reality of their own production, with regard to research, writing, reading, and publishing.
Publishing
The analogy between information agencies and national libraries also points to the fact that a large portion of publications, particularly those created in software, is electronic. The exceptions, however, are significant. They include works made, typeset, illustrated and copied manually, such as manuscripts written on paper or other media, by hand or using a typewriter or other mechanical means, and other pre-digital techniques such as lithography, offset, etc., as well as various forms of writing such as clay tablets, rolls and codices – in other words, the history of print and publishing in its striking variety, all of which provide authors and publishers with heterogeneous means of expression. Although this “segment” is today generally perceived as artists’ books, of interest primarily to collectors, the current process of massive digitization has triggered revivals, comebacks, transformations and novel approaches to publishing. And it is these publications whose nature is closer to the label ‘book’ than the automated electro-chemical offset lithography of digital files on acid-free paper.
Despite this, it is remarkable to observe a view spreading among publishers that books created in software are books with the attributes we have known for ages. On top of that, there is a tendency to handle files such as PDFs, EPUBs and MOBIs as if they were printed books, even subjecting them to the rules of the limited edition, a consequence of which can be found in the rise of so-called electronic libraries that “borrow” PDF files: while someone reads one copy, other users are left to wait in line.
From today’s point of view of humanities research, mass-printed books are in the first place archives of cultural content, preserved in this way for the time when we run out of electricity or have the internet ‘switched off’ in some other way.
III. Monoskop
Finally, I am getting to Monoskop, and to begin with I will try to formulate a brief definition of it, in three versions.
From the point of view of the humanities, Monoskop is a research, or a questioning, whose object’s nature renders no answer definite, since the object includes art and culture in their widest sense, from folk music through visual poetry to experimental film, namely their history as well as their theory and techniques. The research is framed by the means of recording itself, which makes it a practice whose record is an expression with aesthetic qualities, which in turn means that the process of research is subject to creative decisions whose outcomes are also perceived aesthetically.
In the language of cultural management, Monoskop is an independent research project whose aim is subject to change according to its continual findings; which has no legal body and thus, as an organisation, does not apply for funding; whose participants have no set roles; and which, notably, operates with no deadlines. It reaches a global public about which, respecting the privacy of internet users, there are no statistics other than general figures from its social network channels and the number of people and bots who have registered on its website and subscribed to its newsletter.
At the same time, technically speaking, Monoskop is primarily a website, and in this regard it is no different from any other communication medium whose function is to complicate interpersonal communication, if only because it is a medium with its own specific language, materiality, duration and access.
Contemporary media
Monoskop began ten years ago in the milieu of a group of people running a cultural space where they organised events, workshops, discussions, a festival, etc. Their expertise, if that is the word for the trace left after years spent in higher education, varied widely, spanning from fine art, architecture and philosophy, through art history and literary theory, to library studies, cognitive science and information technology. Each of us was obviously interested in fields other than his or her own, but in practice the terms naming the substance whose centripetal effects brought us into collaboration were new media, media culture and media art.
Notably, it was not contemporary art, because a constituent part of the praxis was also non-visual expression, information media, etc., so the research began with the essentially naive question ‘of what are we contemporary?’. Not much had been written about media culture and art as such, a fact I perceived as a drawback but also as a challenge.
Reflection, discussion and critique need to be grounded in reality, in the wider context of the field, so the research began in the field. From the beginning, the Monoskop website served to record the environment, including the people, groups, organizations and events we had been in touch with and which were more or less explicitly affiliated with media culture. The result is primarily a social geography of live media culture and art, structured on the wiki into cities, with a focus on the two most recent decades.
Cities and agents
The first aim was to compile an overview of the agents of this geography in their wide variety, from small independent and short-lived initiatives to established museums. The focus on the 1990s and 2000s is of course problematic. One of its qualities is a parallel with the history of the World Wide Web, which goes back precisely to the early 1990s and which is on the one hand the primary recording medium of the Monoskop research and on the other a relevant self-archiving and – stemming from its properties – presentation medium, in other words a platform on which agents not only meet but potentially influence one another as well.
http://monoskop.org/Prague
The records are of diverse length and quality; the priorities for what they consist of can be generally summed up in several points, in the following order:
1. Inclusion of a person, organisation or event in the context of the structure.
So in the case of a festival or conference held in Prague, the most important thing is to mention it in the events section of the page on Prague.
2. Links to their web presence from inside their wiki pages, which usually amounts to their (self-)presentation.
http://monoskop.org/The_Media_Are_With_Us
3. Basic information, including the name or title in the original language, dates of birth, foundation or realization, and relations to other agents, ideally through links inside the wiki. These are presented in narrative form and in English.
4. Literature or bibliography in as many languages as possible, with links to versions of texts online if there are any.
5. Biographical and other information relevant to the object of the research, with preference for material appearing online for the first time.
6. Audiovisual material and works, especially those that cannot be found on the linked websites.
Even though pages are structured in roughly the same way, the input fields are not structured: when you create a wiki account and decide to edit or add an entry, the wiki editor offers you just one input box for continuous text, as is the case on other wiki websites. A better way to describe their format is thus: articles.
There are many related questions about representation, research methodology, openness and participation, formalization, etc., but I am not going to discuss them here due to time constraints.
The first research layer thus consists of live and active agents, relations among
them and with them.
Countries
Another layer is related to the question of what the field of media culture and art stems from; what it consciously, but also not fully consciously, builds upon, comments on, relates to, negates; in other words, of what it may be perceived as a post, meta, anti, retro, quasi and neo legacy.
The approach of national histories of 20th-century art proved relevant here. These entries are structured in the same way as the cities: people, groups, events, literature, while building upon historical art forms and periods as they are reflected in a range of literature.
http://monoskop.org/Czech_Republic
The overviews are purposely organised without any attempt to make relations to the present more explicit, in order to leave open a wide range of interpretations and connotations, and to encourage them at the same time.
The focus on the art of the 20th century was originally related, as the researched countries were mostly those of central and eastern Europe, to the foundations of modern national states, the formations which preserve this field in archives, museums and collections, but also in publications, etc. Obviously I am not saying that contemporary media culture is necessarily archived on the web while the art of the 20th century lies in collections ‘offline’; it applies vice versa as well.
In this way, new articles began to appear about filmmakers, fine artists, theorists and other participants in the artistic life of the previous century.
Since then the focus has expanded considerably, to more than a century of art and new media across the whole continent. Still, this is merely another layer of the research, one which is as yet a collection of fragmentary data, without much context. Soon we also hit the limit of what is online about this field. The next question was how to work with printed sources in the internet environment.
Log
http://monoskop.org/log
When I installed this blog five years ago, I treated it as a side project, an offshoot which, by the fact of being online, could be not only an archive of selected source literature for the Monoskop research but also a resource for others, mainly students in the humanities. A few months later I found Aaaarg, then oriented mainly towards critical theory and philosophy; there was also Gigapedia, with publications of no particular thematic orientation; and several other password-protected community library portals. These were the first sources where I found relevant literature in electronic versions; later there were others too. I began to scan books and catalogues myself and to receive a large number of scans by email, and soon came to realise that every new entry is an event of its own, and not only for myself. Judging by the response, the website has a wide usership across all continents.
At this point it is proper to mention copyright. When deciding whether to include this or that publication, at least two moments are always present. One brings me back to my local library on the outskirts of Bratislava in the early 1990s and asks: had I found this book there and then, could it have changed my life? Because the books that did, I was given only later and elsewhere; and here I think of people sitting behind computers in Belarus, China or Congo. The other moment, even if the answer is no, wonders whether the text has the potential to open up serious questions about disciplinarity or national discursivity in the humanities; here I am reminded of a recent study claiming that more than half of academic publications are not read by more than three people: their author, reviewer and editor. This does not imply that it is necessary to promote them to more people, but rather that we should think about the reasons why this is so. It seems that the consequences of combining high selectivity with open access resonate with publishers and authors as well, from whom complaints are rather scarce; and even if I sometimes do not understand the reasons behind those I receive, I respect them.
Media technology
Over the years I have come, from an ontological perspective, to two main findings about media and technology.
For a long time I tended to treat technologies as objects, things, whereas now it seems much more productive to see them as processes, techniques. Just as the biologist does not speak of the deer as biology. In this sense technology is the science of techniques, including cultural techniques, which span from reading, writing and counting to painting, programming and publishing.
Media in the humanities are a compound of two long unrelated histories. One treats media as means of communication, signals sent from point A to point B, lacking context and meaning. The other speaks of media as artistic means of expression, such as painting, sculpture, poetry, theatre, music or film. The term “media art” is emblematic of this amalgam, and historical awareness of these two threads sheds new light on it.
Media technology in art and the humanities continues to be the primary object of
research of Monoskop.
I have attempted to comment on the political, aesthetic and technical aspects of publishing. Let me finish by saying that Monoskop is an initiative open to people and to the future, and you are more than welcome to take part in it.
Dušan Barok
Written May 1-7, 2014, in Bergen and Prague. Translated by the author on May 10-13,
2014. This version generated June 10, 2014.
Barok
Shadow Libraries
2018
_A talk given at the [Shadow Libraries](http://www.sgt.gr/eng/SPG2096/)
symposium held at the National Museum of Contemporary Art (EMST) in
[Athens](/Athens "Athens"), 17 March 2018. Moderated by [Kenneth
Goldsmith](/Kenneth_Goldsmith "Kenneth Goldsmith") (UbuWeb) and bringing
together [Dusan Barok](/Dusan_Barok "Dusan Barok") (Monoskop), [Marcell
Mars](/Marcell_Mars "Marcell Mars") (Public Library), [Peter
Sunde](/Peter_Sunde "Peter Sunde") (The Pirate Bay), [Vicki
Bennett](/Vicki_Bennett "Vicki Bennett") (People Like Us), [Cornelia
Sollfrank](/Cornelia_Sollfrank "Cornelia Sollfrank") (Giving What You Don't
Have), and Prodromos Tsiavos, the event was part of the _[Shadow Libraries:
UbuWeb in Athens](http://www.sgt.gr/eng/SPG2018/) _programme organised by [Ilan
Manouach](/Ilan_Manouach "Ilan Manouach"), Kenneth Goldsmith and the Onassis
Foundation._
This is the first time I have been asked to talk about Monoskop as a _shadow
library_.
What are shadow libraries?
[Lawrence Liang](/Lawrence_Liang "Lawrence Liang") wrote a think piece for _e-
flux_ a couple of years ago,
in response to the closure of Library.nu, a digital library that had operated
from 2004, first as Ebooksclub, later as Gigapedia.
He wrote that:
[Lawrence Liang's e-flux essay](http://www.e-flux.com/journal/37/61228/shadow-libraries/)
In the essay, he moves between identifying Library.nu as digital Alexandria
and as its shadow.
In this account, even large libraries exist in the shadows cast by their
monumental precedessors.
There’s a lineage, there’s a tradition.
Almost everyone and every institution has a library, small or large.
They’re not necessarily Alexandrias, but they strive to stay relevant.
Take the University of Amsterdam where I now work.
University libraries are large, but they’re hardly _large enough_.
The publishing market is so huge that you simply can’t keep up with all the
niche little disciplines.
So either you have to wait days or weeks for a missing book to be ordered
somewhere.
Or you have some EBSCO ebooks.
And most of the time if you’re searching for a book title in the catalogue,
all you get are its reviews in various journals the library subscribes to.
So my colleagues keep asking me: Dušan, where do I find this or that book?
You need to scan through dozens of texts, check one page in that book, table
of contents of another book, read what that paper is about.
[Digital libraries](/Digital_libraries#Libraries "Digital libraries#Libraries")
Or scrapes it from somewhere, since most books today are born digital and live
their digital lives.
...
Digital libraries need to be creative.
They don’t just preserve and circulate books.
[Monoskop Log](https://monoskop.org/log/?p=10262)
They engage in extending print runs, making new editions, readily
reproducible, unlimited editions.
[Hirsal & Groegerova (eds.), Slovo, pismo, akce, hlas, p. 87](https://monoskop.org/images/d/de/Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas.pdf#page=87)
This one comes with something extra. Isn’t this beautiful? You can read along
with someone else.
In this case we know these annotations come from the Slovak avant-garde visual
poet and composer [Milan Adamciak](/Milan_Adamciak "Milan Adamciak").
[Milan Adamciak](/Milan_Adamciak "Milan Adamciak")
...standing in the middle.
A couple of pages later...
[Hirsal & Groegerova (eds.), Slovo, pismo, akce, hlas, p. 117](https://monoskop.org/images/d/de/Hirsal_Josef_Groegerova_Bohumila_eds_Slovo_pismo_akce_hlas.pdf#page=117)
...you can clearly see how he found out about a book containing one million
random digits [see note 24 on the image]. The strangest book.
[Monoskop Log](https://monoskop.org/log/?p=5780)
He was still alive when we put it up on Monoskop, and could experience it.
...
Digital libraries may seem like virtual, grey places, nonplaces.
But these little chance encounters happen all the time there.
There are touches. There are traces. There are many hands involved, visible
hands.
They join writers’ hands and help create new, unlimited editions.
They may be off Google, but for many, especially the younger generation, these
are the places to go to learn, to share.
Rather than in a shadow, they are out in the open, in plain sight.
[Step inside the mind of the young Stephen Hawking as his PhD thesis goes online for the first time](http://www.cam.ac.uk/research/news/step-inside-the-mind-of-the-young-stephen-hawking-as-his-phd-thesis-goes-online-for-first-time)
This made the rounds last year.
As scholars, as authors, we have reasons to want our works freely accessible
to everyone.
We do it for feedback, for invites to lecture, for citations.
Sounds great.
So when, after two, three, four, five long years, I have my manuscript ready,
where will I go?
Will I go to an established publisher or an open access press?
Will I send it to MIT Press or Open Humanities Press?
Traditional publishers have better distribution, and they often have a strong
brand.
It’s often about career moves and bios, plan A’s and plan B’s.
There are no easy answers, but one can always be a little inventive.
In the end, one should not feel guilty for publishing with MIT Press.
But at the same time, nor should one feel guilty for scanning such a book and
sharing it with others.
...
You know, there’s fighting, there are court cases.
[Aaaaarg](/Aaaaarg "Aaaaarg"), a digital library run by our dear friend [Sean
Dockray](/Sean_Dockray "Sean Dockray"), is facing a Canadian publisher.
Open Library is now facing the Authors Guild for lending scanned books
deaccessioned from libraries.
They need our help, our support.
But collisions of interests can be productive.
This is what our beloved _Cabinet_ magazine did when they found their PDFs
online.
They converted all their articles into HTML and put them online.
The most beautiful takedown request we have ever received.
[Monoskop Log](https://monoskop.org/log/?p=16598)
So what is at stake? What are these digital books?
They are poor versions of print books.
They come with no binding, no paper, no weight.
They come as PDFs, EPUBs, JPEGs in online readers, they come as HTML.
By the way, HTML is great, you can search it, copy, save it, it’s lightweight,
it’s supported by all browsers, footnotes too, you can adapt its layout
easily.
That’s completely fine for a researcher.
As a researcher, you just need source code:
you need plain text, page numbers, images, working footnotes, relevant data
and code.
_Data and code_ as well:
this is where online companions to print books come in,
you want to publish your research material,
your interviews, spreadsheets, software you made.
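As a rough sketch of that ‘source code’ layer, the following Python snippet pulls plain text together with page numbers out of a born-digital PDF; it assumes the pypdf library and a hypothetical file book.pdf, and scanned books would of course need an OCR pass first.

```python
# A minimal sketch: extract plain text with page numbers from a born-digital PDF.
# Assumes the pypdf library (pip install pypdf) and a hypothetical file book.pdf;
# scanned books would need OCR first, since their pages are only images.
from pypdf import PdfReader

reader = PdfReader("book.pdf")
for number, page in enumerate(reader.pages, start=1):
    text = page.extract_text() or ""
    print(f"--- page {number} ---")
    print(text)
```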
...
Here we distinguish between researchers and readers.
As _readers_ we will always build our beautiful libraries at home, and
elsewhere,
filled with books and... and external hard drives.
...
There may be _no contradiction_ between the existence of a print book in
stores and the existence of its free digital version.
So what we’ve been asking for is access, basic access. The access to culture
and knowledge for research, educational, noncommercial purposes. A low budget,
poor bandwidth access. Access to badly OCR’d ebooks with grainy images. Access
to culture and knowledge _light_.
Thank you.
Dusan Barok
_Written on 16-17 March 2018 in Athens and Amsterdam. Published online on 21
March 2018._
Bodo
A Short History of the Russian Digital Shadow Libraries
2014
Draft Manuscript, 11/4/2014, DO NOT CITE!
A short history of the Russian digital shadow libraries
Balazs Bodo, Institute for Information Law, University of Amsterdam
“What I see as a consequence of the free educational book distribution: in decades generations of people
everywhere in the World will grow with the access to the best explained scientific texts of all times.
[…]The quality and accessibility of education to poors will drastically grow too. Frankly, I'm seeing this as
the only way to naturally improve mankind: by breeding people with all the information given to them at
any time.” – Anonymous admin of Aleph, explaining the raison d’être of the site
Abstract
RuNet, the Russian segment of the internet, is now home to the most comprehensive scientific pirate
libraries on the net. These sites offer free access to hundreds of thousands of books and millions of
journal articles. In this contribution we try to understand the factors that led to the development of
these sites, and the sociocultural and legal conditions that enable them to operate under hostile legal
and political conditions. Through the reconstruction of the micro-histories of peer-produced online text
collections that played a central role in the history of RuNet, we are able to link the formal and informal
support for these sites to the specific conditions that developed during Soviet and post-Soviet times.
(pirate) libraries on the net
The digitization and collection of texts was one of the very first activities enabled by computers. Project
Gutenberg, the first in a line of digital libraries, was established as early as 1971. By the early nineties, a
number of online electronic text archives had emerged, all hoping to finally realize the dream that humans
had chased ever since the first library: the collection of everything (Battles, 2004), the Memex
(Bush, 1945), the Mundaneum (Rieusset-Lemarié, 1997), the Library of Babel (Borges, 1998). It did not
take long to realize that the dream was still beyond reach: the information storage and retrieval
technology might have been ready, but copyright law, for the foreseeable future, was not. Copyright
protection and enforcement slowly became one of the most crucial issues around digital technologies.
And as that happened, texts which had been archived without authorization were purged from the
budding digital collections. Those that survived complete deletion were moved into the dark, locked-down
sections of digital libraries that sometimes still lurk behind the law-abiding public façades. The hope
that a universal digital library could be built was lost in just a few short years as those who tried it (such as
Google or HathiTrust) got bogged down in endless court battles.
There are unauthorized text collections circulating on channels less susceptible to enforcement, such as
DVDs, torrents, or IRC channels. But the technical conditions of these distribution channels do not enable
the development of a library. Two of the most essential attributes of any proper library, the catalogue
and the community, are hard to provide on such channels. The catalog doesn’t just organize the
knowledge stored in the collection; it is not just a tool for searching and browsing. It is a critical
component in the organization of the community of “librarians” who preserve and nourish the
collection. The catalog is what distinguishes an unstructured heap of computer files from a well-maintained
library, but it is this same catalog which makes shadow libraries, unauthorized text
collections, an easy target of law enforcement. The few digital online libraries that dare to provide
unauthorized access to texts in an organized manner, such as textz.org, a*.org, monoskop or Gigapedia/
library.nu, have all had their bad experiences with law enforcement and rights holder dismay.
Of these pirate libraries, Gigapedia, later called Library.nu, was the largest at the turn of the 2010s. At
its peak, it was several orders of magnitude bigger than its peers, offering access to nearly a million
English-language documents. It was not just size that made Gigapedia unique. Unlike most sites, it
moved beyond its initial specialization in scientific texts to incorporate a wide range of academic
disciplines. Compared to its peers, it also had a highly developed central metadata database, which
contained bibliographic details on the collection and also, significantly, on gaps in the collection, which
underpinned a process of actively soliciting contributions from users. With ubiquitous
scanner/copiers, producing book scans was as easy as copying them, and the collection grew rapidly.
Gigapedia’s massive catalog made the site popular, which in turn made it a target. In early 2012, a group
of 17 publishers was granted an injunction against the site (by then called Library.nu) and against iFile.it,
the hosting site that stored most of Library.nu’s content. Unlike the record and movie companies,
which had collaborated on dozens of lawsuits over the past decade, the Library.nu injunction and lawsuit
were the first coordinated publisher actions against a major file-sharing site, and the first to involve
major university publishers in particular. Under the injunction, the Library.nu administrators closed the
site. The collection disappeared and the community around it dispersed. (Liang, 2012)
Gigapedia’s collection had been integrated into Aleph’s predominantly Russian-language collection before the
shutdown, making Aleph the natural successor of Gigapedia/Library.nu.
Libraries in the RuNet
The search for successors soon zeroed in on a number of sites with strong hints of Russian origins. Sites like Aleph,
[sc], [fi], [os] are open, completely free to use, and each offers access to a catalog comparable to the late
Gigapedia’s.
The similarity of these seemingly distinct services is no coincidence. These sites constitute a tightly knit
network, in which Aleph occupies the central position. Aleph, as its name suggests, is the source library:
it aims to be the seed of all scientific digital libraries on the net. Its mission is simple and straightforward. It
collects free-floating scientific texts and other collections from the Internet and consolidates them (both
content and metadata) into a single, open database. Though ordinary users can search the catalog and
retrieve the texts, its main focus is the distribution of the catalog and the collection to anyone who
wants to build services upon them. Aleph maintains regularly updated links that point to its own, neatly packed
source code, to its database dump, and to the terabytes’ worth of collection. It is a knowledge infrastructure
that can be freely accessed, used and built upon by anyone. This radical openness enables a number of
other pirate libraries to offer Aleph’s catalogue along with books coming from other sources. By
mirroring Aleph, they take over tasks that the administrators of Aleph are unprepared or unwilling to do.
By handling much of the actual download traffic, they relieve Aleph of the unavoidable investment in
servers and bandwidth, which in turn puts less pressure on Aleph to engage in commercial activities to
finance its operation. While Aleph stays in the background, the network of mirrors competes for
attention, users and advertising revenue, as their design, business models and technical sophistication are fine-tuned to the profiles of their intended target audiences.
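In technical terms, such mirroring amounts to little more than periodically fetching the published metadata dump and rebuilding a local catalogue on top of it. The Python sketch below is purely illustrative: the URL, file names and table layout are assumptions, not a description of Aleph's actual interfaces.

```python
# A purely illustrative sketch of how a mirror might rebuild a local catalogue
# from a library's published metadata dump. The URL, file names and table
# layout are hypothetical assumptions, not any site's actual interface.
import csv
import sqlite3
import urllib.request

DUMP_URL = "https://example.org/dumps/catalogue-latest.csv"  # hypothetical

urllib.request.urlretrieve(DUMP_URL, "catalogue.csv")

db = sqlite3.connect("mirror.db")
db.execute(
    "CREATE TABLE IF NOT EXISTS books (md5 TEXT PRIMARY KEY, author TEXT, title TEXT, year TEXT)"
)

with open("catalogue.csv", newline="", encoding="utf-8") as f:
    rows = ((r["md5"], r["author"], r["title"], r["year"]) for r in csv.DictReader(f))
    db.executemany("INSERT OR REPLACE INTO books VALUES (?, ?, ?, ?)", rows)
db.commit()

# The mirror can now answer searches locally, relieving the source library
# of download traffic while leaving its catalogue intact.
for author, title in db.execute(
    "SELECT author, title FROM books WHERE title LIKE ?", ("%physics%",)
):
    print(author, "-", title)
```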
This strategy of creating an open infrastructure serves Aleph well. It ensures the widespread distribution
of books while minimizing (legal) exposure. By relinquishing control, Aleph also ensures its own long-term
survival, as it is copied again and again. In fact, openness is the core element in the philosophy of
Aleph, which was summed up by one of its administrators as follows:
“- collect valuable science/technology/math/medical/humanities academic literature. That is,
collect humanity's valuable knowledge in digital form. Avoid junky books. Ignore "bestsellers".
- build a community of people who share knowledge, improve quality of books, find good and
valuable books, and correct errors.
- share the files freely, spreading the knowledge altruistically, not trying to make money, not
charging money for knowledge. Here people paid money for many books that they considered
valuable and then shared here on [Aleph], for free. […]
This is the true spirit of the [Aleph] project.”
Reading, publishing, censorship and libraries in Soviet Russia
“[T]he library of the Big Lubyanka was unique. In all probability it had been assembled out of confiscated
private libraries. The bibliophiles who had collected those books had already rendered up their souls to
God. But the main thing was that while State Security had been busy censoring and emasculating all the
libraries of the nation for decades, it forgot to dig in its own bosom. Here, in its very den, one could read
Zamyatin, Pilnyak, Panteleimon Romanov, and any volume at all of the complete works of Merezhkovsky.
(Some people wisecracked that they allowed us to read forbidden books because they already regarded
us as dead. But I myself think that the Lubyanka librarians hadn't the faintest concept of what they were
giving us—they were simply lazy and ignorant.)”
(Solzhenitsyn, 1974)
In order to properly understand the factors that shaped Russian pirate librarians’ and their wider
environments’ attitudes towards bottom-up, collaborative, copyright infringing open source digital
librarianship, we need to go back nearly a century and take a close look at the specific social and political
conditions of the Soviet times that shaped the contemporary Russian intelligentsia’s attitudes towards
knowledge.
The communist ideal of a reading nation
Russian culture always had a reverence for the printed word, and the Soviet state, with its Leninist
program of mass education further stressed the idea of the educated, reading public. As Stelmach (1993)
put it:
Reading almost transplanted religion as a sacred activity: in the secularized socialist state, where the
churches were closed, the free press stifled and schools and universities politicized, literature became the
unique source of moral truth for the population. Writers were considered teachers and prophets.
The Soviet Union was a reading culture: in the last days of the USSR, a quarter of the adult population
were considered active readers, and almost everyone else was categorized as an occasional reader. Book
prices were low, alternative forms of entertainment were scarce, and people were poor, making reading
one of the most attractive leisure activities.
The communist approach towards intellectual property protection reflected the idea of the reading
nation. The Soviet Union inherited a lax and isolationist copyright system from tsarist Russia. Neither
the tsarist Russian state nor the Soviet state adhered to international copyright treaties, nor did they
enter into bilateral treaties. Tsarist Russia’s refusal to grant protection to foreign authors and
translations had a primarily economic rationale. The Soviet regime added a strong ideological claim:
granting exclusive ownership to authors was against the interests of the reading public and “the cultural
development of the masses,” and only served the private interests of authors and heirs.
“If copyright had an economic function, that was only as a right of remuneration for his contribution to
the extension of the socialist art heritage. If copyright had a social role, this was not to protect the author
from the economically stronger exploiter, but was one of the instruments to get the author involved in
the great communist educational project.” (Elst, 2005, p 658)
The Soviet copyright system, even in its post-revolutionary phase, maintained two persistent features
that served as important instruments of knowledge dissemination. First, the statutorily granted
“freedom of translation” meant that translation was treated as an exception to copyright, which did not
require rights holder authorization. This measure dismantled a significant barrier to access in a
multicultural and multilingual empire. By the same token, the denial of protection to foreign authors and
rights holders eased the import of foreign texts (after, of course, the appropriate censorship review).
Due to these instruments:
“[s]oon after its founding, the Soviet Union became as well the world's leading literary pirate, not only
publishing in translation the creations of its own citizens but also publishing large numbers of copies of
the works of Western authors both in translation and in the original language.” (Newcity, 1980, p 6.)
Looking simply at the aggregate numbers of published books, the USSR had an impressive publishing
industry, on a scale appropriate to a reading nation. Between 1946 and 1970, more than 1 billion copies of
over 26 thousand different works were published, all by foreign authors (Newcity, 1978). In 1976 alone,
more than 1.7 billion copies of 84,304 books were printed. (Friedberg, Watanabe, & Nakamoto, 1984, fn
4.)
Of course these impressive numbers reflected neither a healthy public sphere, nor a well-functioning
print ecology. The book-based public sphere was both heavily censored and plagued by the peculiar
economic conditions of the Soviet, and later the post-Soviet era.
Censorship
The totalitarian Soviet state had many instruments to control the circulation of literary and scientific
works.[1] Some texts never entered official circulation in the first place: “A particularly harsh
prepublication censorship [affected] foreign literature, primarily in the humanities and socioeconomic
disciplines. Books on politics, international relations, sociology, philosophy, cybernetics, semiotics,
linguistics, and so on were hardly ever published.” (Stelmakh, 2001, p 145.)
Many ‘problematic’ texts were put into only severely limited circulation. Books were released in small
print runs, as in-house publications, or circulated only among the trustworthy few. As the
resolution of the Central Committee of the Communist Party of June 4, 1959, stated: “Writings by
bourgeois authors in the fields of philosophy, history, economics, diplomacy, and law […] are to be
published in limited quantities after the excision from them of passages of no scholarly or practical
interest. They are to be supplied with extensive introductions and detailed annotations." (quoted in
Friedberg et al., 1984)

[1] We share Helen Freshwater’s (2003) approach that censorship is a more complex phenomenon than the state simply
blocking the circulation of certain texts. Censorship manifested itself in more than one way, and its dominant
modus operandi, institutions, extent, focus, reach and effectiveness showed extreme variations over time. This short
chapter, however, cannot go into the intricate details of the incredibly rich history of censorship in the Soviet Union.
Instead, through much simplification, we try to demonstrate that censorship did not only affect literary works, but
extended deep into scholarly publishing, including the natural science disciplines.
Truncation and mutilation of texts were also frequent. Literary works and texts from the humanities and
social sciences were obvious subjects of censorship, but the natural sciences and technical fields did not
escape:
“In our film studios we received an American technical journal, something like Cinema, Radio and
Television. I saw it on the chief engineer's desk and noticed that it had been reprinted in Moscow.
Everything undesirable, including advertisements, had been removed, and only those technical articles
with which the engineer could be trusted were retained. Everything else, even whole pages, was missing.
This was done by a photo copying process, but the finished product appeared to be printed.” (Dewhirst &
Farrell, 1973, p. 127)
Mass cultural genres were also subject to censorship and control. Women's fiction, melodrama, comics,
detective stories, and science fiction were completely missing or heavily underrepresented in the mass
market. Instead, “a small group of officially approved authors […] were published in massive editions
every year, [and] blocked readers' access to other literature. […]Soviet literature did not fit the formula
of mass culture and was simply bad literature, but it was issued in huge print-runs.” (Stelmakh, 2001, p.
150)
Libraries were also important instruments of censorship. When not destroyed altogether, censored
works ended up in the spetskhrans, limited-access special collections established within libraries to hold
them. Besides obvious candidates such as anti-Soviet works and western ‘bourgeois’
publications, many scientific works from the fields of biology, nuclear physics, psychology, sociology,
cybernetics, and genetics ended up in these closed collections (Ryzhak, 2005). Access to the spetskhrans
was limited to those with special permits issued by their employers. “Only university educated readers
were enrolled and only those holding positions of at least junior scientific workers were allowed to read
the publications kept by the spetskhran” (Ryzhak, 2005). In the last years of the USSR, the spetskhran of
the Russian State Library—the largest of them with more than 1 million items in the collection—had 43
seats for its roughly 4500 authorized readers. Yearly circulation was around 200,000 items, a figure that
included “the history and literature of other countries, international relations, science of law, technical
sciences and others.” (Ryzhak, 2005)
Librarians thus played a central role in the censorship machinery. They did more than guard the contents
of limited-access collections and purge the freely accessible stocks according to the latest Party
directives. As the intermediaries between the readers and the closed stacks, their task was to carefully
guide readers’ interests:
“In the 1970s, among the staff members of the service department of the Lenin State Library of the
U.S.S.R., there were specially appointed persons-"politcontrollers"-who, apart from their regular
professional functions, had to perform additional control over the literature lent from the general stocks
(not from the restricted access collections), thus exercising censorship over the percolation of avant-garde
aesthetics to the reader, the aesthetics that introduced new ways of thinking and a new outlook on life
and social behavior.” (Stelmakh, 2001)
Librarians also used library cards and lending histories to collect and report information on readers and
suspicious reading habits.
Soviet economic dysfunction also severely limited access to printed works. Acute and chronic shortages
of even censor-approved texts were common, both on the market and in libraries. When the USSR
joined the first international copyright treaty in its history in 1973 (the UNESCO-backed Universal
Copyright Convention), which granted protection to foreign authors and denied “freedom of
translation,” the access problems only got worse. The Soviet concern that granting protection to foreign
authors would result in significant royalty payments to western rightsholders proved valid. By 1976, the
yearly USSR trade deficit in publishing reached a million rubles (~5.5 million current USD) (Levin, 1983, p.
157). This imbalance not only affected the number of publications that could be imported into the cash-poor
country, but also raised the price of translated works to double that of Russian-authored books
(Levin, 1983, p. 158).
The literary and scientific underground in Soviet times
Various practices and informal institutions evolved to address the problems of access. Book black
markets flourished: “In the 1970s and 1980s the black market was an active part of society. Buying books
directly from other people was how 35 percent of Soviet adults acquired books for their own homes, and
68 percent of families living in major cities bought books only on the black market.” (Stelmakh, 2001, p
146). Book copying and hoarding was practiced to supplement the shortages:
“People hoarded books: complete works of Pushkin, Tolstoy or Chekhov. You could not buy such things.
So you had the idea that it is very important to hoard books. High-quality literary fiction, high quality
science textbooks and monographs, even biographies of famous people (writers, scientists, composers,
etc.) were difficult to buy. You could not, as far as I remember, just go to a bookstore and buy complete
works of Chekhov. It was published once and sold out and that's it. Dostoyevsky used to be prohibited in
the USSR, so that was even rarer. Lots of writers were prohibited, like Nabokov. Eventually Dostoyevsky
was printed in the USSR, but in very small numbers.
And also there were scientists who wanted scientific books and also could not get them. Mathematics
books, physics - only very few books were published every year, you can't compare this with the market in
the U.S. Russian translations of classical monographs in mathematics were difficult to find.
So, in the USSR, everyone who had a good education shared the idea that hoarding books is very, very
important, and did just that. If someone had free access to a Xerox machine, they were Xeroxing
everything in sight. A friend of mine had entire room full of Xeroxed books.” [2]

[2] Anonymous source #1

From the 1960s onwards, the ever-growing Samizdat networks tried to counterbalance the effects of
censorship and provide access to both censored classics and information on the current state of Soviet
society. Reaching a readership of around 200,000, these networks operated in a networked, bottom-up
manner. Each node in the chain of distribution copied the texts it received, and distributed the copies.
The nodes also carried information backwards, towards the authors of the samizdat publications.
In the immediate post-Soviet political turmoil and economic calamity, access to print culture did not get
any easier. Censorship officially ended, but so too did much of the funding for the state-funded
publishing sector. Mass unemployment, falling wages, and the resulting loss of discretionary income did
not facilitate the shift toward market-based publishing models. The funding of libraries also dwindled,
limiting new acquisitions (Elst, 2005, p. 299-300). Economic constraints took the place of political ones.
But in the absence of political repression, self-organizing efforts to address these constraints acquired
greater scope of action. Slowly, the informal sphere began to deliver alternative modes of access to
otherwise hard-to-get literary and scientific works.
Russian pirate libraries emerged from these enmeshed contexts: communist ideologies of the reading
nation and mass education; the censorship of texts; the abused library system; economic hardships and
dysfunctional markets, and, most importantly, the informal practices that ensured the survival of
scholarship and literary traditions under hostile political and economic conditions. The prominent place
of Russian pirate libraries in the larger informal media economy—and of Russian piracy of music, film,
and other copyrighted work more generally—cannot be understood outside this history.
The emergence of DIY digital libraries in RuNet
The copying of censored and uncensored works (by hand, by typewriters, by photocopying or by
computers), the hoarding of copied texts, the buying and selling of books on the black market, and the
informal, peer-to-peer distribution of samizdat material were integral parts of the everyday experience
of many educated Soviet and post-Soviet readers. The building and maintenance of individual
collections and the participation in the informal networks of exchange offered a sense of political,
economic and cultural agency—especially as the public institutions that supported the core professions
of the intelligentsia fell into sustained economic crisis.
Digital technologies were embraced by these practices as soon as they appeared:
"From late 1970s, when first computers became used in the USSR and printers became available,
people started to print forbidden books, or just books that were difficult to find, not necessarily
forbidden. I have seen myself a print-out on a mainframe computer of a science fiction novel,
printed in all caps! Samizdat was printed on typewriters, xeroxed, printed abroad and xeroxed, or
printed on computers. Only paper circulated, files could not circulate until people started to have
PCs at home. As late as 1992 most people did not have a PC at home. So the only reason to type
a big text into a computer was to print it on paper many times.” [3]

[3] Anonymous source #1

People who worked in academic and research institutions were well positioned in this process: they had
access to computers, and many had access to the materials locked up in the spetskhrans. Many also had
the time and professional motivations to collect and share otherwise inaccessible texts. The core of
current digital collections was created in this late-Soviet/early post-Soviet period by such professionals.
Their home academic and scientific institutions continued to play an important role in the development
of digital text collections well into the era of home computing and the internet.
Digitized texts first circulated in printouts and later on optical/magnetic storage media. With the
emergence of digital networking these texts quickly found their way to the early Internet as well. The
first platform for digital text sharing was the Russian Fidonet, a network of BBS systems similar to
Usenet, which enabled the mass distribution of plain text files. The BBS boards, such as the Holy Spirit
BBS’ “SU.SF & F.FANDOM” group whose main focus was Soviet-Russian science fiction and fantasy
literature, connected fans around emerging collections of shared texts. As an anyonmous interviewee
described his experience in the early 1990s…
“Fidonet collected a large number of plaintext files in literature / fiction, mostly in Russian, of course.
Fidonet was almost all typed in by hand. […] Maybe several thousand of the most important books,
novels that "everyone must read" and such stuff. People typed in poetry, smaller prose pieces. I have
myself read a sci-fi novel printed on a mainframe, which was obviously typed in. This novel was by
Strugatski brothers. It was not prohibited or dissident, but just impossible to buy in the stores. These
were culturally important, cult novels, so people typed them in. […] At this point it became clear that
there was a lot of value in having a plaintext file with some novels, and the most popular novels were first
digitized in this way.”
The next stage of text digitization started around 1994. By that time, growing numbers of people had
computers, scanning peripherals and OCR software. Russian internet and PC penetration, while extremely
low overall in the 1990s (0.1% of the population had internet access in 1994, growing to 8.3% by
2003), began to make inroads in educational and scientific institutions and among Moscow and
St. Petersburg elites, who were often the critical players in these networks. As access to technologies
increased, a much wider array of people began to digitize their favorite texts, and these collections began
to circulate, first via CD-ROMs, later via the internet.
One such collection belonged to Maxim Moshkov, who published his library under the name lib.ru in
1994. Moshkov was a graduate of the Moscow State University Department of Mechanics and
Mathematics, which played a large role in the digitization of scientific works. After graduation, he started
to work for the Scientific Research Institute of System Development, a computer science institute
associated with the Russian Academy of Sciences. He describes the early days of his collection as follows:
“ I began to collect electronic texts in 1990, on a desktop computer. When I got on the Internet in 1994, I
found lots of sites with texts. It was like a dream came true: there they were, all the desired books. But
these collections were in a dreadful state! Incompatible formats, different encodings, missing content. I
had to spend hours scouring the different sites and directories to find something.
As a result, I decided to convert all the different file-formats into a single one, index the titles of the books
and put them in thematic directories. I organized the files on my work computer. I was the main user of
my collection. I perfected its structure, made a simple, fast and convenient search interface and
developed many other useful functions and put it all on the Internet. Soon, people got into the habit of
visiting the site. […]
For about 2 years I have scoured the internet: I sought out and pulled texts from the network, which were
lying there freely accessible. Slowly the library grew, and the audience increased with it. People started
to send books to me, because they were easier to read in my collection. And the time came when I
stopped surfing the internet for books: regular readers are now sending me the books. Day after day I get
about 100 emails, and 10-30 of them contain books. So many books were sent in, that I did not have time
to process them. Authors, translators and publishers also started to send texts. They all needed the
library.”(Мошков, 1999)
In the second half of the 1990s, the Russian Internet (RuNet) was awash in book digitization projects.
With the advent of scanners, OCR technology, and the Internet, the work of digitization eased
considerably. Texts migrated from print to digital and sometimes back to print again. They circulated
through different collections, which, in turn, merged, fell apart, and re-formed. Digital libraries with the
mission to collect and consolidate these free-floating texts sprang up by the dozens.
Such digital librarianship was the antithesis of official Soviet book culture: it was free, bottom-up,
democratic, and uncensored. It also offered a partial remedy to problems created by the post-Soviet
collapse of the economy: the impoverishment of libraries, readers, and publishers. In this context, book
digitization and collecting also offered a sense of political, economic and cultural agency, with parallels
to the copying and distribution of texts in Soviet times. The capacity to scale up these practices coincided
with the moment when anti-totalitarian social sentiments were the strongest, and economic needs the
direst.
The unprecedented bloom of digital librarianship was the result of the superimposition of multiple waves
of distinct transformations: technological, political, economic and social. “Maksim Moshkov’s Library”
was ground zero for this convergence and soon became a central point of exchange for the community
engaged in text digitization and collection:
[At the outset] there were just a couple of people who started scanning books in large quantities. Literally
hundreds of books. Others started proofreading, etc. There was a huge hole in the market for books.
Science fiction, adventure, crime fiction, all of this was hugely in demand by the public. So lib.ru was to a
large part the response, and was filled by those books that people most desired and most valued.
For years, lib.ru integrated as much as it could of the different digital libraries flourishing in the RuNet. By
doing so, it preserved the collections of the many short-lived libraries.
This process of collection slowed in the early 2000s. By that time, lib.ru had all of the classics, resulting
in a decrease in the flow of new digitized material. By the same token, the Russian book market was
finally starting to offer works aimed at the popular mainstream, and was flooded by cheap romances,
astrology, crime fiction, and other genres. Such texts started to appear in, and would soon flood, lib.ru.
Many contributors, including Moshkov, were concerned that such ephemera would dilute the original
library. And so they began to disaggregate the collection. Self-published literature, “user generated
content,” and fan fiction were separated into the aptly named samizdat.lib.ru, which housed original texts
submitted by readers. Popular fiction, “low-brow literature,” was copied from the relevant subsections
of lib.ru and split off. Sites specializing in those genres quickly formed their own ecosystem. [L], the first
of its kind, now charges a monthly fee to provide access to the collection. The [f] community split off
from [L] the same way that [L] split off from lib.ru, to provide free and unrestricted access to a
fundamentally similar collection. Finally, some in the community felt the need to focus their efforts on a
separate collection of scientific works. This became the Kolhoz collection.
The genesis of a million book scientific library
A Kolhoz (Russian: колхо́з) was one of the types of collective farm that emerged in the early Soviet
period. In the early days, it was a self-governing, community-owned collaborative enterprise, with many
of the features of a commons. For the Russian digital librarians, these historical resonances were
intentional.
The kolhoz group was initially a community that scanned and processed scientific materials: books and,
occasionally, articles. The ethos was free sharing. Academic institutes in Russia were in dire need of
scientific texts; they xeroxed and scanned whatever they could. Usually, the files were then stored on the
institute's ftp site and could be downloaded freely. There were at least three major research institutes
that did this back in the early 2000s, unconnected to each other in any way and located in various faraway parts
of Russia. Most of these scans were appropriated by the kolhoz group and processed into DJVU.4
The sources of files for kolhoz were, initially, several collections from academic institutes (downloaded
whenever the ftp servers were open for anonymous access; in one case, from one of the institutes of the
Chinese academy of sciences, but mostly from Russian academic institutes). At that time (around 2002),
there were also several commercialized collections of scanned books on sale in Russia (mostly, these were
college-level textbooks on math and physics); these files were also all copied to kolhoz and processed into
DJVU. The focus was on collecting the most important science textbooks and monographs of all time, in
all fields of natural science.
There was never any commercial support. The kolhoz group never had a web site with a database, like
most projects today. They had an ftp server with files, and access to the ftp was given by private message on a
forum. This ftp server was privately supported by one of the members (who was an academic researcher, like
most kolhoz members). The files were distributed directly by burning them onto writable DVDs and giving the
4
DJVU is a file format that revolutionized online book distribution the way mp3 revolutionized online music
distribution. For books that contain graphs, images and mathematical formulae, scanning is the only digitization
option. However, the large number of resulting image files is difficult to handle. The DJVU file format allows the
images of scanned book pages to be stored in the smallest possible file size, which makes it the perfect medium for
the distribution of scanned e-books.
DVDs away. Later, the ftp access was closed to the public, and only a temporary file-swapping ftp server
remained. Today the kolhoz DVD releases are mostly spread via torrents.”5
Kolhoz amassed around fifty thousand documents; the mexmat collection of the Moscow State
University Department of Mechanics and Mathematics (Moshkov’s alma mater) was around the same
size; the “world of books” collection (mirknig) had around thirty thousand files; and there were around a
dozen other smaller archives, each with approximately ten thousand files in their respective collections.
The Kolhoz group dominated the science-minded ebook community in Russia well into the late 2000s.
Kolhoz, however, suffered from the same problems as the early Fidonet-based text collections. Since it
was distributed on DVDs, via ftp servers and on torrents, it was hard to search, it lacked a proper catalog
and it was prone to fragmentation. Parallel solutions soon emerged: around 2006-7, an existing book site
called Gigapedia copied the English books from Kolhoz, set up a catalog, and soon became the most
influential pirate library on the English-speaking internet.
Similar cataloguing efforts soon emerged elsewhere. In 2007, someone on rutracker.ru, a Russian BBS
focusing on file sharing, posted torrent links to 91 DVDs containing science and technology titles
aggregated from various other Russian sources, including Kolhoz. This massive collection had no
categorization or particular order. But it soon attracted an archivist: a user of the forum started the
laborious task of organizing the texts into a usable, searchable format—first filtering duplicates and
organizing the existing metadata into an Excel spreadsheet, and later moving to a more open, web-based database operating under the name Aleph.
Aleph inherited more than just books from Kolhoz and Moshkov’s lib.ru. It inherited their elitism with
regard to canonical texts, and their understanding of librarianship as a community effort. Like the earlier
sites, Aleph’s collections are complemented by a stream of user submissions. Like the other sites, the
number of submissions grew rapidly as the site’s visibility, reputation and trustworthiness were
established, and like the others it later fell, as more and more of what was perceived as canonical
literature was uploaded:
“The number of mankind’s useful books is about what we already have. So growth is defined by newly
scanned or issued books. Also, the quality of the collection is represented not by the number of books but
by the amount of knowledge it contains. [ALEPH] does not need to grow more and I am not the only one
among us who thinks so. […]
We have absolutely no idea who sends books in. It is practically impossible to know, because there are a
million books. We gather huge collections which eliminate any traces of the original uploaders.
My expectation is that new arrivals will dry up. Not completely, as I described above, some books will
always be scanned or rescanned (it nowadays happens quite surprisingly often) and the overall process of
digitization cannot and should not be stopped. It is also hard to say when the slowdown will occur: I
expected it about a year ago, but then library.nu got shut down and things changed dramatically in many
respects. Now we are "in charge" (we had been the largest anyways, just now everyone thinks we are in
5
Anonymous source #1
charge) and there has been a temporary rise in the book inflow. At the moment, relatively small or
previously unseen collections are being integrated into [ALEPH]. Perhaps in a year it will saturate.
However, intuition is not a good guide. There are dynamic processes responsible for eBook availability. If
publishers massively digitize old books, they'll obviously be harvested and that will change the whole
picture.” 6
Aleph’s ambitions to create a universal library are limited, at least in terms of scope. It does not aim to
have everything, nor to accept just anything. What it wants is what is thought to be relevant by the community,
as measured by the act of actively digitizing and sharing books. But it has developed a very interesting strategy
to establish a library that is universal in terms of its reach. The administrators of Aleph understand that
Gigapedia’s downfall was due to its visibility, and they wish to avoid that trap:
“Well, our policy, which I control as strictly as I can, is to avoid fame. Gigapedia's policy was to gain as
much fame as possible. Books should be available to you, if you need them. But let the rest of the world
stay in its equilibrium. We are taking great care to hide ourselves and it pays off.”7
They have solved the dilemma of providing access without jeopardizing their mission by open sourcing
the collection and thus allowing others to create widely publicized services that interface with the
public. They let others run the risk of getting famous.
Mirrors and communities
Aleph serves as a source archive for around a half-dozen freely accessible pirate libraries on the net. The
catalog database is downloadable, the content is downloadable, even the server code is downloadable.
No passwords are required to download and there are no gatekeepers. There is no obstacle to setting
up a similar library with a wider catalog, an improved user interface and better services, a
different audience or, in fact, a different business model.
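To make the openness of this arrangement concrete, the following is a minimal, purely illustrative sketch of how a mirror operator might bootstrap a copy from such openly published material. It assumes a hypothetical catalog dump in SQLite with a `books(id, md5, extension)` table and a plain HTTP file store addressed by MD5 hash; none of these names, URLs or schema details are documented in this paper, and the real services may work quite differently.

```python
# Hypothetical mirror bootstrap: download an openly published catalog dump,
# then fetch and checksum a few of the files it lists. All URLs, paths and
# the database schema below are invented for illustration only.
import hashlib
import sqlite3
import urllib.request
from pathlib import Path

CATALOG_DUMP_URL = "https://example.org/dumps/catalog.sqlite"  # assumed location
CONTENT_BASE_URL = "https://example.org/files/"                # assumed file store
MIRROR_ROOT = Path("mirror")


def fetch_catalog(dest: Path) -> sqlite3.Connection:
    """Download the catalog dump once and open it as a local SQLite database."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    if not dest.exists():
        urllib.request.urlretrieve(CATALOG_DUMP_URL, dest)
    return sqlite3.connect(dest)


def mirror_books(db: sqlite3.Connection, limit: int = 10) -> None:
    """Fetch a handful of files listed in the catalog and verify their checksums."""
    rows = db.execute(
        "SELECT id, md5, extension FROM books LIMIT ?", (limit,)
    ).fetchall()
    for book_id, md5, ext in rows:
        target = MIRROR_ROOT / "files" / f"{md5}.{ext}"
        target.parent.mkdir(parents=True, exist_ok=True)
        if target.exists():
            continue
        urllib.request.urlretrieve(f"{CONTENT_BASE_URL}{md5}.{ext}", target)
        # Verify the download against the MD5 recorded in the catalog.
        digest = hashlib.md5(target.read_bytes()).hexdigest()
        if digest != md5:
            target.unlink()  # discard corrupted files rather than serve them
            print(f"checksum mismatch for book {book_id}")


if __name__ == "__main__":
    connection = fetch_catalog(MIRROR_ROOT / "catalog.sqlite")
    mirror_books(connection)
```

The point of the sketch is not the code itself but the design choice it reflects: because catalog, content and code are all published, anyone can reproduce the library without asking the core team for permission, which is exactly what makes the collection resilient to the disappearance of any single site.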
This arrangement creates a two-layered community. The core group of Aleph admins maintains the
current service, while a loose and ever-changing network of ‘mirror sites’ builds on the Aleph
infrastructure.
“The unspoken agreement is that the mirrors support our ideas. Otherwise we simply do not interact with
them. If the mirrors do support this, they appear in the discussions, on the Web etc. in a positive context.
This is again about building a reputation: if they are reliable, we help with what we can, otherwise they
should prove the World they are good on their own. We do not request anything from them. They are free
to do anything they like. But if they do what we do not agree with, it'll be taken into account in future
relations. If you think for a while, there is no other democratic way of regulation: everyone expresses his
own views and if they conform with ours, we support them. If the ideology does not match, it breaks
down.”8
The core Aleph team claims exclusive control over only two critical resources: the BBS that is the home of
the community, and the book-uploading interface. That claim is, however, not entirely accurate. For the
time being, the academically minded e-book community indeed gathers on the BBS managed by Aleph, and
though there is little incentive to move on, nothing technically prevents alternatives from springing
up. As for the centralization of the book collection: many of the mirrors have their own upload pages
where one can contribute to a mirror’s collection, and it is not clear how, or whether, books that land at
one of the mirrors find their way back to the central database. Aleph also offers a desktop library
management tool, which enables dedicated librarians to see the latest Aleph database on their desktop
and integrate their local collections with the central database via this application. Nevertheless, it seems
that nothing really stands in the way of the fragmentation of the collection, apart from the willingness of
uploaders to contribute directly to Aleph rather than to one of its mirrors (or other sites).
Funding for Aleph comes from the administrators’ personal resources as well as occasional donations
when there is a need to buy or rent equipment or services:
“[W]e've been asking and getting support for this purpose for years. […] All our mirrors are supported
primarily from private pockets and inefficient donation schemes: they bring nothing unless a whole
campaign is arranged. I asked the community for donations 3 or 4 times, for a specific purpose only and
with all the budget spoken for. And after getting the requested amount of money we shut down the
donations.”9
Mirrors, however, do not need to be non-commercial to enjoy the support of the core Aleph community;
they just have to provide free access. Ad-supported business models that do not charge for individual
access are still acceptable to the community, but there has been serious fallout with another site, which
used the Aleph stock to seed its own library but decided to follow a “collaborative piracy” business
approach.
“To make it utmost clear: we collaborate with anyone who shares the ideology of free knowledge
distribution. No conditions. [But] we can't suddenly start supporting projects that earn money. […]
Moreover, we've been tricked by commercial projects in the past when they used the support of our
community for their own benefit.”10
The site in question, [e], is based on a simple idea: if a user cannot find a book in its collection, the
administrators offer to purchase a digital or print copy, rip it, and sell it to the user for a fraction of the
original price—typically under $1. Payments are made in Amazon gift cards, which make the
purchases easy but the de-anonymization of users difficult. [e] recoups its investment, in principle,
through resale. While clearly illegal, the logic is not that different from that of private subscription
libraries, which purchase a resource and distribute the costs and benefits among club members.
9
BBS comment posted on Jan 15, 2013
10
BBS comment posted on Jan 15, 2013
Although from the rights holders’ perspective there is little difference between the two approaches,
many participants in the free access community draw a sharp line between them, viewing the sales
model as a violation of community norms.
“[e] is a scam. They were banned in our forum. Yes, most of the books in [e] came from [ALEPH], because
[ALEPH] is open, but we have nothing to do with them... If you wish to buy a book, do it from legal
sources. Otherwise it must be free.[…]
What [e] wants:
- make money on ebook downloads, no matter what kind of ebooks.
- get books from all the easy sources - spend as little effort as possible on books - maximize profit.
- no need to build a community, no need to improve quality, no need to correct any errors - just put all
files in a big pile - maximize profit.
- files are kept in secret, never given away, there is no listing of files, there is no information about what
books are really there or what is being done.
There are very few similarities in common between [e] and [ALEPH], and these similarities are too
superficial to serve as a common ground for communication. […]
They run an illegal business, making a profit.”11
Aleph administrators describe a set of values that differentiates the possible site models. They prioritize the
curatorial mission and the provision of long-term free access to the collection, with all the costs such a
position implies: open sourcing the collection, ignoring takedown requests, keeping a low profile,
refraining from commercial activities and, as a result, operating on a reduced budget. [e] prioritizes the
on-demand expansion of its catalogue, but that implies a commercial operation, a larger budget and the
associated high legal risk. Sites carrying Aleph’s catalogue prioritize public visibility and carry ads to cover
costs, but respond to takedown requests to avoid as much trouble as they can. From the perspective of
expanding access, these are not easy or straightforward tradeoffs. In Aleph’s case, the strong
commitment to the mission of providing free access comes with significant sacrifices, the most important
of which is relinquishing control over its most valuable asset: its collection of 1.2 million scientific books.
But the administrators believe that these costs are justified by the promise that, this way, the fate of free access is not
tied to the fate of Aleph.
The fact that piratical file sharing communities are willing to make substantial sacrifices (in terms of self-restraint) to ensure their long-term survival has been documented in a number of different cases (Bodó,
2013). Aleph is unique, however, in its radical open-source approach. No other piratical community has
relinquished control over itself so completely. This approach is rooted in the way it regards the legal
status of its subject matter: scholarly publications. While norms of openness in the
field of scientific knowledge production were first formed in the Enlightenment period, Aleph’s
11
BBS comments posted on Jul 02, 2013, and Aug 25, 2013
copynorms are as much shaped by the specificities of the post-Soviet era as by the age-old realization that in
science we see further when we are allowed to stand “on the shoulders of giants”.
Copyright and copynorms around Russian pirate libraries
The struggle to re-establish rightsholders’ control over digitized copyrighted works has defined the
copyright policy arena since Napster emerged in 1999. Russia brought a unique history to this conflict. In
Russia, digital libraries and the copynorms around them emerged in a period of double transformation: the post-Soviet copyright
system had to adopt global norms, while the global norms struggled to adapt to the emergence of digital
copying.
The first post-Soviet decade produced new copyright laws that conformed with some of the international
norms advocated by Western rightsholders, but offered little legal clarity or enforceability (Sezneva &
Karaganis, 2011). Under such conditions, informally negotiated copynorms stepped in to fill the void left by
non-existent, unreasonable, or unenforceable laws. The pirate libraries in the RuNet are regulated as much by such
norms as by the actual laws themselves.
During most of the 1990s, user-driven digitization and archiving was legal or, to be more exact, wasn’t
illegal. The first Russian copyright law, enacted in 1993, did not cover “internet rights” until a 2006
amendment (Budylin & Osipova, 2007; Elst, 2005, p. 425). As a result, many argued (including the
Moscow prosecutor’s office) that the distribution of copyrighted works via the internet was not
copyright infringement. Authors and publishers who saw their works appear in digital form and
circulate via CD-ROMs and the internet had to rely on informal norms, still in development, to establish
control over their texts vis-à-vis enthusiastic collectors and for-profit entrepreneurs.
The HARRYFAN CD was one of the early examples of a digital text collection in circulation before internet
access was widespread. The CD contained around ten thousand texts, mostly Russian science fiction. It
was compiled in 1997 by Igor Zagumenov, a book enthusiast, from texts that circulated on the Holy
Spirit BBS. The CD was a non-profit project, planned to be printed and sold in around 1,000 copies.
Zagumenov did get in touch with some of the authors and publishers and got permission to release
some of their texts, but the CD also included many other works that had been uploaded to the BBS without
authorization. The CD included the following copyright notice, alongside the name and contact details of
Zagumenov and those who granted permission:
Texts on this CD are distributed in electronic format with the consent of the copyright holders or their
literary agent. The disk is aimed at authors, editors, translators and fans of SF & F as a compact reference
and information library. Copying or reproduction of this disc is not allowed. For the commercial use of
texts please refer directly to the copyright owners at the following addresses.
The authors whose texts and unpublished manuscripts appeared in the collection without authorization
started to complain to those whose contact details were in the copyright notice. Some complained
about the material damage the collection may have caused to them, but most complaints focused on
moral rights: unauthorized publication of a manuscript, the mutilation of published works, lack of
attribution, or the removal of original copyright and contact notices. Some authors had no problem
appearing in non-commercially distributed collections but objected to the fact that the CDs were sold
(and later overproduced in spite of Zagumenov’s intentions).
The debate, which took place in the book-related fora of Fidonet, raised some important points.
Participants again drew a significant distinction between free access, provided first by Fidonet (and later
by lib.ru, which integrated some parts of the collection), and what was perceived as Zagumenov’s for-profit enterprise—despite the fact that the price of the CD only covered printing costs. The debate also
drew authors’ and publishers’ attention to the digital book communities’ actions, which many saw as
beneficial as long as they respected the wishes of the authors. Some authors did not want to appear online
at all; others wanted only their published works to be circulated.
Lib.ru, of course, integrated parts of the HARRYFAN CD into its collection. Moshkov’s policy towards
authors’ rights was to ask for permission if he could contact the author or publisher. He also honored
takedown requests sent to him. In 1999 he wrote on copyright issues as follows:
The author’s interests must be protected on the Internet: the opportunity to find the original copy, the
right of attribution, protection from distorting the work. Anyone who wants to protect his/her rights,
should be ready to address these problems, ranging from the ability to identify the offending party, to the
possibility of proving infringement.[…]
Meanwhile, it has become a pressing question how to protect author-netizens' rights regarding their
work published on the Internet. It is known that there are a number of periodicals that reprint material
from the Internet without the permission of the author, without payment of a fee, without prior
arrangement. Such offenders need to be shamed via public outreach. The "Wall of shame" website is one
of the positive examples of effective instruments established by the networked public to protect their
rights. It manages to do the job without bringing legal action - polite warnings, an indication of potential
trouble and shaming of the infringer.
Do we need any laws for digital libraries? Probably we do, but until then we have to do without. Yes, of
course, it would be nice to have their status established as “cultural objects” and have the same rights as
a "real library" to collect information, but that might be in the distant future. It would also be nice to
have the e-library "legal deposits" of publications in electronic form, but when even Leninka [the Russian
State Library] cannot always afford that, what we really need are enthusiastic networkers. […]
The policy of the library is to take everything they give, otherwise they cease to send books. It is also to
listen to the authors and strictly comply with their requirements. And it is to grow and prosper. […] I
simply want the books to find their readers because I am afraid to live in a world where no one reads
books. This is already the case in America, and it is speeding up with us. I don’t just want to derail this
process, I would like to turn it around.”
Moshkov played a crucial role in consolidating copynorms in the Russian digital publishing domain. His
reputation and place in the Russian literary domain are marked by a number of prizes12 and by the library’s
continued existence. This place was secured by a number of closely intertwined factors:
- Framing and anchoring the digitization and distribution practice in the library tradition.
- The non-profit status of the enterprise.
- Respecting the wishes of the rights holders even if he was not legally obliged to do so.
- Maintaining active communication with the different stakeholders in the community, including authors and readers.
- Responding to a clear gap in affordable, legal access.
- Conservatism with regard to the book, anchored in the argument that digital texts are not substitutes for printed matter.
Many other digital libraries tried to follow Moshkov’s formula, but the times were changing. Internet and
computer access left the sub-cultural niches and became mainstream; commercialization became a
viable option and thus an issue for both the community and rightsholders; and the legal environment
was about to change.
Formalization of the IP regime in the 2000s
As soon as the 1993 copyright law passed, the US resumed pressure on the Russian government for
further reform. Throughout the period—and indeed to the present day—US Trade Representative
Special 301 reports cited inadequate protections and lack of enforcement of copyright. Russia’s plans to
join the WTO, over which the US had effective veto power, also became leverage to bring the Russian
copyright regime into compliance with US norms.
Book piracy was regularly mentioned in Special 301 reports in the 2000s, but the details, alleged losses,
and analysis changed little from year to year. The estimated losses of $40 million USD per year throughout this
period were dwarfed by claims from the studios and software vendors, and clearly were not among the
top priorities of the USTR. For most of the decade, the electronic availability of bestsellers and academic
textbooks was seen in the context of print substitution, rather than damage to the non-existent
electronic market. And though there is little direct indication, the Special 301 reports name sites which
(unlike lib.ru) were serving audiences beyond the RuNet, indicating that the focus of enforcement was
not to protect US interests in the Russian market, but to prevent sites based in Russia from catering to
demand in the high-value Western European and US markets.
A 1998 amendment to the 1993 copyright law extended the legal framework to encompass digital rights,
though in a fashion that continued to produce controversy. After 1998, digital services had to license
content from collecting societies, but those societies needed no permission from rightsholders provided
they paid royalties. The result was a proliferation of collective management organizations competing to
license the material to digital services (Sezneva & Karaganis, 2011), which under this arrangement
12
ROTOR, the International Union of Internet Professionals in Russia, voted lib.ru the “literary site of the year” in
1999, 2001 and 2003, “electronic library of the year” in 2004, 2006, 2008, 2009 and 2010, “programmer of the year”
in 1999, and “man of the year” in 2004 and 2005.
were compliant with Russian law but were regarded as illegal by Western rights holders, who claimed
that the Russian collecting societies did not represent them.
The best known dispute from this time concerned the legality of AllofMP3.com, a site that
sold music from western record labels at prices far below those of iTunes or other officially licensed
vendors. AllofMP3.com claimed that it was licensed by ROMS, the Russian Society for Multimedia and
Internet (Российское общество по мультимедиа и цифровым сетям (НП РОМС)), but despite that it
became the focal point of US (and, behind them, major label) pressure, leading to an unsuccessful
criminal prosecution of the site owner and the eventual closure of the site in 2007. Although lib.ru had
some direct agreements with authors, it also licensed much of its collection from ROMS, and thus was in
the same legal situation as AllofMP3.com.
Lib.ru avoided the attention of foreign rightsholders and Russian state pressure, and even benefited from
state support during the period, receiving a $30,000 grant from the Federal Agency for Press and
Mass Communications to digitize the most important works of the 1930s. But the chaotic licensing
environment that governed its legal status also came back to haunt it. In 2005, a lawsuit was
brought against Moshkov by KM Online (KMO), an online vendor that sold digital texts for a small fee.
Although the KMO collection—like every other collection—had been assembled from a wide range of
sources on the Internet, KMO claimed to pay a 20% royalty on its income to authors. In 2004 KMO
requested that lib.ru take down works by several authors with whom (or with whose heirs) KMO claimed
to be in exclusive contract to distribute their texts online. KMO’s claims turned out to be only partly true.
KMO had arranged contracts with a number of the heirs to classics of the Soviet period, who hoped to
benefit from an obscure provision in the 1993 Russian copyright law that granted copyrights to the heirs
of politically persecuted and later rehabilitated Soviet-era authors. Moshkov, in turn, claimed that he
had written or oral agreements with many of the same authors and heirs, in addition to his agreement
with ROMS.
The lawsuit was a true public event. It generated thousands of news items both online and in the
mainstream press. Authors, members of the publishing industry, legal professionals, librarians and internet
professionals publicly supported Moshkov, while KMO was seen as a rogue operator that would lie to
make easy money on freely available digital resources.
Eventually, the court ruled that KMO indeed had one exclusive contract with Eduard Gevorgyan, and that
the publication of his texts by Moshkov infringed the moral (but not the economic) rights of the author.
Moshkov was ordered to pay 3000 Rubles (approximately $100) in compensation.
The lawsuit was a sign of a slow but significant transformation in the Russian print ecosystem. The idea
of a viable market for electronic books began to find a foothold. Electronic versions of texts began to be
regarded as potential substitutes for the printed versions, not as advertisements for them or supplements
to them. More and more commercial services emerged which regarded the well-entrenched free digital
libraries as competitors. As Russia continued to bring its laws into closer conformance with WTO
requirements, ahead of its admission in 2012, Western rightsholders gained enough power to
demand enforcement against RuNet pirate sites. The kinds of selective enforcement for political or
business purposes, which had marked the Russian IP regime throughout the decade (Sezneva &
Karaganis, 2011), slowly gave way to more uniform enforcement.
Closure of the Legal Regime
The legal, economic, and cultural conditions under which Aleph and its mirrors operate today are very
different from those of two decades earlier. The major legal loopholes are now closed, though Russian
authorities have shown little inclination to pursue Aleph so far:
I can't say whether it's the Russian copyright enforcement or the Western one that's most dangerous for
Aleph; I'd say that Russian enforcement is still likely to tolerate most of the things that Western
publishers won't allow. For example, lib.ru and [L] and other unofficial Russian e-libraries are tolerated
even though far from compliant with the law. These kinds of e-libraries could not survive at all in western
countries.13
Western publishers have been slow to join record, film, and software companies in their aggressive
online enforcement campaigns, and academic publishers even more so. But such efforts are slowly
increasing, as the market for digital texts grows and as publishers benefit from the enforcement
precedents set or won by the more aggressive rightsholder groups. The domain name of [os], one of the
sites mirroring the Aleph collection, was seized, apparently due to legal action taken by a US
rightsholder, and the site also started to respond to DMCA notices, removing links to books reported to be
infringing. Aleph responds to this with a number of tactical moves:
We want books to be available, but only for those who need them. We do not want [ALEPH] to be visible.
If one knows where to get books, they are here for him or her. In this way we stay relatively invisible (in
search engines, e.g.), but all the relevant communities in the academy know about us. Actually, if you
question people at universities, the percentage of them is quite low. But what's important is that the
news about [ALEPH] is spread mostly by face-to-face communication, where most of the unnecessary
people do not know about it. (Unnecessary are those who aim profit)14
The policy of invisibility is radically different from Moshkov’s policy of maximum visibility. Aleph hopes
that it can recede into the shadows, where it will be protected by the omertà of academics who share the
sharing ethos:
In Russian academia, [Aleph] is tacitly or actively supported. There are people that do not want to be
included, but it is hard to say who they are in most cases. Since there are DMCA complaints, of course
there are people who do not want stuff to appear here. But in our experience the complainers are only
from the non-scientific fellows. […] I haven't seen a single complaint from the authors who should
constitute our major problem: professors etc. No, they don't complain. Who complains are either of such
type I have mentioned or the ever-hungry publishers.15
The protection the academic community has to offer may not be enough to fend off the publishers’
enforcement actions. One option for the Aleph site is to recede further into the darknets and hide behind the veil of privacy
technologies: the first mirror on I2P, an anonymizing network designed
to hide the whereabouts and identity of web services, is already operational. But
[i]f people are physically served court invitations, they will have to close the site. The idea is, however,
that the entire collection is copied throughout the world many times over, the database is open, the code
for the site is open, so other people can continue.16
On methodology
We tried to reconstruct the story behind Aleph by conducting interviews and browsing through the BBS
of the community. Access to the site and to community members was given under a strict condition of
anonymity. We have thus removed any reference to the names and URLs of the services in question.
At one point we shared an early draft of this paper with interested members and asked for their
feedback. Beyond access and feedback, community members helped with the writing of this article by
providing translations of some Russian originals and by reviewing the translations made by the
author. In return, we made a financial contribution of 100 USD to the community.
We reproduced forum entries without any edits to the language; interviews conducted via IM services,
however, were edited to reflect basic writing standards.
16
Anonymous source #1
References
Abelson, H., Diamond, P. A., Grosso, A., & Pfeiffer, D. W. (2013). Report to the President: MIT and the
Prosecution of Aaron Swartz. Cambridge, MA. Retrieved from http://swartzreport.mit.edu/docs/report-to-the-president.pdf
Alekseeva, L., Pearce, C., & Glad, J. (1985). Soviet dissent: Contemporary movements for national,
religious, and human rights. Wesleyan University Press.
Bodó, B. (2013). Set the fox to watch the geese: voluntary IP regimes in piratical file-sharing
communities. In M. Fredriksson & J. Arvanitakis (Eds.), Piracy: Leakages from Modernity.
Sacramento, CA: Litwin Books.
Borges, J. L. (1998). The library of Babel. In Collected fictions. New York: Penguin.
Bowers, S. L. (2006). Privacy and Library Records. The Journal of Academic Librarianship, 32(4), 377–383.
doi:http://dx.doi.org/10.1016/j.acalib.2006.03.005
Budylin, S., & Osipova, Y. (2007). Is AllOfMP3 Legal? Non-Contractual Licensing Under Russian Copyright
Law. Journal Of High Technology Law, 7(1).
Bush, V. (1945). As We May Think. Atlantic Monthly.
Dewhirst, M., & Farrell, R. (Eds.). (1973). The Soviet Censorship. Metuchen, NJ: The Scarecrow Press.
Elst, M. (2005). Copyright, freedom of speech, and cultural policy in the Russian Federation.
Leiden/Boston: Martinus Nijhoff.
Ermolaev, H. (1997). Censorship in Soviet Literature: 1917-1991. Rowman & Littlefield.
Foerstel, H. N. (1991). Surveillance in the stacks: The FBI’s library awareness program. New York:
Greenwood Press.
Friedberg, M., Watanabe, M., & Nakamoto, N. (1984). The Soviet Book Market: Supply and Demand.
Acta Slavica Iaponica, 2, 177–192. Retrieved from
http://eprints.lib.hokudai.ac.jp/dspace/bitstream/2115/7941/1/KJ00000034083.pdf
Interview with Dusan Barok. (2013). Neural, 10–11.
Interview with Marcell Mars. (2013). Neural, 6–8.
Komaromi, A. (2004). The Material Existence of Soviet Samizdat. Slavic Review, 63(3), 597–618.
doi:10.2307/1520346
Lessig, L. (2013). Aaron’s Laws - Law and Justice in a Digital Age. Cambridge, MA: Harvard Law School.
Retrieved from http://www.youtube.com/watch?v=9HAw1i4gOU4
Levin, M. B. (1983). Soviet International Copyright: Dream or Nightmare. Journal of the Copyright Society
of the U.S.A., 31, 127.
Liang, L. (2012). Shadow Libraries. e-flux. Retrieved from http://www.e-flux.com/journal/shadow-libraries/
Newcity, M. A. (1978). Copyright law in the Soviet Union. Praeger.
Newcity, M. A. (1980). The Universal Copyright Convention as an Instrument of Repression: The Soviet
Experiment. In Copyright L. Symp. (Vol. 24, p. 1). HeinOnline.
Patry, W. F. (2009). Moral panics and the copyright wars. New York: Oxford University Press.
Post, R. (1998). Censorship and Silencing: Practices of Cultural Regulation. Getty Research Institute for
the History of Art and the Humanities.
Rieusset-Lemarié, I. (1997). P. Otlet’s mundaneum and the international perspective in the history of
documentation and information science. Journal of the American Society for Information Science,
48(4), 301–309.
Ryzhak, N. (2005). Censorship in the USSR and the Russian State Library. IFLA/FAIFE Satellite meeting:
Documenting censorship – libraries linking past and present, and preparing for the future.
Sezneva, O., & Karaganis, J. (2011). Chapter 4: Russia. In J. Karaganis (Ed.), Media Piracy in Emerging
Economies. New York: Social Science Research Council.
Skilling, H. G. (1989). Samizdat and an Independent Society in Central and Eastern Europe. Palgrave
Macmillan.
Solzhenitsyn, A. I. (1974). The Gulag Archipelago 1918-1956: An Experiment in Literary Investigation,
Parts I-II. Harper & Row.
Stelmach, V. D. (1993). Reading in Russia: findings of the sociology of reading and librarianship section of
the Russian state library. The International Information & Library Review, 25(4), 273–279.
Stelmakh, V. D. (2001). Reading in the Context of Censorship in the Soviet Union. Libraries & Culture,
36(1), 143–151. doi:10.2307/25548897
Suber, P. (2013). Open Access (Vol. 1). Cambridge, MA: The MIT Press.
doi:10.1109/ACCESS.2012.2226094
UHF. (2005). Где-где - на борде! [Where? On the board!]. Хакер [Hacker], 86–90.
Гроер, И. (1926). Авторское право [Copyright]. In Большая Советская Энциклопедия [Great Soviet Encyclopedia]. Retrieved from
http://ru.gse1.wikia.com/wiki/Авторское_право
Bodo
Libraries in the Post-Scarcity Era
2015
Libraries in the Post-Scarcity Era
Balazs Bodo
Abstract
In the digital era, where, thanks to the ubiquity of electronic copies, the book is no longer a scarce
resource, libraries find themselves in an extremely competitive environment. Several different actors are
now in a position to provide low-cost access to knowledge. One of these competitors is the shadow library
- piratical text collections which have now amassed electronic copies of millions of copyrighted works
and provide access to them, usually free of charge, to anyone around the globe. While such shadow
libraries are far from universal, they are able to offer certain services better, to more people and
under more favorable terms than most public or research libraries. This contribution offers insights into
the development and the inner workings of one of the biggest scientific shadow libraries on the internet, in
order to understand what kind of library people create for themselves if they have the means and if they
don’t have to abide by the legal, bureaucratic and economic constraints that libraries usually face. I argue
that one of the many possible futures of the library is hidden in the shadows, and that those who think about the
future of libraries can learn a lot from the book pirates of the 21st century about how users and readers expect
texts in electronic form to be stored, organized and circulated.
“The library is society’s last non-commercial meeting place which the majority of the population uses.”
(Committee on the Public Libraries in the Knowledge Society, 2010)
“With books ready to be shared, meticulously cataloged, everyone is a librarian. When everyone is
librarian, library is everywhere.” – Marcell Mars, www.memoryoftheworld.org
I have spent the last few months in various libraries visiting - a library. I spent countless hours in the
modest or grandiose buildings of the Harvard Libraries, the Boston and Cambridge Public Library
systems, various branches of the Openbare Bibliotheek in Amsterdam, and the libraries of the University of
Amsterdam, with a computer in front of me, on which another library was running: a library which is
perfectly virtual, which has no monumental buildings, no multi-million euro budget, no miles of stacks,
no hundreds of staff, but which has, despite lacking all that apparently makes a library, millions of
literary works and millions of scientific books, all digitized, all available at the click of a mouse for
everyone on earth without any charge, library or university membership. As I was sitting in these
physical spaces where the past seemed to define the present, I wondered where I should look to find
the library of the future: down at my screen or up around me.
The library on my screen was Aleph, one of the biggest of the countless piratical text collections on the
internet. It has more than a million scientific works and another million literary works to offer, all free to
download, without any charge or fee, for anyone on the net. I have spent months among its virtual stacks,
combing through the catalogue, talking to the librarians who maintain the collection, and watching the
library patrons as they used the collection. I kept going back to Aleph both as a user and as a researcher.
As a user, Aleph offered me books that the local libraries around me didn’t, in formats that were more
convenient than print. As a researcher, I was interested in the origins of Aleph, its modus operandi and its
future, and I was curious where the journey on which it has taken book-readers, authors, publishers
and libraries would end.
In this short essay I will introduce some of the findings of a two-year research project conducted on
Aleph. In the project I looked at several things. I reconstructed the pirate library’s genesis in order to
understand the forces that called it to life and shaped its development. I looked at its catalogue to
understand what it has to offer and how that piratical supply of books is related to the legal supply of
books through libraries and online distributors. I also acquired data on its usage, and so was able to
reconstruct some aspects of piratical demand. After a short introduction, in the first part of this essay I
will outline some of the main findings, and in the second part I will situate them in the wider context
of the future of libraries.
Book pirates and shadow librarians
Book piracy has a fascinating history, tightly woven into the history of the printing press (Judge, 1934),
into the history of censorship (Wittmann, 2004), into the history of copyright (Bently, Davis, & Ginsburg,
2010; Bodó, 2011a) and into the history of European civilization (Johns, 2010). Book piracy, whether in the 21st or
the mid-17th century, is an activity that has deep cultural significance, because ultimately it is a story
about how knowledge is circulated beyond and often against the structures of political and economic
power (Bodó, 2011b), and thus it is a story about the changes this unofficial circulation of knowledge
brings.
There are many different types of book pirates. Some just aim for easy money, others pursue highly
ideological goals, but they are invariably powerful harbingers of change. The emergence of black markets,
whether they be of culture, drugs or arms, is always a symptom, a warning sign of a friction between
supply and demand. Increased activity in the grey and black zones of legality marks the emergence of a
demand which legal suppliers are unwilling or unable to serve (Bodó, 2011a). That friction, more often
than not, leads to change. Earlier waves of book piracy foretold fundamental economic, political, societal
or technological shifts (Bodó, 2011b): changes in how the book publishing trade was organized (Judge,
1934; Pollard, 1916, 1920); the emergence of the new, bourgeois reading class (Patterson, 1968; Solly,
1885); the decline of pre-publication censorship (Rose, 1993); the advent of the Reformation and of the
Enlightenment (Darnton, 1982, 2003), or the rapid modernization of more than one nation (Khan &
Sokoloff, 2001; Khan, 2004; Yu, 2000).
The latest wave of piracy has coincided with the digital revolution, which has, in itself, profoundly upset the
economics of cultural production and distribution (Landes & Posner, 2003). However, technology is not
the primary cause of the emergence of cultural black markets like Aleph. The proliferation of computers
and the internet has merely revealed a more fundamental issue, one that has to do with the uneven
distribution of access to knowledge around the globe.
Sometimes book pirates do more than just forecast and react to changes that are independent of them.
Under certain conditions, they themselves can be powerful agents of change (Bodó, 2011b). Their agency
rests on their ability to challenge the status quo and resist cooptation or subjugation. In that respect, digital
pirates seem to be quite resilient (Giblin, 2011; Patry, 2009). They have the technological upper hand, and
so far they have been able to outsmart every copyright enforcement effort (Bodó, forthcoming). As long as
it is not possible to completely eradicate file-sharing technologies, and as long as there is a substantial
difference between what is legally available and what is in demand, cultural black markets will be here to
compete with and outcompete the established and recognized cultural intermediaries. Under this constant
existential threat, business models and institutions are forced to adapt, evolve or die.
After the music and audiovisual industries, now the book industry has to address the issue of piracy.
Piratical book distribution services are now in direct competition with the bookstore on the corner and the
used book stall on the sidewalk; they compete with the Amazons of the world and, like it or not, they
compete with libraries. There is, however, a significant difference between the book and the music
industries. The reluctance of music rights holders to listen to the demands of their customers caused little
damage beyond the markets of recorded music. Music rights holders controlled their own fates, and those
who wanted to experiment with alternative forms of distribution had the chance to do so. But while the
rapid proliferation of book black markets may signal that the book industry suffers from problems similar
to those the music industry faced a decade ago, the actions of book publishers and the policies they pursue have
an impact beyond the market for books and directly affect the domain of libraries.
The fate of libraries is tied to the fate of book markets in more than one way. One connection is structural:
libraries emerged to remedy the scarcity of books. This is true for the pre-print era as well as for the
Gutenberg galaxy. In the era of widespread literacy and highly developed book markets, libraries offer
access to books under terms publishers and booksellers cannot or would not. Libraries, to a large extent,
are defined by how they complement the structure of the book trade. The other connection is legal. The core
activities of the library (namely lending and copying) are governed by the same copyright laws that govern
authors and publishers. Libraries are one of the users in the copyright system, and their existence depends
on the limitations of and exceptions to the exclusive rights of the rights holders. The space that has been
carved out of copyright to enable the existence of libraries has been intensely contested in the era of
postmodern copyright (Samuelson, 2002) and digital technologies. This heavy legal and structural
interdependence with the market means that libraries have only limited control over their own fate in the
digital domain.
Book pirates compete with some of the core services of libraries. And as is usually the case with
innovation that faces no economic or legal constraints, pirate libraries offer, at least for the moment,
significantly better services than most libraries. Pirate libraries offer far more electronic books,
with far fewer restrictions and constraints, to far more people, far more cheaply than anyone else in the library
domain. Libraries are thus directly affected by pirate libraries, and because of their structural
interdependence with book markets, they also have to adjust to how the commercial intermediaries react
to book piracy. Under such conditions libraries cannot simply count on their legacy to ensure their survival.
Book piracy must be taken seriously, not just as a threat, but also as an opportunity to learn how shadow
libraries operate and interact with their users. Pirate libraries are the products of readers (and sometimes
authors), academics and laypeople, all sharing a deep passion for the book, operating in a zone where
there is little to no obstacle to the development of the “ideal” library. As such, pirate libraries can teach
important lessons about what is expected of a library, how book consumption habits evolve, and how
knowledge flows around the globe.
Pirate libraries in the digital age
The collection of texts in digital formats was one of the first activities that computers enabled: the text file
is the native medium of the computer; it is small, and thus easy to store and copy. It is also very easy to
create, and as so many projects have since proved, there are more than enough volunteers willing
to type whole books into the machine. No wonder that electronic libraries and digital text repositories
were among the first “mainstream” applications of computers. Combing through large stacks of matrix-
printer printouts of sci-fi classics downloaded from gopher servers is a shared experience of anyone who
had access to computers and the internet before it was known as the World Wide Web.
Computers thus added fresh momentum to the efforts of realizing the age-old dream of the universal
library (Battles, 2004). Digital technologies offered a breakthrough in many of the issues that previously
posed serious obstacles to text collection: storage, search, preservation, access have all become cheaper
and easier than ever before. On the other hand, a number of key issues remained unresolved: digitization
was a slow and cumbersome process, while the screen proved to be too inconvenient, and the printer too
costly an interface between the text file and the reader. In any case, ultimately it wasn’t these issues that
put a break to the proliferation of digital libraries. Rather, it was the realization, that there are legal limits
to the digitization, storage, distribution of copyrighted works on the digital networks. That realization
soon rendered many text collections in the emerging digital library scene inaccessible.
Legal considerations did not destroy this chaotic, emergent digital librarianship and the collections the ad-hoc, accidental and professional librarians had put together. The text collections were far too valuable to
simply delete from the servers. Instead, what happened to most of these collections was that they
retreated from public view, back into the access-controlled shadows of darknets. Yesterday’s gophers
and anonymous ftp servers turned into closed, membership-only ftp servers, local shared libraries residing
on the intranets of various academic and business institutions, and private archives stored on local hard drives.
The early digital libraries turned into book piracy sites and into the kernels of today’s shadow libraries.
Libraries and other major actors who decided to start large-scale digitization programs soon found out
that if they wanted to avoid costly lawsuits, they had to limit their activities to works in the
public domain. The public domain is riddled with mind-bogglingly complex and unresolved legal
issues, but it is still significantly less complicated to deal with than copyrighted and orphan works.
Legally more innovative (or, as some would say, adventurous) companies, such as Google and Microsoft,
which thought they had sufficient resources to sort out the legal issues, soon had to abandon their programs
or put them on hold until the legal issues were resolved.
There was, however, a large group of disenfranchised readers, library patrons, authors and users who
decided to ignore the legal problems and set out to build the best library that could possibly be built using
digital technologies. Despite rights holders’ increased awareness of digital book
piracy, more and more communities around text collections started to defy the legal constraints and to
operate and use more or less public piratical shadow libraries.
Aleph1
Aleph2 is a meta-library, and currently one of the biggest online piratical text collections on the internet.
The project started around 2008 on a Russian bulletin board devoted to piracy, as an effort to integrate
various free-floating text collections that circulated online, on optical media, on various public and private
ftp servers and on hard drives. Its aim was to consolidate these separate text collections, many of which
had been created in various Russian academic institutions, into a single, unified catalog, to standardize the
technical aspects, to add and correct missing or incorrect metadata, and to offer the resulting catalogue,
computer code and collection of files as an open infrastructure.
From Russia with love
It is no accident that Aleph was born in Russia. In post-Soviet Russia, a unique constellation
of several different factors created the necessary conditions for the digital librarianship movement that
ultimately led to the development of Aleph. A rich literary legacy, the Soviet heritage, the pace with
which various copying technologies penetrated the market, the shortcomings of the legal environment and
the informal norms that stood in for the non-existent digital copyrights all contributed to the emergence of
the biggest piratical library in the history of mankind.
Russia cherishes a rich literary tradition, which suffered and endured extreme economic hardship and
political censorship during the Soviet period (Ermolaev, 1997; Friedberg, Watanabe, & Nakamoto, 1984;
Stelmakh, 2001). The political transformation of the early 1990s liberated authors, publishers, librarians
and readers from much of the political oppression, but it did not solve the economic issues that stood in
the way of a healthy literary market. Disposable income was low, state subsidies were limited, and the dire
economic situation created uncertainty in the book market. The previous decades, however, had taught
authors and readers how to overcome political and economic obstacles to accessing books. During
Soviet times authors, editors and readers operated clandestine samizdat distribution networks, while
informal book black markets, operating in semi-private spheres, made uncensored but hard-to-come-by
books accessible (Stelmakh, 2001). This survivalist attitude and the skills that came with it came in handy
in the post-Soviet turmoil, and were directly transferable to the then emerging digital technologies.
1
I have conducted extensive research on the origins of Aleph, on its catalogue and on its users. The detailed findings, at
the time of writing this contribution, are being prepared for publication. The following section is a brief summary of
those findings and is based upon two forthcoming book chapters on Aleph in a report, edited by Joe Karaganis, on
the role of shadow libraries in the higher education systems of multiple countries.
2
Aleph is a pseudonym chosen to protect the identity of the shadow library in question.
Russia is not the only country with a significant informal media economy of books, but in most other
places it was the photocopy machine that emerged to serve such grey and black book markets. In pre-1990
Russia and in other Eastern European countries access to this technology was limited, and when
photocopiers finally became available, computers were close behind them in terms of accessibility. The
result of the parallel introduction of the photocopier and the computer was that photocopy technology
did not have time to lock in the informal market for texts. In many countries where the photocopy machine
preceded the computer by decades, copy shops still capture the bulk of the informal production and
distribution of textbooks and other learning material. In the Soviet bloc, PCs instantly offered a less costly
and more adaptive technology for copying and distributing texts.
Russian academic and research institutions were the first to have access to computers. They also had to
somehow deal with the frustrating lack of access to up-to-date and affordable Western works to be used in
education and research (Abramitzky & Sin, 2014). This may explain why the first batch of shadow
libraries started in a number of academic and research institutions, such as the Department of Mechanics and
Mathematics (MexMat) at Moscow State University. The first digital librarians in Russia were
mathematicians, computer scientists and physicists working in those institutions.
As PCs and internet access slowly penetrated Russian society, an extremely lively digital librarianship
movement emerged, mostly fuelled by enthusiastic readers, book fans and often authors, who spared no
effort to make their favorite books available on FIDOnet, a popular BBS system in Russia. One of the
central figures in these tumultuous years, when typed-in books appeared online by the thousands, was
Maxim Moshkov, a computer scientist, alumnus of the MexMat, and an avid collector of literary works.
His digital library, lib.ru, was at first mostly a private collection of literary texts, but it soon evolved into the
number one text repository, where everyone deposited the latest digital copy of a newly digitized
book (Мошков, 1999). Eventually the library grew so big that it had to be broken up. Today it hosts only
the Russian literary classics. User-generated texts, fan fiction and amateur production were spun off into the
aptly named samizdat.lib.ru collection; low-brow popular fiction, astrology and cheap romance found their
way into separate collections; and so did the collection of academic/scientific books, which started an
independent life under the name of Kolkhoz. Kolkhoz, which borrowed its name from the commons-based
agricultural cooperatives of the early Soviet era, was both a collection of scientific texts and a
community of amateur librarians who curated, managed and expanded the collection.
Moshkov and his library introduced several important norms into the bottom-up, decentralized, often
anarchic digital library movement that swept through the Russian internet in the late 1990s and early 2000s.
First, lib.ru provided the technological blueprint for any future digital library. But more importantly,
Moshkov's way of handling the texts, his way of responding to the claims, requests, questions and complaints
of authors and publishers, paved the way for the development of copynorms (Schultz, 2007) that continue
to define the Russian digital library scene to this day. Moshkov was instrumental in creating an
enabling environment for digital librarianship while respecting the claims of authors, at a time
when the formal copyright framework and the enforcement environment were both unable and unwilling to
protect works of authorship (Elst, 2005; Sezneva, 2012).
Guerilla Open Access
In the late 2000s, when Aleph started to merge the Kolkhoz collection with other free-floating text collections, two other notable events took place. In 2008 Aaron Swartz penned
his Guerilla Open Access Manifesto (Swartz, 2008), in which he called for the liberation and sharing of
scientific knowledge. Swartz forcefully argued that scientific knowledge, the production of which is
mostly funded by the public and by the voluntary labor of academics, cannot be locked up behind
corporate paywalls set up by publishers. He framed the unauthorized copying and transfer of scientific
works from closed access text repositories to public archives as a moral act, and by doing so, he created
an ideological framework which was more radical and promised to be more effective than either the
creative commons (Lessig, 2004) or the open access (Suber, 2013) movements that tried to address the
access to knowledge issues in a more copyright friendly manner. During interviews, the administrators of
Aleph used the very same arguments to justify the raison d'être of their piratical library. While it seems
that Aleph is the practical realization of Swartz’s ideas, it is hard to tell which served as an inspiration for
the other.
It was also around this time that another piratical library, gigapedia/library.nu, started its
operation, focusing mostly on making freely available English-language scientific works (Liang, 2012).
Until its legal troubles and subsequent shutdown in 2012, gigapedia/library.nu was the biggest English-language
piratical scientific library on the internet, amassing several hundred thousand books, ranging from
high-quality print-ready proofs to low-resolution scans possibly prepared by a student or a lecturer.
During 2012 the mostly Russian-language and natural-sciences-focused Aleph absorbed the English-language,
social-sciences-rich gigapedia/library.nu, and with the subsequent shutdown of
gigapedia/library.nu Aleph became the center of the scientific shadow library ecosystem and community.
Aleph by numbers
By adding pre-existing text collections to its catalogue, Aleph was able to grow at an astonishing rate.
Since 2009 Aleph has added, on average, 17,500 books to its collection each month, and as a result, by April
2014 it had more than 1.15 million documents. Nearly two thirds of the collection is in English, one fifth
of the documents are in Russian, while German works form the third largest group with 8.5% of the
collection. The other major European languages, such as French or Spanish, have fewer than 15,000 works
each in the collection.
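As a rough plausibility check, a sketch based only on the figures quoted above (not on Aleph's own data), the reported monthly growth rate and time span are consistent with the reported catalogue size:

```python
# Back-of-the-envelope check of the figures above: ~17,500 books per month
# from January 2009 through April 2014 is roughly consistent with a
# catalogue of ~1.15 million documents; the remainder is covered by the
# pre-existing collections Aleph absorbed along the way.
months = (2014 - 2009) * 12 + 4   # January 2009 .. April 2014 = 64 months
added_books = 17_500 * months
print(months, added_books)        # 64 months -> 1,120,000 books
```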
More than 50,000 publishers have works in the library, but most of the collection was published by
mainstream Western academic publishers. Springer published more than 12% of the works in the
collection, followed by Cambridge University Press, Wiley, Routledge and Oxford University Press,
each with more than 9,000 works in the collection.
Most of the collection is relatively recent, with more than 70% of it published in 1990 or later.
Despite this recency, the electronic availability of the titles in the collection is
limited. While around 80% of the books with an ISBN number registered in the catalogue3 were
available in print, either as new or as second-hand copies, only about one third of the titles were
available in e-book formats. The mean price of the titles still in print was 62 USD, according to data
gathered from Amazon.com.
The number of works accessed through Aleph is as impressive as its catalogue. In the three months
between March and June 2012, on average 24,000 documents were downloaded every day from one of
its half-a-dozen mirrors.4 This means that the number of documents downloaded daily from Aleph is
probably in the 50,000 to 100,000 range. The library's users come from more than 150 different countries. The
biggest users in terms of volume were the Russian Federation, Indonesia, the USA, India, Iran, Egypt, China,
Germany and the UK. Meanwhile, many of the highest per-capita users are Central and Eastern European
countries.
What Aleph is and what it is not
Aleph is an example of the library in the post-scarcity age. It is founded on the idea that books should no
longer be a scarce resource. Aleph set out to remove both sources of scarcity: the natural source of
3
Market availability data is only available for the 40% of books in the Aleph catalogue that had an ISBN number
on file. The titles without a valid ISBN number tend to be older, Russian-language titles, generally with low
expected print and e-book availability.
4
Download data is based on the logs provided by one of the shadow library services which offers the books in
Aleph's catalogue, as well as other works, for free and without any restraints or limitations.
scarcity in physical copies is overcome through distributed digitization; the artificial source of scarcity
created by copyright protection is overcome through infringement. The liberation from both constraints is
necessary to create a truly scarcity-free environment and to release the potential of the library in the post-scarcity age.
Aleph is also an ongoing demonstration of the fact that, under the condition of non-scarcity, the library can
be a decentralized, distributed, commons-based institution created and maintained through peer
production (Benkler, 2006). The message of Aleph is clear: users, left to their own devices, can produce a
library by themselves, for themselves. In fact, users are the library. And when everyone has the means to
digitize, collect, catalogue and share his/her own library, the library is suddenly everywhere. Small
individual and institutional collections are aggregated into Aleph, which, in turn, is constantly fragmented
into smaller, local, individual collections as users download works from the collection. The library is
breathing (Battles, 2004) books in and out, but for the first time this circulation of books is not a zero-sum
game, but a cumulative one: with every cycle the collection grows.
On the other hand, Aleph may have lots of books on offer, but it is clearly neither universal in its
scope, nor does it fulfill all the critical functions of a library. Most importantly, Aleph is disembedded
from the local contexts and communities that usually define the focus of the library. While it relies on the
availability of local digital collections for its growth, it has no means to play an active role in its own
development. The guardians of Aleph can prevent books from entering the collection, but they cannot
pay, ask or force anyone to provide a title if it is missing. Aleph is reliant on the weak copy-protection
technologies of official e-text repositories and the goodwill of individual document submitters when it
comes to the expansion of the collection. This means that the Aleph collection is both fragmented and
biased, and it lacks the necessary safeguards to ensure that it stays either current or relevant.
Aleph, with all its strengths and weaknesses, carries an important lesson for the discussions on the future
of libraries. In the next section I will try to situate these lessons in the wider context of the library in the
post-scarcity age.
The future of the library
There is hardly a week without a blog post, a conference, a workshop or an academic paper discussing the
future of libraries. While existing libraries are buzzing with activity, librarians are well aware that they
need to re-define themselves and their institutions, as the book collections around which libraries were
organized slowly go the way the catalogue has gone: into the digital realm. It would be impossible to give
a faithful summary of all the discussions on the future of libraries in such a short contribution. There are,
however, a few threads to which the story of Aleph may contribute.
Competition
It is very rare to find the words 'libraries' and 'competition' in the same sentence. No wonder: libraries
enjoyed a near-perfect monopoly in their field of activity. Though there may have been many different
local initiatives that provided free access to books, as a specialized institution for doing so the library was
unmatched and unchallenged. This monopoly position has been lost in a remarkably short period of time
due to the internet and the rapid innovations in the legal e-book distribution markets. Textbooks can be
rented, e-books can be lent, and a number of new startups and major sellers offer flat-rate access to huge
collections. Expertise that helps navigate the domains of knowledge is abundant, and there are multiple
authoritative sources of information and meta-information online. The search box of the library catalog is
only one, and not even the most usable, of all the different search boxes one can type a query into.5
Meanwhile there are plenty of physical spaces which offer good coffee, an AC plug, comfortable chairs
and low levels of noise in which to meet, read and study, from local cafés via hacker- and makerspaces to
co-working offices. Many library competitors have access to resources (human, financial, technological and
legal) far beyond the possibilities of even the richest libraries. In addition, publishers control the
copyrights in digital copies, which, absent well-fortified statutory limitations and exceptions, prevents
libraries from keeping up with the changes in user habits and with the competing commercial services.
Libraries definitely feel the pressure. “Libraries’ offers of materials […] compete with many other offers
that aim to attract the attention of the public. […] It is no longer enough just to make a good collection
available to the public.” (Committee on the Public Libraries in the Knowledge Society, 2010) As a
response, libraries have developed different strategies to cope with this challenge. The common thread in
the various strategy documents is that they try to redefine the library as a node in the vast network of
institutions that provide knowledge, enable learning, facilitate cooperation and initiate dialogues. Some of
the strategic plans redefine the library space as an “independent medium to be developed” (Committee on
the Public Libraries in the Knowledge Society, 2010), and advise libraries to transform themselves into
culture and community centers which establish partnerships with citizens, communities and with other
public and private institutions. Some librarians propose even more radical ways of keeping the library
5
ArXiv, SSRN, RePEc, PubMed Central, Google Scholar, Google Books, Amazon, Mendeley, Citavi,
ResearchGate, Goodreads, LibraryThing, Wikipedia, Yahoo Answers, Khan Academy, and specialized Twitter and other
social media accounts are just a few of the available discovery services.
relevant by, for example, advocating more opening hours without staff and hosting more user-governed
activities.
In the research library sphere, the Commission on the Future of the Library, a task force set up by the
University of California, Berkeley, defined the values the university research library will add in the digital
age as “1) Human expertise; 2) Enabling infrastructure; and 3) Preservation and dissemination of
knowledge for future generations” (Commission on the Future of the Library, 2013). This approach is
among the more conservative ones, still relying on the hope that libraries can offer something
unique that no one else is able to provide. Others, working at the Association of Research Libraries, are
closer to their public library counterparts, defining the future role of research libraries as a “convener
of ‘conversations’ for knowledge construction, an inspiring host; a boundless symposium; an incubator;
a 3rd space both physically and virtually; a scaffold for independence of mind; and a sanctuary for
freedom of expression, a global entrepreneurial engine” (Pendleton-Jullian, Lougee, Wilkin, & Hilton,
2014), in other words, as another important, but in no way unique, node in the wider network of
institutions that create and distribute knowledge.
Despite the differences in priorities, all these recommendations carry the same basic message. The unique
position of libraries at the center of a book-based knowledge economy, at the top of the paper-bound
knowledge hierarchy, is about to be lost. Libraries are losing their monopoly on giving low-cost, low-restriction
access to books which are scarce by nature, and they are losing their privileged and powerful
position as the guardians of, and guides to, the knowledge stored in the stacks. If they want to survive, they
need to find their role and position in a network of institutions in which everyone else is engaged in
activities that overlap with the historic functions of the library. Just like the books themselves, the power
that came from privileged access to books is in part dispersed among the countless nodes in the
knowledge and learning networks, and in part is being captured by those who control the rights to
digitize and distribute books in the digital era.
One of the main reasons why libraries are trying to redefine themselves as providers of ancillary services
is that the lack of digital lending rights prevents them from competing on their own traditional home
turf: giving free access to knowledge. The traditional legal limitations and exceptions to copyright that
enabled libraries to fulfill their role in the analogue world do not apply in the digital realm. In the
European Union, the Infosoc Directive (“Directive 2001/29/EC on the harmonisation of certain aspects of
copyright and related rights in the information society,” 2001) allows libraries to create digital copies
for preservation, indexing and similar purposes, and allows for the display of digital copies on their
premises for research and personal study (Triaille et al., 2013). While in theory these rights provide for
the core library services in the digital domain, their practical usefulness is rather limited, as off-premises
e-lending of copyrighted works is in most cases6 only possible through individual license agreements with
publishers.
Under such circumstances libraries complain that they cannot fulfill their public interest mission in the
digital era. What libraries are allowed to do under current limitations and exceptions is
seen as inadequate for what is expected of them. But to do more requires the appropriate e-lending
licenses from rights holders. In many cases, however, libraries simply cannot license digital works for
e-lending. In those cases where licensing is possible, they see transaction costs as prohibitively high; they
feel that their bargaining position vis-à-vis rights holders is unbalanced; they do not find that license
terms are adapted to libraries’ policies; and they fear that the licenses give publishers excessive and
undue influence over libraries (Report on the responses to the Public Consultation on the Review of the
EU Copyright Rules, 2013).
What is more, libraries face substantial legal uncertainties even where there are more-or-less well defined
digital library exceptions. In the EU, questions such as whether the analogue lending rights of libraries
extend to e-books, whether an exhaustion of the distribution right is necessary to enjoy the lending
exception, and whether licensing an e-book would exhaust the distribution right are under consideration
by the Court of Justice of the European Union in a Dutch case (Rosati, 2014b). And while in another case
(Case C-117/13 Technische Universität Darmstadt v Eugen Ulmer KG) the CJEU reaffirmed the rights of
European libraries to digitize books in their collection if that is necessary to give access to them in digital
formats on their premises, it also created new uncertainties by stating that libraries may not digitize their
entire collections (Rosati, 2014a).
US libraries face a similar situation, both in terms of the narrowly defined exceptions in which libraries
can operate, and the huge uncertainty regarding the limits of fair use in the digital library context. US
rights holders challenged both Google’s (Authors Guild v Google) and the libraries’ (Authors Guild v
HathiTrust) rights to digitize copyrighted works. While there seems to be a consensus among the courts that the
mass digitization conducted by these institutions was fair use (Diaz, 2013; Rosati, 2014c; Samuelson,
2014), the accessibility of the scanned works is still heavily limited, subject to licenses from publishers,
the existence of print copies at the library and the institutional membership held by prospective readers.
While in the highly competitive US e-book market many commercial intermediaries offer e-lending
6
The notable exception is orphan works, which are presumed to be still copyrighted but lack an identifiable
rights owner. In the EU, Directive 2012/28/EU on certain permitted uses of orphan works in theory eases access
to such works, but its practical impact is limited by the many constraints among its provisions. With no
orphan works legislation in place and the Google Book Settlement still in limbo, the US is even farther from making
orphan works generally accessible to the public.
licenses to e-book catalogues of various sizes, these arrangements also carry the danger of a commercial
lock-in of the access to digital works, and render libraries dependent upon the services of commercial
providers who may or may not be the best defenders of public interest (OECD, 2012).
Shadow libraries like Aleph are called into existence by the vacuum that was left behind by the collapse
of libraries in the digital sphere and by the inability of the commercial arrangements to provide adequate
substitute services. Shadow libraries pool distributed resources and expertise over the internet, and they
use the lack of legal or technological barriers to innovation in the informal sphere to fill the void left
behind by libraries.
What can Aleph teach us about the future of libraries?
The story of Aleph offers two closely interrelated considerations for the debate on the future of libraries:
a legal and an organizational one. Aleph operates beyond the limits of legality, as almost all of its
activities are copyright infringing, including the unauthorized digitization of books, the unauthorized
mass downloads from e-text repositories, the unauthorized acts of uploading books to the archive, the
unauthorized distribution of books, and, in most countries, the unauthorized act of users’ downloading
books from the archive. In the debates around copyright infringement, illegality is usually interpreted as a
necessary condition of accessing works for free. While this is undoubtedly true, the fact that Aleph provides
no-cost access to books seems to be less important than the fact that it provides access to them in the
first place.
Aleph is a clear indicator of the volume of demand for current books in digital formats, in developed
and in developing countries alike. The legal digital availability, or rather unavailability, of its catalogue also
demonstrates the limits of the current commercial and library-based arrangements that aim to provide low-cost
access to books over the internet. As mentioned earlier, Aleph’s catalogue consists mostly of recent books,
meaning that 80% of the titles with a valid ISBN number are still in print and available as a new or used
print copy through commercial retailers. What is also clear is that around 66% of these books are yet to be
made available in electronic format. While publishers in theory have a strong incentive to make their most
recent titles available as e-books, they lag behind in doing so.
This might explain why one third of all the e-book downloads in Aleph are from highly developed
Western countries, and two thirds of these downloads are of books without a Kindle version. Having access
to print copies, either through libraries or through commercial retailers, is simply not enough anymore.
Developing countries are a slightly different case. There, compared to developed countries, twice as many
of the downloads (17% compared to 8% in developed countries) are of titles that aren’t available in print
at all. Not having access to books in print seems to be a more pressing problem for developing countries
than not having access to electronic copies. Aleph thus fulfills at least two distinct types of demand: in
developed countries it provides access to missing electronic versions, in developing countries it provides
access to missing print copies.
The ability to fulfill an otherwise unfulfilled demand is not the only function of illegality. Copyright
infringement in the case of Aleph has a much more important role: it enables the peer production of the
library. Aleph is an open source library. This means that every resource it uses and every resource it
creates is freely accessible to anyone for use without any further restrictions. This includes the server
code, the database, the catalogue and the collection. The open source nature of Aleph rests on the
ideological claim that the scientific knowledge produced by humanity, mostly through public funds,
should be open for anyone to access without any restrictions. Everything else in and around Aleph stems
from this claim, which replicates the open access logic in all the other aspects of Aleph’s operation. Aleph
uses the peer-produced Open Library to fetch book metadata, it uses the BitTorrent and ed2k P2P networks
to store and make books accessible, it uses Linux and MySQL to run its code, and it allows its users to
upload books and edit book metadata. As a consequence of its open source nature, anyone can contribute
to the project, and everyone can enjoy its benefits.
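To make the metadata workflow concrete, here is a minimal sketch of the kind of catalogue enrichment described above, using the public Open Library Books API. This is an illustration only, not Aleph's actual code; the endpoint parameters, the example ISBN and the selection of fields are assumptions made for the sketch.

```python
# Minimal sketch of ISBN-based metadata lookup against the peer-produced
# Open Library (illustrative only; not the shadow library's actual code).
import json
import urllib.parse
import urllib.request


def fetch_openlibrary_metadata(isbn: str) -> dict:
    """Return basic title/author/publisher metadata for an ISBN, or {} if unknown."""
    query = urllib.parse.urlencode({
        "bibkeys": f"ISBN:{isbn}",
        "format": "json",
        "jscmd": "data",
    })
    url = f"https://openlibrary.org/api/books?{query}"
    with urllib.request.urlopen(url, timeout=10) as response:
        payload = json.load(response)
    record = payload.get(f"ISBN:{isbn}", {})
    if not record:
        return {}
    return {
        "title": record.get("title"),
        "authors": [a.get("name") for a in record.get("authors", [])],
        "publishers": [p.get("name") for p in record.get("publishers", [])],
        "publish_date": record.get("publish_date"),
    }


if __name__ == "__main__":
    # Example ISBN chosen arbitrarily for the sketch.
    print(fetch_openlibrary_metadata("9780262517638"))
```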
It is hard to quantify the impact of this piratical open access library on education, science and research in
various local contexts where Aleph is the prime source of otherwise inaccessible books. But it is
relatively easy to measure the consequences of openness at the level of Aleph, the library. The
collection of Aleph was created mostly by those individuals and communities who decided to digitize
books by themselves for their own use. While any single individual is capable of digitizing at most a few
books, the small contributions quickly add up. To digitize the 1.15 million documents in
the Aleph collection would require an investment of several hundred million euros, and a substantial
subsequent investment in storage, collection management and access provision (Poole, 2010). Compared
to these figures, the costs associated with running Aleph are infinitesimal, as it survives on the volunteer
labor of a few individuals and on annual donations totalling a few thousand dollars. The hundreds
of thousands who use Aleph on a more or less regular basis have an immense amount of resources, and by
disregarding the copyright laws Aleph is able to tap into those resources and use them for the
development of the library. The value of these resources and of the peer-produced library is the difference
between the actual costs associated with Aleph and the investment that would be required to create
something remotely similar.
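A quick, purely illustrative calculation shows the order of magnitude implied by these figures; the total cost used below is an assumption standing in for "several hundred million euros":

```python
# Illustrative only: the per-title cost implied by the figures cited above.
documents = 1_150_000
assumed_total_cost_eur = 300_000_000   # assumption for "several hundred million euros"
cost_per_document = assumed_total_cost_eur / documents
print(round(cost_per_document))        # roughly 260 EUR per document
# Against this, an annual budget of a few thousand dollars is effectively negligible.
```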
The decentralized, collaborative mass digitization and dissemination of current, and thus most relevant,
scientific works is at the moment only possible through massive copyright infringement. It is debatable
whether the copyrighted corpus of scientific works should be completely open, and whether the blatant
disregard of copyright through which Aleph achieved this openness is the right path towards a more
openly accessible body of scientific knowledge. It also remains to be measured what effects shadow libraries
may have on the commercial intermediaries and on the health of scientific publishing and science in
general. But Aleph, in any case, is a case study in the potential benefits of open-sourcing the library.
Conclusion
If we can take Aleph as an expression of what users around the globe want from a library, then the answer
is that there is a strong need for a universally accessible collection of current, relevant (scientific) books
in restrictions-free electronic formats. Can we expect any single library to provide anything even remotely
similar to that in the foreseeable future? Does such a service have a place in the future of libraries? It is as
hard to imagine the future library with such a service as without.
While the legal and financial obstacles to the creation of a scientific library with as universal a reach as
Aleph may be difficult to overcome, other aspects of it may be more easily replicable. The way Aleph
operates demonstrates the amount of material and immaterial resources users are willing to contribute to
build a library that responds to their needs and expectations. If libraries plan to only ‘host’ user-governed
activities, it means that the library is still imagined to be a separate entity from its users. Aleph teaches us
that this separation can be overcome and users can constitute a library. But for that they need
opportunities to participate in the production of the library: they need the right to digitize books and copy
digital books to and from the library, they need the opportunity to participate in the cataloging and
collection-building process, and they need the opportunity to curate and program the collection. In other
words, users need the chance to be librarians in the library if they wish to be, and libraries need to be
able to provide access not just to the collection but to their core functions as well. The walls that separate
librarians from library patrons, private from public collections, insiders from outsiders can all prevent the
peer production of the library and, through that, prevent the future that is closest to what library users
think of as ideal.
References
Abramitzky, R., & Sin, I. (2014). Book Translations as Idea Flows: The Effects of the Collapse of Communism on the Diffusion of Knowledge (No. w20023). Retrieved from http://papers.ssrn.com/abstract=2421123
Battles, M. (2004). Library: An unquiet history. WW Norton & Company.
Benkler, Y. (2006). The wealth of networks : how social production transforms markets and freedom.
New Haven: Yale University Press.
Bently, L., Davis, J., & Ginsburg, J. C. (Eds.). (2010). Copyright and Piracy An Interdisciplinary
Critique. Cambridge University Press.
Bodó, B. (2011a). A szerzői jog kalózai. Budapest: Typotex.
Bodó, B. (2011b). Coda: A Short History of Book Piracy. In J. Karaganis (Ed.), Media Piracy in
Emerging Economies. New York: Social Science Research Council.
Bodó, B. (forthcoming). Piracy vs privacy–the analysis of Piratebrowser. IJOC.
Commission on the Future of the Library. (2013). Report of the Commission on the Future of the UC
Berkeley Library. Berkeley: UC Berkeley.
Committee on the Public Libraries in the Knowledge Society. (2010). The Public Libraries in the
Knowledge Society. Copenhagen: Kulturstyrelsen.
Darnton, R. (1982). The literary underground of the Old Regime. Cambridge, Mass: Harvard University
Press.
Darnton, R. (2003). The Science of Piracy: A Crucial Ingredient in Eighteenth-Century Publishing.
Studies on Voltaire and the Eighteenth Century, 12, 3–29.
Diaz, A. S. (2013). Fair Use & Mass Digitization: The Future of Copy-Dependent Technologies after
Authors Guild v. Hathitrust. Berkeley Technology Law Journal, 23.
Directive 2001/29/EC on the harmonisation of certain aspects of copyright and related rights in the
information society. (2001). Official Journal L, 167, 10–19.
Elst, M. (2005). Copyright, freedom of speech, and cultural policy in the Russian Federation.
Leiden/Boston: Martinus Nijhoff.
Ermolaev, H. (1997). Censorship in Soviet Literature: 1917-1991. Rowman & Littlefield.
Friedberg, M., Watanabe, M., & Nakamoto, N. (1984). The Soviet Book Market: Supply and Demand.
Acta Slavica Iaponica, 2, 177–192.
Giblin, R. (2011). Code Wars: 10 Years of P2P Software Litigation. Cheltenham, UK ; Northampton,
MA: Edward Elgar Publishing.
Johns, A. (2010). Piracy: The Intellectual Property Wars from Gutenberg to Gates. University Of
Chicago Press.
Judge, C. B. (1934). Elizabethan book-pirates. Cambridge: Harvard University Press.
Khan, B. Z. (2004). Does Copyright Piracy Pay? The Effects Of U.S. International Copyright Laws On
The Market For Books, 1790-1920. Cambridge, MA: National Bureau Of Economic Research.
Khan, B. Z., & Sokoloff, K. L. (2001). The early development of intellectual property institutions in the
United States. Journal of Economic Perspectives, 15(3), 233–246.
Landes, W. M., & Posner, R. A. (2003). The economic structure of intellectual property law. Cambridge,
Mass.: Harvard University Press.
Lessig, L. (2004). Free culture : how big media uses technology and the law to lock down culture and
control creativity. New York: Penguin Press.
Liang, L. (2012). Shadow Libraries. e-flux. Retrieved from http://www.e-flux.com/journal/shadow-libraries/
Patry, W. F. (2009). Moral panics and the copyright wars. New York: Oxford University Press.
Patterson, L. R. (1968). Copyright in historical perspective (p. vii, 264 p.). Nashville,: Vanderbilt
University Press.
Pendleton-Jullian, A., Lougee, W. P., Wilkin, J., & Hilton, J. (2014). Strategic Thinking and Design—
Research Library in 2033—Vision and System of Action—Part One. Columbus, OH: Association of
Research Libraries.
Pollard, A. W. (1916). The Regulation Of The Book Trade In The Sixteenth Century. Library, s3-VII(25),
18–43.
Pollard, A. W. (1920). Shakespeare’s fight with the pirates and the problems of the transmission of his
text. Cambridge [Eng.]: The University Press.
Poole, N. (2010). The Cost of Digitising Europe’s Cultural Heritage - A Report for the Comité des Sages of the European Commission. Retrieved from http://nickpoole.org.uk/wp-content/uploads/2011/12/digiti_report.pdf
Report on the responses to the Public Consultation on the Review of the EU Copyright Rules. (2013).
European Commission, Directorate General for Internal Market and Services.
Rosati, E. (2014a). Copyright exceptions and user rights in Case C-117/13 Ulmer: a couple of observations. IPKat. Retrieved October 08, 2014, from http://ipkitten.blogspot.co.uk/2014/09/copyright-exceptions-and-user-rights-in.html
Rosati, E. (2014b). Dutch court refers questions to CJEU on e-lending and digital exhaustion, and another
Dutch reference on digital resale may be just about to follow. IPKat. Retrieved October 08, 2014, from
http://ipkitten.blogspot.co.uk/2014/09/dutch-court-refers-questions-to-cjeu-on.html
Rosati, E. (2014c). Google Books’ Library Project is fair use. Journal of Intellectual Property Law &
Practice, 9(2), 104–106.
Rose, M. (1993). Authors and owners : the invention of copyright. Cambridge, Mass: Harvard University
Press.
Samuelson, P. (2002). Copyright and freedom of expression in historical perspective. J. Intell. Prop. L.,
10, 319.
Samuelson, P. (2014). Mass Digitization as Fair Use. Communications of the ACM, 57(3), 20–22.
Schultz, M. F. (2007). Copynorms: Copyright Law and Social Norms. Intellectual Property And
Information Wealth v01, 1, 201.
Sezneva, O. (2012). The pirates of Nevskii Prospekt: Intellectual property, piracy and institutional
diffusion in Russia. Poetics, 40(2), 150–166.
Solly, E. (1885). Henry Hills, the Pirate Printer. Antiquary, xi, 151–154.
Stelmakh, V. D. (2001). Reading in the Context of Censorship in the Soviet Union. Libraries & Culture,
36(1), 143–151.
Suber, P. (2013). Open Access (Vol. 1). Cambridge, MA: The MIT Press. doi:10.1109/ACCESS.2012.2226094
Swartz, A. (2008). Guerilla Open Access Manifesto. Aaron Swartz. Retrieved from https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt
Triaille, J.-P., Dusollier, S., Depreeuw, S., Hubin, J.-B., Coppens, F., & Francquen, A. de. (2013). Study
on the application of Directive 2001/29/EC on copyright and related rights in the information society (the
“Infosoc Directive”). European Union.
Wittmann, R. (2004). Highwaymen or Heroes of Enlightenment? Viennese and South German Pirates and
the German Market. Paper presented at the History of Books and Intellectual History conference.
Princeton University.
Yu, P. K. (2000). From Pirates to Partners: Protecting Intellectual Property in China in the Twenty-First Century. American University Law Review, 50. Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=245548
Мошков, М. (1999). Что вы все о копирайте. Лучше бы книжку почитали (Библиотеке копирайт не враг) [Why all this fuss about copyright? Better to read a book (Copyright is no enemy to the library)]. Компьютерра, (300).
Bodo
In the Name of Humanity
2016
# In the Name of Humanity
By [Balazs Bodo](https://limn.it/researchers/bodo/)

As I write this in August 2015, we are in the middle of one of the worst
refugee crises in modern Western history. The European response to the carnage
beyond its borders is as diverse as the continent itself: as an ironic
contrast to the newly built barbed-wire fences protecting the borders of
Fortress Europe from Middle Eastern refugees, the British Museum (and probably
other museums) are launching projects to “protect antiquities taken from
conflict zones” (BBC News 2015). We don’t quite know how the conflict
artifacts end up in the custody of the participating museums. It may be that
asylum seekers carry such antiquities on their bodies, and place them on the
steps of the British Museum as soon as they emerge alive on the British side
of the Eurotunnel. But it is more likely that Western heritage institutions,
if not playing Indiana Jones in North Africa, Iraq, and Syria, are probably
smuggling objects out of war zones and buying looted artifacts from the
international gray/black antiquities market to save at least some of them from
disappearing in the fortified vaults of wealthy private buyers (Shabi 2015).
Apparently, there seems to be some consensus that artifacts, thought to be
part of the common cultural heritage of humanity, cannot be left in the hands
of those collectives who own/control them, especially if they try to destroy
them or sell them off to the highest bidder.
The exact limits of expropriating valuables in the name of humanity are
heavily contested. Take, for example, another group of self-appointed
protectors of culture, also collecting and safeguarding, in the name of
humanity, valuable items circulating in the cultural gray/black markets. For
the last decade Russian scientists, amateur librarians, and volunteers have
been collecting millions of copyrighted scientific monographs and hundreds of
millions of scientific articles in piratical shadow libraries and making them
freely available to anyone and everyone, without any charge or limitation
whatsoever (Bodó 2014b; Cabanac 2015; Liang 2012). These pirate archivists
think that despite being copyrighted and locked behind paywalls, scholarly
texts belong to humanity as a whole, and seek to ensure that every single one
of us has unlimited and unrestricted access to them.
The support for a freely accessible scholarly knowledge commons takes many
different forms. A growing number of academics publish in open access
journals, and offer their own scholarship via self-archiving. But as the data
suggest (Bodó 2014a), there are also hundreds of thousands of people who use
pirate libraries on a regular basis. There are many who participate in
courtesy-based academic self-help networks that provide ad hoc access to
paywalled scholarly papers (Cabanac 2015).[1] But a few people believe that
scholarly knowledge could and should be liberated from proprietary databases,
even by force, if that is what it takes. There are probably no more than a few
thousand individuals who occasionally donate a few bucks to cover the
operating costs of piratical services or share their private digital
collections with the world. And the number of pirate librarians, who devote
most of their time and energy to operate highly risky illicit services, is
probably no more than a few dozen. Many of them are Russian, and many of the
biggest pirate libraries were born and/or operate from the Russian segment of
the Internet.
The development of a stable pirate library, with an infrastructure that
enables the systematic growth and development of a permanent collection,
requires an environment where the stakes of access are sufficiently high, and
the risks of action are sufficiently low. Russia certainly qualifies in both
of these domains. However, these are not the only reasons why so many pirate
librarians are Russian. The Russian scholars behind the pirate libraries are
familiar with the crippling consequences of not having access to fundamental
texts in science, either for political or for purely economic reasons. The
Soviet intelligentsia had decades of experience in bypassing censors, creating
samizdat content distribution networks to deal with the lack of access to
legal distribution channels, and running gray and black markets to survive in
a shortage economy (Bodó 2014b). Their skills and attitudes found their way to
the next generation, who now run some of the most influential pirate
libraries. In a culture where the know-how of how to resist information
monopolies is part of the collective memory, the Internet becomes the latest
in a long series of tools that clandestine information networks use to build
alternative publics through the illegal sharing of outlawed texts.
In that sense, the pirate library is a utopian project and something more.
Pirate librarians regard their libraries as a legitimate form of resistance
against the commercialization of public resources, the (second) enclosure
(Boyle 2003) of the public domain. The handful who decide to publicly defend
their actions speak in the same voice and tell very similar stories. Aaron
Swartz was an American hacker willing to break both laws and locks in his
quest for free access. In his 2008 “Guerilla Open Access Manifesto” (Swartz
2008), he forcefully argued for the unilateral liberation of scholarly
knowledge from behind paywalls to provide universal access to a common human
heritage. A few years later he tried to put his ideas into action by
downloading millions of journal articles from the JSTOR database without
authorization. Alexandra Elbakyan is a 27-year-old neurotechnology researcher
from Kazakhstan and the founder of Sci-hub, a piratical collection of tens of
millions of journal articles that provides unauthorized access to paywalled
articles to anyone without an institutional subscription. In a letter to the
judge presiding over a court case against her and her pirate library, she
explained her motives, pointing out the lack of access to journal articles.[2]
Elbakyan also believes that the inherent injustices encoded in the current system
of scholarly publishing, which denies access to everyone who is not
willing/able to pay, and simultaneously denies payment to most of the authors
(Mars and Medak 2015), are enough reason to disregard the fundamental IP
framework that enables those injustices in the first place. Other shadow
librarians expand the basic access/injustice arguments into a wider critique
of the neoliberal political-economic system that aims to commodify and
appropriate everything that is perceived to have value (Fuller 2011; Interview
with Dusan Barok 2013; Sollfrank 2013).
Whatever prompts them to act, pirate librarians firmly believe that the fruits
of human thought and scientific research belong to the whole of humanity.
Pirates have the opportunity, the motivation, the tools, the know-how, and the
courage to create radical techno-social alternatives. So they resist the
status quo by collecting and “guarding” scholarly knowledge in libraries that
are freely accessible to all.
Both the curators of the British Museum and the pirate librarians claim to
save the common heritage of humanity, but any similarities end there. Pirate
libraries have no buildings or addresses, they have no formal boards or
employees, they have no budgets to speak of, and the resources at their
disposal are infinitesimal. Unlike the British Museum or libraries from the
previous eras, pirate libraries were born out of lack and despair. Their
fugitive status prevents them from taking the traditional paths of
institutionalization. They are nomadic and distributed by design; they are _ad
hoc_ and tactical, pseudonymous and conspiratory, relying on resources reduced
to the absolute minimum so they can survive under extremely hostile
circumstances.
Traditional collections of knowledge and artifacts, in their repurposed or
purpose-built palaces, are both the products and the embodiments of the wealth
and power that created them. Pirate libraries don’t have all the symbols of
transubstantiated might, the buildings, or all the marble, but as
institutions, they are as powerful as their more established counterparts.
Unlike the latter, whose claim to power was the fact of ownership and the
control over access and interpretation, pirates’ power is rooted in the
opposite: in their ability to make ownership irrelevant, access universal, and
interpretation democratic.
This is the paradox of the total piratical archive: they collect enormous
wealth, but they do not own or control any of it. As an insurance policy
against copyright enforcement, they have already given everything away: they
release their source code, their databases, and their catalogs; they put up
the metadata and the digitalized files on file-sharing networks. They realize
that exclusive ownership/control over any aspects of the library could be a
point of failure, so in the best traditions of archiving, they make sure
everything is duplicated and redundant, and that many of the copies are under
completely independent control. If we disregard for a moment the blatantly
illegal nature of these collections, this systematic detachment from the
concept of ownership and control is the most radical development in the way we
think about building and maintaining collections (Bodó 2015).
Because pirate libraries don’t own anything, they have nothing to lose. Pirate
librarians, on the other hand, are putting everything they have on the line.
Speaking truth to power has a potentially devastating price. Swartz was caught
when he broke into an MIT storeroom to download the articles in the JSTOR
database.[3] Facing a 35-year prison sentence and $1 million in fines, he
committed suicide.[4] By explaining her motives in a recent court filing,[5]
Elbakyan admitted responsibility and probably sealed her own legal and
financial fate. But her library is probably safe. In the wake of this lawsuit,
pirate libraries are busy securing themselves: pirates are shutting down
servers whose domain names were confiscated and archiving databases, again and
again, spreading the illicit collections through the underground networks
while setting up new servers. It may be easy to destroy individual
collections, but nothing in history has been able to destroy the idea of the
universal library, open for all.
For the better part of that history, the idea was simply impossible. Today it
is simply illegal. But in an era when books are everywhere, the total archive
is already here. Distributed among millions of hard drives, it already is a
_de facto_ common heritage. We are as gods, and might as well get good at
it.[6]
## About the author
**Bodo Balazs,** PhD, is an economist and piracy researcher at the Institute
for Information Law (IViR) at the University of Amsterdam. [More
»](https://limn.it/researchers/bodo/)
## Footnotes
[1] On such fora, one can ask for and receive otherwise out-of-reach
publications through various reddit groups such as
[r/Scholar](https://www.reddit.com/r/Scholar) and using certain Twitter
hashtags like #icanhazpdf or #pdftribute.
[2] Elsevier Inc. et al v. Sci-Hub et al, New York Southern District Court,
Case No. 1:15-cv-04282-RWS
[3] While we do not know what his aim was with the article dump, the
prosecution thought his Manifesto contained the motives for his act.
[4] See _United States of America v. Aaron Swartz_ , United States District
Court for the District of Massachusetts, Case No. 1:11-cr-10260
[5] Case 1:15-cv-04282-RWS Document 50 Filed 09/15/15, available at
[link](https://www.unitedstatescourts.org/federal/nysd/442951/).
[6] I of course stole this line from Stewart Brand (1968), the editor of the
Whole Earth Catalog, who, in turn, claims to have stolen it from the
British anthropologist Edmund Leach. See
[here](http://www.wholeearth.com/issue/1010/article/195/we.are.as.gods) for
the details.
## Bibliography
BBC News. 2015. “British Museum ‘Guarding’ Object Looted from Syria.” _BBC News,_
June 5. Available at [link](http://www.bbc.com/news/entertainment-
arts-33020199).
Bodó, B. 2015. “Libraries in the Post-Scarcity Era.” In _Copyrighting
Creativity_ , edited by H. Porsdam (pp. 75–92). Aldershot, UK: Ashgate.
———. 2014a. “In the Shadow of the Gigapedia: The Analysis of Supply and Demand
for the Biggest Pirate Library on Earth.” In _Shadow Libraries_ , edited by J.
Karaganis (forthcoming). New York: American Assembly. Available at
[link](http://ssrn.com/abstract=2616633).
———. 2014b. “A Short History of the Russian Digital Shadow Libraries.” In
Shadow Libraries, edited by J. Karaganis (forthcoming). New York: American
Assembly. Available at [link](http://ssrn.com/abstract=2616631).
Boyle, J. 2003. “The Second Enclosure Movement and the Construction of the
Public Domain.” _Law and Contemporary Problems_ 66:33–42. Available at
[link](http://dx.doi.org/10.2139/ssrn.470983).
Brand, S. 1968. _Whole Earth Catalog,_ Menlo Park, California: Portola
Institute.
Cabanac, G. 2015. “Bibliogifts in LibGen? A Study of a Text‐Sharing Platform
Driven by Biblioleaks and Crowdsourcing.” _Journal of the Association for
Information Science and Technology,_ Online First, 27 March 2015 _._
Fuller, M. 2011. “In the Paradise of Too Many Books: An Interview with Sean
Dockray.” _Metamute._ Available at
[link](http://www.metamute.org/editorial/articles/paradise-too-many-books-
interview-sean-dockray).
Interview with Dusan Barok. 2013. _Neural_ 10–11.
Liang, L. 2012. “Shadow Libraries.” _e-flux._ Available at
[link](http://www.e-flux.com/journal/shadow-libraries/).
Mars, M., and Medak, T. 2015. “The System of a Takedown: Control and De-
commodification in the Circuits of Academic Publishing.” Unpublished
manuscript.
Shabi, R. 2015. “Looted in Syria–and Sold in London: The British Antiques
Shops Dealing in Artefacts Smuggled by ISIS.” _The Guardian,_ July 3.
Available at [link](http://www.theguardian.com/world/2015/jul/03/antiquities-
looted-by-isis-end-up-in-london-shops).
Sollfrank, C. 2013. “Giving What You Don’t Have: Interviews with Sean Dockray
and Dmytri Kleiner.” _Culture Machine_ 14:1–3.
Swartz, A. 2008. “Guerilla Open Access Manifesto.” Available at
[link](https://archive.org/stream/GuerillaOpenAccessManifesto/Goamjuly2008_djvu.txt).
Constant
Tracks in Electronic fields
2009
[Image pages: figures 1–146. Recoverable captions:]
figure 1 E-traces: In the reductive world of Web 2.0 there are no insignificant actors because once added up, everybody counts.
figure 3 Dmytri Kleiner: Web 2.0 is a business model, it capitalises on community created values.
figure 4 Christophe Lazaro: Sociologists and anthropologists are trying to stick the notion of ‘social network' to the specificities of digital networks, that is to say to their horizontal character
figure 5 The Robot Syndicat: Destined to survive collectively through multi-agent systems and colonies of social robots
figure 12 Destination port: Every single passing of a visitor triggers the projection of a simultaneous registration
Doppelgänger: The electronic double (duplicate, twin) in a society of control and surveillance
figure 20 CookieSensus: Cookies found on washingtonpost.com
figure 21 ... and cookies sent by tacodo.net
figure 22 Image Tracer: Images and data accumulate into layers as the query is repeated over time
figure 23 Shmoogle: In one click, Google hierarchy crumbles down
figure 24 Jussa Parrikka: We move onto a baroque world, a mode of folding and enveloping new ways of perception and movement
figure 26 Extended Speakers: A netting of thin metal wires suspends from the ceiling of the haunted house in the La Bellone courtyard
figure 80 Elgaland-Vargaland: Since November 2007, the Embassy permanently resides in La Bellone
figure 81 Ambassadors Yves Poliart and Wendy Van Wynsberghe
figure 87 It could be the result of psychic echoes from the past, psychokinesis, or the thoughts of aliens or nature spirits
figure 89 Manu Luksch: Our digital selves are many dimensional, alert, unforgetting
figure 103 Audio-geographic dérive: Listening to the electro-magnetic spectrum of Brussels
figure 113 Michael Murtaugh: Rather than talking about leaning forward or backward, a more useful split might be between reading and writing
figure 115 Adrian Mackenzie: This opacity reflects the sheer number of operations that have to be compressed into code ...
figure 116 ... in order for digital signal processing to work
figure 119 Sabine Prokhoris and Simon Hecquet: What happens precisely when one decides to consider these margins, these ‘supplementen', as fullgrown creations – slave, nor attachment?
figure 120 Praticable: Making the body as a locus of knowledge production tangible
figure 126 Mutual Motions Video Library: A physical exchange between existing imagery, real-time interpretation, experiences and context
figure 127 Modern Times: His gestures are burlesque responses to the adversity in his life, or just plain ‘exuberant'
figure 131 Michael Terry: We really want to have lots of people looking at it, and considering it, and thinking about the implications
figure 133 Görkem Çetin: There's a lack of usability bug reporting tool which can be used to submit, store, modify and maintain user submitted videos, audio files and pictures
figure 134 Simon Yuill: It is here where contingency and notation meet, but it is here also that error enters
figure 144 Séverine Dusollier: I think amongst many of the movements that are made, most are not ‘a work', they are subconscious movements, movements that are translations of gestures that are simply banal or necessary
figure 146 Sadie Plant: It is this kind of deep collectivity, this profound sense of micro-collaboration, which has often been tapped into
Verbindingen/Jonctions 10
EN
NL
FR
Tracks in electr(on)ic fields
Introduction  EN, NL, FR  25
E-Traces  EN, NL, FR  35
Nicolas Malevé, Michel Cleempoel: E-traces en contexte  NL, FR  38
Dmytri Kleiner, Brian Wyrick: InfoEnclosure 2.0  NL  47
Christophe Lazaro  58
Marc Wathieu  65
Michel Cleempoel: Destination port  FR  70
Métamorphoz: Doppelgänger  EN, NL, FR  71
Andrea Fiore: Cookiecensus  FR, NL, EN  73
Tsila Hassine: Shmoogle and Tracer  EN  75
Around us, magnetic fields resonate unseen waves  EN, NL, FR  77
Jussi Parikka: Insects, Affects and Imagining New Sensoriums  EN  81
Pierre Berthet: Concert with various extended objects  EN, NL, FR  93
Leiff Elgren, CM von Hausswolff: Elgaland-Vargaland  EN, NL, FR  95
CM von Hausswolff, Guy-Marc Hinant: Ghost Machinery  EN, NL  98
Read Feel Feed Real  EN, NL, FR  101
Manu Luksch, Mukul Patel: Faceless: Chasing the Data Shadow  EN  104
Julien Ottavi: Electromagnetic spectrum Research code 0608  FR  119
Michael Murtaugh: Active Archives or: What's wrong with the YouTube documentary?  EN  131
Mutual movements  EN, NL, FR  139
Femke Snelting  NL  143
Adrian Mackenzie: Centres of envelopment and intensive movement in digital signal processing  EN  155
Elpueblodechina: El Curanto  EN  174
Alice Chauchat, Frédéric Gies  181
Dance (notation)  EN  184
Sabine Prokhoris, Simon Hecquet  188
Mutual Motions Video Library  EN, NL, FR  198
Inès Rabadan: Does the repetition of a gesture irrevocably lead to madness?  215
Michael Terry (interview): Data analysis as a discourse  EN  217
Sadie Plant: A Situated Report  275
Biographies  EN  287
License register  EN, NL, FR  311
Vocabulary  313
The Making-of  EN  323
Colophon  331
EN
Introduction
EN
Tracks in electr(on)ic fields documents the 10th edition of Verbindingen/Jonctions, which carried the same name: a biannual multidisciplinary festival organised by Constant, association for arts and media. It is a meeting point for a diverse public that is interested, from an artistic, activist and/or theoretical perspective, in experimental reflections on technological culture.
Not for the first time, but more explicitly than ever during this edition, we put the question of the interaction between body and technology on the table. How to think about the actual effects of surveillance, the ubiquitous presence of cameras and the public safety procedures that can only regard individuals as an amalgam of analysable data? What is the status of ‘identity' when it appears both elusive and unchangeable? How are we conditioned by the technology we use? What is the relationship between commitment and reward? Between the flexibility of work and a healthy life? Which traces does technology leave in our thinking, our behaviour, our routine movements? And what residue do we ourselves leave behind on electr(on)ic fields through our presence in forums, social platforms, databases and log files?
The dual nature of the term ‘notation' formed an important source of inspiration. Systems that choreographers, composers and computer programmers use to record ideas and observations can also be interpreted as instructions, as commands which put an actor, a piece of software, a performing artist or a machine into motion. From punch card to musical scale, from programming language to Laban notation, we were interested in the standards and protocols needed to make such documents work. It was the reason
to organise the festival inside the documentation, library
and workshop for theater and dance, ‘maison du spectacle'
La Bellone. Located in the heart of Brussels, La Bellone
offered hospitality to a diverse group of thinkers, dancers,
artists, programmers, interface designers and others and
its meticulously renovated 17th century façade formed the
perfect backdrop for this intense program.
Throughout the festival we worked with a number of
themes, not meant to isolate areas of thinking, but rather
as ‘spider threads' interlinking various projects:
E-traces (p. 35) subjected the current reality of Web 2.0
to a number of critical considerations. How do we regain
control of the abundant data correlation that mega-companies such as Google and Yahoo produce, in exchange for
our usage of their services? How do we understand ‘service' when we are confronted with their corporate Janus face: on one side a friendly interface, on the other Machiavellian user licenses?
Around us, magnetic fields resonate unseen waves (p.
77) took the ghostly presence of technology as a starting
point and Read Feel Feed Real (p. 101) listened to unheard
sounds and looked behind the curtains in Do-It-Yourself,
walks and urban interventions. Through the analysis of radio waves and their use in artistic installations, by making
electro-magnetic fields heard, we made unexplained phenomena tangible.
As machines learn about bodies, bodies learn about machines, and the movements that emerge as a result are not readily reduced to cause and effect. Mutual movements (p. 139) started in the kitchen, the perfect place to
reconsider human-machine configurations, without having
to separate these from everyday life and the patterns that
are ingrained in it. Would a different idea of ‘user' also
change our approach to ‘use'?
At the end of the adventure Sadie Plant remarked in
her ‘situated report' on Tracks in electr(on)ic fields (p.
275): “It is ultimately very difficult to distinguish between
the user and the developer, or the expert and the amateur. The experiment, the research, the development is
always happening in the kitchen, in the bedroom, on the
bus, using your mobile or using your computer. (...) this
sense of repetitive activity, which is done in many trades
and many lines, and that really is the deep unconscious
history of human activity. And arguably that's where the
most interesting developments happen, albeit in a very unsung, unseen, often almost hidden way. It is this kind of
deep collectivity, this profound sense of micro-collaboration, which has often been tapped into.”
Constant, October 2009
EN
E-Traces
How does the information we enter into search engines circulate, and what happens to the data we enter into the social networking sites, health records, news sites, forums and chat services we use? Who is interested? How does the ‘market' of the electronic profile function? These questions constitute the framework of the E-traces project.
For this, we started to work on Yoogle!, an online game.
This game, still in an early phase of development, will allow users to play with the parameters of the Web 2.0 economy and to exchange roles between the different actors
of this economy. We presented a first demo of this game,
accompanied by a public discussion with lawyers, artists
and developers. The discussion and lecture were meant
to analyse more deeply the mechanism of the economy
behind its friendly interface, the speculation on profiling,
the exploitation of free labor, but also to develop further
the scenario of the game.
EN
NL
DMYTRI KLEINER, BRIAN WYRICK
License: Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use encouraged. Attribution optional.
Text first published in English in Mute: http://www.metamute.org/InfoEnclosure-2.0. For translations in
Polish and Portuguese, see http://www.telekommunisten.net
figure 3 Dmytri Kleiner
MICHEL CLEEMPOEL
License: Free Art License
figure 12 Every single passing of a visitor triggered the projection of a simultaneous registration
figure 14
EN
Destination port
During the Jonctions festival, Destination port registered the flux
of visitors in the entrance hall of La Bellone. Every single passing
of a visitor triggered the projection of a simultaneous registration in the hall, superimposed on previously captured images of visitors, thus creating temporary and unlikely encounters between persons.
Doppelgänger
Founded in September 2001 and represented here by Valérie Cordy and Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary association that creates installations, shows and transdisciplinary performances mixing artistic experiments and digital practices.
With the project Doppelgänger, the MéTAmorphoZ collective focuses on the theme of the electronic double (duplicate, twin) in a society of control and surveillance.
“Our electronic identity, symbol of this new society of control, duplicates our organic and social identity. But isn't this legal obligation to be assigned a unique, stable and unforgeable identity, in the end, a danger to our fundamental freedom to claim identities which are irreducibly multiple for each of us?”
ANDREA FIORE
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN
Cookiecensus
Although still largely perceived as a private activity, web surfing
leaves persistent trails. While users browse and interact through the
web, sites watch them read, write, chat and buy. Even on the basis
of a few basic web publishing experiences one can conclude that most
web servers record ‘by default' their entire clickstream in persistent
‘log' files.
‘Web cookies' are a sort of digital label sent by websites to web browsers in order to assign them a unique identity and automatically recognize their users over several visits. Today, this technology, which was introduced with the first version of the Netscape browser in 1994, constitutes the de facto standard upon which a wide range of interactive functionalities are built that were not conceived of in the early web protocol design. Think, for example, of user accounts and authentication, personalized content and layouts, e-commerce and shopping carts.
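A minimal sketch of that labelling mechanism, using only Python's standard library; the cookie name, value and attributes below are invented for the example and are not taken from any particular site:

```python
# Minimal illustration of the cookie mechanism described above.
# The header values are invented for the example.
from http.cookies import SimpleCookie

# 1. On a first visit, the server labels the browser with an identifier:
server_response_header = 'visitor_id=a1b2c3d4; Path=/; Max-Age=31536000'

# 2. The browser stores the label ...
jar = SimpleCookie()
jar.load(server_response_header)

# 3. ... and sends it back on every later visit, which is what lets the
#    site recognize the same user across sessions.
cookie_header_sent_back = '; '.join(
    f'{name}={morsel.value}' for name, morsel in jar.items()
)
print('Cookie:', cookie_header_sent_back)   # Cookie: visitor_id=a1b2c3d4
```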
While it has undeniably contributed to the development and the
social spread of the new medium, web cookie technology is still to
be considered as problematic. Especially the so-called ‘third party
cookies' issue – a technological loophole enabling marketeers and advertisement firms to invisibly track users over large networks of syndicated websites – has been the object of a serious controversy, involving
a varied set of actors and stakeholders.
Cookiecensus is a software prototype: a wannabe info tool for studying electronic surveillance in one of its natively digital environments. Its core functionality consists of mapping and analyzing third-party cookie distribution patterns within a given web of sites, in order to identify its trackers and its network of syndicated sites. A further feature of the tool is the possibility to inspect the content of a web page in relation to its third-party cookie sources.
figure 20 Cookies found on Washingtonpost.com
figure 21 Cookies sent by Tacodo.net
It is an attempt to deconstruct the perceived unity and consistency
of web pages by making their underlying content assemblage and their
related attention flows visible.
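As a rough illustration of the mapping step described above (a hypothetical sketch, not Cookiecensus itself), third-party cookies can be flagged by comparing each cookie's domain with the host of the visited page; a real tool would also fetch the page's embedded resources, where most third-party cookies actually originate:

```python
# Rough sketch (not the actual Cookiecensus code) of detecting third-party
# cookies: cookies whose domain does not match the host of the visited page.
# Note: this only follows the main document; embedded images, scripts and
# iframes would need to be fetched too for a realistic census.
import urllib.request
from http.cookiejar import CookieJar
from urllib.parse import urlparse

def third_party_cookies(page_url):
    jar = CookieJar()
    opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
    opener.open(page_url, timeout=10)          # fetch the page, collecting cookies
    page_host = urlparse(page_url).hostname or ''
    return [
        (cookie.domain, cookie.name)
        for cookie in jar
        if not page_host.endswith(cookie.domain.lstrip('.'))
    ]

if __name__ == '__main__':
    for domain, name in third_party_cookies('http://www.washingtonpost.com/'):
        print(domain, name)
```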
TSILA HASSINE
License: Free Art License
EN
Shmoogle and Tracer
What is Shmoogle? Shmoogle is a Google randomizer. In one
click, Google hierarchy crumbles down. Results that were usually exiled to pages beyond user attention get their ‘15 seconds of PageRank
fame'. While also being a useful tool for internet research, Shmoogle
is a comment, a constant reminder that the Google order is not necessarily ‘the good order', and that sometimes chaos is more revealing
than order. While Google serves the users with information ready for
immediate consumption, Shmoogle forces its users to scroll down and
make their own choices. If Google is a search engine, then Shmoogle
is a research engine.
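The principle is simple enough to sketch in a few lines. The fragment below is not Shmoogle's actual code, only an illustration of the idea: take an already retrieved, ranked list of results and return it stripped of its ordering (the URLs are placeholders):

```python
# Illustration of the Shmoogle idea: return the same results in random order,
# so that ranking position no longer decides what the user sees first.
import random

def shmoogle(ranked_results):
    """Return the given results with their ordering discarded."""
    shuffled = list(ranked_results)
    random.shuffle(shuffled)
    return shuffled

results = [f'http://example.org/result-{i}' for i in range(1, 101)]
for url in shmoogle(results)[:10]:
    print(url)
```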
figure 22 Images and data accumulate into layers as the query is repeated over time
figure 23 In one click, Google hierarchy crumbles down
In Image Tracer, order is important. Image Tracer is a collaboration between artist group De Geuzen and myself. Tracer was born
out of our mutual interest in the traces images leave behind them on
their networked paths. In Tracer images and data accumulate into
layers as the query is repeated over time. Boundaries between image
and data are blurred further as the image is deliberately reduced to
thumbnail size, and emphasis is placed on the image's context, the
neighbouring images, and the metadata related to that image. Image Tracer builds up an archive of juxtaposed snapshots of the web.
As these layers accumulate, patterns and processes reveal themselves,
and trace a historiography in the making.
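The layering described above can be pictured, again only as a sketch and not as De Geuzen's or Hassine's implementation, as repeated timestamped snapshots of a query appended to an archive file; the file name and record fields are invented for the example:

```python
# Sketch of the 'layers over time' idea behind Image Tracer: each run appends
# a timestamped snapshot of a query's results (thumbnail URL plus metadata)
# to a JSON-lines archive. Repeating the query over days or weeks accumulates
# comparable layers.
import json, time

def record_snapshot(query, results, archive_path='tracer_archive.jsonl'):
    snapshot = {
        'query': query,
        'taken_at': time.strftime('%Y-%m-%dT%H:%M:%S'),
        'results': [
            {'thumbnail': r['thumbnail'], 'source_page': r['page'], 'rank': i}
            for i, r in enumerate(results, start=1)
        ],
    }
    with open(archive_path, 'a') as archive:
        archive.write(json.dumps(snapshot) + '\n')

record_snapshot('V/J10', [{'thumbnail': 'http://example.org/t1.jpg',
                           'page': 'http://example.org/a'}])
```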
EN
NL
FR
Around us, magnetic fields resonate unseen waves
Om ons heen resoneren ongeziene golven
Autour de nous, les champs magnétiques font résonner des ondes invisibles
In computer terminology many words refer to chimerical images such as bots, demons and ghosts. Dr. Konstantin Raudive, a Latvian psychologist, and Swedish film producer Friedrich Jürgenson went a step further and explored the territory of Electronic Voice Phenomena. Electronic voice phenomena (EVP) are speech or speech-like sounds that can be heard on electronic recordings but that were not audibly present at the time the recording was made. Some believe these could be of paranormal origin.
For this part of the V/J10 programme, we chose a
metaphorical approach, working with bodiless entities and
hidden processes, finding inspiration in The Embassy of
Elgaland-Vargaland, semi-fictional kingdoms, consisting
of all Border Territories (Geographical, Mental & Digital). These kingdoms were founded by Leiff Elgren and
CM Von Hausswolff. Elgren stated that: “All dead people
are inhabitants of the country Elgaland-Vargaland, unless
they stated that they did not want to be an inhabitant”.
JUSSI PARIKKA
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN
Insects, Affects and Imagining New Sensoriums
figure 24 Jussi Parikka at V/J10
A Media Archaeological Rewiring
from Geniuses to Animals
An insect media artist or a media archaeologist imagining a potential weird medium might end up with something that sounds quite
mundane to us humans. For the insect probe head, the question of
what it feels like to perceive with two eyes and ears and move with two
legs would be a novel one, instead of the multiple legs and compound
eyes that it has to use to manoeuvre through space. The uncanny
formations often used in science fiction to describe something radically inhuman (like the killing machine insects of Alien movies) differ
from the human being in their anatomy, behaviour and morals. The human brain might be a much more efficient problem solver, the human hands are quite handy tool-making metatools, and the human body could be seen as an original form of any model of technics, as Ernst Kapp already suggested at the end of the 19th century. But still, such realisations do not take away the fascination that emerges from the question of what it would be like to move, perceive and think differently; of what a becoming-animal entails.
I am of course taking my cue here from the philosopher Manuel DeLanda, who in his 1991 book War in the Age of Intelligent Machines asked what the history of warfare would look like from the viewpoint of a future robot historian. An exercise perhaps in creative imagination, DeLanda's question also served other ends relating to the physics of
self-organization. My point is not to discuss DeLanda, or the history
of war machines, but I want to pick an idea from this kind of an
approach, an idea that could be integrated into media archaeological considerations, concerning actual or imaginary media. As already
said, imagining alternative worlds is not the endpoint of this exercise
in ‘insect media', but a way to dip into an alternative understanding
of media and technology, where such general categories as ‘humans'
and ‘machines' are merely the endpoints of intensive flows, capacities, tendencies and functions. Such a stance takes much of its force
from Gilles Deleuze's philosophical ontology of abstract materialism,
which focuses primarily on a Spinozian ontology of intensities, capacities and functions. In this sense, the human being is not a distinct
being in the world with secondary qualities, but a “capacity to signify, exchange, and communicate”, as Claire Colebrook has pointed
out in her article ‘The Sense of Space' (Postmodern Culture). This
opens up a new agenda not focused on ‘beings' and their tools, but
on capacities and tendencies that construct and create beings in a
move which emphasizes Deleuze's interest in pre-Kantian worlds of
baroque. In addition, this move includes a multiplication of subjectivities and objects of the world, a certain autonomy of the material
world beyond the privileged observer. As everybody who has done
gardening knows: there is a world teeming with life outside the human
sphere, with every bush and tree being a whole society in itself.
To put it shortly, still following Colebrook's recent writing on the
concept of affect, what Deleuze found in the baroque worlds of windowless monads was a capacity of perception that does not stem from
a universalising idea of perception in general. Man or any general
condition of perception is not the primary privileged position of perception but perceptions and creations of space and temporality are
multiplied in the numerous monadic worlds, a distributed perception
of a kind that according to Deleuze later found resonance in the philosophy of A. N. Whitehead. For Whitehead, the perceiving subject is
more akin to a ‘superject', a second order construction from the sum
of its perceptions. It is the world perceived that makes up superjects
and based on the variations of perceptions also alternative worlds.
Baroque worlds, argues Deleuze in his book Le Pli from 1988, are
characterised by the primacy of variation and perspectivism which is
a much more radical notion than a relativist idea of different subjects
having different perspectives on the world. Instead, “the subject will
be what comes to the point of view”, and where “the point of view is
not what varies with the subject, at least in the first instance; it is, to
the contrary, the condition in which an eventual subject apprehends
a variation (metamorphosis). . . ”.
Now why this focus on philosophy, this short excursion that merely
sketches some themes around variation and imagination? What I am
after is an idea of how to smuggle certain ideas of variation, modulation and perception into considerations of media culture, media
archaeology and potentially also imaginary media, where imaginary
media become less a matter of a Lacanian mirror phase looking for
utopian communication offering unity, but a deterritorialising way
of understanding the distributed ontology of the world and media
technologies. Variation and imagination become something else than
the imaginations of a point of view – quite the contrary, the imagination and variation give rise to points of view, which opens up a
whole new agenda of a past paradoxically not determined, and even
further, future as open to variation. This would mean taking into
account perceptions unheard of, unfelt, unthought-of, but still real in
their intensive potentiality, a becoming-other of the sensorium so to
speak. Hence, imagination becomes not a human characteristic but
an epistemological tool that interfaces analytics of media theory and
history with the world of animals and novel affects.
Imaginary media and variations at the heart of media cultural
modes of seeing and hearing have been discussed in various recent
books. The most obvious one is The Book of Imaginary Media, edited
by Eric Kluitenberg. According to the introduction, all media consist
of a real and an imagined part, a functional coupling of material characteristics and discursive dreams which fabricate the crucial features
of modern communication tied intimately with utopian ideals. Imaginary media – or actual media imagined beyond their real capacities – have been dreamed up to compensate for insufficient communication, a
realisation that Kluitenberg elaborates with the argument that “central to the archaeology of imaginary media in the end are not the
machines, but the human aspirations that more often than not are
left unresolved by the machines. . . ”. Powers of imagination are then
based in the human beings doing the imagining, in the human powers
able to transcend the actual and factual ways of perception and to
grasp the unseen, unheard and unthought of media creations. Variation remains connected to the principle of the central point where
variation is perceived.
Talking of the primacy of variation, we are easily reminded of
Siegfried Zielinski's application of the idea of ‘variantology' as an
‘anarchaeology of media', a task dedicated to the primacy of variation resisting the homogeneous drive of commercialised media spheres.
Excavating dreams of past geniuses, from Empedocles to Athanasius Kircher's cosmic machines and communication networks to Ernst Florens Friedrich Chladni's visualisation of sound, Zielinski has been underlining the creative potential in an exercise of imagining media. In this context, he gives a threefold definition of the term ‘imaginary media' in his chapter in the Book of Imaginary Media:
• Untimely media/apparatus/machines: “Media devised and designed
either much too late or much too early. . . ”
• Conceptual media/apparatus/machines: “Artefacts that were only
ever sketched as models. . . but never actually built.”
• Impossible media/apparatus/machines: “Imaginary media in the
true sense, by which I mean hermetic and hermeneutic machines. . .
they cannot actually be built, and whose implied meanings nonetheless have an impact on the factual world of media.”
A bit reminiscent of the baroque idea, variation is primary, claims
Zielinski. Whereas the capitalist-oriented consumer media culture
is working towards a psychopathia medialis of homogenized media
technological environments, variantology is committed to promoting
heterogeneity, finding dynamic moments of media archaeological past,
and excavating radical experiments that push the limits of what can
be seen, heard and thought. Variantology is then implicitly suggested
as a mode of ontogenesis, of bringing forth, of modulation and change
– an active mode of creation instead of distanced contemplation.
Indeed, the aim of promoting diversity is a much welcomed one,
but I would like to propose a slight adjustment to this task, something that I engage under the banner of ‘insect media'. Whereas
Zielinski and much of the existing media archaeological research still
starts off from the human world of male inventor-geniuses, I propose
a slightly more distributed look at the media archaeology of affects,
capacities, modes of perception and movement, which are primarily
not attached to a specific substance (animal, technology) but, since the 19th century at least, refer to a certain passage, a vector from animals to technology and vice versa. Here, a mode of baroque thought, a thought tuned in terms of variations, becomes unravelled with the help of an animality that is not to be seen as a metaphor, but as a metamorphosis, as ‘teachings' in weird perceptions, novel ways of moving, new ways of sensing, opening up to the world of sensations and contracting them. Instead of looking for variations through the inventions of people, we can turn to the ‘storehouses of invention' of, for example, insects, which from the 19th century on were introduced as an alien form of media in themselves. Next I will elaborate how we can use
these tiny animals as philosophical and media archaeological tools to
address media and technology as intensities that signal weird sensory
experiences.
Novel Sensoriums
During the latter half of the 19th century, insects were seen as
uncanny but powerful forms of media in themselves, capable of weird
sensory and kinaesthetic experiences. Examples range from popular newspaper discourse to scientific measurements and such early
best-sellers as An Introduction to Entomology; or, Elements of the
Natural History of Insects: Comprising an Account of Noxious and
Useful Insects, of Their Metamorphoses, Hybernation, Instinct (1815—
1826) by William Kirby and William Spence.
Since the 19th century, insects and animal affects are not only
found in biology but also in art, technology and popular culture. In
this sense, the 19th century interest in insects produces a valuable
perspective on the intertwining of biology (entomology), technology
and art, where the basics of perception are radically detached from
human-centred models towards the animal kingdom. In addition, this
science-technology-art trio presents a challenge to rethink the forces
which form what we habitually refer to as ‘media' as modes of perception. By expanding our notions of ‘media' from the technological
apparatuses to the more comprehensive assemblages that connect biological, technological, social and aesthetic issues, we are also able to
bring forth novel contexts for contemporary analysis and design of media systems. In a way, then, the concept of the ‘insect' functions here
as a displacing and a deterritorialising force that seeks a questioning
of where and in what kind of conditions we approach media technologies. This is perhaps an approach that moves beyond a focus on
technology per se, but still does not remain blind to the material forces
of the world. It presents an alternative to the ‘substance-approaches'
that start from a stability or a ground like ‘technology' or ‘humans'.
It is my claim that Deleuzian biophilosophy, that has taken elements
from Spinozian ontology, von Uexküll's ethology, Whitehead's ideas
as well as Simondon's notions on individuation, is able to approach
the world as media in itself: a contracting of forces and analysing
them in terms of their affects, movements, speeds and slownesses.
These affects are primary defining capacities of an entity, instead of
a substance or a class it belongs to, as Deleuze explains in his short
book Spinoza: Practical Philosophy. From this perspective we can
adopt a novel media archaeological rewiring that looks at media history not as one of inventors, geniuses and solid technologies, but as a
field of affects, interactions and modes of sensation and perception.
Examples from 19th century popular discourse are illustrative.
In 1897, the New York Times addressed spiders as ‘builders, engineers and weavers', and also as ‘the original inventors of a system of telegraphy'. Spiders' webs offer themselves as ingenious communication systems which do not merely signal according to a binary setting (something has hit the web/has not hit the web) but transmit information regarding the “general character and weight of any object touching it (. . . )” Or take for example the book Beautés et merveilles de la nature et des arts by Eliçagaray from the 18th century, which
lists both technological and animal wonders, for example bees and
ants, electricity and architectural constructions as marvels of artifice
and nature.
Similar accounts abound since the mid-19th century. Insects sense,
move, build, communicate and even create art in various ways that
raised wonder and awe, for example, in U.S. popular culture. An apt
example of the 19th century insect mania is the New York Times
story (May 29, 1880) about the ‘cricket mania' of a certain young
lady who collected and trained crickets as musical instruments:
200 crickets in a wirework-house, filled with ferns and shells,
which she called a ‘fernery'. The constant rubbing of the wings
of these insects, producing the sounds so familiar to thousands
everywhere seemed to be the finest music to her ears. She
admitted at once that she had a mania for capturing crickets.
Besides entertainment, and in a much earlier framework, the classic
of modern entomology, the aforementioned An Introduction to Entomology by Kirby and Spence already implicitly presented throughout
its four-volume best-seller the idea of a primitive technics of nature –
insect technics that were immanent to their surroundings.
Kirby and Spence's take probably attracted the attention it did
because of its catchy language but also because of what could be called its
ethological touch. Insects were approached as living and interacting
entities that are intimately coupled with their environment. Insects
intertwine with human lives (“Direct and indirect injuries caused by
insects, injuries to our living vegetable property but also direct and
indirect benefits derived from insects”), but also engage in ingenious
building projects, stratagems, sexual behaviour and other expressive
modes of motion, perception and sensation. Instead of pertaining to a
taxonomic account of the interrelations between insect species, their
forms, growth or for example structural anatomy, An Introduction to
Entomology (vol. 1) is traversed by a curiosity cabinet kind of touch
on the ethnographics of insects. Here, insects are for example war
machines, like the horse-fly (Tabanus L.): “Wonderful and various
are the weapons that enable them to enforce their demand. What
would you think of any large animal that should come to attack you
with a tremendous apparatus of knives and lancets issuing from its
mouth?”.
From Kirby and Spence to later entomologists and other writers,
insects' powers of building continuously attracted the early entomological gaze. Buildings of nature were described as more fabulous than
the pyramids of Egypt or the aqueducts of Rome. Suddenly, in this
weird parallel world, such minuscule and admittedly small-brained entities as termites were pictured as akin to the ancient monarchies
and empires of Western civilization. The Victorian appreciation of
ancient civilization could also incorporate animal kingdoms and their
buildings of monarchic measurements. Perhaps the parallel was not
to be taken literally, but in any case it expressed a curious interest
towards microcosmical worlds. A recurring trope was that of ‘insect geometrics', which seemed, with an accuracy paralleled only in mathematics, to follow and fold nature's resources into micro versions of
emerging urban culture. To quote Kirby and Spence's An Introduction to Entomology, vol. 2:
No thinking man ever witnesses the complexness and yet regularity and efficiency of a great establishment, such as the Bank of England or the Post Office, without marvelling that even human reason can put together, with so little friction and such
slight deviations from correctness, machines whose wheels are
composed not of wood and iron, but of fickle mortals of a thousand different inclinations, powers, and capacities. But if such
establishments be surprising even with reason for their prime
mover, how much more so is a hive of bees whose proceedings
are guided by their instincts alone!
Whereas the imperialist powers of Europe headed for overseas conquests, the mentality of exposition and mapping new terrains turned
also towards other fields than the geographical. The Seeing Eye – a
key figure of hierarchical modern power – could also be a non-human eye, as with the fly, which according to Steven Connor can be seen as the recurring figure of a “radically alien mode of entomological vision”, with its huge eyes consisting of 4,000 sensors. Hence, it is fitting that in 1898 the idea of “photographing through a fly's eye” was suggested as a mode of experimental vision – able also to catch Queen Victoria with “the most infinitesimal lens known to science”, that of a dragonfly.
Jean-Jacques Lecercle explains how the Victorian enthusiasm for
entomology and insect worlds is related to a general discourse of natural history that as a genre labelled the century. Through the themes
of ‘exploration' and ‘taxonomy' Lecercle claims that Alice in Wonderland can be read as a key novel of the era in its evaluation and
classification of various life worlds beyond the human. Like Alice in
the 1865 novel, new landscapes and exotic species are offered as an
armchair exploration of worlds not merely extensive but also opened
up by an intensive gaze into microcosms. Uncanny phenomenal worlds are what tie together the entomological quest, Darwinian-inspired biological accounts of curious species and Alice's adventures into imaginative worlds of twisting logic. In taxonomic terms, the entomologist
is surrounded by a new cult of private and public archiving. New
modes of visualizing and representing insect life produce a new phase
of taxonomy becoming a public craze instead of merely a scientific
tool. Again the wonder worlds of Alice or Edward Lear, the Victorian nonsense poet, are the ideal point of reference for the 19th century natural historian and entomologist, as Lecercle writes:
And it is part of a craze for discovering and classifying new
species. Its advantage over natural history is that it can invent those species (like the Snap-dragon-fly) in the imaginative
sense, whereas natural history can invent them only in the
archaeological sense, that is discover what already exists. Nonsense is the entomologist's dream come true, or the Linnaean
classification gone mad, because gone creative (. . . )
For Alice, the feeling of not being herself and “being so many different sizes in a day is very confusing”, which of course is something
incomprehensible to the Caterpillar she encounters. It is not queer for
the Caterpillar whose mode of being is defined by the metamorphosis
and the various perception/action-modulations it brings about. It
is only the suddenness of the becoming-insect of Alice that dizzies
her. A couple of years later, in The Population of an Old-Pear Tree,
or Stories of insect life (1870) an everyday meadow is disclosed as
a vivacious microcosm in itself. The harmonious scene, “like a great
amphitheatre”, is filled with life that easily escapes the (human) eye.
Like Alice, the protagonist wandering in the meadow is “lulled and
benumbed by dreamy sensations” which however transport him suddenly into new perceptions and bodily affects. What is revealed to
our boy hero in this educational novel fashioned in the style of travel
literature (connecting it thus to the colonialist contexts of its age)
is a world teeming with sounds, movements, sensations and insect
beings (huge spiders, cruel mole-crickets, energetic bees) that are beyond the human form (despite the constant tension of such narratives
as educational and moralising tales that anthropomorphize affective
qualities into human characteristics). True to entomological classification, a large part is reserved for the structural-anatomical differences of insect life, but the affect-life of how insects relate to their surroundings is also under scrutiny.
As precursors of ethology, such natural historical quests (whether
archaeological, entomological or imaginative) were expressing an appreciation of phenomenal worlds differing from that of the human
with its two hands, two eyes and two feet. In a way, this entailed a
kind of an extended Kantianism interested not only in the conditions
of possibility of experiences, but the emergence of alternative potentials on the immanent level of life that functions through a technics of
nature. Curiously, the inspiration drawn from new phenomenal worlds was connected to the emergence of new technologies of movement, sensation and communication (all challenging the Kantian apperception of
Man as the historically constant basis of knowledge and perception).
Nature was gradually becoming the “new storehouse of invention”
(New York Times, August 4, 1901) that was to entice inventors into
perfecting their developments. What I argue is that this theme can
also be read as an expression of a shift in understanding technology
– a shift that marked the rise of modern discourse concerning media
technologies since the end of the 19th century and that has usually
been attributed to an anthropological and ethnological turn in understanding technology. I also address this theme in another text of
mine, ‘Insect Technics'. For several writers such as Ernst Kapp who
became one of the predecessors of later theories of media as ‘extensions of man', it was the human body that served as a storage house
of potential media. However, at the same time, another undercurrent proposed to think of technologies, inventions and solutions to problems posed by life as stemming from a quite different class of bodies, namely insects.
So beyond Kant, we move onto a baroque world, not as a period of
art, but as a mode of folding and enveloping new ways of perception
and movement. The early years and decades of technical media were
characterized by the new imaginary of communication, from the work of inventors such as Nikola Tesla to various modes of spiritualism recently analyzed in the artworks of Zoe Beloff. However, one
can radicalize the viewpoint even further and take an animal turn and
not look for alien but for animal and insect ways of sensing the world.
Naturally, this is exactly what is being proposed in a variety of media
art pieces and exhibitions. Insects have made their appearance for
example in Toshio Iwai's Music Insects (1990), Sarah Peebles' electroacoustic Insect Grooves as an example of imaginary soundscapes,
David Dunn's acoustic ecology pieces with insect sounds, the Sci-Art:
Bio-Robotic Choreography project (2001, with Stelarc as one of the
participants), and Laura Beloff's Spinne (2002), a networked spider installation that works according to the web spider/ant/crawler
technology.
Here we are dealing not just with representing the insect, but engaging with the animal affects, indistinguishable from those of the
technological, as in Stelarc's work where the experimentation with
new bodily realities is a form of becoming-insect of the technological
human body. Imagining by doing is a way to engage directly with
affects of becoming-animal of media where the work of sound and
body artists doubles the media archaeological analysis of historical
strata. In other words, one should not reside on the level of intriguing representations of imagined ways of communication, or imagined
apparatuses that never existed, but realize the overabundance of real
sensations, perceptions to contract, to fold, the neomaterialist view
towards imagined media.
Literature
Ernest van Bruyssel, The population of an old pear-tree; or, Stories
of insect life. (New York: Macmillan and co., 1870).
Lewis Carroll, Alice's Adventures in Wonderland and Through the
Looking Glass. Edited with an Introduction and Notes by Roger
Lancelyn Green. (Oxford: Oxford University Press, 1998).
Claire Colebrook, ‘The Sense of Space. On the Specificity of Affect
in Deleuze and Guattari.' In: Postmodern Culture, vol. 15, issue 1,
2004.
Steven Connor, Fly. (London: Reaktion Books, 2006).
Manuel DeLanda, War in the Age of Intelligent Machines. (New
York: Zone Books, 1991).
Gilles Deleuze, Spinoza: Practical Philosophy. Transl. Robert
Hurley. (San Francisco: City Lights, 1988).
Gilles Deleuze, The Fold. Transl. Tom Conley. (Minneapolis:
University of Minnesota Press, 1993).
Ernst Kapp, Grundlinien einer Philosophie der Technik: Zur Entstehungsgeschichte der Kultur aus neuen Gesichtspunkten. (Braunschweig:
Druck und Verlag von George Westermann, 1877).
William Kirby & William Spence, An Introduction to Entomology,
or Elements of the Natural History of Insects. Volumes 1 and 2.
Unabridged facsimile of the 1843 edition. (London: Elibron, 2005).
Eric Kluitenberg (ed.), Book of Imaginary Media. Excavating the
Dream of the Ultimate Communication Medium. (Rotterdam: NAi
publishers, 2006).
Jean-Jacques Lecercle, Philosophy of Nonsense: The Intuitions of
Victorian Nonsense Literature. (London: Routledge, 1994).
Jussi Parikka, ‘Insect Technics: Intensities of Animal Bodies.' In:
(Un)Easy Alliance - Thinking the Environment with Deleuze/Guattari, edited by Bernd Herzogenrath. (Newcastle: Cambridge Scholars
Press, Forthcoming 2008).
Siegfried Zielinski, ‘Modelling Media for Ignatius Loyola. A Case Study on Athanasius Kircher's World of Apparatus between the Imaginary and the Real.' In: Book of Imaginary Media, edited by Kluitenberg. (Rotterdam: NAi, 2006).
PIERRE BERTHET
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN
Extended speakers
& Concert with various extended objects
We invited Belgian artist Pierre Berthet to create an installation
for V/J10 that explores the resonance of EVP voices. He made a
netting of thin metal wires which he suspended from the ceiling of
the haunted house in the La Bellone courtyard.
Through these metal wires, loudspeakers without membranes were
connected to a network of resonating cans. Sine tones and radio recordings were transmitted through the speakers, making the metal wires vibrate, which in turn caused the cans to resonate.
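For readers curious about the signals involved: a sine tone of the kind sent through the membrane-less speakers can be generated with a few lines of standard-library Python. The frequency, duration and amplitude below are arbitrary example values, not documentation of Berthet's installation:

```python
# Write a 440 Hz sine tone to a WAV file using only the standard library.
import math, wave, struct

RATE, FREQ, SECONDS, AMP = 44100, 440.0, 2.0, 0.5   # example values

with wave.open('sine_tone.wav', 'w') as out:
    out.setnchannels(1)          # mono
    out.setsampwidth(2)          # 16-bit samples
    out.setframerate(RATE)
    for n in range(int(RATE * SECONDS)):
        sample = AMP * math.sin(2 * math.pi * FREQ * n / RATE)
        out.writeframes(struct.pack('<h', int(sample * 32767)))
```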
figure 26 A netting of thin metal wires suspended from the ceiling of the haunted house in the La Bellone courtyard
figure 27
Concert with various extended objects
LEIFF ELGREN, CM VON HAUSSWOLFF
License: Fully Restricted Copyright
EN
Elgaland-Vargaland
The Embassy of the Kingdoms of Elgaland-Vargaland (KREV)
The Kingdoms were proclaimed in 1992 and consist of all ‘Border
Territories': geographical, mental and digital. Elgaland-Vargaland is
the largest – and most populous – realm on Earth, incorporating all
boundaries between other nations as well as ‘Digital Territory' and
other states of existence. Every time you travel somewhere, and every
time you enter another form of being, such as the dream state, you
visit Elgaland-Vargaland, the kingdom founded by Leiff Elgren and
CM von Hausswolff.
During the Venice Biennale, Elgren stated that all dead people
are inhabitants of the country Elgaland-Vargaland unless they had
declared that they did not want to be an inhabitant.
Since V/J10, the Elgaland-Vargaland Embassy permanently resides in La Bellone.
figure 80 Since V/J10, the Elgaland-Vargaland Embassy permanently resides in La Bellone
figure 81 Ambassadors Yves Poliart and Wendy Van Wynsberghe
figure 82
figure 83
figure 85
figure 86
NL
Elgaland-Vargaland
figure 84 Every time you travel somewhere, and every time you enter another form of being, you visit Elgaland-Vargaland.
CM VON HAUSSWOLFF, GUY-MARC HINANT
License: Creative Commons Attribution-NonCommercial-ShareAlike
figure 87 EVP could be the result of psychic echoes from the past, psychokinesis, or the thoughts of aliens or nature spirits.
figure 88 Drawings by Dominique Goblet, EVP sounds by Carl Michael von Hausswolff, images by Guy-Marc Hinant
For more information on EVP, see: http://en.wikipedia.org/wiki/Electronic_voice_phenomenon#_note-fontana1
EN
Ghost Machinery
During V/J10 we showed an audiovisual installation entitled Ghost Machinery, with drawings by Dominique Goblet, EVP sounds by Carl Michael von Hausswolff, and images by Guy-Marc Hinant, based on Dr. Stempnick's Electronic Voice Phenomena recordings.
Since the 1950s, EVP has been studied primarily by paranormal researchers, who have concluded that the most likely explanation for the phenomena is that they are produced by the spirits of the deceased. In 1959, Attila von Szalay first claimed to have recorded the ‘voices of the dead', which led to the experiments of Friedrich Jürgenson. The 1970s brought increased interest and research, including the work of Konstantin Raudive. In 1980, William O'Neill, backed by industrialist George Meek, built a ‘Spiricom' device, which was said to facilitate very clear communication between this world and the spirit world.
Investigation of EVP continues today through the work of many
experimenters, including Sarah Estep and Alexander McRae. In addition to spirits, paranormal researchers have claimed that EVP could
be due to psychic echoes from the past, psychokinesis unconsciously
produced by living people, or the thoughts of aliens or nature spirits.
Paranormal investigators have used EVP in various ways, including
as a tool in an attempt to contact the souls of dead loved ones and in
ghost hunting. Organizations dedicated to EVP include the American
Association of Electronic Voice Phenomena, the International Ghost
Hunters Society, as well as the skeptical Rorschach Audio project.
Read Feel Feed Real
EN
Electro-magnetic fields of ordinary objects acted as source material for an audio performance, surveillance cameras and legislation are ingredients for a science fiction film, live annotation of video streaming with the help of IRC chats. . .
A mobile video laboratory was set up during the festival, to test out how to bring together scripting, annotation, data readings and recordings in digital archives. Operating somewhere between surveillance and observation, the Open Source video team mixed hands-on Icecast streaming workshops with experiments looking at the way movements are regulated through motion control and vice versa.
MANU LUKSCH, MUKUL PATEL
License: Creative Commons Attribution - NonCommercial - ShareAlike license
figure 94 CCTV sculpture in a park in London
EN
Faceless: Chasing the Data Shadow
Stranger than fiction
Remote-controlled UAVs (Unmanned Aerial Vehicles) scan the city
for anti-social behaviour. Talking cameras scold people for littering
the streets (in children's voices). Biometric data is extracted from
CCTV images to identify pedestrians by their face or gait. A housing project's surveillance cameras stream images onto the local cable
channel, enabling the community to monitor itself.
figure 95 Poster in London
These are not projections of the science fiction film that this text discusses, but techniques that are in use today in Merseyside 1, Middlesborough 2, Newham and Shoreditch 3 in the UK.
1  “Police spy in the sky fuels ‘Big Brother fears'”, Philip Johnston, Telegraph, 23/05/2007, http://www.telegraph.co.uk/news/main.jhtml?xml=/news/2007/05/22/ndrone22.xml. The Guardian has reported that the MoD rents out an RAF-staffed spy plane for public surveillance, carrying reconnaissance equipment able to monitor telephone conversations on the ground. It can also be used for automatic number plate recognition: “Cheshire police recently revealed they were using the Islander [aircraft] to identify people speeding, driving when using mobile phones, overtaking on double white lines, or driving erratically.”
2  “‘Talking' CCTV scolds offenders”, BBC News, 4 April 2007, http://news.bbc.co.uk/2/hi/uk_news/england/6524495.stm
3  “If the face fits, you're nicked”, Nick Huber, Independent, Monday, 1 April 2002, http://www.independent.co.uk/news/business/analysis-and-features/if-the-face-fits-youre-nicked-656092.html. “In 2001 the Newham system was linked to a central control room operated by the London Metropolitan Police Force. In April 2001 the existing CCTV system in Birmingham city centre was upgraded to smart CCTV. People are routinely scanned by both systems and have their faces checked against the police databases.” Centre for Computing and Social Responsibility, http://www.ccsr.cse.dmu.ac.uk/resources/general/ethicol/Ecv12no1.html
105
leads the world in the deployment of surveillance technologies. With
an estimated 4.2 million CCTV cameras in place, its inhabitants are
the most watched in the world. 4 Many London buses have five or more
cameras inside, plus several outside, including one recording cars that
drive in bus lanes.
But CCTV images of our bodies are only one of many traces of
data that we leave in our wake, voluntarily and involuntarily. Vehicles are tracked using Automated Number Plate Recognition systems, our movements revealed via location-aware devices (such as
cell phones), the trails of our online activities recorded by Internet
Service Providers, our conversations overheard by the international
communications surveillance system Echelon, shopping habits monitored through store loyalty cards, individual purchases located using
RFID (Radio-frequency identification) tags, and our meal preferences
collected as part of PNR (flight passenger) data. 5 Our digital selves
are many dimensional, alert, unforgetting.
4  A Report on the Surveillance Society. For the Information Commissioner by the Surveillance Studies Network, September 2006, p. 19. Available from http://www.ico.gov.uk
5  ‘e-Borders' is a £1.2bn passenger-screening programme to be introduced in 2009 and
to be complete by 2014. The single border agency, combining immigration, customs
and visa checks, includes a £650m contract with the consortium Trusted Borders for a passenger-screening IT system: anyone entering or leaving Britain is to give 53 pieces
of information in advance of travel. This information, taken when a travel ticket is
bought, will be shared among police, customs, immigration and the security services
for at least 24 hours before a journey is due to take place. Trusted Borders consists
of US military contractor Raytheon Systems who will work with Accenture, Detica,
Serco, QinetiQ, Steria, Capgemini, and Daon. Ministers are also said to be considering
the creation of a list of ‘disruptive' passengers. It is expected to cost travel companies
£ 20million a year compiling the information. These costs will be passed on to customers via ticket prices, and the Government is considering introducing its own charge
on travellers to recoup costs. A pilot of the e-borders technology, known as Project
Semaphore, has already screened 29 million passengers.
Similarly, the arms manufacturer Lockheed Martin, the biggest defence contractor in
the U.S., that undertakes intelligence work as well as contributing to the Trident programme in the UK, is bidding to run the UK 2011 Census. New questions in the 2011
Census will include information about income and place of birth, as well as existing
questions about languages spoken in the household and many other personal details.
The Canadian Federal Government granted Lockheed Martin a $43.3 million deal to
conduct its 2006 Census. Public outcry against it resulted in only civil servants handling the actual data, and a new government task force being set up to monitor privacy
during the Census.
http://censusalert.org.uk/
http://www.vivelecanada.ca/staticpages/index.php/20060423184107361
Increasingly, these data traces are arrayed and administered in
networked structures of global reach. It is not necessary to posit a
totalitarian conspiracy behind this accumulation – data mining is an
exigency of both market efficiency and bureaucratic rationality. Much
has been written on the surveillance society and the society of control,
and it is not the object here to construct a general critique of data
collection, retention and analysis. However, it should be recognised
that, in the name of efficiency and rationality – and, of course, security – an ever-increasing amount of data is being shared (also sold,
lost and leaked 6) between the keepers of such seemingly unconnected
records as medical histories, shopping habits, and border crossings.
6  Sales: “Personal details of all 44 million adults living in Britain could be sold to
private companies as part of government attempts to arrest spiralling costs for the new
national identity card scheme, set to get the go-ahead this week. [...] ministers have
opened talks with private firms to pass on personal details of UK citizens for an initial
cost of £ 750 each.”
“Ministers plan to sell your ID card details to raise cash”, Francis Elliott, Andy McSmith and Sophie Goodchild, Independent, Sunday 26 June 2005
http://www.independent.co.uk/news/uk/politics/ministers-plan-to-sell-your-id-card-details
-to-raise-cash-496602.html
Losses: In January 2008, hundreds of documents with passport photocopies, bank
statements and benefit claims details from the Department of Work and Pensions were
found on a road near Exeter airport, following their loss from a TNT courier vehicle.
There were also documents relating to home loans and mortgage interest, and details
of national insurance numbers, addresses and dates of birth.
In November 2007, HM Revenue and Customs (HMRC) posted, unrecorded and unregistered via TNT, computer discs containing personal information on 25 million people
from families claiming child benefit, including the bank details of parents and the dates
of birth and national insurance numbers of children. The discs were then lost.
Also in November, HMRC admitted a CD containing the personal details of thousands
of Standard Life pension holders has gone missing, leaving them at heightened risk
of identity theft. The CD, which contained data relating to 15,000 Standard Life
pensions customers including their names, National Insurance numbers and pension
plan reference numbers was lost in transit from the Revenue office in Newcastle to the
company's headquarters in Edinburgh by ‘an external courier'.
Thefts: In November 2007, MoD acknowledged the theft of a laptop computer containing the personal details of 600,000 Royal Navy, Royal Marines, and RAF recruits
and of people who had expressed interest in joining, which contained, among other
information, passport, and national insurance numbers and bank details.
In October 2007, a laptop holding sensitive information was stolen from the boot of
an HMRC car. A staff member had been using the PC for a routine audit of tax
information from several investment firms. HMRC refused to comment on how many
individuals may be at risk, or how many financial institutions have had their data
stolen as well. The BBC suggests the computer held data on around 400 customers with
high value individual savings accounts (ISAs), at each of five different companies –
including Standard Life and Liontrust. (In May, Standard Life sent around 300 policy
documents to the wrong people.)
Legal frameworks intended to safeguard a conception of privacy by
limiting data transfers to appropriate parties exist. Such laws, and in
particular the UK Data Protection Act (DPA, 1998) 7, are the subject
of investigation of the film Faceless.
From Act to Manifesto
“I wish to apply, under the Data Protection Act,
for any and all CCTV images of my person held
within your system. I was present at [place] from
approximately [time] onwards on [date].” 8
For several years, ambientTV.NET conducted a series of exercises
to visualise the data traces that we leave behind, to render them
into experience and to dramatise them, to watch those who watch
us. These experiments, scrutinising the boundary between public
and private in post-9/11 daily life, were run under the title ‘the Spy
School'. In 2002, the Spy School carried out an exercise to test the
reach of the UK Data Protection Act as it applies to CCTV image
data.
The Data Protection Act 1998 seeks to strike a balance between
the rights of individuals and the sometimes competing interests
of those with legitimate reasons for using personal information.
The DPA gives individuals certain rights regarding information
held about them. It places obligations on those who process information (data controllers) while giving rights to those who are
the subject of that data (data subjects). Personal information
covers both facts and opinions about the individual. 9
7  The full text of the DPA (1998) is at http://www.opsi.gov.uk/ACTS/acts1998/19980029.htm
9  Data Protection Act Fact Sheet available from the UK Information Commissioner's Office, http://www.ico.gov.uk
The original DPA (1984) was devised to ‘permit and regulate'
access to computerised personal data such as health and financial
records. A later EU directive broadened the scope of data protection
and the remit of the DPA (1998) extended to cover, amongst other
data, CCTV recordings. In addition to the DPA, CCTV operators
‘must' comply with other laws related to human rights, privacy, and
procedures for criminal investigations, as specified in the CCTV Code
of Practice (http://www.ico.gov.uk).
As the first subject access request letters were successful in delivering CCTV recordings for the Spy School, it then became pertinent
to investigate how robust the legal framework was. The Manifesto for
CCTV filmmakers was drawn up, permitting the use only of recordings obtained under the DPA. Art would be used to probe the law.
figure 92 Still from Faceless, 2007
figure 94 Multiple, conflicting timecode stamps
A legal readymade
Vague spectres of menace caught on time-coded surveillance
cameras justify an entire network of peeping vulture lenses. A
web of indifferent watching devices, sweeping every street, every
building, to eliminate the possibility of a past tense, the freedom
to forget. There can be no highlights, no special moments: a
discreet tyranny of now has been established. Real time in its
most pedantic form. 10
Faceless is a CCTV science fiction fairy tale set in London, the city
with the greatest density of surveillance cameras on earth. The film
is made under the constraints of the Manifesto – images are obtained
from existing CCTV systems by the director/protagonist exercising
her/his rights as a surveilled person under the DPA. Obviously the
protagonist has to be present in every frame. To comply with privacy
legislation, CCTV operators are obliged to render other people in
the recordings unidentifiable – typically by erasing their faces, hence
the faceless world depicted in the film. The scenario of Faceless thus
derives from the legal properties of CCTV images.
10 Iain Sinclair, Lights Out for the Territory, Granta, London, 1998, p. 91
“RealTime orients the life of every citizen. Eating, resting, going
to work, getting married – every act is tied to RealTime. And every
act leaves a trace of data – a footprint in the snow of noise...” 11
The film unfolds in an eerily familiar city, where the reformed RealTime calendar has dispensed with the past and the future, freeing
citizens from guilt and regret, anxiety and fear. Without memory or
anticipation, faces have become vestigial – the population is literally
faceless. Unimaginable happiness abounds – until a woman recovers
her face...
There was no traditional shooting script: the plot evolved during
the four-year long process of obtaining images. Scenes were planned
in particular locations, but the CCTV recordings were not always
obtainable, so the story had to be continually rewritten.
Faceless treats the CCTV image as an example of a legal readymade (‘objet trouvé'). The medium, in the sense of raw materials
that are transformed into artwork, is not adequately described as
simply video or even captured light. More accurately, the medium
comprises images that exist contingent on particular social and legal
circumstances – essentially, images with a legal superstructure. Faceless interrogates the laws that govern the video surveillance of society
and the codes of communication that articulate their operation, and
in both its mode of coming into being and its plot, develops a specific
critique.
Reclaiming the data body
Through putting the DPA into practice and observing the consequences over a long exposure, close-up, subtle developments of the
law were made visible and its strengths and lacunae revealed.
“I can confirm there are no such recordings of
yourself from that date, our recording system was
not working at that time.” (11/2003)
11 Faceless, 2007
Many data requests had negative outcomes because either the surveillance camera, or the recorder, or the entire CCTV system in question
was not operational. Such a situation constitutes an illegal use of
CCTV: the law demands that operators: “comply with the DPA by
making sure [...] equipment works properly.” 12
In some instances, the non-functionality of the system was only
revealed to its operators when a subject access request was made. In
the case below, the CCTV system had been installed two years prior
to the request.
“Upon receipt of your letter [...] enclosing the
required 10£ fee, I have been sourcing a company
who would edit these tapes to preserve the privacy of other individuals who had not consented
to disclosure. [...] I was informed [...] that all
tapes on site were blank. [.. W]hen the engineer
was called he confirmed that the machine had not
been working since its installation.
Unfortunately there is nothing further that can be
done regarding the tapes, and I can only apologise
for all the inconvenience you have been caused.”
(11/2003)
Technical failures on this scale were common. Gross human errors
were also readily admitted to:
12 CCTV Systems and the Data Protection Act 1998, available from http://www.ico.gov.uk
“As I had advised you in my previous letter, a request was made to remove the tape and for it not
to be destroyed. Unhappily this request was not
carried out and the tape was wiped according with
the standard tape retention policy employed by
[deleted]. Please accept my apologies for this and
assurance that steps have been taken to ensure a
similar mistake does not happen again.” (10/2003)
figure 98: The Rotain Test, devised by the UK Home Office Police Scientific Development Branch, measures surveillance camera performance.
Some responses, such as the following, were just mysterious (data
request made after spending an hour below several cameras installed
in a train carriage).
“We have carried out a careful review of all relevant tapes and we confirm that we have no images of
you in our control.” (06/2005)
Could such a denial simply be an excuse not to comply with the costly
demands of the DPA?
Many older cameras deliver image quality so poor that faces are unrecognisable. In such cases the operator fails in the obligation to run CCTV for the declared purposes.

“You will note that yourself and a colleague's faces look quite indistinct in the tape, but the picture you sent to us shows you wearing a similar fur coat, and our main identification had been made through this and your description of the location.” (07/2002)
To release data on the basis of such weak identification compounds
the failure.
Much confusion is caused by the obligation to protect the privacy
of third parties in the images. Several data controllers claimed that
this relieved them of their duty to release images:
“[... W]e are not able to supply you with the images you requested because to do so would involve
disclosure of information and images relating to
other persons who can be identified from the tape
and we are not in a position to obtain their consent to disclosure of the images. Further, it is
simply not possible for us to eradicate the other
images. I would refer you to section 7 of the Data
Protection Act 1998 and in particular Section 7
(4).” (11/2003)
Even though the section referred to states that it is:
“not to be construed as excusing a data controller
from communicating so much of the information
sought by the request as can be communicated without disclosing the identity of the other individual concerned, whether by the omission of names or
other identifying particulars or otherwise.”
Where video is concerned, anonymisation of third parties is an expensive, labour-intensive procedure – one common technique is to occlude each head with a black oval. Data controllers may only charge the statutory maximum of £10 per request, though not all seemed to be aware of this:
“It was our understanding that a charge for production of the tape should be borne by the person
making the enquiry, of course we will now be checking into that for clarification. Meanwhile please
accept the enclosed video tape with compliments of
[deleted], with no charge to yourself.” (07/2002)
figure 90: Off with their heads!
Visually provocative and symbolically charged as the occluded heads are, they do not necessarily guarantee anonymity. The erasure of a face may be insufficient if the third party is known to the person requesting images. Only one data controller undeniably (and elegantly) met the demands of third party privacy, by masking everything but the data subject, who was framed in a keyhole. (This was an uncommented second offering; the first tape sent was unprocessed.) One CCTV operator discovered a useful loophole in the DPA:
“I should point out that we reserve the right, in
accordance with Section 8(2) of the Data Protection
Act, not to provide you with copies of the information requested if to do so would take disproportionate effort.” (12/2004)
What counts as ‘disproportionate effort'? The gold standard was set by an institution whose approach was almost baroque – they delivered hard copies of each of the several hundred relevant frames from the time-lapse camera, with third parties' heads cut out, apparently with nail scissors.
Two documents had (accidentally?) slipped in between the printouts – one a letter from a junior employee tendering her resignation
(was it connected with the beheading job?), and the other an ironic
memo:
“And the good news -- I enclose the 10 £ fee to be
passed to the branch sundry income account.” (Head
of Security, internal communication 09/2003)
From 2004, the process of obtaining images became much more difficult.
“It is clear from your letter that you are aware
of the provisions of the Data Protection Act and
that being the case I am sure you are aware of
the principles in the recent Court of Appeal decision in the case of Durant vs. Financial Services Authority. It is my view that the footage you
have requested is not personal data and therefore
[deleted] will not be releasing to you the footage
which you have requested.” (12/2004)
Under Common Law, judgements set precedents. The decision in
the case Durant vs. Financial Services Authority (2003) redefined
‘personal data'; since then, simply featuring in raw video data does
not give a data subject the right to obtain copies of the recording.
Only if something of a biographical nature is revealed does the subject
retain the right.
“Having considered the matter carefully, we do not
believe that the information we hold has the necessary relevance or proximity to you. Accordingly
we do not believe that we are obligated to provide
you with a copy pursuant to the Data Protection Act
1988. In particular, we would remark that the video
is not biographical of you in any significant way.”
(11/2004)
Further, with the introduction of cameras that pan and zoom, being
filmed as part of a crowd by a static camera is no longer grounds for
a data request.
“[T]he Information Commissioners office has indicated that this would not constitute your personal
data as the system has been set up to monitor the
area and not one individual.” (09/2005)
As awareness of the importance of data rights grows, so the actual
provision of those rights diminishes:
figure 89: Still from Faceless, 2007
"I draw your attention to CCTV systems and the Data
Protection Act 1998 (DPA) Guidance Note on when the
Act applies. Under the guidance notes our CCTV system is no longer covered by the DPA [because] we:
• only have a couple of cameras
• cannot move them remotely
• just record on video whatever the cameras pick
up
• only give the recorded images to the police to
investigate an incident on our premises"
(05/2004)
Data retention periods (which data controllers define themselves)
also constitute a hazard to the CCTV filmmaker:
“Thank you for your letter dated 9 November addressed to our Newcastle store, who have passed
it to me for reply. Unfortunately, your letter was
delayed in the post to me and only received this
week. [...] There was nothing on the tapes that you
requested that caused the store to retain the tape
beyond the normal retention period and therefore
CCTV footage from 28 October and 2 November is no
longer available.” (12/2004)
Amidst this sorry litany of malfunctioning equipment, erased tapes,
lost letters and sheer evasiveness, one CCTV operator did produce
reasonable justification for not being able to deliver images:
“We are not in a position to advise whether or not
we collected any images of you at [deleted]. The
tapes for the requested period at [deleted] had
been passed to the police before your request was
received in order to assist their investigations
into various activities at [deleted] during the
carnival.” (10/2003)
figure 91: Still from Faceless, 2007
In the shadow of the shadow
There is debate about the efficacy, value for money, quality of implementation, political legitimacy, and cultural impact of CCTV systems in the UK. While CCTV has been presented as being vital in solving some high profile cases (e.g. the 1999 London nail bomber, or the 1993 murder of James Bulger), at other times it has been strangely, publicly, impotent (e.g. the 2005 police killing of Jean Charles de Menezes). The prime promulgators of CCTV may have lost some faith: during the 1990s the UK Home Office spent 78% of its crime prevention budget on installing CCTV, but in 2005, an evaluation report by the same office concluded that, “the CCTV schemes that have been assessed had little overall effect on crime levels.” 13
An earlier, 1992, evaluation reported CCTV's broadly positive
public reception due to its assumed effectiveness in crime control,
acknowledging “public acceptance is based on limited and partly inaccurate knowledge of the functions and capabilities of CCTV systems
in public places.” 14
By the 2005 assessment, support for CCTV still “remained high in
the majority of cases” but public support was seen to decrease after
implementation by as much as 20%. This “was found not to be the
reflection of increased concern about privacy and civil liberties, as
this remained at a low rate following the installation of the cameras,”
13 Gill, M. and Spriggs, A., Assessing the impact of CCTV. London: Home Office Research, Development and Statistics Directorate, 2005, pp. 60-61. www.homeoffice.gov.uk/rds/pdfs05/hors292.pdf
14 http://www.homeoffice.gov.uk/rds/prgpdfs/fcpu35.pdf
but “that support for CCTV was reduced because the public became
more realistic about its capabilities” to lower crime.
Concerns, however, have begun to be voiced about function creep
and the rising costs of such systems, prompted, for example, by the
disclosure that the cameras policing London's Congestion Charge remain switched on outside charging hours and that the Met are to
have live access to them, having been exempted from parts of the
Data Protection Act to do so. 15 As such realities of CCTV's daily
operation become more widely known, existing acceptance may be
somewhat tempered.
Physical bodies leave data traces: shadows of presence, conversation, movement. Networked databases incorporate these traces into
data bodies, whose behaviour and risk are priorities for analysis and
commodification, by business and by government. The securing of
a data body is supposedly necessary to secure the human body, either preventatively or as a forensic tool. But if the former cannot
be assured, as is the case, what grounds are there for trust in the
hollow promise of the latter? The all-seeing eye of the panopticon is
not complete, yet. Regardless, could its one-way gaze ever assure an
enabling conception of security?
15 Surveillance State Function Creep – London Congestion Charge “real-time bulk data” to be automatically handed over to the Metropolitan Police etc. http://p10.hostingprod.com/@spyblog.org.uk/blog/2007/07/surveillance_state_function_creep_london_congestion_charge_realtime_bulk_data.html
MICHAEL MURTAUGH
figure 113: Start broadcasting yourself!
License: Free Art License
EN
Active Archives
or: What's wrong with the YouTube documentary?
As someone who has shot video and programmed web-based interfaces to video over the past decade, it has been exciting to see how
distributing video via the Internet has become increasingly popularized, thanks in large part to video sharing sites like YouTube. At the
same time, I continue to design and write software in search of new
forms of collaborative and ‘evolving' documentaries; and for myself,
and others around me, I feel a lack of interest in, even an aversion to, posting videos on YouTube. This essay has two threads: (1) I revisit an
earlier essay describing the ‘Evolving Documentary' model to get at
the roots of my enthusiasm for working with video online, and (2) I
examine why I find YouTube problematic, and more a reflection of
television than the possibilities that the web offers.
In 1996, I co-authored an essay with Glorianna Davenport, then
my teacher and director of the Interactive Cinema group at the MIT
Media Lab, called Automatist storyteller systems and the shifting
sands of story. 1 In it, we described a model for supporting ‘Evolving
Documentaries', or an “approach to documentary storytelling that
celebrates electronic narrative as a process in which the author(s), a
networked presentation system, and the audience actively collaborate
in the co-construction of meaning.” In this paper, Glorianna included
a section entitled ‘What's wrong with the Television Documentary?'
The main points of this argument were as follows:
1
figure 114: Join the largest worldwide video-sharing community!
1. [... T]elevision consumes the viewer. Sitting passively in front of a TV screen, you may appreciate an hour-long documentary; you may even find the story of interest; however, your ability to learn from the program is less than what it might be if you were actively engaged with it, able to control its shape and probe its contents.
Here, it is crucial to understand what is meant by the word ‘active'. In a naive comparison between the activities of watching television
and surfing the web, one might say that the latter is inherently more
active in the sense that the process is ‘driven' by the choices of the
user; in the early days of the web it became popular to refer to this
split as ‘lean back vs. lean forward' media. Of course, if one means
to talk about cognitive activity, this is clearly misleading as aimlessly surfing the net can be achieved at near comatose levels of brain
function (as any late night surfer can attest to) and watching a particularly sharp television program can be incredibly engaging, even
life changing. Glorianna would often describe her frustration with
traditional documentary by observing the vast difference between her
own sense of engagement with a story gained through the process of
shooting and editing, versus the experience of an audience member
from simply viewing the end result. Thus ‘active' here relates to the
act of authoring and the construction of meaning. Rather than talking about leaning forward or backward, a more useful split might be
between reading and writing. Rather than being a question of bad
versus good access, the issue becomes about two interconnected cognitive processes, both hopefully ‘active' and involving thought. An
ideal platform for online documentary would be one that facilitates a
fluid movement between moments of reflection (reading) and of construction (writing).
2. Television severely limits the ways in which an author can ‘grow' a story. A story must be composed into a fixed, unchanging form before the audience can see and react to it: there is no obvious way to connect viewers to the process of story construction. Similarly, the medium offers no intrinsic, immediately available way to interconnect the larger community of viewers who wish to engage in debate about a particular story.
Part of the promise of crossing video with computation is the potential to combine the computers' ability to construct models and
run simulations with the random access possibilities of digitized media. Instead of editing a story down into a fixed form or ‘final cut',
one can program a ‘storytelling system' that can act as an ‘editor in
software'. Thus the system can maintain a dynamic representation
of the context of a particular telling, on which to base (or support a
viewer in making) editing decisions ‘on the fly'. The ‘Evolving Documentary' was intended to support complex stories that would develop
over time, and which could best be told from a variety of points of
view.
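By way of illustration only, a minimal sketch (in Python, and emphatically not the Interactive Cinema group's actual system) of such an ‘editor in software' might keep a running history of what has been shown and score candidate clips against it on the fly; the clip annotations, the next_clip function and the scoring rule are all invented:

# A minimal sketch of an 'editor in software': clips are annotated with
# descriptive keywords, and the system keeps a running viewing history
# in order to choose the next clip on the fly. All names and the scoring
# rule here are invented for illustration.

clips = [
    {"id": "intro_market", "keywords": {"market", "morning", "vendor"}},
    {"id": "vendor_portrait", "keywords": {"vendor", "family"}},
    {"id": "city_traffic", "keywords": {"traffic", "morning"}},
    {"id": "family_dinner", "keywords": {"family", "evening"}},
]

def next_clip(history, clips):
    """Pick an unseen clip that shares the most keywords with the last clip shown."""
    seen = {c["id"] for c in history}
    recent = history[-1]["keywords"] if history else set()
    candidates = [c for c in clips if c["id"] not in seen]
    if not candidates:
        return None                        # the telling is exhausted
    return max(candidates, key=lambda c: len(c["keywords"] & recent))

history = [clips[0]]                       # the telling starts somewhere
clip = next_clip(history, clips)
while clip is not None:                    # ...and evolves with each request
    history.append(clip)
    clip = next_clip(history, clips)
print([c["id"] for c in history])

The point of the sketch is simply that the ‘final cut' disappears: the sequence is recomputed from the current context each time, so a different history produces a different telling.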
3. Like published books and movies, television is designed for unidirectional, one-to-many transmission to a mass audience, without variation or personalization of presentation. The remote-control unit and the VCR (videocassette recorder) - currently the only devices that allow the viewer any degree of independent control over the play-out of television - are considered anathema by commercial broadcasters. Grazing, time-shifting, and ‘commercial zapping' run contrary to the desire of the industry for a demographically correct audience that passively absorbs the programming - and the intrusive commercial messages - that the broadcasters offer.
Adding a decentralized means of distribution and feedback such
as the Internet provides the final piece of the puzzle in creating a
compelling new medium for the evolving documentary. No longer
would footage have to be excluded for reasons of reaching a ‘broad'
or average audience. An ideal storytelling system would be one that
could connect an individual viewer to whatever material was most
personally relevant. The Internet is a unique ‘mass media' in its
potential support for enabling access to non-mainstream, individually
relevant and personal subject matter.
What's wrong with the YouTube documentary?
YouTube has massively popularized the sharing and consumption
of video online. That said, most of the core concerns raised in the arguments about television are still relevant to YouTube when it is considered as a platform for online collaborative documentary.
Clips are primarily ‘view-only'
Already in its name, ‘YouTube' consciously invokes the television
set, thus inviting visitors to ‘lean back' and watch. The YouTube
interface functions primarily as a showcase of static monolithic elements. Clips are presented as fixed and finished, to be commented
upon, rated, and possibly bookmarked, but no more. The clip is
‘atomic' in the sense that it's not possible to make selections within a
clip, to export images or sound, or even to link to a particular starting
point. Without special plugins, the site doesn't even allow downloading of the clip. While users are encouraged ‘to embed' YouTube content in other websites (by cutting and pasting special HTML codes
that refer back to the YouTube site), the resulting video plays using
the YouTube player, complete with ‘related' links back into the service. It is in fact a violation of the YouTube terms of use to attempt
to display videos from the service in any other way.
The format of the clip is fixed and uniform for all kinds
of content
Technically, YouTube places some rather arbitrary limits on the format of clips: all clips must contain an image and a sound track, and may not be longer than 10 minutes. Furthermore, all clips are treated equally: there is no notion of a ‘lecture' versus a ‘slideshow' versus a ‘music video', nor any sense that these different kinds of material might need to be handled differently. Each clip is compressed in a uniform way, meaning at the moment into a Flash-format video file of fixed data rate and screen size.
Clips have no history
Despite these limitations, users of YouTube have found workarounds
to, for instance, download clips to then rework them into derived clips.
Although the derived works are often placed back again on YouTube,
the system itself has no means of representing this kind of relationship.
(There is a mechanism for posting video responses to other clips, but
this kind of general purpose solution seems not to be understood or
used to track this kind of ‘derived' relationship.) The system is unable to model or otherwise make available the ‘history' of a particular
piece of media. Contrast this with a system like Wikipedia, where the
full history of an article, with a record of what was changed, by whom,
when, and even ‘meta-level' discussions about the changes (including
possible disagreement) is explicitly facilitated.
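By way of contrast, even a very small amount of explicit structure is enough to model a ‘derived from' relationship between clips. The following sketch (in Python, with invented field names; it does not describe any actual YouTube or Wikipedia schema) records each clip's lineage in the spirit of a wiki revision log:

# A minimal sketch of recording a clip's history explicitly, in the spirit
# of a wiki's revision log. The clips, field names and entries are invented.
clips = {
    "a1": {"title": "street scene, raw footage", "derived_from": None,
           "author": "anna", "note": "original upload"},
    "b2": {"title": "street scene, subtitled", "derived_from": "a1",
           "author": "ben", "note": "added subtitles"},
    "c3": {"title": "street scene remix", "derived_from": "b2",
           "author": "chris", "note": "re-edited to two minutes"},
}

def lineage(clip_id, clips):
    """Walk back through the 'derived_from' links to reconstruct a clip's history."""
    chain = []
    while clip_id is not None:
        entry = clips[clip_id]
        chain.append((clip_id, entry["author"], entry["note"]))
        clip_id = entry["derived_from"]
    return list(reversed(chain))           # oldest first

for step in lineage("c3", clips):
    print(step)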
Weak or ‘flat' narrative structure
YouTube's primary model for narrative is a broad (and somewhat
obscure) sense of ‘relatedness' (based on user-defined tags) modulated
by popularity. As with many ‘social networking' and media sharing
sites, YouTube relies on ‘positive feedback' popularity mechanisms,
such as view counts, ‘star' ratings and favorites, to create ranked lists
of clips. Entry points like ‘Videos being watched right now', ‘Most
Viewed', ‘Top Favorites', only close the loop of featuring what's already popular to begin with. In addition, YouTube's commercial
model of enabling special paid levels of membership leads to ambiguous selection criteria, complicated by language as in the ‘Promoted
Videos' and ‘Featured Videos' of YouTube's front page (promoting
what?, featured by whom?).
The ‘editing logic' threading the user through the various clips is
flat, in that a clip is shown the same way regardless of what has been
viewed before it. Thus YouTube makes no visible use of a particular viewing history (though the fact that this information is stored
has been brought to the attention of the public via the ongoing Viacom lawsuit, http://news.bbc.co.uk/2/hi/technology/7506948.stm).
In this way it's difficult to get a sense of being in a particular ‘story
arc' or thread when moving from clip to clip in YouTube as in a sense
each click and each clip restarts the narrative experience.
No licenses for sharing / reuse
The lack of a download feature in YouTube could be said to protect the interests of those who wish to assert a claim of copyright.
However, YouTube ignores and thus obscures the question of license
altogether. One can find for instance the early films of Hitchcock,
now part of the public domain, in 10 minute chunks on YouTube;
despite this status (not indicated on the site), these clips are, like all
YouTube clips, unavailable for any kind of manipulation. This approach, and the limitations it places on the use of YouTube material,
highlights the fact that YouTube is primarily focused on getting users
to consume YouTube material, framed in YouTube's media player, on
YouTube's terms.
Traditional models for (software) authorship
While YouTube is built using open source software (Python and
ffmpeg for instance), the source code of the system itself is closed,
leaving little room for negotiation about how the software of the
site itself operates. This is a pity on a variety of levels. Free and
open source software is inextricably bound to the web not only in
terms of providing much of the underlying software (like the Apache web server), but also in the reverse, as the possibilities for collaborative development that the web provides have catalyzed the process of
open source development. Software designed to support collaborative work on code, like Subversion and other version control systems, and platforms for tracking and discussing software (like Trac), provide much richer models of use and relationship to work than those which YouTube offers for video production.
Broadcasting over coherence
From its slogan (‘Broadcast Yourself'), to the language the service uses around joining and uploading videos (see images), YouTube falls very much into a traditional model of commercial broadcast television. In this model, sharing means getting others to watch your clips, with the more eyeballs the better.
The desire for broadness and the building of a ‘worldwide' community united only by a desire to ‘broadcast one's self' means creating
coherence is not a top priority. YouTube comments, for instance,
seem to suffer from this lack of coherence and context. Given no
particular focus, comments seem doomed to be similarly ungrounded
and broad. Indeed, comments in YouTube often seem to take on
more the character of public toilets than of public broadcasting, replete with the kind of sexism, racism, and homophobia that more or
less anonymous ‘blank wall' access seems to encourage.
A problematic space for ‘sharing'
The combination of all these aspects makes YouTube, for many, a problematic space for ‘sharing' - particularly when the material is of a personal or particular nature. While on the one hand appearing to pose an alternative platform to television, YouTube unfortunately transposes many of that form's limitations and conventions onto the web.
Looking to the future, what still remains challenging is figuring out how to fuse all those aspects that make the Internet so compelling as a medium and enable them in the realm of online video: the net's decentralized nature, the possibilities for participatory/collaborative production, the ability to draw on diverse sources of knowledge (from ‘amateur' and home-based, to ‘expert'). How can the successful examples of collaborative text-based projects like Wikipedia inspire new
forms of collaborative video online, in a way that escapes the ‘heaviness' and inertia of traditional forms of film/video? This fusion
can and needs to take place on a variety of levels, from the concept
of what a documentary is and can be, to the production tools and
content management systems media makers use, to a legal status of
media that reflects an understanding that culture is something which
is shared, down to the technical details of the formats and codecs
carrying the media in a way that facilitates sharing, instead of complicating it.
EN
NL
FR
Mutual Motions
Whether we operate a computer with the help of a command line interface, or by using buttons, switches and clicks... the exact location of interaction often serves as a conduit for mutual knowledge - machines learn about bodies and bodies learn about machines. Dialogues happen at different levels and in various forms: code, hardware, interface, language, gestures, circuits.
Those conversations are sometimes gentle in tone - ubiquitous requests almost go unnoticed - and at other times they take us by surprise because of their authoritative and demanding nature: “Put That There”. How can we think about such feedback loops in productive ways? How are interactions translated into software, and how does software result in interaction? Could the practice of using and producing free software help us find a middle ground between technophobia and technofetishism? Can we imagine ourselves and our realities differently when we try to re-design interfaces in a collaborative environment? Would a different idea about the ‘user' change our approach to ‘use' as well?
7 “Classic puff pastry begins with a basic dough called a détrempe (pronounced day-trahmp) that is rolled out and wrapped around a slab of butter. The dough is then repeatedly rolled, folded, and turned.” Molly Stevens, A Shortcut to Flaky Puff Pastry, 2008. http://www.taunton.com/finecooking/articles/how-to/rough-puff-pastry.aspx
figure XI
figure XIII
ADRIAN MACKENZIE
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN
Centres of envelopment and intensive movement
in digital signal processing
figure 115: Adrian Mackenzie at V/J10
Abstract
The paper broadly concerns algorithmic processes commonly found in wireless networks and in video and audio compression. The problem it addresses is how to account for the convoluted nature of digital signal processing (DSP). Why is signal processing so complex and relatively inaccessible? The paper argues that we can only understand what is at stake in these labyrinthine calculations by switching focus away from abstract understandings of calculation to the dynamic re-configuration of space and movement occurring in signal processing. The paper works through one example of this reconfigured movement in detail in order to illustrate how digital signal processing enables different experiences of proximity, intimacy, co-location and distance. It explores how wireless signal processing algorithms envelop heterogeneous spaces in the form of hidden states and logistical networks. Importantly, it suggests that the ongoing dynamism of signal processing could be understood in terms of intensive movement produced by a centre of envelopment. Centres of envelopment generate extensive changes, but they also change the nature of change itself.
From sets to signals: digital signal processing
In new media art, in new media theory and in various forms of
media activism, there has been so much work that seizes on the possibilities of using digital technologies to design interactions, sound,
image, text, and movement that challenge dominant forms of experience, habit and selfhood. In various ways, the processes of branding,
commodification, consumption, control and surveillance associated
with contemporary media have been critically interrogated and challenged.
However, there are some domains of contemporary technological
and media culture that are really hard to work with. They may
be incredibly important, they may be an intimate part of everyday
life, yet remain relatively intractable. They resist contestation, and engagement with them may even seem pointless. This is because they may
contain intractable materials, or be organised in such complicated
ways that they are hard to change.
This paper concerns one such domain, digital signal processing
(DSP). I am not saying that new media has not engaged with DSP. Of
course it has, especially in video art and sound art, but there is little
work that helps us make sense of how the sensations, textures, and
movements associated with DSP come to be taken for granted, come
to appear as normal and everyday, or how they could be contested.
A promotional video from Intel for the UltraMobilePC 1 promotes
change in relation to mobile media. Intel, because it makes semiconductors, is highly invested in digital signal processing in various forms.
In any case, video itself is a prime example of contemporary DSP at
work. Two aspects of this promotional video for the UMPC, the UltraMobile PC, relate to digital signal processing. There is much signal
processing here. It connects individuals' eyes, mouths and ears
to screens that display information services of various kinds. There
is also much signal processing in the wireless network infrastructures
that connect all these gadgets to each other and to various information services (maps, calendars, news feeds). In just this example,
sound, video, speech recognition, fibre, wireless and satellite, imaging
technologies in medicine all rely on DSP. We could say a good portion
of our experience is DSP-based.
This paper is an attempt to develop a theory of digital signal processing, a theory that could be used to talk about ways of contesting,
critiquing, or making alternatives. The theory under development
here relies a lot on two notions, ‘intensive movement' and ‘centre
of envelopment' that Deleuze proposed in Difference and Repetition.
figure 117: A promotional video from Intel for the UltraMobilePC
1 http://youtube.com/watch?v=GFS2TiK3AI
However, I want to keep the philosophy in the background as much as
possible. I basically want to argue that we need to ask: why does so
much have to be enveloped or interiorised in wireless or audiovisual
DSP?
How does DSP differ from other algorithmic processes?
What can we say about DSP? Firstly, influenced by recent software
studies-based approaches (Fuller, Chun, Galloway, Manovich), I think
it is worth comparing the kinds of algorithmic processes that take
place in DSP with those found in new media more generally. Although
it is an incredibly broad generalisation, I think it is safe to say that
DSP does not belong to the set-based algorithms and data-structures
that form the basis of much interest in new media interactivity or
design.
DSP differs from set-based code. If we think of social software such
as Flickr, Google, or Amazon, if we think of basic information infrastructures such as relational databases or networks, if we think of
communication protocols or search engines, all of these systems rely
on listing, enumerating, and sorting data. The practices of listing,
indexing, addressing, enumerating and sorting, all concern sets. Understood in a fairly abstract way, this is what much software and code
does: it makes and changes sets. Even areas that might seem quite
remote from set-making, such as the 3D-projective geometry used in
computer game graphics, are often reduced algorithmically to complicated set-theoretical operations on shapes (polygons). Even many
graphic forms are created and manipulated using set operations.
The elementary constructs of most programming languages reflect
this interest in set-making. For instance, networks or, in computer
science terms, graphs, are visually represented using lines and
boxes. But in terms of code, they are presented as either edge or
‘adjacency lists', like this: 2
graph = {'A': ['B', 'C'],
         'B': ['C', 'D'],
         'C': ['D'],
         'D': ['C'],
         'E': ['F'],
         'F': ['C']}

2 http://www.python.org/doc/essays/graphs/
A graph or network can be seen as a collection of lists – here, a dictionary mapping each node to a list of its neighbours. This kind of representation of relations in code is very neat and nice. It means that
something like the structure of the internet, as a hybrid of physical
and logical relations, can be recorded, stored, sorted and re-ordered
in code. Importantly, it is highly open to modification and change.
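To give a sense of how open such a structure is to manipulation, the python.org essay cited above builds simple path-finding routines over exactly this dictionary; the sketch below follows that idea (the function name and comments are mine), recursively scanning the neighbour lists of the graph defined above:

# Finding a route through the graph defined above: a simple recursive search
# that treats the network purely as lists to be scanned and extended.
def find_path(graph, start, end, path=None):
    path = (path or []) + [start]
    if start == end:
        return path
    if start not in graph:
        return None
    for node in graph[start]:
        if node not in path:               # avoid going around in circles
            newpath = find_path(graph, node, end, path)
            if newpath:
                return newpath
    return None

print(find_path(graph, 'A', 'D'))          # ['A', 'B', 'C', 'D']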
Social software, or Web 2.0, as exemplified in websites like Facebook or YouTube, can also be understood as massive deployments of set theory in the form of code. Their sociality is very much dependent on set-making and set-changing operations, both in the composition of the user interfaces and in the underlying databases that constantly seek to attach new relations to data, to link identities and attributes. In terms of activism and artwork, relations that can be expressed in the form of sets and operations on sets are highly manipulable. They can be learned relatively easily, and they are not too difficult to work with. For instance, scripts that crawl or scrape websites have been widely used in new media art and activism.
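As an indication of how low the barrier is, a crawler that harvests the links from a web page needs nothing beyond the Python standard library. A rough sketch (the address is only a placeholder, not a particular project):

# A rough sketch of the kind of scraping script mentioned above: fetch a page
# and list the URLs it links to, using only the standard library.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = urlopen("http://example.org/").read().decode("utf-8", errors="replace")
collector = LinkCollector()
collector.feed(page)
print(collector.links)                     # the relations (links) harvested from the page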
By contrast, DSP code is not based on set-making. It relies on
a different ordering of the world that lies closer to streams of signals that come from systems such as sensors, transducers, cameras,
and that propagate via radio or cable. Indeed, although it is very
widely used, DSP is not usually taught as part of computer science or software engineering. The textbooks in these areas often do not mention DSP. The distinction between DSP and other forms of computation is clearly defined in a DSP textbook:
Digital Signal Processing is distinguished from other areas in
computer science by the unique type of data it uses: signals.
In most cases, these signals originate as sensory data from the
real world: seismic vibrations, visual images, sound waves, etc.
DSP is the mathematics, the algorithms, and the techniques
used to manipulate these signals after they have been converted
into a digital form. (Smith, 2004)
While it draws on some of the logical and set-based operations
found in code in general, DSP code deals with signals that usually involve some kind of sensory data – vibrations, waves, electromagnetic
radiation, etc. These signals often involve forms of rapid movement,
rhythms, patterns or fluctuations. Sometimes these movements are
embodied in physical senses, such as the movements of air involved in
hearing, or the flux of light involved in seeing. Because they are often
irregular movements, they cannot be easily captured in the forms of
movement idealised in classical mechanics – translation, rotation, etc.
Think for instance of a typical photograph of a city street. Although
there are some regular geometrical forms, the way in which light is
reflected, the way shadows form, is very difficult to describe geometrically. It is much easier, as we will see, to think of an image as a
signal that distributes light and colour in space. Once an image or
sound can be seen as a signal, it can undergo digital signal processing.
What distinguishes DSP from other algorithmic processes is its
reliance on transforms rather than functions. This is a key difference.
The ‘transform' deals with many values at once. This is important
because it means it can deal with things that are temporal or spatial,
such as sounds, images, or signals in short. This brings algorithms
much closer to sensation, and to what bodies feel. While there is
codification going on, since the signal has to be treated digitally as
discrete numerical values, it is less reducible to the sequence of steps or
operations that characterise set-theoretical coding. Here for instance
is an important section of the code used in MPEG video encoding in
the free software ffmpeg package:
figure 116: The simplest mpeg encoder
/**
* @file mpegvideo.c
* The simplest mpeg encoder (well, it was the simplest!).
*
...
/* for jpeg fast DCT */
#define CONST_BITS 14
static const uint16_t aanscales[64] = {
/* precomputed values scaled up by 14 bits */
16384, 22725, 21407, 19266, 16384, 12873, 8867, 4520,
22725, 31521, 29692, 26722, 22725, 17855, 12299, 6270,
21407, 29692, 27969, 25172, 21407, 16819, 11585, 5906,
19266, 26722, 25172, 22654, 19266, 15137, 10426, 5315,
16384, 22725, 21407, 19266, 16384, 12873, 8867, 4520,
12873, 17855, 16819, 15137, 12873, 10114, 6967, 3552,
8867, 12299, 11585, 10426, 8867, 6967, 4799, 2446,
4520, 6270, 5906, 5315, 4520, 3552, 2446, 1247
};
...
for (i = 0; i < 64; i++) {
    const int j = dsp->idct_permutation[i];
    qmat[qscale][i] = (int)((UINT64_C(1) << (QMAT_SHIFT + 14)) /
                            (aanscales[i] * qscale * quant_matrix[j]));
}
I don't think we need to understand this code in detail. There is
only one thing I want to point out in this code: the list of ‘precomputed' numerical values is used for ‘jpeg fast DCT'. This is a typical
piece of DSP-type code. It refers to the way in which video frames are encoded using Fast Fourier Transforms. The key point here is that
these values have been carefully worked out in advance to scale different colour and luminosity components of the image differently. The
transform, DCT (Discrete Cosine Transform), is applied to chunks of
sensation – video frames – to make them into something that can be
manipulated, stored, changed in size or shape, and circulated. Notice
that the code here is quite opaque in comparison to the graph data
structures discussed previously. This opacity reflects the sheer number of operations that have to be compressed into code in order for
digital signal processing to work.
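For readers who want to see the underlying operation without ffmpeg's precomputed tables and bit-shifting, the following sketch applies a textbook two-dimensional DCT to a single 8×8 block of pixel values using Python and numpy. It illustrates the transform itself, not ffmpeg's optimised implementation:

# A sketch of the Discrete Cosine Transform applied to a single 8x8 block
# of pixel values: the textbook operation behind the heavily optimised
# ffmpeg code above, without its precomputed scaling tables.
import numpy as np

N = 8
k = np.arange(N)
# Orthonormal DCT-II basis: C[k, n] = s(k) * cos(pi * (2n + 1) * k / (2N))
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
C[0, :] *= np.sqrt(0.5)                    # the constant (DC) row is scaled down

block = np.random.randint(0, 256, size=(N, N)).astype(float)   # a stand-in image block
coeffs = C @ block @ C.T                   # 2-D DCT: transform rows and columns

# Most of the block's energy gathers in the low-frequency corner; a codec
# quantises and discards much of the rest. The transform itself loses nothing:
reconstructed = C.T @ coeffs @ C
print(np.allclose(block, reconstructed))   # True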
Working with DSP: architecture and geography
So we can perhaps see from the two code examples above that there
is something different about DSP in comparison to the set-based
processing. DSP seems highly numerical and quantified, while the
set-based code is symbolic and logical. What is at stake in this difference? I would argue that it is something coming into the code from
outside, something that is difficult to read in the code itself because
it is so opaque and convoluted. Why is DSP code hard to understand
and also hard to write?
You will remember that I said at the outset that there are some
facets of technological cultures that resist appropriation or intervention. I think the mathematics of DSP is one of those facets. If I just
started explaining some of the mathematical models that have been
built into the contemporary world, I think it would be shoring up
or reinforcing a certain resistance to change associated with DSP, at
least in its main mathematical formalisations. I do think the mathematical models are worth engaging with, partly because they look
so different from the set-based operations found in much code today.
The mathematical models can tell us why DSP is difficult to intervene
in at a low level.
However, I don't think it is the mathematics as such that makes
digital signal processing hard to grapple with. The mathematics is an
architectural response to a geographical problem, a problem of where
code can go and be in the world. I would argue that it is the relation
between the architecture and geography of digital signal processing
itself that we should grapple with. It has something to do with the
immersion in everyday life, the proximity to sensation, the shifting
multi-sensory patterning of sociality, the movements of bodies across
variable distances, and the effervescent sense of impending change
that animates the convoluted architecture of DSP.
We could think of the situations in which DSP is commonly found.
For instance, in the background of the scenes in the daily lives of
businessmen shown in Intel's UMPC video, lie wireless infrastructures
and networks. Audiovisual media and wireless networks both use
signal processing, but for different reasons. Although they seem quite
disparate from each other in terms of how we embody them, they
actually sometimes use the same DSP algorithms. (In other work, I have discussed video codecs. 3)
3 The case of video codecs
In the foreground of the UMPC vision, stand images, video images in particular, and
to a lesser extent, sounds. They form a congested mass, created by media and information networks. People in electronic media cultures constantly encounter images in
circulation. Millions of images flash across TV, cinema and computer screens. DVDs
shower down on us. The internet is loaded down with video at the moment (Google
Video, YouTube.com, Yahoo video, etc.). A powerful media-technological imagining of
video moving everywhere, every which way, has taken root.
The growth of video material culture is associated with a key dynamic: the proliferation
of software and hardware codecs. Codecs generate linear transforms of images and
sound. Transformed images move through communication networks much more quickly
than uncompressed audiovisual materials. Without codecs, an hour of raw digital video
would need 165 CD-ROMs or take roughly 24 hours to move across a standard computer
network (10Mbit/sec ethernet). Instead of 165 CDs, we take a single DVD on which a
film has been encoded by a codec. We play it on a DVD player that also has a codec,
usually implemented in hardware. Instead of 32Mbyte/sec, between 1-10 MByte/sec
streams from the DVD into the player and then onto the television screen.
The economic and technical value of codecs can hardly be overstated. DVD, the transmission formats for satellite and cable digital television (DVB and ATSC), HDTV
as well as many internet streaming formats such as RealMedia and Windows Media,
third generation mobile phones and voice-over-ip (VoIP), all depend on video and audio codecs. They form a primary technical component of contemporary audiovisual
culture.
Physically, codecs take many forms, in software and hardware. Today, codecs nestle in
set-top boxes, mobile phones, video cameras and webcams, personal computers, media
players and other gizmos. Codecs perform encoding and decoding on a digital data
stream or signal, mainly in the interest of finding what is different in a signal and what
is mere repetition. They scale, reorder, decompose and reconstitute perceptible images
and sounds. They only move the differences that matter through information networks
and electronic media. This performance of difference and repetition of video comes at
a cost. Enormous complication must be compressed in the codec itself.
Much is at stake in this logistics from the perspective of cultural studies of technology
and media. On the one hand, codecs analyse, compress and transmit images that
fascinate, bore, fixate, horrify and entertain billions of spectators. Many of these
videos are repetitive or cliched. There are many re-runs of old television series or
Hollywood classics. YouTube.com, a video upload site, offers 13,500 wedding videos.
Yet the spatio-temporal dynamics of these images matters deeply. They open new
patterns of circulation. To understand that circulation matters deeply, we could think
of something we don't want to see, for instance, the execution of many hostages (Daniel Pearl, Nick Berg, and others) in Jihadist videos since 2002. Islamist and ‘shock-site' web servers streamed these videos across the internet using the low-bitrate Windows Media Video codec, a proprietary variant of the industry-standard MPEG-4. The shock of such events – the sight of a beheading, the sight of a journalist pleading for her life – depends on its circulation through online and broadcast media. A video beheading lies at the outer limit of the ordinary visual pleasures and excitations attached to video cultures. Would that beheading, a corporeal event that takes video material culture to its limits, occur without codecs and networked media?
While images are visible, wireless signals are relatively hard to
sense. So they are a ‘hard case' to analyse. We know they surround
us, but we hardly have any sensation of them. A tightly packed
labyrinth of digital signal processing lies between antenna and what
reaches the business travellers' eyes and ears. Much of what they
look at and listen has passed through wireless chipsets. The chipsets,
produced by Broadcom, Intel, Texas Instruments, Motorola, Airgo or
Pico, are tiny (1 cm) fragments that support highly convoluted and
concatenated paths on nanometre scales. In wireless networks such
as Wi-fi, Bluetooth, and 3G mobile phones with their billions of
miniaturised chipsets, we encounter a vast proliferation of relations.
What is at stake in these convoluted, compressed packages of relationality, these densely patterned architectures dedicated to wireless
communication?
Take for instance the picoChip, a latest-generation wireless digital
signal processing chip, designed by a ‘fabless' semiconductor company,
picoChip Designs Ltd, in Bath, UK. The product brief describes the
chip as:
[t]he architecture of choice for next-generation wireless. Expressly designed to address the new air-interfaces, picoChip's
multi-core DSP is the most powerful baseband processor on
the market. Ideally suited to WiMAX, HSPA, UMTS-LTE,
802.16m, 802.20 and others, the picoArray delivers ten-times
better MIPS/$ than legacy approaches. Crucially, the picoArray is easy to program, with a robust development environment
and fast learning curve. (PicoChip, 2007)
Written for electronics engineers, the key points here are that the
chip is designed for wireless communication or ‘air-interface', that
its purpose is to receive and transmit information wirelessly, and
that it accommodates a variety of wireless communication standards
(WiMAX, HSPA, 802.16m, etc). In this context, much of the terminology of performance and low cost is familiar. The chip combines computing performance and value for money (“ten times better
MIPS/$ – Million Instructions Per Second/$”) as a ‘baseband processor'. That means that it could find its way into many different versions of hardware being produced for applications that range from large-scale wireless information infrastructures to small consumer electronics applications. Only the last point is slightly surprising in its emphasis: “[c]rucially, the picoArray is easy to program, with a robust development environment and fast learning curve.” Why should
ease of programming be important?
And why should so many processors be needed for wireless
signal processing?
The architecture of the picoChip stands on shifting ground. We
are witnessing, as Nigel Thrift writes, “a major change in the geography of calculation. Whereas ‘computing' used to consist of centres
of calculation located at definite sites, now, through the medium of
wireless, it is changing its shape” (Thrift, 2004, 182). The picoChip's
architecture is a response to the changing geographies of calculation.
Calculation is not carried out at definite sites, but at almost any
site. We can see the picoChip as an architectural response to the
changing geography of computing. The architecture of the picoChip
is typical in the ways that it seeks to make a constant re-shaping
of computation possible, normal, affordable, accessible and programmable. This is particularly evident in the parallel character of its
architecture. Digital signal processing requires massive parallelisation: more chips everywhere, and chips that do more in parallel. The
advanced architecture of the picoChip is typical of the shape of things
more generally:
[t]he picoArray™ is a tiled processor architecture in which hundreds of processors are connected together using a deterministic
interconnect. The level of parallelism is relatively fine grained
with each processor having a small amount of local memory.
... Multiple picoArray™ devices may be connected together to
form systems containing thousands of processors using on-chip
peripherals which effectively extend the on-chip bus structure.
(Panesar, et al., 2006, 324)
The array of processors shown, then, is a partial representation, an armature for a much more extensive diffusion of processors in wireless digital signal processing: in wireless base stations, 3G phones, mobile computing, local area networks, municipal, community and domestic Wi-Fi networks, in femtocells and picocells, in backhaul, last-mile or first-mile infrastructures.
Architectures and intensive movement
It is as if the picoChip is a miniaturised version of the urban geography that contains the many gadgets, devices, and wireless and wired
infrastructures. However, this proliferation of processors is more than
a diffusion of the same. The interconnection between these arrays of
processors is not just extensive, as if space were blanketed by an ever
finer and wider grid of points occupied by processors at work shaping
signals. As we will see, the interconnection between processors in DSP
seeks to potentialise an intensive movement. It tries to accommodate
a change in the nature of movement. Since all movement is change,
intensive movement is a change in change. When intensive movement
occurs, there is always a change in kind, a qualitative change.
Intensive movements always respond to a relational problem. The
crux of the relational problem of wirelessness is this: how can many
things (signals, messages, flows of information) occupy the same space
at the same time, yet all be individualised and separate? The flow of
information and messages promises something highly individualised
(we saw this in the UMPC video from Intel). In terms of this individualising change, the movement of images, messages and data, and the
movement of people, have become linked in very specific ways today.
The greater the degree of individualization, the more dense becomes
the mobility of people and the signals they transmit and receive. And
as people mobilise, they drag personalised flows of communication on
the move with them. Hence flows of information multiply massively,
and networks must proliferate around those flows. The networks need
to become more dense, and imbricate lived spaces more closely in response to individual mobility.
This poses many problems for the architecture of communication infrastructure. The infrastructural problems of putting networks everywhere are increasingly, albeit only partially, solved by packing radio-frequency waves with more and more intricately modulated signal
patterns. This is the core response of DSP to the changing geography
of calculation, and to the changing media embodiments associated
with it. To be clear on this: were it not for digital signal processing,
the problems of interference, of unrelated communications mixing together, would be potentially insoluble. The very possibility of mobile
devices and mobility depends on ways of increasing the sheer density
of wireless transmissions. Radio spectrum becomes an increasingly
valuable, tightly controlled resource. For any one individual communication, not much space or time can be available. And even when
there is space, it may be noisy and packed with other people and
things trying to communicate. Different kinds of wireless signals are
constantly added to the mix. Signals may have to work their way
through crowds of other signals to reach a desired receiver. Communication does not take place in open, uncluttered space. It takes
place in messy configurations of buildings, things and people, which
obstruct waves and bounce signals around. The same signal may
be received many times through different echoes (‘multipath echo'). Because of the presence of crowds of other signals, and the limited spectrum available for any one transmission, wirelessness needs
to be very careful in its selection of paths if experience is to stream
rather than just buzz. The problem for wireless communication is to
micro-differentiate many paths and to allow them to interweave and
entwine with each other without coming into relation.
So the changing architecture of code and computation associated with DSP in wireless networks does more, I would argue, than fit in with the changing geography of computing. It belongs to a more intensive, enveloped, and enveloping set of movements. To begin addressing this dynamic, we might say that wireless DSP is the armature
of a centre of envelopment. This is a concept that Gilles Deleuze
proposes late in Difference and Repetition. ‘Centres of envelopment'
are a way of understanding how extensive movements arise from intensive movement. Such centres crop up in ‘complex systems' when
differences come into relation:
to the extent that every phenomenon finds its reason in a difference of intensity which frames it, as though this constituted
the boundaries between which it flashes, we claim that complex
systems increasingly tend to interiorise their constitutive differences: the centres of envelopment carry out this interiorisation
of the individuating factors. (Deleuze, 2001, 256)
Much of what I have been describing as the intensive movement
that folds spaces and times inside DSP can be understood in terms
of an interiorisation of constitutive differences. An intensive movement always entails a change in the nature of change. In this case,
a difference in intensity arises when many signals need to co-habit the same place and moment. The problem is: how can many signals
move simultaneously without colliding, without interfering with each
other? How can many signals pass by each other without needing
more space? These problems induce the compression and folding of
spaces inside wireless processing, the folding that we might understand as a ‘centre of envelopment' in action.
The Fast Fourier Transform: transformations between time
and space
I have been arguing that the complications of the mathematics and the convoluted nature of the code or hardware used in DSP stem from an intensive movement or constitutive difference that is interiorised. We can trace this interiorisation in the DSP used in wireless networks. I do not have time to show how this happens in detail, but hopefully one example of DSP that occurs in both video codecs and wireless networks will illustrate how this happens in practice.
Late in the encoding process, and much earlier in the decoding process in contemporary wireless networks, a fairly generic computational algorithm comes into action: the Fast Fourier Transform (FFT). In some ways, it is not surprising to find the FFT in wireless networks or in digital video. Dating from the mid-1960s, FFTs have long been used to analyse electrical signals in many scientific and engineering settings. The FFT provides the component frequencies of a time-varying signal or waveform. Hence, in ‘spectral analysis', the FFT can show the spectrum of frequencies present in a signal.
The notion of the Fourier transform is mathematical and has been
known since the early 19th century: it is an operation that takes
an arbitrary waveform and turns it into a set of periodic waves (sinusoids) of different frequencies and amplitudes. Some of these sinusoids
make more important contributions to the overall shape of the waveform
than others. Added together again, these sine or cosine waves should
exactly re-constitute the original signal. Crucially, a Fourier transform can turn something that varies over time (a signal) into a set of
simple components (sine or cosine waves) that do not vary over time.
Put more technically, it switches between ‘time' and ‘frequency' domains. Something that changes in time, a signal, becomes a set of
distinct components that can be handled separately. 4
4. Humanities and social science work on the Fast Fourier Transform is hard to find, even though the FFT is the common mathematical basis of contemporary digital image, video and sound compression, and hence of much digital multimedia (in JPEG and MPEG files, in DVDs). In the early 1990s, Friedrich Kittler wrote an article that discussed it (Kittler, 1993). His key point was largely to show that there is no realtime in digital signal processing. The FFT works by defining a sliding window of time for a signal. It treats a complicated signal as a set of blocks that it lifts out of the time domain and transforms into the frequency domain. The FFT effectively plots an event in time as a graph in space. The experience of realtime is epiphenomenal. In terms of the FFT, a signal is always partly in the future or the past. Although Kittler was not referring to the use of the FFT in wireless networks, the same point applies: there is no realtime communication. However, while this point about the impossibility of realtime calculation was important to make during the 1990s, it seems well established now.

In a way, this analysis of a complex signal into simple static component signals means that DSP does use the set-based approaches I described earlier. Once a complex signal, such as an image, has been analysed into a set of static components, we can imagine code that would select the most important or relevant components. This is precisely what happens in video and sound codecs such as MPEG and MP3.
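To make this codec-style selection of components concrete, here is a minimal sketch in Python (not from the original text, and using an invented toy signal rather than any real codec): the FFT turns a time-varying signal into static frequency components, only a handful of the strongest components are kept, and an inverse FFT approximately re-constitutes the signal from them, much as MPEG or MP3 discard the components judged insignificant.

```python
import numpy as np

# A toy signal: two sine waves plus a little noise, sampled over one second.
rate = 1024                                        # samples per second (assumed)
t = np.arange(rate) / rate
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
signal += 0.1 * np.random.randn(rate)

# Forward FFT: from the time domain to a set of static frequency components.
spectrum = np.fft.rfft(signal)

# Codec-like selection: keep only the strongest components, zero out the rest.
keep = 8
strongest = np.argsort(np.abs(spectrum))[-keep:]
compressed = np.zeros_like(spectrum)
compressed[strongest] = spectrum[strongest]

# Inverse FFT: the few retained components approximately re-constitute the signal.
reconstruction = np.fft.irfft(compressed, n=rate)
error = np.mean((signal - reconstruction) ** 2)
print(f"kept {keep} of {len(spectrum)} components, mean squared error {error:.4f}")
```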
The FFT treats sounds and images as complicated superimpositions of waveforms. The envelope of a signal becomes something that contains many simple signals. It is interesting that wireless networks tend to use this process in reverse. They deliberately take a well-separated and discrete set of signals – a digital datastream – and turn it into a single complex signal. In contrast to the normal uses of the FFT in separating important from insignificant parts of a signal, in wireless networks, and in many other communications settings, the FFT is used to put signals together in such a way as to contain them in a single envelope. The FFT is found in many wireless computation algorithms because it allows many different digital signals to be put together on a single wave and then extracted from it again.
Why would this superimposition of many signals onto a single complex waveform be desirable? Would it not increase the possibilities of confusion or interference between signals? In some ways the FFT is used to slow everything down rather than speed it up. Rather than simply spatialising a duration, the FFT as used in wireless networks defines a different way of inhabiting the crowded, noisy space of electromagnetic radiation. Wireless transmitters are better at inhabiting crowded signal spectrum when they don't try to separate themselves off from each other, but actually take the presence of other transmitters into account. How does the FFT allow many transmitters to inhabit the same spectrum, and even use the same frequencies?
The name of this technique is OFDM (Orthogonal Frequency Division Multiplexing). OFDM spreads a single data stream coming
from a single device across a large number of sub-carrier signals (52
in IEEE 802.11a/g). It splits the data stream into dozens of separate signals of slightly different frequency that together evenly use
the whole available radio spectrum. This is done in such a way that
many different transmitters can be transmitting at the same time,
on the same frequency, without interfering with each other. The advantage of spreading a single high speed data stream across many
signals (wideband) is that each individual signal can carry data at a
much slower rate. Because the data is split into 52 different signals, each signal can be much slower (roughly 1/50th of the original rate). That means each bit of data can be spaced further apart in time. This has great advantages in urban environments where there are many obstacles to signals, and signals can reflect and echo often. In this context, the slower the data is transmitted, the better.
At the transmitter, an inverse FFT (IFFT) is used to combine these signals into one. That is, it takes the 50 or so different sub-carriers produced by OFDM, each of which has a slightly different, but carefully chosen frequency, and combines them into one complex signal that has a wide spectrum. That is, it fills the available spectrum quite evenly because it contains many different frequency components. The waveform that results from the IFFT looks like 'white noise': it has no remarkable or outstanding tendency whatsoever, except to a receiver synchronised to exactly the right carrier frequency. At the receiver, this complex signal is transformed, using the FFT, back into a set of 50 separate data streams, which are then reconstituted into a single high speed stream.
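As an illustration only, the round trip between sub-carriers and a single complex waveform can be written in a few lines of Python. This is a bare sketch of the OFDM principle just described, not of the actual IEEE 802.11 signal chain, which adds pilot carriers, a cyclic prefix, error-correcting codes and the radio-frequency stages; the QPSK mapping used here is simply an assumed example of a per-sub-carrier modulation.

```python
import numpy as np

# A minimal OFDM-style round trip: many slow sub-carrier symbols are folded
# into one complex waveform with an inverse FFT, then unfolded again with an FFT.
num_subcarriers = 52                      # as in IEEE 802.11a/g
rng = np.random.default_rng(0)

# One QPSK symbol per sub-carrier: a slow, well-separated set of data streams.
bits = rng.integers(0, 2, size=(num_subcarriers, 2))
symbols = (2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)

# Transmitter: the IFFT superimposes all sub-carriers into a single
# 'white-noise-like' time-domain signal that fills the whole band.
time_signal = np.fft.ifft(symbols)

# (A real system would now add a cyclic prefix, up-convert to radio frequency,
# and have to cope with echoes and noise; all of that is omitted here.)

# Receiver: the FFT separates the superimposed sub-carriers again.
recovered = np.fft.fft(time_signal)

assert np.allclose(recovered, symbols)
print("all", num_subcarriers, "sub-carrier symbols recovered exactly")
```

Even in this toy version the point of the passage is visible: the combined time-domain signal looks structureless, yet the receiver's FFT separates the 52 sub-carriers again without any of them needing more spectrum of their own.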
Even if we cannot come to grips with the techniques of transformation used in DSP in any great detail, I hope that one point stands out. The transformation involves changes in kind. Data does not
simply move through space. It changes in kind in order to move
through space, a space whose geography is understood as too full of
potential relations.
Conclusion
A couple of points in conclusion:
a. The spectrum of different wireless-audiovisual devices competing to do more or less the same thing is a reproduction of the same. Extensive movements associated with wireless networks and digital video occur in various forms: firstly in the constant enveloping of spaces by wireless signals, and secondly in the dense
population of wireless spectrum by competing, overlapping signals, vying for market share in highly visible, well-advertised campaigns to dominate spectrum while at the same time allowing for
the presence of many others.
b. Actually, in various ways, wirelessness puts the very primacy of
extension as space-making in question. Signals seem to be able to
occupy the same space at the same time, something that should
not happen in space as usually understood. We can understand
this by re-conceptualising movement as intensive. Intensive movement occurs in multiple ways. Here I have emphasised the constant folding inwards or interiorisation of heterogeneous movements via algorithms used in digital signal processing. Intensive
movement ensues when a centre of envelopment begins to
interiorise differences. While these interiorised spaces are computationally intensive (as exemplified by the picoChip's massive
processing power), the spaces they generate are not perceived as
calculated, precise or rigid. Wirelessness is a relatively invisible,
messy, amorphous, shifting set of depths and distances that lacks
the visible form and organisation of other entities produced by
centres of calculation (for instance, the shape of a CAD-designed
building or car). However, similar processes occur around sound
and images through DSP. In fact, different layers of DSP are increasingly coupled in wireless media devices.
c. Where does this leave the centre of envelopment? The cost of
this freeing up of movement, of mobility, seems to me to be an
interiorisation of constitutive differences, not just in DSP code
but in the perceptual fields and embodiment of the mobile user.
The irony of DSP is that it uses code to quantify sensations
or physical movements that lie at the fringes of representation
or awareness. We can't see DSP as such, but it supports our
seeing and moving. It brings code quite close to the body. It
can work with audio and images in ways that bring them much
closer to us. The proliferation of mobile devices such as mp3 players and
digital cameras is one consequence of that. Yet the price DSP
pays for this proximity to sensation, to sounds, movement, and
others, is the envelopment I have been describing. DSP acts as
a centre of envelopment, as something that tends to interiorise
intensive movements, the changing nature of change, the intensive
movements that give rise to it.
d. This brings us back to the UMPC video: it shows two individuals.
Their relation can never, it seems, get very far. The provision
of images, sound and wireless connectivity has come so far that they hardly need to encounter each other at all. There is something intensely monadological here: DSP is heavily engaged in furnishing the interior walls of the monad and in orienting the monad in relation to other monads, while making sure that nothing much need pass between them. So much has already been pre-processed between them that nothing much need happen between them. They already
have a complete perception of their relation to the other.
e. On a final constructive note, it seems that there is room for contestation here. The question is how to introduce the set-based
code processes that have proven productive in other areas into
the domain of DSP. What would that look like? How would it be
sensed? What could it do to our sensations of video or wireless
media?
References
Deleuze, Gilles. Difference and Repetition. Translated by Paul Patton. Athlone Contemporary European Thinkers (London; New York: Continuum, 2001).
Panesar, Gajinder, Daniel Towner, Andrew Duller, Alan Gray, and Will Robbins. 'Deterministic Parallel Processing', International Journal of Parallel Programming 34, no. 4 (2006): 323-41.
PicoChip. 'Advanced Wireless Technologies' (2007). http://www.picochip.com/solutions/advanced_wireless_technologies
PicoChip. 'PC202 Integrated Baseband Processor Product Brief' (2007). http://www.picochip.com/downloads/03989ce88cdbebf5165e2f095a1cb1c8/PC202_product_brief.pdf
Smith, Steven W. The Scientist and Engineer's Guide to Digital Signal Processing (California Technical Publishing, 2004).
Thrift, Nigel. 'Remembering the Technological Unconscious by Foregrounding Knowledges of Position', Environment & Planning D: Society & Space 22, no. 1 (2004): 175-91.
ELPUEBLODECHINA A.K.A. ALEJANDRA MARIA PEREZ NUNEZ
License: ??
EN
El Curanto
Curanto is a traditional method of cooking in the ground by the
people of Chiloe, in the south of Chile. This technique is practiced
throughout the world under different names. What follows is a summary of the ELEMENTS and steps enunciated and executed during el
curanto, which was performed in the centre of Brussels during V/J10.
Recipe

For making a curanto you need to take the following steps and arrange the following ELEMENTS:

This image is repeated in many different cultures. Might be an ancient way of cooking. What does this underground cooking imply? Most of all, it takes a lot of TIME.

Free Libre Open Source Curanto in the center of Bruxelles

? OVEN, a hole in the ground filled with fire resistant STONES.
? Find a way to get a good deal at the market to get fresh MUSSELS for x people. It helps to have a CHARISMATIC WOMAN do it for you.

figure A a slow cooking OVEN
? A BRIGHT WOMAN FRIEND to find out about BELGIAN PORPHYRY and tell you about the mining carrière in Quenast (Hainaut).
? A CAMERA WOMAN to hand you a MARBLE STONE to put inside the OVEN.
? WENDY or some other MULTITASKING WOMAN who is extremely PATIENT and HUMOURISTIC and who helps you to focus and takes pictures.
? FEMKE and PETER or some EXCENTRIC COUPLE that TRUSTS the carrier of the performance and will tell their STORY about TRAVELING MUSSELS.

figure B a TERRAIN VAGUE in the centre of Brussels and a NEIGHBOUR willing to let you in.

figure C A HOLE in the ground 1.5 m deep, 1 m diameter. (It makes me think of a hole in my head.)

A hole in the ground reminds me of the unknown. FOOD cooked inside the ground relates to ideas, creativity and GIFT. It helps to have GUILLAUME or a strong and positive MAN to help you dig the hole. A second PERSON would be of great help, especially if, while digging, he would talk about taxonomies of immaterial labour.

Mussels eaten in the centre of Brussels are grown in Ireland and immersed in Dutch seawater and are then officially called Dutch. After 2 days in Dutch water, they are ready to be exported to Brussels and become Belgian mussels that are in fact Dutch-Irish.
figure D Original curanto STONES are round fire resistant stones. I couldn't find them in Brussels.

figure E A good BUCKET to scoop the rain out of your newly dug HOLE

The only round and granite stones were very expensive design ones. In Chile you just dig a hole anywhere and find them. The only fire resistant rock in Brussels was the STREET itself.

? Square shaped rocks collected randomly throughout the city by means of appropriation.

Streets are made of a type of granite rock, might be Belgian porphyry. Note that there is a message on one of the stones we picked up in the centre. It reads 'watch your head'.

figure F A tent to protect your FIRE from random RAIN
figure G LAIA or some psychonaut, hierophant friend. Should be someone who is able to transmit confidence to the execution of el curanto and who will keep you company while you are appropriating stones in Brussels.

? A good BOUILLON made of cheap white wine and concentrated bio vegetables and spices is one of the secrets.

figure I GIRL that will randomly come to the place with her MOTHER and speak in Spanish to the carrier of the performance. She will play the flute, give the OVEN some orders to cook well and sing improvised SONGS. She and some other children will play around by digging holes and making their own CURANTO.

figure J A big FIRE to heat up the wet cold ground of Brussels

figure H You need to find MOAM or some Palestinian fellow to help you keep the fire burning
figure K RED HOT COAL

figure L Using some cabbage leaves to cover the RED HOT COAL to place the FOOD on top of

figure M A SACK CLOTH to cover the food and to retain STEAM for cooking.

figure N DIDIER or some PANIC COOK MAN who is happy to SHARE his expert knowledge and willing to join in the performance.
? HOLE
? MUSSELS
? WOOD found in a dismantled house. It helps to find a ride to transport it.
? SPICES, rosemary and bay leaf.
? MICHAEL or some DEDICATED friend that will assist with the execution of the performance and keep the pictures of it afterwards for months.

figure O ONIONS, GESTURES and SPECULATIONS.

While reading VALIS, the carrier of the performance will become reverend TIMOTHY ARCHER and read about TIME (something that has mainly been forgotten is Palestine).

figure P el curanto is to be made together with PEOPLE and for EVERYONE.

figure Q You can eat from the shell by using your hands or a little WOODEN SPOON.

If you want to eat later, take the mussels out of their shell, add OLIVE OIL, make a spread and keep it cold in a jar. Find QUEER couples to savour it with BREAD while talking about SEX.
? FIRE
? RED HOT COAL
? FOOD
? NOISE from the cooking MUSSELS. It helps to use 'hot' PIEZZO MICROPHONES.
Here TIME turns into space.
“Time can be overcome”, Mircea
Eliade wrote. That's what it's all
about.
The great mystery of Eleusis, of the Orphics, of the early Christians, of Sarapis, of the Greco-Roman mystery religions, of Hermes Trismegistos, of the Renaissance Hermetic alchemists, of the Rose Cross Brotherhood, of Apollonius of Tyana, of Simon Magus, of Asklepios, of Paracelsus, of Bruno, consists of the abolition of time. The techniques are there. Dante discusses them in the Comedy. It has to do with the loss of amnesia; when forgetfulness is lost, true memory spreads out backward and forward, into the past and into the future, and also, oddly, into alternate universes; it is orthogonal as well as linear. 1

1. Philip K. Dick, Valis (1972)
ALICE CHAUCHAT, FRÉDÉRIC GIES
License: Attribution-Noncommercial-No Derivative Work
EN
Praticable
Praticable is a collaborative research project between several artists
(currently: Alice Chauchat, Frédéric de Carlo, Frédéric Gies, Isabelle
Schad and Odile Seitz).
Praticable proposes itself as a horizontal work structure, which
brings research, creation, transmission and production structure into
relation with each other. This structure is the basis for the creation
of a variety of performances by either one or several of the project's
participants. In one way or another, these performances start from
the exploration of body practices, leading to a questioning of its representation. More concretely, Praticable takes the form of collective
periods of research and shared physical practices, both of which are
the basis for various creations. These periods of research can either
be independent of the different creation projects or integrated within
them.
During Jonctions/Verbindingen 10, Alice Chauchat and Frédéric
Gies gave a workshop for participants dealing with different ‘body
practices'. On the basis of Body-Mind Centering (BMC) techniques,
the body as a locus of knowledge production was made tangible. The
notation of the Dance performance with which Frédéric Gies concluded the day is reproduced in this book and published under an
open license.
figure 120 Workshop for participants with different body practices at V/J10
figure 121 The body as a locus of knowledge production was made tangible
figure 122
figure 123
Dance (Notation)
20 sec.
31. INTERCELLULAR FLUID
Initiate movement in your intercellular fluid. Start slowly and
then put more and more energy
and speed in your movement, using intercellular fluid as a pump
to make you jump.
20 sec.
32. VENOUS BLOOD
Initiate movement in your venous
blood, rising and falling and following its waves.
20 sec.
33. VENOUS BLOOD
Initiate movement in your venous blood, slowing down progressively.
Less than 5 sec.
34. TRANSITION
Make visible in your movement a
transition from venous blood to
cerebrospinal fluid. Finish in the
same posture you chose to start
PART 3.
1 min.
35. EACH FLUID
Go through each fluid quality you
have moved with since the beginning of PART 3. The 1st one has
to be cerebrospinal fluid. After
this one, the order is free.
61. ALL GLANDS
Stand up slowly, building your
vertical axis from coccygeal body
to pineal gland. Use this time to
bond with earth through your
feet, as if you were growing roots.
INSTRUMENTAL (during the voice echo)
Down, down, down in your heart
find, find, find the secret
62. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
SMALL PERIMETER
Turn, turn, turn your head around
63. MAMILLARY BODIES
Turn and turn your head around,
initiating this movement in
mamillary bodies. Let your head
drive the rest of your body into
turns.
Baby we can do it
We can do it alright
64. LOWER GLANDS OF THE
PELVIS
Dance as if you were dancing
in a club. Focus on your lower
glands, in your pelvis, to initiate your dance. Your arms, torso,
neck and head are also involved
in your dance.
Do you believe in love at first sight
It's an illusion, I don't care
Do you believe I can make you feel better
Too much confusion, come on over here
65. HEART BODY
Keep on dancing as if you were
dancing in a club and initiate
movements in your heart body,
connecting with your forearms
and hands.
License: Attribution-Noncommercial-No Derivative Work
Mutual Motions Video Library
To be browsed, a vision to be displaced
figure 126
figure 125
Wearing the video library, performer Isabelle Bats presents a selection of films related to the themes of V/J10. As a living memory, the
discs and media players in the video library are embedded in a dress
designed by artists collective De Geuzen. Isabelle embodies an accessible interface between you (the viewer), and the videos. This human
interface allows for a mutual relationship: viewing the films influences
the experience of other parts of the program, and the situation and
context in which you watch the films play a role in experiencing and
interpreting the videos. A physical exchange between existing imagery, real-time interpretation, experiences and context, emerges as
a result.
The V/J10 video library collects excerpts of performance and dance
video art, and (documentary) film, which reflect upon our complex
body–technique relations. Searching for the indicating, probing, disturbing or subverting gesture(s) in the endless feedback loop between
technology, tools, data and bodies, we collected historical as well as
contemporary material for this temporary archive.
Modern Times or the Assembly Line
Reflects on the body in work environments, which are structured by
technology, ranging from the pre-industrial manual work with analogue
tools, to the assembly line, to postmodern surveillance configurations.
24 Portraits
Excerpt from a series of documentary portraits by Alain Cavalier, FR, 1988-1991.
24 Portraits is a series of short documentaries paying tribute to women's manual work. The intriguing and sensitive portraits of 24 women working in different trades reveal the intimacy of bodies and their working tools.
Humain, trop humain
Quotes from a documentary by Louis
Malle, FR, 1972.
A documentary filmed at the Citroen
car factory in Rennes and at the 1972
Paris auto show, documenting the monotonous daily routines of working the
assembly lines, the close interaction
between bodies and machines.
Performing the Border
Video essay by Ursula Biemann, CH,
1999, 45 min.
“Performing the Border is a video
essay set in the Mexican-U.S. border town Ciudad Juarez, where the
U.S. industries assemble their electronic and digital equipment, located
right across El Paso, Texas.
The
video discusses the sexualization of
the border region through labour division, prostitution, the expression of
female desires in the entertainment industry, and sexual violence in the public sphere. The border is presented
as a metaphor for marginalization and
the artificial maintenance of subjective boundaries at a moment when
the distinctions between body and machine, between reproduction and production, between female and male,
have become more fluid than ever.”
(Ursula Biemann)
http://www.geobodies.org
Maquilapolis (city of factories)
A film by Vicky Funari and Sergio
De La Torre, Mexico/U.S.A., 2006, 68
min.
Carmen works the graveyard shift in
one of Tijuana's maquiladoras, the
multinationally-owned factories that
came to Mexico for its cheap labour.
After making television components
all night, Carmen comes home to a
shack she built out of recycled garage
doors, in a neighbourhood with no
sewage lines or electricity. She suffers
from kidney damage and lead poisoning from her years of exposure to toxic
chemicals. She earns six dollars a day.
But Carmen is not a victim. She is a
dynamic young woman, busy making
a life for herself and her children.
As Carmen and a million other
maquiladora workers produce televisions, electrical cables, toys, clothes,
batteries and IV tubes, they weave
the very fabric of life for consumer nations. They also confront labour violations, environmental devastation and
urban chaos – life on the frontier of
the global economy. In Maquilapolis Carmen and her colleague Lourdes reach beyond the daily struggle for
survival to organize for change: Carmen takes a major television manufacturer to task for violating her labour
rights, Lourdes pressures the government to clean up a toxic waste dump
left behind by a departing factory.
As they work for change, the world
changes too: a global economic crisis
and the availability of cheaper labour
in China begin to pull the factories
away from Tijuana, leaving Carmen,
Lourdes and their colleagues with an
uncertain future.
A co-production of the Independent
Television Service (ITVS), project of
Creative Capital.
http://www.maquilapolis.com
Practices of everyday life
Everyday life as the place of a performative encounter between bodies
and tools, from the U.S.A. of the 70s to contemporary South Africa.
Saute ma ville
Chantal Akerman, B, 1968, 13 min.
A girl returns home happily. She locks herself up in her kitchen and messes up the domestic world. In her first film, Chantal Akerman explores a scattered form of being, where the relationship with the controlled human world literally explodes. Abolition of oneself, explosion of oneself.

Semiotics of the Kitchen
Video by Martha Rosler, U.S.A., 1975, 05:30 min.
Semiotics of the Kitchen adopts the form of a parodic cooking demonstration in which, Rosler states, "An anti-Julia Child replaces the domesticated ‘meaning' of tools with a lexicon of rage and frustration." In this performance-based work, a static camera is focused on a woman in a kitchen. On a counter before her are a variety of utensils, each of which she picks up, names and proceeds to demonstrate, but with gestures that depart from the normal uses of the tool. In an ironic grammatology of sound and gesture, the woman and her implements enter and transgress the familiar system of everyday kitchen meanings – the securely understood signs of domestic industry and food production erupt into anger and violence. In this alphabet of kitchen implements, Rosler states that, "When the woman speaks, she names her own oppression."
"I was concerned with something like the notion of ‘language speaking the subject', and with the transformation of the woman herself into a sign in a system of signs that represent a system of food production, a system of harnessed subjectivity." (Martha Rosler)

Choreography
Video installation preview by Anke Schäfer, NL/South Africa, 13:07 min (loop), 2007.
Choreography reflects on the notion
‘Armed Response' as an inner state
of mind. The split screen projection
shows the movements of two women
commuting to their work. On the one
side, the German-South African Edda
Holl, who lives in the rich Northern
suburbs of Johannesburg. Her search
for a safe journey is characterized
by electronic security systems, remote
controls, panic buttons, her constant
cautiousness, the reassuring glances
in the tinted car windows. On the
other side, you see the African-South
African Gloria Fumba, who lives in
Soweto and whose security techniques
are very basic: clutching her handbag to her body, the way she queues for the bus, avoiding going home alone
when it's dark. A classical continuity
editing, as seen in fiction films, suggests
at first a narrative storyline, but is
soon interrupted by moments of pause.
These pauses represent the desires of
both women to break with the safety
mechanism that motivates their daily
movements.
Television
Ximena Cuevas, Mexico, 1999, 2 min.
“The vacuum cleaner becomes the device of the feminist ‘liberation', or the
monster that devours us.” (Insite 2000
program, San Diego Museum of Art)
http://www.livemovie.org
Perform the script, write the score
Considers dance and performance as knowledge systems where movement and data interact. With excerpts of performance documents,
interviews and (dance) films. But also the script, the code, as system
of perversion, as an explorative space for the circulation of bodies.
William Forsythe's works
Choreography can be understood as
writing moving bodies into space, a
complex act of inscription, which is
situated on the borderline between
creating and remembering, future and
past. Movement is prescribed and is
passing at the same time. It can be
inscribed into the visceral body memory through constant repetition, but
it is also always undone:
As Laurie Anderson says:
“You're walking. And you don't always realize it, but you're always
falling. With each step you fall forward slightly. And then catch yourself from falling.
Over and over,
you're falling.
And then catching
your self from falling.” (Quoted after
Gabriele Brandstetter, ReMembering
the Body)
William Forsythe, for instance, considers classical ballet as a historical
form of a knowledge system loaded
with ideologies about society, the self,
the body, rather than a fixed set
of rules, which simply can be implemented. An arabesque is a platonic ideal for him, a prescription,
but it can't be danced: “There is
no arabesque, there is only everyone's arabesque.” His choreography
is concerned with remembering and
forgetting: referencing classical ballet, creating a geometrical alphabet,
which expands the classical form, and
searching for the moment of forgetfulness, where new movement can arise.
Over the years, he and his company
developed an understanding of dance
as a complex system of processing information with some analogies to computer programming.
Chance favours the prepared mind
Educational dance film, produced by
Vlaams Theaterinstituut, Ministerie
van Onderwijs dienst Media and Informatie, dir. Anne Quirynen, 1990,
25 min.
Chance favours the prepared mind
features discussions and demonstrations by William Forsythe and four
Frankfurt Ballet Dancers about their
understanding of movement and their
working methods: “Dance is like writing or drawing, some sort of inscription.” (William Forsythe)
The way of the weed
Experimental dance film featuring
William Forsythe, Thomas McManus
and dancers of the Frankfurt Ballet,
An-Marie Lambrechts, Peter Missotten and Anne Quirynen, soundtrack:
Peter Vermeersch, 1997, 83 min.
In this experimental dance film, investigator Thomas is dropped in a desert
in 7079, not only to investigate the
growth movements of the plant life
there, but also the life's work of the
obscure scientist William F. (William
Forsythe), who has achieved numerous insights and discoveries on the
growth and movement of plants. This
knowledge is stored in the enormous
data bank of an underground laboratory. It is Thomas's task to hack into
his computer and check the professor's secret discoveries. His research
leads him into the catacombs of a
complex building, where he finds people stored in cupboards in a comatose
state. They are loaded with professor F.'s knowledge of vegetation. He
puts the ‘people-plants' into a large
transparent pool of water and notices
that in the water the ‘samples' come
to life again. . . A complex reflection
on (body) memory, (digital) archives
and movement as repetition and interference.
Rehearsal Last Supper
Video installation preview by Anke Schäfer, NL/South Africa, 16:40 min. (loop), 2007.
The work Rehearsal Last Supper combines a kind of ‘Three Stooges' physical, slapstick-style comedy with far more serious subject matters such as abuse, gender violence, and the general breakdown of family relationships. It's a South African and mixed
couple re-enactment of a similar scene
that Bruce Nauman realized in the 70s
with a white, middle-aged man and
woman.
The experience, the ‘Gestalt' of the
experienced violence, the frustration
and the unwilling or even forced internalization are felt to the core of the
voice and the body. Humour can help
to express the suppressed and to use
your pain as power.
Actors: Nat Ramabulana, Tarryn Lee,
Megan Reeks, Raymond Ngomane
(from Wits University Drama department), Kekeletso Matlabe, Lebogang
Inno, Thabang Kwebu, Paul Noko
(from Market Theatre Laboratory).
http://www.livemovie.org
Nest Of Tens
Miranda July, U.S.A., 1999, 27 min.
Nest Of Tens is comprised of four alternating stories, which reveal mundane yet personal methods of control.
These systems are derived from intuitive sources. Children and a retarded
adult operate control panels made out
of paper, lists, monsters, and their
own bodies.
“A young boy, home alone, performing
a bizarre ritual with a baby; an uneasy, aborted sexual flirtation between
a teenage babysitter and an older man;
an airport lounge encounter between a
businesswoman (played by July) and a
young girl. Linked by a lecturer enumerating phobias in a quasi-academic
seminar, these three perverse, unnerving scenarios involving children and
adults provide authentic glimpses into
the queasy strangeness that lies behind the everyday.” (New York Video
Festival, 2000)
In the field of players
Jeanne Van Heeswijk & Marten Winters, 2004, NL
Duration: 25.01.2004 – 31.01.2004
Location: TENT.Rotterdam
Participants: 106 through casting, 260
visitors of TENT.
Together with artist Marten Winters,
Van Heeswijk developed a ‘game:set'.
In cooperation with graphic designer
Roger Teeuwen, they marked out a
set of lines and fields on the ground.
Just like in a sporting venue, these
lines had no meaning until used by the
players. The relationship between the
players was revealed by the rules of the
game.
Designer Arienne Boelens created special game cards that were handed out
during the festival by the performance
artists Bliss. Both Bliss and the cards
turned up all over the festival, showing
up at every hot spot or special event.
Through these game cards people were
invited to fulfil the various roles of
the game – like ‘Round Miss' (the
girl who walks around the ring holding up a numbered card at the start
of each round at boxing matches),
‘40-plus male in (high) cultural position', ‘Teen girl with star ambitions',
‘Vital 65-plus'. But even ‘Whisperer',
and ‘Audience' were specific roles.
Writing Desire
Video essay by Ursula Biemann, CH,
2000, 25 min.
Writing Desire is a video essay on
the new dream screen of the Internet, and its impact on the global circulation of women's bodies from the
‘Third World' to the ‘First World'. Although underage Philippine ‘pen
pals' and post-Soviet mail-order brides
have been part of the transnational
exchange of sex in the post-colonial
and post-Cold War marketplace of desire before the digital age, the Internet has accelerated these transactions.
The video provides the viewers with
a thoughtful meditation on the obvious political, economic and gender inequalities of these exchanges by simulating the gaze of the Internet shopper
looking for the imagined docile, traditional, pre-feminist, but Web-savvy
mate.
http://www.geobodies.org
INÈS RABADAN
License: Creative Commons Attribution-NonCommercial-ShareAlike
EN
Does the repetition of a gesture irrevocably
lead to madness?
figure 127 Screening Modern Times at V/J10
A personal introduction to Modern Times
(Charles Chaplin, 1936)
figure 128
One of the most memorable moments of Modern Times is the one
where the tramp goes mad after having spent the whole day screwing
bolts on the assembly line. He is free: neither husband, nor worker,
nor follower of some kind of movement, nor even politically engaged.
His gestures are burlesque responses to the adversity in his life, or
just plain ‘exuberant'. Through the interaction with the machine, however, he completely goes off the rails and ends up in prison.
Inès Rabadan made two short films in which a female protagonist
is confined by the fast-paced work of the assembly line. Tragically
and mercilessly, the machine changes the woman and reduces her to
a mechanical gesture – a gesture in which she sometimes takes pride,
precisely in order not to lose her sanity. Or else, she really goes mad,
ruined by the machine, eventually managing to free herself.
figure 129
figure 130
MICHAEL TERRY
License: Free Art License
EN
Data analysis as a discourse
figure 131 Michael Terry in between LGM sessions
An interview with Michael Terry
Michael Terry is a computer scientist working at the Human Computer Interaction Lab of the University of Waterloo, Canada. His
main research focus is on improving usability in open source software, and ingimp is the first result of that work.
In a Skype conversation that was live broadcast in La Bellone during Verbindingen/Jonctions 10, we spoke about ingimp, a clone of the
popular image manipulation programme Gimp, but with an important difference. Ingimp allows users to record data about their usage
into a central database, and subsequently makes this data available
to anyone.
At the Libre Graphics Meeting 2008 in Wroclaw, just before Michael
Terry presents ingimp to an audience of Gimp developers and users,
Ivan Monroy Lopez and Femke Snelting meet up with Michael Terry
again to talk more about the project and about the way he thinks
data analysis could be done as a form of discourse.
figure 132 Interview at Wroclaw
Femke Snelting (FS) Maybe we could start this face-to-face conversation with a description of the ingimp project you are developing
and – what I am particularly interested in – why you chose to work
on usability for Gimp?
Michael Terry (MT) So the project is ‘ingimp', which is an instrumented version of Gimp; it collects information about how the
software is used in practice. The idea is you download it, you install
it, and then with the exception of an additional start up screen, you
use it just like regular Gimp. So, our goal is to be as unobtrusive as
possible to make it really easy to get going with it, and then to just
forget about it. We want to get it into the hands of as many people
as possible, so that we can understand how the software is actually
used in practice. There are plenty of forums where people can express
their opinions about how Gimp should be designed, or what's wrong
with it, there are plenty of bug reports that have been filed, there
are plenty of usability issues that have been identified, but what we
really lack is some information about how people actually apply this
tool on a day to day basis. What we want to do is elevate discussion
above just anecdote and gut feelings, and to say, well, there is this
group of people who appear to be using it in this way, these are the
characteristics of their environment, these are the sets of tools they
work with, these are the types of images they work with and so on,
so that we have some real data to ground discussions about how the
software is actually used by people.
You asked me now why Gimp? I actually used Gimp extensively
for my PhD work. I had these little cousins come down and hang
out with me in my apartment after school, and I would set them up
with Gimp, and quite often they would start off with one picture,
they would create a sphere, a blue sphere, and then they played with
filters until they got something really different. I would turn to them
looking at what they had been doing for the past twenty minutes,
and would be completely amazed at the results they were getting
just by fooling around with it. And so I thought, this application
has lots and lots of power; I'd like to use that power to prototype
new types of interface mechanisms. So I created JGimp, which is
a Java based extension for the 1.0 Gimp series that I can use as a
back-end for prototyping novel user interfaces. I think that it is a
great application, there is a lot of power to it, and I had already an
investment in its code base, so it made sense to use that as a platform
for testing out ideas of open instrumentation.
FS: What is special about ingimp, is the fact that the data you
collect, is equally free to use, run, study and distribute, as the software
you are studying. Could you describe how that works?
MT: Every bit of data we collect, we make available: you can go to
the website, you can download every log file that we have collected.
The intent really is for us to build tools and infrastructure so that the
community itself can sustain this analysis, can sustain this form of
usability. We don't want to create a situation where we are creating
new dependencies on people, or where we are imposing new tasks on
existing project members. We want to create tools that follow the
same ethos as open source development, where anyone can look at
the source code, where anyone can make contributions, from filing
a bug to doing something as simple as writing a patch, where they
don't even have to have access to the source code repository, to make
valuable contributions. So importantly, we want to have a really low
barrier to participation. At the same time, we want to increase the
signal-to-noise ratio. Yesterday I talked with Peter Sikking, an information architect working for Gimp, and he and I both had this
experience where we work with user interfaces, and since everybody
uses an interface, everybody feels they are an expert, so there can be
a lot of noise. So, not only did we want to create an open environment for collecting this data, and analysing it, but we also wanted to
increase the chance that we are making valuable contributions, and
that the community itself can make valuable contributions. Like I
said, there is enough opinion out there. What we really need to do
is to better understand how the software is being used. So, we have
made a point from the start to try to be as open as possible with
everything, so that anyone can really contribute to the project.
FS: Ingimp has been running for a year now. What are you finding?
MT: I have started analysing the data, and I think one of the things
that we realised early on is that it is a very rich data set; we have lots
and lots of data. So, after a year we've had over 800 installations, and
we've collected about 5000 log files, representing over half a million
commands, representing thousands of hours of the application being
used. And one of the things you have to realise is that when you have
a data set of that size, there are so many different ways to look at it
that my particular perspective might not be enough. Even if you sit
someone down, and you have him or her use the software for twenty
minutes, and you videotape it, then you can spend hours analysing
just those twenty minutes of videotape. And so, I think that one of
the things we realised is that we have to open up the process so that
anyone could easily participate. We have the log files available, but
they really didn't have an infrastructure for analysing them. So, we
created this new piece of software called ‘Stats Jam', an extension
to MediaWiki, which allows anyone to go to the website and embed
SQL-queries against the ingimp data set and then visualise those
results within the Wiki text. So, I'll be announcing that today and
demonstrating that, but I have been using that tool now for a week
to complement the existing data analysis we have done.
One of the first things that we realized is that we have over 800
installations, but then you have to ask, how many of those are really serious users? A lot of people probably just were curious, they
downloaded it and installed it, found that it didn't really do much
for them and so maybe they don't use it anymore. So, the first thing
we had to do is figure out which data points should we really pay
attention to. We decided that a person should have used ingimp on
two different occasions, preferably at least a day apart, where they'd
saved an image on both of the instances. We used that as an indication of what a serious user is. So with that filter in place, the ‘800
installations' drops down to about 200 people. So we had about 200
people using ingimp; and looking at the data, this represents about
800 hours of use, about 4000 log files, and again still about half a
million commands. So, it's still a very significant group of people.
200 people are still a lot, and that's a lot of data, representing about
11000 images they have been working on – there's just a lot.
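As a rough illustration of the kind of query this filter implies – and of the sort of SQL one might embed through a tool like Stats Jam – here is a small self-contained sketch. The table layout and column names are invented for the example; the real ingimp database schema is certainly more detailed.

```python
import sqlite3

# Hypothetical schema: one row per logged session, with a user id, a session
# date, and a flag recording whether an image was saved during that session.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sessions (user_id TEXT, session_date TEXT, saved_image INTEGER);
INSERT INTO sessions VALUES
  ('u1', '2008-03-01', 1), ('u1', '2008-03-05', 1),   -- saved on two days
  ('u2', '2008-03-01', 1), ('u2', '2008-03-01', 0),   -- only one saving day
  ('u3', '2008-03-02', 0);                            -- never saved
""")

# 'Serious user' filter: at least two distinct days on which an image was saved,
# approximating 'two occasions, at least a day apart, with a save on both'.
query = """
SELECT user_id
FROM sessions
WHERE saved_image = 1
GROUP BY user_id
HAVING COUNT(DISTINCT session_date) >= 2;
"""
serious_users = [row[0] for row in conn.execute(query)]
print(serious_users)   # ['u1']
```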
From that group, what we found is that use of ingimp is really
short and versatile. So, most sessions are about fifteen minutes or
less, on average. There are outliers, there are some people who use it
for longer periods of time, but really it boils down to them using it for
about fifteen minutes, and they are applying fewer than a hundred
operations when they are working on the image. I should probably
be looking at my data analysis as I say this, but they are very quick,
short, versatile sessions, and when they use it, they use less than 10
different tools, or they apply less than 10 different commands.
What else did we find? We found that the two most popular monitor resolutions are 1280 by 1024, and 1024 by 768. So, those represent
collectively 60 % of the resolutions, and really 1280 by 1024 represents
pretty much the maximum for most people, although you have some
higher resolutions. So one of the things that's always contentious
about Gimp, is its window management scheme and the fact that it
has multiple windows, right? And some people say, well you know,
this works fine if you have two monitors, because you can throw out
the tools on one monitor and then your images are on another monitor. Well, about 10 to 15 % of ingimp users have two monitors, so
that design decision is not working out for most of the people, if that
is the best way to work. These are things I think that people have
been aware of, it's just now we have some actual concrete numbers
where you can turn to and say: now this is how people are using it.
There is a wide range of tasks that people are performing with the
tool, but they are really short, quick tasks.
FS: Every time you start up ingimp, a screen comes up asking
you to describe what you are planning to do and I am interested in
the kind of language users invent to describe this, even when they
sometimes don't know exactly what it is they are going to do. So
inventing language for possible actions with the software has in a
way become a creative process that is now shared between interface
designer, developer and user. If you look at the ‘activity tags' you
are collecting, do you find a new vocabulary developing?
MT: I think there are 300 to 600 different activity tags that people
register within that group of ‘significant users'. I didn't have time to
look at all of them, but it is interesting to see how people are using
that as a medium for communicating to us. Some people will say,
“Just testing out, ignore this!” Or, people are trying to do things like
insert HTML code, to do like a cross-site scripting attack, because,
you have all the data on the website, so they will try to play with
that. Some people are very sparse and they say ‘image manipulation'
or ‘graphic design' or something like that, but then some people are
much more verbose, and they give more of a plan, “This is what I
expect to be doing.” So, I think it has been interesting to see how
people have adopted that and what's nice about it, is that it adds a
really nice human element to all this empirical data.
Ivan Monroy Lopez (IM): I wanted to ask you about the data;
without getting too technical, could you explain how these data are
structured, what do the log files look like?
MT: So the log files are all in XML, and generally we compress
them, because they can get rather large. And the reason that they
are rather large is that we are very verbose in our logging. We want
to be completely transparent with respect to everything, so that if
you have some doubts or if you have some questions about what kind
of data has been collected, you should be able to look at the log file,
and figure out a lot about what that data is. That's how we designed
the XML log files, and it was really driven by privacy concerns and
by the desire to be transparent and open. On the server side we take
that log file and we parse it out, and then we throw it into a database,
so that we can query the data set.
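Sketched in Python under the same caveat as before – the element and attribute names below are invented, since the interview does not reproduce the actual ingimp log format – the server-side step of parsing an XML log and putting it into a queryable database might look roughly like this:

```python
import sqlite3
import xml.etree.ElementTree as ET

# A made-up fragment standing in for one verbose, ingimp-style XML log file.
log_xml = """
<log version="toy">
  <event time="0.0" command="gimp-image-new"/>
  <event time="4.2" command="gimp-paintbrush"/>
  <event time="9.7" command="gimp-image-save"/>
</log>
"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (time REAL, command TEXT)")

# Parse the XML and insert one row per logged command, so that the whole
# collection of log files can later be queried with SQL.
root = ET.fromstring(log_xml)
rows = [(float(e.get("time")), e.get("command")) for e in root.iter("event")]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

total = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
print(total, "commands stored")
```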
FS: Now we are talking about privacy. . . I was impressed by the
work you have done on this; the project is unusually clear about why
certain things are logged, and other things not; mainly to prevent
the possibility of ‘playing back' actions so that one could identify
individual users from the data set. So, while I understand there are
privacy issues at stake I was wondering... what if you could look at the
collected data as a kind of scripting for use, as writing a choreography
that might be replayed later?
MT: Yes, we have been fairly conservative with the type of information that we collect, because this really is the first instance where
anyone has captured such rich data about how people are using software on a day to day basis, and then made all that data publicly
available. When a company does this, they will keep the data internally, so you don't have this risk of someone outside figuring something out about a user that wasn't intended to be discovered. We
have to deal with that risk, because we are trying to go about this
in a very open and transparent way, which means that people may
be able to subject our data to analysis or data mining techniques
that we haven't thought of, and extract information that we didn't
intend to be recording in our file, but which is still there. So there are
fairly sophisticated techniques where you can do things like look at
audio recordings of typing and the timings between keystrokes, and
then work backwards with the sounds made to figure out the keys
that people are likely pressing. So, just with keyboard audio and
keystroke timings alone, you can often give enough information to be
able to reconstruct what people are actually typing. So we are always
sort of wary about how much information is in there.
While it might be nice to be able to do something like record people's actions and then share that script, I don't think that that is
really a good use of ingimp. That said, I think it is interesting to
ask: could we characterize people's use enough, so that we can start
clustering groups of people together and then providing a forum for
these people to meet and learn from one another? That's something
we haven't worked out. I think we have enough work cut out for us
right now just to characterize how the community is using it.
FS: It was not meant as a feature request, but as a way to imagine
how usability research could flip around and also become productive
work.
MT: Yes, totally. I think one of the things that we found when
bringing people in to test the basic usability of the ingimp software and
ingimp website, is that people like looking at what commands other
people are using, what the most frequently used commands are; and
part of the reason that they like that, is because of what it teaches
them about the application. So they might see a command they were
unaware of. So we have toyed with the idea of then providing not
only the command name, but then a link from that command name
to the documentation – but I didn't have time to implement it, but
certainly there are possibilities like that, you can imagine.
FS: Maybe another group can figure something out like that? That's
the beauty of opening up your software plus data set of course.
Well, just a bit more on what is logged and what not... Maybe you
could explain where and why you put the limit, and what kind of use
you might miss out on as a result?
MT: I think it is important to keep in mind that whatever instrument you use to study people, you are going to have some kind of
bias, you are going to get some information at the cost of other information. So if you do a video taped observation of a user and you
just set up a camera, then you are not going to find details about
the monitor maybe, or maybe you are not really seeing what their
hands are doing. No matter what instrument you use, you are always
getting a particular slice.
I think you have to work backwards and ask what kind of things
do you want to learn. And so the data that we collect right now, was
really driven by what people have done in the past in the area of instrumentation, but also by us bringing people into the lab, observing
them as they are using the application, and noticing particular behaviours and saying, hey, that seems to be interesting, so what kind of
data could we collect to help us identify those kind of phenomena, or
that kind of performance, or that kind of activity? So again, the data
that we were collecting was driven by watching people, and figuring
out what information will help us to identify these types of activities.
As I've said, this is really the first project that is doing this, and
we really need to make sure we don't poison the well. So if it happens that we collect some bit of information, that then someone can
later say, “Oh my gosh, here is the person's file system, here are the
names they are using for the files” or whatever, then it's going to
make the normal user population wary of downloading this type of
instrumented application. The thing that concerns me most about
open source developers jumping into this domain, is that they might
not be thinking about how you could potentially impact privacy.
IM: I don't know, I don't want to get paranoid. But if you are
doing it, then there is a possibility someone else will do it in a less
considerate way.
MT: I think it is only a matter of time before people start doing
this, because there are a lot of grumblings about, “We should be
doing instrumentation, someone just needs to sit down and do it.”
Now there is an extension out for Firefox that will collect this kind
of data as well, so you know. . .
IM: Maybe users could talk with each other, and if they are aware
that this type of monitoring could happen, then that would add a
different social dimension. . .
MT: It could. I think it is a matter of awareness, really. We have a
lengthy consent agreement that details the type of information we are
collecting and the ways your privacy could be impacted, but people
don't read it.
FS: So concretely... what information are you recording, and what
information are you not recording?
MT: We record every command name that is applied to a document,
to an image. Where your privacy is at risk with that, is that if you
write a custom script, then that custom script's name is going to be
inserted into a log file. And so if you are working for example for Lucas
or DreamWorks or something like that, or ILM, in some Hollywood
movie studio and you are using ingimp and you are writing scripts,
then you could have a script like ‘fixing Shrek's beard', and then that
is getting put into the log file and then people are going to know that
the studio uses ingimp.
We collect command names, we collect things like what windows
are on the screen, their positions, their sizes, and we take hashes of
layer names and file names. We take a string and then we create a
hash code for it, and we also collect information about how long is
this string, how many alphabetical characters, numbers; things like
that, to get a sense of whether people are using the same files, the
same layer names time and time again, and so on. But this is an
instance where our first pass at this, actually left open the possibility
of people taking those hashes and then reconstructing the original
strings from that. Because we have the hash code, we have the length
of the string – all you have to do is generate all possible strings of
that length, take the hash codes and figure out which hashes match.
And so we had to go back and create a new scheme for recording this
type of information where we create a hash and we create a random
number, we pair those up on the client machine but we only log the
random number. So, from log to log then, we can track if people
use the same image names, but we have no idea of what the original
string was.
There are these little ‘gotchas' like that, that I don't think most
people are aware of, and this is why I get really concerned about
instrumentation efforts right now, because there isn't this body of
experience of what kind of data should we collect, and what shouldn't
we collect.
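The pairing scheme described above can be sketched roughly as follows – a minimal illustration in Python, not ingimp's actual code, and every function and variable name in it is invented. The client keeps a private table that pairs the hash of a string with a random token and writes only the token to the log: reuse of the same layer or file name remains visible from log to log, but, unlike a bare hash plus string length, the logged token gives an attacker nothing from which to brute-force the original string.

    import hashlib
    import json
    import secrets

    # Client-side table pairing content hashes with random tokens.
    # It never leaves the user's machine and is never logged.
    _local_pairs = {}

    def anonymous_token(name):
        # The hash is only used locally to look the token up; the log
        # receives nothing but the random token, so the original string
        # cannot be recovered from the logged data.
        digest = hashlib.sha256(name.encode("utf-8")).hexdigest()
        if digest not in _local_pairs:
            _local_pairs[digest] = secrets.token_hex(8)
        return _local_pairs[digest]

    def log_command(log, command, layer_name):
        # The command name is logged in the clear, the layer name only as a token.
        log.append({"command": command, "layer": anonymous_token(layer_name)})

    log = []
    log_command(log, "gimp-image-crop", "fixing Shrek's beard")
    log_command(log, "gimp-levels", "fixing Shrek's beard")
    print(json.dumps(log, indent=2))  # same token twice; the name itself is never logged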
FS: As we are talking about this, I am already more aware of what
data I would allow being collected. Do you think by opening up this
data set and the transparent process of collecting and not collecting,
this will help educate users about these kinds of risks?
MT: It might, but honestly I think probably the thing that will
educate people the most is if there was a really large privacy error
that got a lot of news, because then people would become more
aware of it because right now – and this is not to say that we want
that to happen with ingimp – but when we bring people in and we ask
them about privacy, “Are you concerned about privacy?” and they
say “No”, and we say “Why?” Well, they inherently trust us, but the
fact is that open source also lends a certain amount of trust to it,
because they expect that since it is open source, the community will
in some sense police it and identify potential flaws with it.
FS: Is that happening? Are you in dialogue with the open source
community about this?
MT: No, I think probably five to ten people have looked at the
ingimp code – realistically speaking I don't think a lot of people looked
at it. Some of the Gimp developers took a gander at it to see “How
could we put this upstream?” But I don't want it upstream, because
I want it to always be an opt-in, so that it can't be turned on by
mistake.
FS: You mean you have to download ingimp and use it as a separate
program? It functions in the same way as Gimp, but it makes the
fact that it is a different tool very clear.
MT: Right. You are more aware, because you are making that
choice to download that, compared to the regular version. There is
this awareness about that.
We have this lengthy text based consent agreement that talks about
the data we collect, but less than two percent of the population reads
license agreements. And, most of our users are actually non-native
English speakers, so there are all these things that are working against
us. So, for the past year we have really been focussing on privacy, not
only in terms of how we collect the data, but how we make people
aware of what the software does.
We have been developing wordless diagrams to illustrate how the
software functions, so that we don't have to worry about localisation
errors as much. And so we have these illustrations that show someone
downloading ingimp, starting it up, a graph appears, there is a little
icon of a mouse and a keyboard on the graph, and they type and you
see the keyboard bar go up, and then at the end when they close the
application, you see the data being sent to a web server. And then
we show snapshots of them doing different things in the software, and
then show a corresponding graph change. So, we developed these by
bringing in both native and non-native speakers, having them look at
the diagrams and then tell us what they meant. We had to go through
about fifteen people and continual redesign until most people could
understand and tell us what they meant, without giving them any
help or prompts. So, this is an ongoing research effort, to come up
with techniques that not only work for ingimp, but also for other
instrumentation efforts, so that people can become more aware of the
implications.
FS: Can you say something about how this type of research relates
to classic usability research and in particular to the usability work
that is happening in Gimp?
MT: Instrumentation is not new, commercial software companies
and researchers have been doing instrumentation for at least ten years,
probably ten to twenty years. So, the idea is not new, but what is
new – in terms of the research aspects of this –, is how do we do this
in a way where we can make all the data open? The fact that you
make the data open, really impacts your decision about the type of
data you collect and how you are representing it. And you need to
really inform people about what the software does.
But I think your question is... how does it impact the Gimp's
usability process? Not at all, right now. But that is because we have
intentionally stayed off to the side until we got to the point
where we had an infrastructure where the entire community could
really participate in the data analysis. We really want this to be
a self-sustaining infrastructure; we don't want to create a
system where you have to rely on just one other person for this to
work.
IM: What approach did you take in order to make this project
self-sustainable?
MT: Collecting data is not hard. The challenge is to understand
the data, and I don't want to create a situation where the community
is relying on only one person to do that kind of analysis, because this
is dangerous for a number of reasons. First of all, you are creating
a dependency on an external party, and that party might have other
obligations and commitments, and might have to leave at some point.
If that is the case, then you need to be able to pass the baton to
someone else, even if that could take a considerable amount of time
and so on.
You also don't want to have this external dependency because,
given the richness of the data, you really need to have multiple people
looking at it, and trying to understand and analyse it. So how are
we addressing this? It is through this Stats Jam extension to the
MediaWiki that I will introduce today. Our hope is that this type
of tool will lower the barrier for the entire community to participate
in the data analysis process, whether they are simply commenting on
the analysis we made or taking the existing analysis, tweaking it to
their own needs, or doing something brand new.
In talking with members of the Gimp project here at the Libre
Graphics Meeting, they started asking questions like, “So how many
people are doing this, how many people are doing this and how many
this?” They'll ask me while we are sitting in a café, and I will be able
to pop the database open and say, “A certain number of people have
done this.” or, “No one has actually used this tool at all.”
The danger is that this data is very rich and nuanced, and you
can't really reduce these kinds of questions to an answer of “N people
do this”, you have to understand the larger context. You have to
understand why they are doing it, why they are not doing it. So, the
data helps to answer some questions, but it generates new questions.
They give you some understanding of how the people are using it,
but then it generates new questions of, “Why is this the case?” Is this
because these are just the people using ingimp, or is this some more
widespread phenomenon?
They asked me yesterday how many people are using this colour
picker tool – I can't remember the exact name – so I looked and there
was no record of it being used at all in my data set. So I asked them
when did this come out, and they said, “Well it has been there at
least since 2.4.” And then you look at my data set, and you notice
that most of my users are in the 2.2 series, so that could be part of
the reasons. Another reason could be, that they just don't know that
it is there, they don't know how to use it and so on. So, I can answer
the question, but then you have to sort of dig a bit deeper.
FS: You mean you can't say that because it is not used, it doesn't
deserve any attention?
MT: Yes, you just can't jump to conclusions like that, which is
again why we want to have this community website, which shows the
reasoning behind the analysis: here are the steps we had to go through
to get this result, so you can understand what that means, what the
context means – because if you don't have that context, then it's sort
of meaningless. It's like asking, “What are the most frequently used
commands?” This is something that people like to ask about. Well
really, how do you interpret that? Is it the numbers of times it has
been used across all log files? Is it the number of people that have
used it? Is it the number of log files where it has been used at least
once? There are lots and lots of ways in which you can interpret
this question. So, you really need to approach this data analysis as
a discourse, where you are saying: here are my assumptions, here is
how I am getting to this conclusion, and this is what it means for
this particular group of people. So again, I think it is dangerous if
one person does that and you come to rely on that one person. We
really want to have lots of people looking at it, and considering it,
and thinking about the implications.
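To make that ambiguity concrete, here is a toy sketch – again Python, with invented command names and log files rather than anything from the actual ingimp data set or the Stats Jam code – that counts ‘most frequently used commands' under two of the readings above and gets two different rankings from the same logs. The third reading, counting distinct people, would additionally require grouping log files per installation.

    from collections import Counter

    # Each inner list stands for one uploaded log file (one ingimp session).
    logs = [
        ["crop", "crop", "crop", "crop", "crop"],  # one heavy crop user
        ["levels"],
        ["levels"],
        ["levels"],
    ]

    # Reading 1: total number of invocations across all log files.
    total_uses = Counter(cmd for log in logs for cmd in log)

    # Reading 2: number of log files in which the command appears at least once.
    logs_using = Counter(cmd for log in logs for cmd in set(log))

    print(total_uses.most_common())  # [('crop', 5), ('levels', 3)]
    print(logs_using.most_common())  # [('levels', 3), ('crop', 1)]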
FS: Do you expect that this will impact the kind of interfaces that
can be done for Gimp?
MT: I don't necessarily think it is going to impact interface design,
I see it really as a sort of reality check: this is how communities are
using the software and now you can take that information and ask,
do we want to better support these people or do we. . . For example
on my data set, most people are working on relatively small images
for short periods of time, the images typically have one or two layers,
so they are not really complex images. So regarding your question,
one of the things you can ask is, should we be creating a simple tool
to meet these people's needs? All the people are just doing cropping
and resizing, fairly common operations, so should we create a tool
that strips away the rest of the stuff? Or, should we figure out why
people are not using any other functionality, and then try to improve
the usability of that?
There are so many ways to use data – I don't really know how
it is going to be used, but I know it doesn't drive design. Design
happens from a really good understanding of the users, the types of
tasks they perform, the range of possible interface designs that are
out there, lots of prototyping, evaluating those prototypes and so on.
Our data set really is a small potential part of that process. You can
say, well, according to this data set, it doesn't look like many people
are using this feature, let's not focus too much on that, let's focus on
these other features or conversely, let's figure out why they are not
using them. . . Or you might even look at things like how big their
monitor resolutions are, and say, well, given the size of the monitor
resolution, maybe this particular design idea is not feasible. But I
think it is going to complement the existing practices, in the best
case.
FS: And do you see a difference in how interface design is done in
free software projects, and in proprietary software?
MT: Well, I have been mostly involved in the research community,
so I don't have a lot of exposure to design projects. I mean, in my
community we are always trying to look at generating new knowledge,
and not necessarily at how to get a product out the door. So, the
goals or objectives are certainly different.
I think one of the dangers in your question is that you sort of
lump a lot of different projects and project styles into one category
of ‘open source'. ‘Open source' ranges from volunteer driven projects
to corporate projects, where they are actually trying to make money
out of it. There is a huge diversity of projects that are out there;
there is a wide diversity of styles, there is as much diversity in the
open source world as there is in the proprietary world.
One thing you can probably say, is that for some projects that are
completely volunteer driven like Gimp, they are resource strapped.
There is more work than they can possibly tackle with the number of
resources they have. That makes it very challenging to do interface
design; I mean, when you look at interface code, it can make up 50 or 75%
of a code base. That is not insignificant; it is very difficult to hack,
and you need to have lots of time and manpower to be able to do
significant things. And that's probably one of the biggest differences
you see for the volunteer driven projects: it is really a labour of
love for these people and so very often the new things interest them,
whereas with a commercial software company developers are going to
have to do things sometimes they don't like, because that is what is
going to sell the product.
SADIE PLANT
License: Creative Commons Attribution-NonCommercial-ShareAlike
Interwoven with her own thoughts and experiences, Sadie Plant gave a situated report on the Mutual
Motions track, and responded to the issues discussed during the weekend.
figure 146: Sadie Plant reports at V/J10
EN
A Situated Report
I have to begin with many thanks to Femke and Laurence, because
it really has been a great pleasure for me to have been here this weekend. It's nearly five years since I came to an event like this, believe
it or not, and I really cannot say enough how much I have enjoyed it,
and how stimulating I have found it. So yes, a big thank you to both
for getting me here. And as you say, it's ten years since I wrote Zeros
+ Ones, and you are marking ten years of this festival too, so it's an
interesting moment to think about a lot of the issues that have come
up over the weekend. This is a more or less spontaneous report, very
much an ‘open performance', to use Simon Yuill's words, and not to
be taken as any kind of definitive account of what has happened this
weekend. But still I hope it can bring a few of the many and varied strands of this event together, not to form a true conclusion, but
perhaps to provide some kind of digestif after a wonderful meal.
I thought I should begin as Femke very wisely began, with the
theme of cooking. Femke gave us a recipe at the beginning of the
weekend, really a kind of recipe for the whole event, with cooking as
an example of the fact that there are many models, many activities,
many things that we do in our everyday lives, which might inform
and expand our ideas about technologies and how we work with them.
So, I too will begin with this idea of cooking, which is as Femke
said a very magical, transformative experience. Femke's clip from
the Catherine Deneuve film was a really lovely instance of the kind
of deep elemental, magical chemistry which goes on in cooking. It is
this that makes it such an instructive and interesting candidate, for a
model to illuminate the work of programming, which itself obviously
has this same kind of potential to bring something into effect in a very
direct and immediate sense. And cooking is also the work behind the
scene, the often forgotten work, again a little bit like programming,
that results in something which – again like a lot of technology – can
operate on many different scales. Cooking is in one sense the most
basic kind of activity, a simple matter of survival, but it can also
work on a gourmet level too, where it becomes the most refined – and
well paid – kind of work. It can be the most detailed, fiddly, sort of
decorative work; it can be the most backbreaking, heavy industrial
work – bread making for example as well. So it really covers the whole
panoply of these extremes.
If we think about a recipe, and ask ourselves about the machine that
the recipe requires, it's obviously running on an incredibly complex
assemblage: you have the kitchen, you have all the ingredients, you
have machines for cooling things, machines for heating things, you
have the person doing the cooking, the tools in question. We really
are talking here about a complex process, and not just an end result.
The process is also, again, a very ‘open' activity. Simon Yuill defined
an ‘open performance' as a partial composition completed in the
performance.
Cooking is always about experimentation and the kitchen really is
a kind of lab. The instructions may be exact, the conditions may be
more or less precise but the results are never the same twice. There
are just too many variables, too many contingencies involved. Of
course like any experimental work, it can go completely wrong, it
often does go wrong: sometimes it really is all about process, and
not about eating at all! But as Simon again said today, quoting Sun
Ra: there are no real mistakes, there are no truly wrong things. This
was certainly the case with the fantastic cooking process that we
had throughout the whole day yesterday, which ended with us eating
these fantastic mussels, which I am sure elpueblodechina thought in
fact were not as they should have been. But only she knew what
she was aiming at: for the people who ate them they were delicious,
their flavour enhanced by the whole experience of their production.
elpueblodechina's meal made us ask: what does it mean for something
to go wrong? She was using a cooking technique which has come out
of generations and generations of errors, mistakes, probings, fallings
backs, not just simply a continuous kind of story of progress, success,
and forward movement. So the mistakes are clearly always a very big
part of how things work in life, in any context in life, but especially
of course in the context of programming and working with software
and working with technologies, which we often still tend to assume
are incredibly reliable, logical systems, but in fact are full of glitches
and errors. As thinkers and activists resistant to and critical of mainstream methods and cultures, we need to keep encouraging this
awareness.
I have for a long time been interested in textiles, and I can't resist mentioning the fact that the word ‘recipe' was the old word for
knitting patterns: people didn't talk about knitting patterns, but
‘recipes' for knitting. This brings us to another interesting junction
with another set of very basic, repetitive kinds of domestic and often
overlooked activities, which are nevertheless absolutely basic to human existence. Just as we all eat food, so we all wear clothes. As with
cooking, the production of textiles again has this same kind of sense
of being very basic to our survival, very elemental in that sense, but
it can also function at a high level of detailed, refined activity as well.
With a piece of knitting it is difficult to see the ways in which a single
thread becomes looped into a continuous textile. But if you look at a
woven pattern, the program that has led to the pattern is right there
in front of you, as you see the textile itself. This makes weaving a
very nice, basic and early example of how this kind of immediacy can
be brought into operation. What you look at in a piece of woven cloth
is not just a representation of something that can happen somewhere
else, but the actual instructions for producing and reproducing that
piece of woven cloth as well. So that's the kind of deep intuitive connection that it has with computer programming, as well as the more
linear historical connections of which I have often spoken.
There are some other nice connections between textiles, cooking
and programming as well. Several times yesterday there was a lot
of talk about both experts and amateurs, and developers and users.
These are divisions which constantly, and often perhaps with good
reason, reassert themselves, and often carry gendered connotations
too. In the realm of cooking, you have the chef on the one hand,
who is often male and enjoys the high status of the inventive, creative expert, and the cook on the other, who is more likely to be
female and works under quite a different rubric. In reality, it might
be said that the distinction is far from precise: the very practice of
using computers, of cooking, of knitting, is almost inevitably one of
constantly contributing to their development, because they are all relatively open systems and they all evolve through people's constant,
repetitive use of them. So it is ultimately very difficult to distinguish
between the user and the developer, or the expert and the amateur.
The experiment, the research, the development is always happening
in the kitchen, in the bedroom, on the bus, using your mobile or
using your computer. Fernand Braudel speaks about these kinds of ‘micro-histories', this sense of repetitive activity, carried out in many
trades and many lines, which really is the deep unconscious history
of human activity. And arguably that's where the most interesting
developments happen, albeit in a very unsung, unseen, often almost
hidden way. It is this kind of deep collectivity, this profound sense of
micro-collaboration, which has often been tapped into this weekend.
Still, of course, the social and conceptual divisions persist, and
still, just as we have our celebrity chefs, so we have our celebrity
programmers and dominant corporate software developers. And just
as we have our forgotten and overlooked cooks, so we have people who
are dismissed, or even dismiss themselves, as ‘just computer users'.
The technological realities are such that people are often forced into
this role, with programs that really are so fixed and closed that
almost nothing remains for the user to contribute. The structural
and social divisions remain, and are reproduced on gendered lines as
well.
In the 1940s, computer programming was considered to be extremely menial, and not at all a glamorous or powerful activity.
Then of course, the business of dealing with the software was strictly
women's work, and it was with the hardware of the system that the
most powerful activity lay. That was where the real solid development was done, and that was where the men were working, with what
were then the real nuts and bolts of the machines. Now of course, it
has all turned around. It is women who are building the chips and
putting the hardware – such as it is these days – together, while the
male expertise has shifted to the writing of software. In only half a
century, the evolution of the technology has shifted the whole notion
of where the power lies. No doubt – and not least through weekends
like this – the story will keep moving on.
But as the world of computing does move more and more into
software and leave the hardware behind, it is accompanied by the
perceived danger that the technology and, by extension, the cultures
around it, tend to become more and more disembodied and intangible.
This has long been seen as a danger because it tends to reinforce what
have historically, in the Western world at least, been some of the more
oppressive tendencies to affect women and all the other bodies that
haven't quite fitted the philosophical ideal. Both the Platonic and
Christian traditions have tended to dismiss or repress the body,
and with it all the kind of messy, gritty, tangible stuff of culture,
as transient, difficult, and flawed. And what has been elevated is of
course the much more formal, idealist, disembodied kind of activities
and processes. This is a site of continual struggle, and I guess part of
the purpose of a weekend like this is to keep working away, re-injecting
some sense of materiality, of physicality, of the body, of geography,
into what are always in danger of becoming much more formal and
disembodied worlds. What Femke and Laurence have striven to remind us this weekend is that however elevated and removed our work
appears to be from the matter of bodies and physical techniques,
we remain bodies, complex material processes, working in a complex
material world.
Once again, there still tends to be something of a gendered divide.
The dance workshop organised this morning by Alice Chauchat and
Frédéric Gies was an inspiring but also difficult experience for many
of us, unused as we are to using our bodies in such literally physical
and public ways. It was not until we came out of the workshop into
a space which was suddenly mixed in terms of gender, that I realised
that the participants in the workshop had been almost exclusively
female. It was only the women who had gone to this kind of more
physical, embodied, and indeed personally challenging part of the
weekend. But we all need to continually re-engage with this sense
of the body, all this messiness and grittiness, which it is in many
vested interests to constantly cleanse from the world. We have to
make ourselves deal with all the embarrassment, the awkwardness,
and the problematic side of this more tangible and physical world.
For that reason it has been fantastic that we have had such strong
input from people involved in dance and physical movement, people
working with bodies and the real sense of space. Sabine Prokhoris
and Simon Hecquet made us think about what it means to transcribe
the movements of the body; Séverine Dusollier and Valérie Laure
Benabou got us to question the legal status of such movements too.
And what we have gained from all of this is this sense that we are all
always working with our bodies, we are always using our bodies, with
more or less awareness and talent, of course, whether we are dancing
or baking or knitting or slumped over our keyboards. In some ways we
shouldn't even need to say it, but the fact that we do need to remind
ourselves of our embodiment shows just how easy it is for us to forget
our physicality. This morning's dance workshop really showed some
of the virtues of being able to turn off one's self-consciousness, to
dismiss the constantly controlling part of one's self and to function
on a different, slightly more automatic level. Or perhaps one might
say just to prioritise a level of bodily activity, of bodily awareness,
of a sense of spatiality that is so easy to forget in our very cerebral
society.
What Frédéric and Alice showed us was not simply about using the
body, but rather how to overcome the old dualism of thinking of the
body as a kind of servant of the mind. Perhaps this is how we should
think about our relationships to our technologies as well, not just to
see them as our servants, and ourselves as the authors or subjects of
the activity, but rather to perceive the interactivity, the sense of an
interplay, not between two dualistic things, the body and the mind, or
the agent and the tool, the producer and the user, but to try and see
much more of a continuum of different levels and different kinds and
different speeds of material activity, some very big and clunky, others at extremely complex micro-levels. During the dance workshop,
Frédéric talked about all the synaptic connections that are happening as one moves one's body, in order to instil in us this awareness
of ourselves as physical, material, thinking machines, assemblages of
many different kinds of activity. And again, I think this idea of bringing together dance, food, software, and brainpower, to see ourselves
operating at all these different levels, has been extremely rewarding.
Femke asked a question of Sabine and Simon yesterday, which perhaps never quite got answered, but expressed something about how
as people living in this especially wireless world, we are now carrying more and more technical devices, just as I am now holding this
microphone, and how these additional machines might be changing
our awarenesses of ourselves. Again it came up this morning in the
workshop when we were asked to imagine that we might have different parts of our bodies, another head, or our feet may have mirrors
in them, or in one brilliant example that we might have magnets,
so that we were forced to have parts of our bodies drawn together
in unlikely combinations, just to imagine a different kind of sense of
self that you get from that experience, or a different way of moving
through space. But in many ways, because of our technologies now,
we don't need to imagine such shifts: we are most of us now carrying
some kind of telecommunicating device, for example, and while we
are not physically attached to our machines – not yet anyway –, we
are at least emotionally attached to them. Often they are very much
with us and part of us: the mobile phone in your pocket is to hand,
it is almost a part of us. And I too am very interested in how that
has changed not only our more intellectual conceptions of ourselves,
but also our physical selves. The fact that I am holding this thing
[the microphone] obviously does change my body, its capacities, and
its awareness of itself. We are all aware of this to some extent: everyone knows that if you put on very formal clothes, for example, you
behave in different ways, your body and your whole experience of its
movement and spatiality changes. Living in a very conservative part
of Pakistan a few years ago, where I had to really be completely covered up and just show my eyes, gave me an acute sense of this kind
of change: I had to sit, stand, walk and turn to look at things in an
entirely new set of ways. In a less dramatic but equally affective way,
wirelessness obviously introduces a new sense of our bodies, of what
we can do with our bodies, of what we carry with us on our bodies,
and consequently of who we are and how we interact with our environment. And in this sense wirelessness has also brought the body
back into play, rescuing us from what only ten years ago seemed to
be the very real dangers of a more formal and disembodied sense of a
virtual world, which was then imagined as some kind of ‘other place',
a notion of cyberspace, up there somehow, in an almost heavenly
conception. Wirelessness has made it possible for computer devices to
operate in an actual, geographical environment: they can now come
with us. We can almost start to talk more realistically about a much
more interesting notion of the cyborg, rather than some big clunky
thing trailing wires. It really can start to function as a more interesting idea, and I am very interested in the political and philosophical
implications of this development as well, in that it does reintroduce the body to what was, as I say, in danger of becoming a very
abstract and formal kind of cyberspace. It brings us back into
touch with ourselves and our geographies.
The interaction between actual space and virtual space, has been
another theme of this weekend; this ability to translate, to move between different kinds of spaces, to move from the analogue to the
digital, to negotiate the interface between bodies and machines. Yesterday we heard from Adrian Mackenzie about digital signal processing, the possibility of moving between that real sort of analogue world
of human experience and the coding necessary to computing. Sabine
and Simon talked about the possibilities of translating movement into
dance, and this also has come up several times today, and also with
Simon's work in relation to music and notation. Simon and Sabine
made the point that with the transcription and reading of a dance,
one is offered – rather as with a recipe – the same ingredients, the
same list of instructions, but once again as with cooking, you will
never get the same dance, or you will never get the same food as a
consequence. They were interested in the idea of notation, not to
preserve or to conserve, but rather to be able to send food or dance
off into the future, to make it possible in the future. And Simon
referred to these fantastic diagrams from The Scratch Orchestra, as
an entirely different way of conceiving and perceiving music, not as
a score, a notation in this prescriptive, conserving sense of the word,
but as the opportunity to take something forward into the future.
And to do so not by writing down the sounds, or trying to capture
the sounds, but rather as a way of describing the actions necessary
to produce those sounds, is almost to conceive the production of music as a kind of dance, and again to emphasise its embodiment and
physicality.
This sense of performance brings into play the idea of ‘play' itself,
whether ‘playing' a musical instrument, ‘playing' a musical score, or
‘playing' the body in an effort to dance. I think in some dance traditions one speaks about ‘playing the body'; in Tai Chi it is certainly
said that one plays the body, as though it was an instrument. And
when I think about what I have been doing for the last five years,
it's involved having children, it's involved learning languages, it's involved doing lots of cooking, and lots of playing, funny enough. And
what has been lovely for me about this weekend is that all of these
things have been discussed, but they haven't been just discussed, they
have actually been done as well. So we have not only thought about
cooking, but cooking has happened, not only with the mussels, but
also with the fantastic food that has been provided all weekend. We
haven't just thought about dancing, but dancing has actually been
done. We haven't just thought about translating, but with great
thanks to the translators – who I think have often had a very difficult job – translating has also happened as well. And in all of these
cases we have seen what might so easily have been a simply theoretical discussion, has itself been translated into real bodily activity:
they have all been, literally, brought into play. And this term ‘play'
, which spans a kind of mathematical play of numbers, in relation to
software and programming, and also the world of music and dance,
has enormous potential for us all: Simon talked about ‘playing free'
as an alternative term to ‘improvisation', and this notion of ‘playing
free' might well prove very useful in relation to all these questions of
making music, using the body, and even playing the system in terms
of subverting or hacking into the mainstream cultural and technical
programs with which we are presented.
This weekend was inspired by several desires and impulses to which
I feel very sympathetic, and which remain very urgent in all our debates about technology. As we have seen, one of the most important
of those desires is to reinsert the body into what is always in danger of becoming a disembodied realm of computing and technology.
And to reinsert that body not as a kind of Chaplinesque cog in the
wheel that we saw when Inès Rabadán introduced Modern Times last
night, but as something more problematic, something more complex
and more interesting. And also not to do so nostalgically, with some
idea of some kind of lost natural activity that we need to regain, or to
reassert, or to reintroduce. There is no true body, there is no natural
body, that we can recapture from some mythical past and bring back
into play. At the same time we need to find a way of moving forward,
and inserting our senses of bodies and physicality into the future, to
insist that there is something lively and responsive and messy and
awkward always at work in what could have the tendency otherwise
to be a world of closed systems and dead loops.
One of the ways of doing this is to constantly problematise both
individualised conceptions of the body and orthodox notions of communities and groups. Michael Terry's presentation about ingimp, developed in order to imagine the community of people who are using
his image manipulation software, raised some very problematic issues
about the notion of community, which were also brought up again by
Simon today, with his ideas about collaboration and collectivity, and
what exactly it means to come together and try to escape an individualised notion of one's own work. Femke's point to Michael exemplified
the ways in which the notion of community has some real dangers:
Michael or his team had done the representations of the community
themselves – so if people told them they were graphic artists, they
had found their own kind of symbols for what a graphic artist would
look like –, and when Femke suggested that people – especially if
they were graphic artists – might be capable of producing their own
representations and giving their own way of imagining themselves,
Michael's response was to the effect that people might then come up
with what he and his team would consider to be ‘undesirable images'
of themselves. And this of course is the age old problem with the idea
of a community: an open, democratic grouping is great when you're
in it and you all agree what's desirable, but what happens to all the
people that don't quite fit the picture? How open can one afford to
be? We need some broader, different senses of how to come together
which, as Alice and Frédéric discussed, are ways of collaborating
without becoming a new fixed totality. If we go back to the practices
of cooking, weaving, knitting, and dancing, these long histories of
very everyday activities that people have performed for generation
after generation, in every culture in the world – it is at this level that
we can see a kind of collective activity, which is way beyond anything
one might call a ‘community' in the self-conscious sense of the term.
And it's also way beyond any simple notion of a distributed collection of individuals: it is perhaps somewhere at the junction of these
modes, an in-between way of working which has come together in its
own unconscious ways over long periods of time.
This weekend has provided a rich menu of questions and themes to
feed in and out of the writing and use of software, as well as all our
other ways of dealing with our machines, ourselves, and each other.
To keep the body and all its flows and complexities in play, in a lively
and productive sense; to keep all the interruptive possibilities alive;
to stop things closing down; to keep or to foster the sense of collectivity in a highly individualised and totalising world; to find new
ways – constantly find new ways – of collaborating and distributing
information: these are all crucial and ongoing struggles in which we
must all remain continually engaged. And I notice even now that I
used this term ‘to keep', as though there was something to conserve
and preserve, as though the point of making the recipes and writing
the programs is to preserve something. But the ‘keeping' in question
here is much more a matter of ‘keeping on', of constantly inventing
and producing without, as Simon said earlier, leaving ourselves too
vulnerable to all the new kinds of exploitation, the new kinds of territorialisation, which are always waiting around the corner to capture
even the most fluid and radical moves we make. This whole weekend
has been an energising reminder, a stimulating and inspiriting call to
keep problematising things, to keep inventing and to keep reinventing, to keep on keeping on. And I thank you very much for giving me
the chance to be here and share it all. Thank you.
A quick postscript. After this ‘spontaneous report' was made,
the audience moved upstairs to watch a performance by the dancer
Frédéric Gies, who had co-hosted the morning's workshop. I found
the energy, the vulnerability, and the emotion with which he danced
quite overwhelming. The Madonna track – Hung Up (Time Goes by
so Slowly) – to which he danced ran through my head for the whole
train journey back to Birmingham, and when I got home and checked
out the Madonna video on YouTube I was even more moved to see
what a beautiful commentary and continuation of her choreography
Frédéric had achieved. This really was an example not only of playing
the body, the music, and the culture, but also of effecting the kind of
‘free play' and ‘open performance', which had resonated through the
whole weekend and inspired us all to keep our work and ourselves in
motion. So here's an extra thank you to Frédéric Gies. Madonna will
never sound the same to me.
Biographies
Valérie Laure Benabou
http://www.juriscom.net/minicv/vlb
EN
Valérie Laure Benabou is associate
Professor at the University of Versailles-Saint Quentin and teaches at
the Ecole des Mines. She is a member of the Centre d'Etude et de
Recherche en Droit de l'Immatériel
(CERDI), and of the Editorial Board
of Propriétés Intellectuelles. She also
teaches civil law at the University
of Barcelona and taught international
commercial law at the Law University
in Phnom Penh, Cambodia. She was a
member of the Commission de réflexion du Conseil d'Etat sur Internet et
les réseaux numériques, co-ordinated
by Ms Falque-Pierrotin, which produced the Rapport du Conseil d'Etat,
(La Documentation française, 1998).
She is the author of a number of works
and articles, including ‘La directive
droit d'auteur, droits voisins et société
de l'information: valse à trois temps
avec l'acquis communautaire', in Europe, No. 8-9, September 2001, p.
3, and in Communication Commerce
Electronique, October 2001, p. 8., and
‘Vie privée sur Internet: le traçage', in
Les libertés individuelles à l'épreuve
des NTIC, PUL, 2001, p. 89.
Pierre Berthet
http://pierre.berthet.be/
EN
Studied percussion with André Van
Belle and Georges-Elie Octors, improvisation with Garrett List, composition with Frederic Rzewski, and music theory with Henri Pousseur. Designs and builds sound objects and installations (composed of steel, plastic,
water, magnetic fields etc.). Presents
them in exhibitions and solo or duo
performances with Brigida Romano
(CD Continuum asorbus on the Sub
Rosa label) or Frédéric Le Junter (CD
Berthet Le Junter on the Vandœuvres
label). Collaborated with 13th tribe
(CD Ping pong anthropology). Played
percussion in Arnold Dreyblatt's Orchestra of excited strings (CD Animal magnetism, label Tzadik; CD The
sound of one string, label Table of the
elements).
Alice Chauchat
http://www.theselection.net/dance/
EN
Member of the Praticable collective. Alice Chauchat was born in 1977 in Saint-Etienne (France) and lives in Paris. She studied at the Conservatoire National Supérieur de Lyon and P.A.R.T.S in Brussels. She is a founding member of the collective B.D.C. With other members such as Tom Plischke, Martin Nachbar and Hendrik Laevens she created Events for Television, Affects and (Re)sort between 1999 and 2001. In 2001 she presented her first solo Quotation marks me. In 2003 she collaborated with Vera Knolle (A Number of Classics in the Age of Performance). In 2004 she made J'aime, together with Anne Juren, and CRYSTALLL, a collaboration with Alix Eynaudi. She also takes part in other people's projects, such as Projet, initiated by Xavier Le Roy, or
Michel Cleempoel
http://www.michelcleempoel.be/
EN
Graduated from the National Superior Art School La Cambre in Brussels.
Author of numerous digital art works
and exhibitions. Worked in collaboration with Nicolas Malevé:
http://www.deshabillez-vous.be
De Geuzen
http://www.geuzen.org/
EN
Femke Snelting, Renée Turner and
Riek Sijbring form the art and design
collective De Geuzen (a foundation for
multi-visual research). De Geuzen develop various strategies on and off line,
to explore their interests in the female
identity, critical resistance, representation and narrative archives.
Séverine Dusollier
Doctor in Law, Professor at the University of Namur (Belgium), Head of
the Department of Intellectual Property Rights at the Research Center for
Computer and Law of the University
of Namur, and Project Leader Creative Commons Belgium, Namur.
Leif Elggren
EN
Leif Elggren (born 1950, Linköping,
Sweden) is a Swedish artist who lives
and works in Stockholm.
Active since the late 1970s, Leif
Elggren has become one of the most
constantly surprising conceptual artists
to work in the combined worlds of
audio and visual. A writer, visual
artist, stage performer and composer,
he has many albums to his credits, solo and with the Sons of God,
on labels such as Ash International,
Touch, Radium and his own Firework Edition. His music, often conceived as the soundtrack to a visual
installation or experimental stage performance, usually presents carefully
selected sound sources over a long
stretch of time and can range from
mesmerising quiet electronics to harsh
noise. His wide-ranging and prolific
body of art often involves dreams and
subtle absurdities, social hierarchies
turned upside-down, hidden actions
and events taking on the quality of
icons.
Together with artist Carl Michael
von Hausswolff, he is a founder of
the Kingdoms of Elgaland-Vargaland
(KREV), where he enjoys the title of
King.
elpueblodechina
EN
elpueblodechina a.k.a. Alejandra Perez Nuñez is a sound artist and performer working with open source
tools, electronic wiring and essay writing. In collaborative projects with
Barcelona based group Redactiva, she
works on psychogeography and social science fiction projects, developing narratives related to the mapping of collective imagination. She received an MA in Media Design at the
Piet Zwart Institute in 2005, and has
worked with the organization V2_ in
Rotterdam. She is currently based in
Valparaíso, Chile, where she is developing a practice related to appropriation, civil society and self-mediation
through electronic media.
EN
Born in Bari (Italy) in 1980, and graduated in May 2005 in Communication
Sciences at the University of Rome
La Sapienza, with a dissertation thesis on software as cultural and social
artefact. His educational background
is mostly theoretical: Humanities and
Media Studies. More recently, he has
been focussing on programming and
the development of web based applications, mostly using open source technologies. In 2007 he received an M.A.
in Media Design at the Piet Zwart Institute in Rotterdam.
His areas of interest are:
social
software, actor network theory, digital archives, knowledge management,
machine readability, semantic web,
data mining, information visualization, profiling, privacy, ubiquitous
computing, locative media.
Frédéric Gies
EN
After studying ballet and contemporary dance, Frédéric Gies worked with various choreographers such as Daniel Larrieu, Bernard Glandier, Jean-François Duroure, Olivia Grandville and Christophe Haleb. In 1995, he created a duet in collaboration with Odile Seitz (Because I love). In 1998 he started working with Frédéric De Carlo. Together they have created various performances such as Le principal défaut (CND, Paris), Le principal défaut-solo (Tipi de Beaubourg, Paris), En corps (CND, Paris), Post porn traffic (Macba, Barcelona), In bed with Rebecca (Vooruit, Gent), (don't) Show it! (Scène nationale, Dieppe) and Second hand vintage collector (sometimes we like to mix it up!) (Ausland, Berlin). In 2004 he danced in The better you look, the more you see, amazons (1st version in Tanzfabrik, 2nd in Ausland, Berlin) and The bitch is back under pressure (reloaded) (Basso, Berlin). As a member of the Praticable collective, he created Dance and The breast piece, in collaboration with Alice Chauchat. He also collaborated on Still Lives (Good Work: Anderson/ Gies/ Pelmus/ Pocheron/ Schad).
Dominique Goblet
http://www.dominique-goblet.be/
EN
Visual artist. She shows her work in
galleries and publishes her stories in
magazines and books. In all cases,
what she tries to pursue is an art of
the multi-faceted narrative. Her exhibitions of paintings – from frame to
frame and in the whole space of the
gallery – could be ‘read' as fragmented
stories. Her comic books question the
deep or thin relations between human
beings. As an author, she has taken
part in almost all the Frigobox series
published by Fréon (Brussels) and to
several Lapin magazines, published by
L'Association (Paris). A silent comic
book was published in the gigantic
Comix 2000 (L'Association). In the
beginning of 2002, a second book is
published by the same editor: Souvenir d'une journée parfaite - Memories of a perfect day - a complex story
that combines autobiographical facts
and fictions.
Tsila Hassine
http://www.missdata.org/
EN
Tsila Hassine is a media artist / designer.
Her interests lie with the
hidden potentialities withheld in the
electronic data mines. In her practice she endeavours to extrude undercurrents of information and traces of
processes that are not easily discerned
through regular consumption of mass
networked media. This she accomplishes through repetitive misuse of
available platforms.
She completed a BSc in Mathematics and Computer Science and spent
2003 at the New Media department
of the HGK Zürich.
In 2004 she
joined the Piet Zwart Institute in Rotterdam, where she pursued an MA
in Media Design, until graduating in
June 2006 with Google randomizer
Shmoogle.
She is currently a researcher at the Design department of
the Jan van Eyck Academie.
Simon Hecquet
EN
Dancer and choreographer. Educated
in classical and contemporary dance,
Hecquet has worked with many different dance companies, specialised
in contemporary as well as baroque
dance.
During this time, he also
studied different notation systems to
describe movement, after which he
wrote scores for several dance pieces
from the contemporary choreographic
repertory. He also contributed, among
others, with the Quatuor Knust,
to projects that restaged important
dance pieces of the 20th century. Together with Sabine Prokhoris he made
a movie, Ceci n'est pas une danse
chorale (2004), and a book, Fabriques
de la Danse (PUF, 2007). He teaches
transcription systems for movement,
among others, at the department of
Dance at the Université de Paris VIII.
Guy Marc Hinant
EN
Guy Marc Hinant is a filmmaker whose films include The Garden is full of Metal
(1996), Éléments d'un Merzbau oublié (1999), The Pleasure of Regrets
– a Portrait of Léo Kupper (2003),
Luc Ferrari face to his Tautology
(2006) and I never promised you a
rose garden – a portrait of David
Toop through his records collection
(2008), all developed together with
Dominique Lohlé. He is the curator
of An Anthology of Noise and Electronic Music CD Series, and manages
the Sub Rosa label. He writes fragmented fictions and notes on aesthetics (some of his texts have been published by Editions de l'Heure, Luna
Park, Leonardo Music Journal etc.).
Dmytri Kleiner
http://www.telekommunisten.net/
EN
Dmytri Kleiner is a USSR-born, Canadian software developer and cultural
producer. In his work, he investigates the intersections of art, technology and political economy. He is a
founder of Telekommunisten, an anarchist technology collective, and lives
in Berlin with his wife Franziska and
his daughter Henriette.
Bettina Knaup
EN
Cultural producer and curator with a
background in theatre and film studies, political science and gender studies. She is interested in the interface
of live arts, politics and knowledge
production, and has curated and/or
produced transnational projects such
as the public arts and science program ‘open space' of the International Women's University (Hannover,
1998-2000), and the transdisciplinary
performing arts laboratory, IN TRANSIT (Berlin, House of World Cultures
2002-2003). Between 2001 and 2004,
she has co-curated and co-directed
the international festival of contemporary arts, CITY OF WOMEN (Ljubljana). After directing the new European platform for cultural exchange
LabforCulture during its launch phase
(Amsterdam, 2004-06), Knaup works
again as an independent curator with
a base in Berlin.
EN
Christophe Lazaro is a scientific collaborator at the Law department
of the Facultés Notre-Dame de la
Paix, Namur, and researcher at the
Research Centre for Computer and
Law. His interest in legal matters is
complemented by socio-anthropological research on virtual communities
(free software community), the human/artefact relationship (prostheses,
implants, RFID chips), transhumanism and posthumanism.
Manu Luksch, founder of ambientTV.NET,
is a filmmaker who works outside the
frame. The ‘moving image', and in
particular the evolution of film in the
digital or networked age, has been
a core theme of her works. Characteristic is the blurring of boundaries between linear and hypertextual
narrative, directed work and multiple
authorship, and post-produced and
self-generative pieces. Expanding the
idea of the viewing environment is also
of importance; recent works have been
shown on electronic billboards in public space.
Adrian Mackenzie
He has recently been working on signal processing, looking at how artists, activists, development projects, and community groups are making alternate or competing communication infrastructures.
Nicolas Malevé
Since 1998 multimedia artist Nicolas Malevé has been an active member of the organization Constant. As such, he has taken part in organizing various activities connected with alternatives to copyrights, such as ‘Copy.cult
MéTAmorphoZ
EN
Born in September 2001, represented here by Valérie Cordy and Natalia De Mello, the MéTAmorphoZ collective is a multidisciplinary association that creates installations, spectacles and transdisciplinary performances that mix artistic experiments and digital practices.
Michael Murtaugh
http://automatist.org/
EN
Freelance developer of (tools for) online documentaries and other forms of digital archives. He works and lives in the Netherlands and online at automatist.org. He teaches at the MA Media Design program at the Piet Zwart Institute in Rotterdam.
Julien Ottavi
http://www.noiser.org/
Ottavi is the founder, artistic programmer, audio computer researcher
(networks and audio research) and
sound artist of the experimental music
organization Apo33. Founded in 1997,
Apo33 is a collective of artists, musicians, sound artists, philosophers and
computer scientists, who aim to promote new types of music and sound
practices that do not receive large media coverage. The purpose of Apo33
is to create the conditions for the development of all of the kinds of music
and sound practices that contribute
to the advancement of sound creation,
including electronic music, concrete
music, contemporary written music,
sound poetry, sound art and other
practices which as yet have no name.
Apo33 refers to all of these practices
as ‘Audio Art'.
Jussi Parikka teaches and writes on
the cultural theory and history of new
media. He has a PhD in Cultural
History from the University of Turku,
Finland, and is Senior Lecturer in
Media Studies at Anglia Ruskin University, Cambridge, UK. Parikka has
published a book on ‘cultural theory
in the age of digital machines' (Koneoppi, in Finnish) and his Digital
Contagions: A Media Archaeology of
Computer Viruses has been published
by Peter Lang, New York, Digital Formations-series (2007). Parikka is currently working on a book on ‘Insect
Media', which focuses on the media
theoretical and historical interconnections of biology and technology.
Sadie Plant
Sadie Plant is the author of The Most
Radical Gesture, Zeros and Ones,
and Writing on Drugs.
She has
taught in the Department of Cultural
Studies, University of Birmingham,
and the Department of Philosophy,
University of Warwick. For the last
ten years she has been working independently and living in Birmingham,
where she is involved with the Ikon
Gallery, Stan's Cafe Theatre Company, and the Birmingham Institute
of Art and Design.
Praticable proposes itself as a horizontal work structure, which brings research, creation, transmission and production structures into relation. This
structure is the basis for the creation
of many performances that will be
signed by one or more participants in
the project. These performances are
grounded, in one way or another, in
the exploration of body practices to
approach representation. Concretely,
the form of Praticable is periods of
common research of/on physical practices, which will be the soil for the various creations. The creation periods
will be part of the research periods.
Thus, each specific project implies the
involvement of all participants in the
practice, the research and the elaboration of the practice from which the
piece will ensue.
Sabine Prokhoris
Psychoanalyst and author of, among others, Witch's Kitchen: Freud, Faust, and the Transference (Cornell
University Press, 1995), and co-author
with Simon Hecquet of Fabriques de la
Danse (PUF, 2007). She is also active
in contemporary dance, as a critic and
a choreographer. In 2004 she made the
film Ceci n'est pas une danse chorale
together with Simon Hecquet.
After obtaining a master's degree in
Philosophy and Letters, Inès Rabadan
studied film at the IAD. Her short
films (Vacance, Surveiller les Tortues,
Maintenant, Si j'avais dix doigts,
Le jour du soleil) were shown at about sixty festivals. Surveiller les tortues and Maintenant won awards at the festivals of Clermont, Vendôme,
at the festivals of Clermont, Vendôme,
Chicago, Aix, Grenoble, Brest and
Namur. Occasionally she supervises
scenario workshops.
Her first feature film, Belhorizon, was selected
for the festivals of Montréal, Namur, Créteil, Buenos Aires, Santiago de Chile, Santo Domingo and
Mannheim-Heidelberg.
At the end
of 2006, it was released in Belgium,
France and Switzerland.
Antoinette Rouvroy is a researcher at
the Law department of the Facultés
Notre-Dame de la Paix in Namur,
and at the Research Centre for Computer and Law. Her domains of expertise range from rights and ethics
of biotechnologies, philosophy of Law
and ‘critical legal studies' to interdisciplinary questions related to privacy
and non-discrimination, science and
technology studies, law and language.
Femke Snelting is a member of the
art and design collective De Geuzen
and of the experimental design agency
OSP.
Michael Terry
http://www.ingimp.org/
Computer Scientist, University of Waterloo, Canada.
Carl Michael von Hausswolff
Von Hausswolff was born in 1956 in
Linköping, Sweden.
He lives and
works in Stockholm. Since the end
of the 70s, von Hausswolff has been
working as a composer using the tape
recorder as his main instrument and
as a conceptual visual artist working with performance art, light- and
sound installations and photography.
His audio compositions from 1979 to
1992, constructed almost exclusively
from basic material taken from earlier audiovisual installations and performance works, essentially consist of
complex macromal drones with a surface of aesthetic elegance and beauty.
In later works, von Hausswolff retained the aesthetic elegance and the
drone, and added a purely isolationistic sonic condition to composing.
Marc Wathieu
http://www.erg.be/sdr/blog/
Marc Wathieu teaches at Erg (digital arts) and HEAJ (visual communication). He is a digital artist (he works with the Brussels-based collective LAB[au]) and sound designer. He is also an official representative of the Robots Trade Union to the human institutions. During V/J10 he
presented the Robots Trade Union's
Chart and ambitions.
Peter Westenberg
Peter Westenberg is an artist and film and video maker, and member of Constant. His projects evolve from an interest in social cartography, urban anomalies and the relationships between locative identity and cultural
Brian Wyrick
Brian Wyrick is an artist, filmmaker and web developer working in Berlin and Chicago. He is also co-founder of Group 312 films, a Chicago-based film group.
Simon Yuill
http://www.spring-alpha.org/
Artist and programmer based in Glasgow, Scotland. He is a developer in
the spring_alpha and Social Versioning System (SVS) projects. He has
helped to set up and run a number
of hacklabs and free media labs in
Scotland including the Chateau Institute of Technology (ChIT) and Electron Club, as well as the Glasgow
branch of OpenLab. He has written
on aspects of Free Software and cultural praxis, and has contributed to
publications such as Software Studies
(MIT Press, 2008), the flOSS Manuals and Digital Artists Handbook project (GOTO10 and Folly).
License Register
??
65, 174
a
Attribution-Noncommercial-No Derivative Work
181, 188
c
Copyright Presses Universitaires de France, 2007 188
Creative Commons Attribution-NonCommercial-ShareAlike 58, 71,
73, 81, 93, 98, 155, 215, 254, 275
Creative Commons Attribution - NonCommercial - ShareAlike license
104
d
Dmytri Kleiner & Brian Wyrick, 2007. Anti-Copyright. Use as desired in whole or in part. Independent or collective commercial use
encouraged. Attribution optional.
47
f
Free Art License 38, 70, 75, 131, 143, 217
Fully Restricted Copyright 95
g
GNUFDL 119
t
The text is under a GPL. The images are a little trickier as none of
them belong to me. The images from ap and David Griffiths can
be GPL as well, the Scratch Orchestra images (the graphic music
scores) were always published ‘without copyright' so I guess are
public domain. The photograph of the Scratch Orchestra performance can be GPL or public domain and should be credited to
Stefan Szczelkun. The other images, Sun Ra, Black Arts Group
and Lester Bowie would need to mention ‘contact the photographers'. Sorry the images are complicated but they largely come
from a time before copyleft was widespread.
233
This publication was produced with a set of digital tools that are
rarely used outside the world of scientific publishing: TEX, LATEX and
ConTEXt. As early as the summer of 2008, when most contributions
and translations to Tracks in electronic fields were reaching their final
stage, we started discussing at OSP 1 how we could design and produce
a book in a way that responded to the theme of the festival itself. OSP
is a design collective working with Free Software, and our relation to
the software we design with is deliberately a particular one. At the core of our design practice is the ongoing investigation of the intimate connection between form, content and technology. What follows is a
report of an experiment that stretched out over a little more than a
year.
For the production of previous books, OSP used Scribus, an Open
Source Desktop Publishing tool which resembles proprietary counterparts such as PageMaker, InDesign or QuarkXpress. In this type of software,
each single page is virtually present as a ‘canvas' that has the same
proportions as a physical page and each of these ‘pages' can be individually altered through adding or manipulating the virtual objects
on it. Templates or ‘master pages' allow the automatic placement
of repeated elements such as page numbers and text blocks, but like
in a paper-based design workflow, each single page can be treated as
an autonomous unit that can be moved, duplicated and when necessary removed. Scribus would have certainly been fit for this job,
though the rapidly developing project is currently at a stage where the production of books of more than 40 pages can become tedious. Users are advised to split up such documents into multiple sections, which means that, in order to keep continuity between pages, design
decisions are best made beforehand. As a result, the design workflow
is rendered less flexible than you would expect from state-of-the-art
creative software. In previous projects, Scribus' rigid workflow challenged us to relocate our creative energy to another territory: that of computation. We experimented with its powerful Python scripting API to create 500 unique books. In another project, we transformed a text block over a sequence of pages with the help of a fairy-tale script. But for Tracks in electronic fields we dreamed of something else.
1 Open Source Publishing http://ospublish.constantvzw.org
Pierre Huyghebaert takes on the responsibility for the design of
the book. He had been using various generations of lay-out software
since the early 90's, and gathered an extensive body of knowledge
about their potential and limitations. More than once he brought up
the desire to try out a legendary typesetting system called TEX, a sublime typographic engine that allegedly implemented the work of grandmaster Jan Tschichold 2 with mathematical precision.
TEX is a computer language designed by Donald Knuth in the
1970's, specifically for typesetting mathematical and other scientific
material. Powerful algorithms automate widow and orphan control and can handle intelligent image placement. It is renowned for
being extremely stable, for running on many different kinds of computers and for being virtually bug free. In the academic tradition
of free knowledge exchange, Knuth decided to make TEX available
‘for no monetary fee' and modifications of or experimentations with
the source code are encouraged. In typical self-referential style, the
near perfection of its software design is expressed in a version number
which is converging to π 3.
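To give a feel for how bare the TEX language itself is – a sketch for illustration only, with an invented filename, not a fragment from any of the sources mentioned here – a complete plain TEX document can be as small as this:

    % hello.tex - a minimal plain TeX file (invented example)
    % compile on the command line with:  tex hello.tex
    Hello, \TeX. This paragraph is broken into lines by considering
    the paragraph as a whole rather than one line at a time.
    \bye  % ends the document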
For OSP, TEX represents the potential of doing design differently.
Through shifting our software habits, we try to change our way of
working too. But Scribus, like the kinds of proprietary software it is modeled on, has a ‘productionalist' view of design built into it 4, which
is undeniably seeping through in the way we use it. An exotic Free Software tool like TEX, rooted firmly in an academic context rather than in commercial design, might help us to re-imagine the familiar skill of putting type on a page. By making this kind of ‘domain shift' 5 we hope to discover another experience of making, and find a more constructive relation between software, content and form. So when Pierre suggests that this V/J10 publication is possibly the right occasion to try, we respond with enthusiasm.
2 In Die neue Typographie (1928), Jan Tschichold formulated the classic canon of modernist book design.
3 The value of π (3.141592653589793...) is the ratio of any circle's circumference to its diameter and its decimal representation never repeats. The current version number of TEX is 3.141592.
4 “A DTP program is the equivalent of a final assembly in an industrial process.” Christoph Schäfer, Gregory Pittman et al. The Official Scribus Manual. FLES Books, 2009
By the end of 2008, Pierre starts carving out a path in the dense
forest of manuals, advice and tips-and-tricks with the help of Ivan Monroy Lopez. Ivan is trained as a mathematician and is more or less familiar with the exotic culture of TEX. They decide to use the popular macro-package LATEX 6 to interface with TEX, find out about the tongue-in-cheek concept of ‘badness' (depending on the tension put on hyphenated paragraphs, compiling a .tex document produces a ‘badness' score for each block on a scale from 0 to 10,000), and encounter a
long history of wonderful but often incoherent layers of development
that envelope the mysterious lasagna beauty of TEX's typographic
algorithms.
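To make the notion of ‘badness' a little more concrete – again a sketch under our own assumptions, not settings taken from the book's actual files – TEX exposes its tolerance for loose lines as plain parameters that can be set at the top of a LATEX document:

    % Badness measures how far the spaces in a line had to stretch or shrink
    % (0 is perfect, 10000 is worst). These are standard TeX parameters.
    \tolerance=2000        % accept lines up to badness 2000 when breaking paragraphs
    \hbadness=1000         % only warn about boxes looser than badness 1000
    \emergencystretch=1em  % allow a last-resort extra stretch to avoid overfull lines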
Laying out a publication in LATEX is an entirely different experience from working with canvas-based software. First of all, design decisions are executed through the application of markup, which is vaguely reminiscent of working with CSS or HTML. The actual design is only complete after ‘compiling' the document, and this is where the TEX magic happens. The software passes several times over a marked-up
.tex file, incrementally deciding where to hyphenate a word, place a
paragraph or image. In principle, the concept of a page only applies
after compilation is complete. Design work therefore radically shifts
from the act of absolute placement to co-managing a flow. All elements remain relatively placed until the last pass has completed, and
while error messages, warnings and hyphenation decisions scroll by on
the command line, the sensation of elasticity is almost tangible. And
indeed, when the placement of a paragraph exceeds the acceptable ‘stretch' of the program, words literally break out of the grid (see the example on page 34).
5 See: Richard Sennett. The Craftsman. Allen Lane (Penguin Press), 2008
6 LATEX is a high-level markup language that was first developed by Leslie Lamport in 1985. Lamport is a computer scientist also known for his work on distributed systems and multi-threading algorithms.
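To make the comparison with CSS or HTML concrete, a minimal sketch of such a marked-up .tex file could look like the following (the content is invented for illustration); the pages themselves only come into existence once the file is compiled, for instance with pdflatex:

    \documentclass{book}
    \begin{document}
    \chapter{Tracks}
    % only the structure is declared here; line breaks, page breaks and
    % hyphenation are decided by TeX when the document is compiled
    A paragraph of body text flows onto whatever page it ends up on.
    \end{document}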
When I join Pierre to continue the work in January 2009, the
book is still far from finished. By now, we can produce those typical
academic-style documents with ease, but we still have not managed to
use our own fonts 7. Flipping back and forth in the many manuals and
handbooks that exist, we enjoy discovering a new culture. Though
we occasionally cringe at the paternalist humour that seems to have
infected every corner of the TEX community and which is clearly
inspired by witticisms of the founding father, Donald Knuth himself,
we experience how the lightweight, flexible document structure of
TEX allows for a less hierarchical and non-linear workflow, making
it easier to collaborate on a project. It is an exhilarating experience
to produce a lay-out in dialogue with a tool and the design process
takes on an almost rhythmical quality, iterative and incremental. It
also starts to dawn on us that this souplesse comes at a price.
“Users only need to learn a few easy-to-understand commands that
specify the logical structure of a document” promises The Not So
Short Introduction to LATEX. “They almost never need to tinker with
the actual layout of the document”. It explains why using LATEX
stops being easy-to-understand once you attempt to expand its strict
model of ‘book', ‘article' or ‘thesis': the ‘users' that LATEX addresses
are not designers and editors like us. At this point, we doubt whether
to give up or push through, and decide to set ourselves a limit of a week in which we should be able to tick off a minimal number of items from a list of essential design elements. Custom page size and headers, working with URLs ... each of these requires a separate ‘package'
that may or may not be compatible with another one. At the end of
the week, just when we start to regain confidence in the usability of
LATEX for our purpose, our document breaks beyond repair when we
try to use custom paper size with custom headers at the same time.
7 “Installing fonts in LATEX has the name of being a very hard task to accomplish. But it is nothing more than following instructions. However, the problem is that, first, the proper instructions have to be found and, second, the instructions then have to be read and understood”. http://www.ntg.nl/maps/29/13.pdf
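As an illustration of the package juggling described above – a sketch with invented values, not the settings we actually used – a LATEX preamble that combines a custom paper size, custom running headers and breakable URLs pulls in one package per wish, and it is exactly this kind of combination that proved fragile for us:

    \documentclass{book}
    \usepackage[paperwidth=17cm,paperheight=24cm,margin=2cm]{geometry} % custom page size
    \usepackage{fancyhdr}                                              % custom running headers
    \usepackage{url}                                                   % breakable URLs
    \pagestyle{fancy}
    \fancyhead[LE,RO]{Tracks in electr(on)ic fields} % header text at the outer edge of each page
    \begin{document}
    See \url{http://ospublish.constantvzw.org} for the OSP blog.
    \end{document}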
In February, more than six months into the process, we briefly consider switching to OpenOffice instead (which we had never tried for such a large publication) or going back to Scribus (which would mean, for Pierre, learning a new tool). Then we remember ConTEXt, a relatively young ‘macro package' that uses the TEX engine as well. “While
LATEX insulates the writer from typographical details, ConTEXt takes
a complementary approach by providing structured interfaces for handling typography, including extensive support for colors, backgrounds,
hyperlinks, presentations, figure-text integration, and conditional compilation” 8. This is what we have been looking for.
ConTEXt was developed in the 1990's by a Dutch company specialised in ‘Advanced Document Engineering'. They needed to produce complex educational materials and workplace manuals and came
up with their own interface to TEX. “The development was purely
driven by demand and configurability, and this meant that we could
optimize most workflows that involved text editing”. 9
However frustrating it is to re-learn yet another type of markup
(even if both are based on the same TEX language, most of the LATEX
commands do not work in ConTEXt and vice versa), many of the
things that we could only achieve by means of a ‘hack' in LATEX are
built in and readily available in ConTEXt. With the help of the
very active ConTEXt mailinglist we find a way to finally use our own
fonts, and while plenty of questions, bugs and dark areas remain, it feels like we are close to producing the kind of multilingual, multi-format,
multi-layered publication we imagine Tracks in Electr(on)ic fields to
be.
However, Pierre and I are working on different versions of Ubuntu,
respectively on a Mac and on a PC, and we soon discover that our
installations of ConTEXt produce different results. We can't find
a solution in the nerve-wrackingly incomplete, fragmented though
extensive documentation of ConTEXt and by June 2009, we still have
not managed to print the book. As time passes, we find it increasingly
difficult to allocate concentrated time for learning, and it is a humbling experience that acquiring some sort of fluency seems to pull us in all directions. The stretched-out nature of the process also feeds our insecurity: maybe we should have tried this package as well? Have we read that manual correctly? Have we read the right manual? Did we really understand those instructions? If we were computer scientists ourselves, would we know what to do? Paradoxically, the more we invest in this process, mentally and physically, the harder it is to let go. Are we refusing to see the limits of this tool or, even scarier, our own limitations? Can we accept that the experience we'd hoped for is a lot more banal than the sublime results we secretly expected? A fellow Constant member suggests in desperation: “You can't just make a book, can you?”
8 Interview with Hans Hagen http://www.tug.org/interviews/interview-files/hans-hagen.html
9 Interview with Hans Hagen http://www.tug.org/interviews/interview-files/hans-hagen.html
In July, Pierre decides to pay for a consultation with the developers of ConTEXt themselves, to solve once and for all some of the issues we continue to struggle with. We drive up expectantly to the
headquarters of Pragma in Hasselt (NL) and discuss our problems,
seated in the recently redecorated rooms of a former bank building.
Hans Hagen himself reinstalls markIV (the latest in ConTEXt) on the
machine of Pierre, while his colleague Ton Otten tours me through
samples of the colorful publications produced by Pragma. In the afternoon, Hans gathers up some code examples that could help us place
thumbnail images and before we know it we are on our way South
again. Our visit confirms the impression we had from the awkwardly
written manuals and peculiar syntax, that ConTEXt is in essence a
one-man mission. It is hard to imagine that a tool written to solve the particular problems of a certain document engineer will ever grow into the kind of tool that we too desire.
In August, as I type up this report, the book is more or less ready
to go to print. Although it looks ‘handsome' according to some, due to unexpected bugs and time constraints we have had to let go of
some of the features we hoped to implement. Looking at it now, just
before going to print, it has certainly not turned out to be the kind of
eye-opening typographic experience we dreamt of and sadly, we will
never know whether that is due to our own limited understanding
of TEX, LATEX and ConTEXt, to the inherent limits of those tools
themselves, or to the crude decision to finally force through a lay-out
in two weeks. It is probably a mix of all of the above; first of all, it is a relief that the publication finally exists. Looking back at the process, I
am reminded of the wise words of Joseph Weizenbaum, who observed
that “Only rarely, if indeed ever, are a tool and an altogether original
job it is to do, invented together” 10.
While this book nearly crumbled under the weight of the projections it had to carry, I often thought that outside academic publishing, the power of TEX is much like a Fata Morgana. Mesmerizing
and always out of reach, TEX continues to represent a promise of an
alternative technological landscape that keeps our dream of changing
software habits alive.
10 Joseph Weizenbaum. Computer power and human reason: from judgment to calculation. MIT, 1976
Colophon
Tracks in electr(on)ic fields is a publication of Constant, Association for Art
and Media, Brussels.
Translations: Steven Tallon, Anne Smolar, Yves Poliart, Emma Sidgwick
Copy editing: Emma Sidgwick, Femke Snelting, Wendy Van Wynsberghe
English editing and translations: Sophie Burm
Design: Pierre Huyghebaert, Femke Snelting (OSP)
Photos, unless otherwise noted: Constant (Peter Westenberg). figure 5-9: Marc
Wathieu, figure 31-96: Constant (Christina Clar, video stills), figure 102-104:
Leiff Elgren, CM von Hausswolff, figure 107-116: Manu Luksch, figure A-Q:
elpueblodechina, figure 151 + 152: Pierre Huyghebaert, figure 155: Cornelius
Cardew, figure 160-162: Scratch Orchestra, figure 153 + 154: Michael E. Emrick
(Courtesy of Ben Looker), figure 156-157 + 159: photographer unknown, figure
158: David Griffiths, pages 19, 25, 35, 77 and 139: public domain or unknown.
This book was produced in ConTEXt, based on the TEX typesetting engine, and
other Free Software (OpenOffice, Gimp, Inkscape). For a written account of
the production process see The Making Of on page 323.
Printing: Drukkerij Geers Offset, Gent
figure 148 De Vlaamse Minister van Cultuur,
Jeugd, Sport en Brussel
figure 149 De Vlaamse Gemeenschapscommissie
Constant
Conversations
2015
This book documents an ongoing dialogue between developers and designers involved in the wider ecosystem of Libre
Graphics. Its lengthy title, I think that conversations are the
best, biggest thing that Free Software has to offer its user, is taken
from an interview with Debian developer Asheesh Laroia, Just
ask and that will be that, included in this publication. His remark points at the difference that Free Software can make when
users are invited to consider, interrogate and discuss not only
the technical details of software, but its concepts and histories
as well.
Conversations documents discussions about tools and practices
for typography, layout and image processing that stretch out
over a period of more than eight years. The questions and answers were recorded in the margins of events such as the yearly
Libre Graphics Meeting, the Libre Graphics Research Unit,
a two-year collaboration between Medialab Prado in Madrid,
Worm in Rotterdam, Piksel in Bergen and Constant in Brussels,
or as part of documenting the work process of the Brussels’
design team OSP. Participants in these intersecting events and
organisations constitute the various instances of ‘we’ and ‘I’ that
you will discover throughout this book.
The transcriptions are loosely organised around three themes:
tools, communities and design. At the same time, I invite you
to read Conversations as a chronology of growing up in Libre
Graphics, a portrait of a community gradually grasping the interdependencies between Free Software and design practice.
Femke Snelting
Brussels, December 2014
Introduction
A user should not be able to shoot himself in the foot
I think the ideas behind it are beautiful in my mind
We will get to know the machine and we will understand
ConTeXt and the ballistics of design
Meaningful transformations
Tools for a Read Write World
Etat des Lieux
Distributed Version Control
Even when you are done, you are not done
Having the tools is just the beginning
Data analysis as a discourse
Why you should own the beer company you design for
Just Ask and That Will Be That
Tying the story to data
Unicodes
If the design thinking is correct, the tools should be irrelevant
You need to copy to understand
What’s the thinking here
The construction of a book (Aether9)
Performing Libre Graphics
Computational concepts, their technological language and the hybridisation of creative practice have been successfully explored in Media Arts for a
few decades now. Digital was a narrative, a tool and a concept, an aesthetic
and political playground of sorts. These experiments created a notion of
the digital artisan and creative technologist on the one hand and enabled
a new view of intellectual property on the other. They widened a pathway
to participation, collaboration and co-creation in creative software development, looking critically at the software as cultural production as well as
technological advance.
This book documents conversations between artists, typographers, designers, developers and software engineers involved in Libre Graphics, an independent, self-organised, international community revolving around Free,
Libre, Open Source software (F/LOSS). Libre Graphics resembles the Media Arts community of the late twentieth century, insofar as it uses software as a departure point for the creative exploration of design practice. In
some cases it adopts software development processes and applies them to
graphic design, using version control and platforms such as GitHub, but it
also banks on a paradigm shift that Free Software offers – an active engagement with software to bend it, fork it, reshape it – and in doing so it establishes conversations with a developer community that have not taken place before.
This pathway was, however, at moments full of tension, created by diverging views on what the development process entails and what it might
mean. The conversations brought together in this book resulted from the
need to discuss those complex issues and to address the differences and similarities between design, design production, Free Culture and software development. As in theatre, where it is said that conflict drives the plot forward,
so it does here. It makes us think harder about the ethics of our practices
while we develop tools and technologies for the benefit of all.
The Libre Graphics Meeting (LGM) was brought to my attention in
2012 as an interesting example of dialogue between creative types and developers. The event had been running since 2006 and was originally conceived as an annual gathering for discussions about Free and Open Source software used in graphics. At the time I had been teaching at the University of Westminster for nearly ten years. The subject was computers, arts and design, and it took
a variety of forms; sometimes focused on graphic design, sometimes on
contemporary media practice, interaction design, software design and mysterious hypermedia. F/LOSS had been part of my artistic practice for many years,
but its inclusion in UK Higher Education was a real challenge. My frustration with difficult computer departments grew exponentially year by year, and LGM looked like a place to visit and get much-needed support.
Super fast-forward to Madrid in April 2013: I landed. Little did I know
that this journey would change everything. Firstly, the wonderfully diverse
group of people present: artists, designers, software developers, typographers, interface designers, more software developers! It was very exciting
listening to talks, overhearing conversations in breaks, observing group discussions and slowly engaging with the Libre Graphics community. Being there to witness how far the F/LOSS community had come was so heartwarming and uplifting that my enthusiasm soared.
The main reason for my attendance at the Madrid LGM was to join
the launch of a network of Free Culture aware educators in art, music and
design education. 1 Aymeric Mansoux and his colleagues from the Willem
De Kooning Academie and the Piet Zwart Institute in Rotterdam convened
the first-ever meeting of the network, with the aim of mapping out a landscape of current educational efforts as well as sharing experiences. I was aware of
Aymeric’s efforts through his activities with GOTO10 and the FLOSS+Art
book 2 that they published a couple of years before we finally met. Free
Culture was deeply embedded in his artistic and educational practice, and it
was really good to have someone like him set the course of discussion.
Lo and behold, the conversation started – we sat in a big circle in the
middle of Medialab Prado. The introduction round began, and I thought:
there are so many people using F/LOSS in their teaching! Short courses,
long courses, BA courses, MA courses, summer schools, all sorts! There
were so many solutions presented for overcoming institutional barricades,
Adobe marriages and Apple hostages. Individual efforts and group efforts,
long term and short, a whole world of conventional curriculums as well as
a variety of educational experiments were presented. Just sitting there, listening to shared troubles and achievements was enough to give me a new surge of energy to explore new strategies for engaging BA-level students with F/LOSS tools and communities.
Taking part in LGM 2013 was a useful experience that has informed
my art and educational practice since. It was clear from the gathering that
F/LOSS is not a ghetto for idealists and techno-fetishists – it was ready for an average user, it was ready for a specialist user, it was ready for all and, what is most important, the communication lines were open. Given that Linux distributions extend the life of a computer by at least ten years, in combination with the likes of Libre Graphics, Open Video and a plethora of other F/LOSS software, the benefits are manifold, important for all and not to be ignored by any form of creative practice worldwide.
1 http://eightycolumn.net/
2 Aymeric Mansoux and Marloes de Valk. FLOSS+Art. OpenMute, 2008. http://things.bleu255.com/floss-art
Libre Graphics seems to offer a very exciting transformation of graphic design practice through the implementation of F/LOSS software development and production processes. It is a hybridisation across these often separated fields of practice, one that takes into consideration openness and the freedom to create, copy, manipulate and distribute, while contributing to the development of visual communication itself. All this may give a new lease of life to an over-commercialised graphic design practice, banalised by mainstream culture.
This book brings together reflections on collaboration and co-creation
in graphic design, typography and desktop publishing, but also on gender issues and inclusion in the Libre Graphics community. It offers a paradigm
shift, supported by historical research into graphic and type design practice,
that creates strong arguments to re-engage with the tools of production.
The conversations conducted give an overview of a variety of practices and
experiences which show the need for more conversations and which can help
educate designers and developers alike. The book gives detailed descriptions of the design processes, productions and potential trade-offs encountered when engaging in software design and development while producing designed artefacts. It points
to the importance of transparent software development, breaking stereotypes and establishing a new image of the designer-developer combo, a fresh
perspective of mutual respect between disciplines and a desire to engage in
exchange of knowledge that is beneficial beyond what any proprietary software could ever be.
Larisa Blazic is a media artist living and working in London. Her interests range from
creative collaborations to intersections between video art and architecture. As senior lecturer
at the Faculty of Media, Arts and Design of the University of Westminster, she is currently
developing a master’s program on F/LOSS art & design.
While in the background participants of the Libre Graphics
Meeting 2007 start saying goodbye to each other, Andreas
Vox makes time to sit down with us to talk about Scribus,
the Open Source application for professional page layout.
The software is significant not only to its users who design with it, but also because Scribus helps us think about
links between software, Free Culture and design. Andreas
is a mathematician with an interest in system dynamics,
who lives and works in Lübeck, Germany. Together with
Franz Schmid, Petr Vanek (subik), Riku Leino (Tsoots),
Oleksandr Moskalenko (malex), Craig Bradney (MrB), Jean
Ghali and Peter Linnel (mrdocs) he forms the core Scribus
developer team. He has been working on Scribus since
2003 and is currently responsible for redesigning the internal workings of its text layout system.
This weekend Peter Linnel presented, amongst many other new Scribus features 1 , ‘The Color Wheel', which at the click of a button visualises documents the way they would be perceived by a colour-blind person. Can you explain how such a
feature entered into Scribus? Did you for example speak to accessibility experts?
I don’t think we did. The code was implemented by subik 2 , a developer
from the Czech Republic. As far as I know, he saw a feature somewhere else
or he found an article about how to do this kind of stuff, and I don’t know
where he did it, but I would have to ask him. It was a logical extension of the
colour wheel functionality, because if you pick different colours, they look
different to all people. What looks like red and green to one person, might
look like grey and yellow to other persons. Later on we just extended the
code to apply to the whole canvas.
1 http://wiki.scribus.net/index.php/Version_1.3.4%2B-New_Features
2 Petr Vanek
It is quite special to offer such a precise preview of different perspectives in your
software. Do you think it is particular to Scribus to pay attention to these kinds of things?
Yeah, sure. Well, the interesting thing is ... in Scribus we are not depending
on money and time like other proprietary packages. We can ask ourselves:
Is this useful? Would I have fun implementing it? Am I interested in seeing
how it works? So if there is something we would like to see, we implement
it and look at it. And because we have a good contact with our user base,
we can also pick up good ideas from them.
There clearly is a strong connection between Scribus and the world of prepress
and print. So, for us as users, it is an almost hallucinating experience that while on the one hand the software is very well developed when it comes to .pdf export, for example – I would say even more developed than other applications – it is still not possible to undo a text edit. Could you maybe explain how such a
discrepancy can happen, to make us understand better?
One reason is that there are more developers working on the project, and even if there were only one developer, he or she would have their own
interests. Remember what George Williams said about FontForge ... 3 he is
not that interested in nice Graphical User Interfaces, he just makes his own
functionality ... that is what interests him. So unless someone else comes
up who compensates for this, he will stick to what he likes. I think that
is the case with all Open Source applications. Only if you have someone
interested and able to do just this certain thing, it will happen. And if it
is something boring or something else ... it will probably not happen. One
way to balance this, is to keep in touch with real users, and to listen to
the problems they have. At least for the Scribus team, if we see people
complaining a lot about a certain feature missing ... we will at some point
say: come on, let’s do something about it. We would implement a solution and
when we get thanks from them and make them happy, that is always nice.
Can you tell us a bit more about the reasons for putting all this work into
developing Scribus, because a layout application is quite a complex monster with
all the elements that need to work together ... Why is it important, do you find, to develop Scribus?
3 I think the ideas behind it are beautiful in my mind
I used to joke about the special mental state you need to become a Scribus developer ... and one part of it is probably megalomania! It is a kind of mountain climbing. We just want to do it, to prove it can be done. That must
have been also true for Franz Schmid, our founder, because at that time,
when he started, it was very unlikely that he would succeed. And of course
once you have some feedback, you start to think: hey, I can do it ... it works.
People can use it, people can print with it, do things ... so why not make it even
better? Now we are following InDesign and QuarkXpress, and we are playing
the top league of page layout applications ... we’re kind of in a competition
with them. It is like climbing a mountain and then seeing the next, higher
mountain from the top.
In what way is it important to you that Scribus is Free Software?
Well ... it would not work with closed software. Open software allows you to
get other people involved who are also interested in working on the project,
so you can work together. With closed software you usually have to pay
people; I would only work because someone else wants me to do it and
we would not be as motivated. It is totally different. If it was closed, it
would not be fun. In Germany they studied what motivates Open Source
developers, and they usually list: ‘fun’; they want to do something more
challenging than at work, and some social stuff is mentioned as well. Of
course it is not money.
One of the reasons the Scribus project seems so important to us, is that it might
draw in other kinds of users, and open up the world of professional publishing to
people who can otherwise not afford proprietary packages. Do you think Scribus
will change the way publishing works? Does that motivate you, when you work
on it?
I think the success of Open Source projects will also change the way people
use software. But I do not think it is possible to foresee or plan in what way this will change. We see right now that Scribus is adopted by all kinds of idealists, who think this is interesting, let's try how far we can go, and do it like that. There are other users who really just do not have the money to pay for a professional page layout application, such as very small newspaper associations, sports groups and church groups. They use Scribus because
otherwise they would have used a pirated copy of some other software, or
another application which is not up to that task, such as a normal word processor. Or otherwise they would have used a deficient application like MS
Publisher to do it. I think what Scribus will change, is that more people
will be exposed to page layout, and that is a good thing, I think.
In another interview with the Scribus team 4 , Craig Bradney speaks about the
fact that the software is often compared with its proprietary competition. He
brings up the ‘Scribus way of doing things’. What do you think is ‘The Scribus
Way’?
I don’t think Craig meant it that way. Our goal is to produce good output,
and make that easy for users. If we are in doubt, we think for example:
InDesign does this in quite an OK way, so we try to do it in a similar way;
we do not have any problems with that. On the other hand ... I told you a
bit about climbing mountains ... We cannot go from the one top to the next
one just in one step. We have to move slowly, and have to find our ways and
move through valleys and that sometimes also limits us. I can say: I want it
this way but then it is not possible now, it might be on the roadmap, but we
might have to do other things first.
When we used Scribus, we actually thought we were experiencing ‘The Scribus Way' through how it differs from other layout packages. First of all, in Scribus there is a lot more attention to everything that happens after the layout is done, i.e. export, error checking etc., and second, working with the text editor is clearly the preferred way of doing layout. For us it links the software to a more classic way of doing design: a strictly phased process where a designer starts with
writing typographic instructions which are carried out by a typesetter, after which
the designer pastes everything into the mock-up. In short: it seems easier to do a
magazine in Scribus, than a poster. Do you recognize that image?
That is an interesting thought, I have never seen it that way before. My
background is that I did do a newspaper, magazine for a student group, and
we were using PageMaker, and of course that influenced me. In a small
group that just wants to bring out a magazine, you distribute the task of
writing some articles, and usually you have only one or two persons who are
capable of using a page layout application. They pull in the stories and make
some corrections, and then do the layout. Of course that is a work flow I am
4
familiar with, and I don’t think we really have poster designers or graphic
artists in the team. On the other hand ... we do ask our users what they
think should be possible with Scribus and if a functionality is not there, we
ask them to put in a bug report so we do not forget it and some time later
we will pick it up and implement it. Especially the possibility to edit from the canvas – this will improve in the upcoming versions.
Some things we just copied from other applications. I think Franz 5 had no
previous experience with PageMaker, so when I came to Scribus, and saw
how it handled text chains, I was totally dismayed and made some changes
right away because I really wanted it to work the way it works in PageMaker,
that is really nice. So, previous experience and copying from other applications was one part of the development. Another thing is just technical
problems. Scribus is at the moment internally not that well designed, so we
first have to rewrite a lot of code to be able to reach some elements. The
coding structure for drawing and layout was really cumbersome inside and
it was difficult to improve. We worked with 2,500 lines of code, and there
were no comments in between. So we broke it down in several elements,
put some comments in and also asked Franz: why did you did this or that, so
we could put some structure back into the code to understand how it works.
There is still a lot of work to be done, and we hope we can reach a state
where we can implement new stuff more easily.
It is interesting how the 2,500 lines of code are really tangible when you use
Scribus old-style, even without actually seeing them. When Peter Linnel was
explaining how to make the application comply with the conservative standards of
the printing business, he used this term ‘self-defensive code’ ...
At Scribus we have a value that a file should never break in a print shop.
Any bug report we receive in this area is treated with first priority.
We can speak from experience that this is really true! But this robustness shifts out of sight when you use the inbuilt script function; then it is as if you come into the software through the backdoor. From self-defence to the heart of the
application?
It is not really self-defence ... programmers and software developers sometimes use the expression: ‘a user should not shoot himself in the foot’.
5 Schmid
Scribus will not protect you from ugly layout, if that would be possible at
all! Although I do sometimes take deliberate decisions to try and do it ...
for example that for as long as I am around, I will not make an option to
do ‘automatic letter spacing’, because I think it is just ugly. If you do it
manually, that is your responsibility; I just do not feel like making anything
like that work automatically. What we have no problem with is preventing you from making invalid output. If Scribus thinks a certain font is not OK,
and it might break on one or two types of printers ... this is reason enough
for us to make sure this font is not used. The font is not even used partially,
it is gone. That is the kind of self-defence Peter Linnel was talking about.
It is also how we build .pdf files and PostScript. Some ways of building
PostScript take less storage, some would be easier for humans to read, but we always take an approach that would be the least problematic in a print shop. This meant, for example, that you could not search in a .pdf. 6 I think you can do that now, but there are still limitations; it is on the roadmap to improve over time, to even add an option to output a web-oriented .pdf and a print-oriented .pdf ... but an important value in Scribus is to get the output right. To prevent people from really shooting themselves in the foot.
Our last question is about the relation between the content that is laid out
in Scribus, and the fact that it is an Open Source project. Just as an example,
Microsoft Word will come out with an option to make it easy to save a document
with a Creative Commons License 7 . Would this, or not, be an interesting option
to add to Scribus? Would you be interested in making that connection, between
software and content?
It could well be that we would copy that, if it has not already been patented by
Microsoft! To me it sounds a bit like a marketing trick ... because it is such
an easy function to do. But, if someone from Creative Commons would ask
for this function, I think someone would implement it for Scribus in a short
time, and I think we would actually like it. Maybe we would generalize it a little, so that, for example, you could also add other licenses. We already
have support for some metadata, and in the future we might put some more functionality in to support license management, for example also for fonts.
6 because the fonts get outlined and/or reencoded
7 http://creativecommons.org/press-releases/entry/5947
About the relation between content and Open Source software in general
... there are some groups using Scribus with whom I do not really identify politically. Or more or less not at all. If I meet those people on the IRC
chat, I try to be very neutral, but I of course have my own thoughts in the
back of my head.
Do you think using a tool like Scribus produces a certain kind of use?
No. Preferences for work tools and political preference are really orthogonal,
and we have both. For example, some right-wing people could enjoy using Scribus, and socialist groups as well. It is probably the
best for Scribus to keep that stuff out of it. I am not even sure about the
political conviction of the other developers. Usually we get along very well,
but we don’t talk about those kinds of things very much. In that sense I
don’t think that using Scribus will influence what is happening with it.
As a tool, because it makes creating good page layouts much easier, it will
probably change the landscape because a lot of people get exposed to page
layout and they learn and teach other people; and I think that is growing,
and I hope it will be growing faster than if it is all left to big players like
InDesign and Quark ... I think this will improve and it will maybe also
change the demands that users will make for our application. If you do page
layout, you get into a new frame of mind ... you look in a different way at
publications. It is less content-oriented, but more layout-oriented. You will
pick something up and it will spread. People by now have understood that
it is not such a good idea to use twelve different fonts in one text ... and I
think that knowledge about better page layout will also spread.
When we came to the Libre Graphics Meeting
for the first time in 2007, we recorded this rare
conversation with George Williams, developer of
FontForge, the editing tool for fonts. We spoke
about Shakespeare, Unicode, the pleasure of making beautiful things, and pottery.
We‘re doing these interviews, as we’re working as designers on Open Source
OK.
With Open Source tools, as typographers, but often when we speak to
developers they say well, tell me what you want, or they see our interest in
what they are doing as a kind of feature request or bug report.
(laughs) Yes.
Of course it’s clear that that’s the way it often works, but for us it’s also
interesting to think about these tools as really tools, as ways of shaping
work, to try and understand how they are made or who is making them.
It can help us make other things. So this is actually what we want to talk
about. To try and understand a bit about how you’ve been working on
FontForge. Because that’s the project you’re working on.
OK.
And how that connects to other ideas of tools or tools’ shape that you
make. These kind of things. So maybe first it’s good to talk about what
it is that you make.
OK. Well ... FontForge is a font editor.
I started playing with fonts when I bought my first Macintosh, back in the
early eighties (actually it was the mid-eighties) and my father studied textual bibliography and looked at the ways the printing technology of the
Renaissance affected the publication of Shakespeare’s works. And what that
meant about the errors in the compositions we see in the copies we have
left from the Renaissance. So my father was very interested in Renaissance
printing (and has written books on this subject) and somehow that meant
23
that I was interested in fonts. I’m not quite sure how that connection happened, but it did. So I was interested in fonts. And there was this program
that came out in the eighties called Fontographer which allowed you to create PostScript 1 and later TrueType 2 fonts. And I loved it. And I made lots
of calligraphic fonts with it.
You were ... like 20?
I was 20~30. Let's see, I was born in 1959, so in the eighties I was in my
twenties mostly. And then Fontographer was bought up by Macromedia 3
who had no interest in it. They wanted FreeHand 4 which was done by
the same company. So they dropped Fon ... well they continued to sell
Fontographer but they didn’t update it. And then OpenType 5 came out and
Unicode 6 came out and Fontographer didn’t do this right and it didn’t do
that right ... And I started making my own fonts, and I used Fontographer
to provide the basis, and I started writing scripts that would add accents to
latin letters and so on. And figured out the Type1 7 format so that I could
decompose it — decompose the Fontographer output so that I could add
1 PostScript fonts are outline font specifications developed by Adobe Systems for professional digital typesetting, which uses PostScript file format to encode font information. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
2 TrueType is an outline font standard developed by Apple and Microsoft in the late 1980s as a competitor to Adobe's Type 1 fonts used in PostScript. Wikipedia. TrueType — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
3 Macromedia was an American graphics, multimedia and web development software company (1992–2005). Its rival, Adobe Systems, acquired Macromedia on December 3, 2005. Wikipedia. Macromedia — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
4 Adobe FreeHand (formerly Macromedia Freehand) is a computer application for creating two-dimensional vector graphics. Adobe discontinued development and updates to the program. Wikipedia. Adobe FreeHand — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
5 OpenType is a format for scalable computer fonts. It was built on its predecessor TrueType, retaining TrueType's basic structure and adding many intricate data structures for prescribing typographic behavior. Wikipedia. OpenType — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
6 Unicode is a computing industry standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. Wikipedia. Unicode — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
7 Type 1 is a font format for single-byte digital fonts for use with Adobe Type Manager software and with PostScript printers. It can support font hinting. It was originally a proprietary specification, but Adobe released the specification to third-party font manufacturers provided that all Type 1 fonts adhere to it. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
my own things to it. And then Fontographer didn’t do Type0 8 PostScript
fonts, so I figured that out.
And about this time, the little company I was working for, a tiny little
startup — we wrote a web HTML editor — where you could sit at your
desk and edit pages on the web — it was before FrontPage 9 , but similar to
FrontPage. And we were bought by AOL and then we were destroyed by
AOL, but we had stock options from AOL and they went through the roof.
So ... in the late nineties I quit. And I didn’t have to work.
And I went off to Madagascar for a while to see if I wanted to be a primatologist. And ... I didn't. There were too many leeches in the rainforest.
(laughs)
So I came back, and I wrote a font editor instead.
And I put it up on the web in late '99, and within a month someone
gave me a bug report and was using it.
(laughs) So it took a month
Well, you know, there was no advertisement, it was just there, and someone
found it and that was neat!
(laughs)
And that was called PfaEdit (because when it began it only did PostScript)
and I ... it just grew. And then — I don’t know — three, four, five years ago
someone pointed out that PfaEdit wasn’t really appropriate any more, so I
asked various users what would be a good name and a French guy said How
’bout FontForge? So. It became FontForge then. — That’s a much better
name than PfaEdit.
(laughs)
Used it ever since.
But your background ... you talked about your father studying ...
8 Type 0 is a ‘composite' font format. A composite font is composed of a high-level font that references multiple descendent fonts. Wikipedia. PostScript fonts — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
9 Microsoft FrontPage is a WYSIWYG HTML editor and Web site administration tool from Microsoft, discontinued in December 2006. Wikipedia. Microsoft FrontPage — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
I grew up in a household where Shakespeare was quoted at me every day,
and he was an English teacher, still is an English teacher, well, obviously
retired but he still occasionally teaches, and has been working for about 30
years on one of those versions of Shakespeare where you have two lines of
Shakespeare text at the top and the rest of the page is footnotes. And I went
completely differently and became a mathematician and computer scientist
and worked in those areas for almost twenty years and then went off and
tried to do my own things.
So how did you become a mathematician?
(pause) I just liked it.
(laughs) just liked it
I was good at it. I got pushed ahead in high school. It just never occurred
to me that I’d do anything else — until I met a computer. And then I still
did maths because I didn’t think computers were — appropriate — or — I
was a snob. How about that.
(laughs)
But I spent all my time working on computers as I went through university.
And then got my first job at JPL 10 and shortly thereafter the shuttle 11
blew up and we had some — some of our experiments — my little group
— flew on the shuttle and some of them flew on an airplane which went over the US and took special radar pictures of the US. We also took special radar
pictures of the world from the shuttle (SIR-A, SIR-B, SIR-C). And then
our airplane burned up. And JPL was not a very happy place to work after
that. So then I went to a little company with some college friends of mine,
that they’d started, created compilers and debuggers — do you know what
those are?
Mm-hmm.
And I worked a long time on that, and then the internet came out and found
another little company with some friends — and worked on HTML.
10 Jet Propulsion Laboratory
11 The Space Shuttle Challenger disaster occurred on January 28, 1986, when the NASA Space Shuttle orbiter Challenger broke apart 73 seconds into its flight, leading to the deaths of its seven crew members. Wikipedia. Space Shuttle Challenger disaster — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
So when, before we moved, I was curious about, I wanted you to talk
about a Shakespearian influence on your interest in fonts. But on the
other hand you talk about working in a company where you did HTML
editors at the time you actually started, I think. So do you think that
is somehow present ... the web is somehow present in your — in how
FontForge works? Or how fonts work or how you think about fonts?
I don’t think the web had much to do with my — well, that’s not true.
OK, when I was working on the HTML editor, at the time, mid-90s, there
weren’t any Unicode fonts, and so part of the reason I was writing all these
scripts to add accents and get Type0 support in PostScript (which is what
you need for a Unicode font) was because I needed a Unicode font for our
HTML product.
To that extent — yes-s-s-s.
It had an effect. Aside from that, not really.
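(As a rough illustration only, not the scripts described here: a comparable batch job could be written today against FontForge's own Python interface. The file name and glyph list below are placeholders, and the base letters and accents are assumed to exist in the font already.)

    import fontforge

    # Open a working font and build a few accented glyphs from the base
    # letters and combining accents already present in it.
    font = fontforge.open("myfont.sfd")
    for name in ("eacute", "agrave", "ntilde"):
        glyph = font.createChar(fontforge.unicodeFromName(name), name)
        glyph.build()    # insert references to the base letter and accent
    # Write out an OpenType file able to carry a larger, Unicode glyph set.
    font.generate("myfont.otf")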
The web has certainly allowed me to distribute it. Without the web I doubt
anyone would know — I wouldn’t have any idea how to ‘market’ it. If that’s
the right word for something that doesn’t get paid for. And certainly the
web has provided a convenient infrastructure to do the documentation in.
But — as for font design itself — that (the web) has certainly not affected
me.
Maybe with this creative commons talk that Jon Phillips was giving, there
may be, at some point, a button that you can press to upload your fonts to
the Open Font Library 12 — but I haven’t gotten there yet, so I don’t want
to promise that.
(laughs) But no, indeed there was – hearing you speak about ccHost 13 –
that’s the ...
Mm-hmm.
... Software we are talking about?
That’s what the Open Font Library uses, yes.
12 Open Font Library is a project devoted to the hosting and encouraged creation of fonts released under Free Licenses. Wikipedia. Open Font Library — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
13 ccHost is a web-based media hosting engine upon which Creative Commons' ccMixter remix web community is built. Wikipedia. CcHost — Wikipedia, The Free Encyclopedia, 2012. [Online; accessed 18.12.2014]
Yeah. And a connection to FontForge could change the way, not only
how you distribute fonts, but also how you design fonts.
It — it might. I don’t know ... I don’t have a view of the future.
I guess to some extent, obviously font design has been affected by requiring
it (the font) to be displayed on a small screen with a low resolution display.
And there are all kinds of hacks in modern fonts formats for dealing with
low resolution stuff. PostScript calls them hints and TrueType calls them
instructions. They are different approaches to the same thing. But that,
that certainly has affected font design in the last — well since PostScript
came out.
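(A minimal sketch, not from the conversation itself: both kinds of low-resolution help can be generated automatically through FontForge's Python scripting, which comes up again later in this interview. The file names are placeholders.)

    import fontforge

    font = fontforge.open("myfont.sfd")
    for glyph in font.glyphs():
        glyph.autoHint()     # PostScript-style hints
        glyph.autoInstr()    # TrueType-style instructions
    # .ttf carries the instructions; a CFF-flavoured .otf would carry the hints.
    font.generate("myfont-hinted.ttf")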
The web itself? I don’t think that has yet been a significant influence on
font design, but then — I’m no longer a designer. I discovered I was much
better at designing font editors than at designing fonts.
So I’ve given up on that aspect of things.
Mm-K, because I'm curious about your making a division between being a designer, or being a font-editor-maker, because for me, by that same definition of maker, these two things might be very related.
Well they are. And I only got in to doing it because the tools that were
available to me were not adequate. But I have found since — that I’m
not adequate at doing the design, there are many people who are better at
designing — designing fonts, than I am. And I like to design fonts, but I
have made some very ugly ones at times.
And so I think I will — I’ll do that occasionally, but that’s not where I’m
going to make a mark.
Mostly now —
I just don’t have the —
The font editor itself takes up so much of time that I don’t have the energy,
the enthusiasm, or anything like that to devote to another major creative
project. And designing a font is a major creative project.
Well, can we talk about the major creative project of designing a font
editor? I mean, because I’m curious how — how that is a creative project
for you — how you look at that.
I look at it as a puzzle. And someone comes up to me with a problem, and I
try and figure out how to solve it. And sometimes I don’t want to figure out
how to solve it. But I feel I should anyway. And sometimes I don’t want to
figure out how to solve it and I don’t.
That’s one of the glories of being one’s own boss, you don’t have to do
everything that you are asked.
But — to me — it’s just a problem. And it’s a fascinating problem. But
why is it fascinating? — That’s just me. No one else, probably, finds
it fascinating. Or — the guys who design FontLab probably also find it
fascinating, there are two or three other font design programs in the world.
And they would also find it fascinating.
Can you give an example of something you would find fascinating?
Well. Dave Crossland who was sitting behind me at the end was talking
to me today — he sat down — we started talking after lunch but on the
way up the stairs — at first he was complaining that FontForge isn’t written
with a standard widget set. So it looks different from everything else. And
yes, it does. And I don’t care. Because this isn’t something which interests
me.
On the other hand he was saying that what he also wanted was a paragraph
level display of the font. So that as he made changes in the font he could
see a ripple effect in the paragraph.
Now I have a thing which does a word level display, but it doesn't do multi-lines. Or it does multi-lines if you are doing Japanese (vertical writing mode)
but it doesn’t do multi-columns then. So it’s either one vertical row or one
horizontal row of glyphs.
And I do also have a paragraph level display, but it is static. You bring
it up and it takes the current snapshot of the font and it generates a real
TrueType font and passes it off to the X Window 14 rasterizer — passes it off
to the standard Linux toolchain (FreeType) as that static font and asks that
toolchain to display text.
So what he’s saying is OK, do that, but update the font that you pass off every
now and then. And Yeah, that’d be interesting to do. That’s an interesting project
to work on. Much more interesting than changing my widget set which is
just a lot of work and tedious. Because there is nothing to think about.
It’s just OK, I’ve got to use this widget instead of my widget. My widget does
14 The X Window System is a windowing system for bitmap displays, common on UNIX-like computer operating systems. X provides the basic framework for a GUI environment: drawing and moving windows on the display device and interacting with a mouse and keyboard. Wikipedia. X Window System — Wikipedia, The Free Encyclopedia, 2014. [Online; accessed 18.12.2014]
exactly what I want — because I designed it that way — how do I make this
thing, which I didn’t design, which I don’t know anything about, do exactly
what I want?
And — that’s dull. For me.
Yeah, well.
Dave, on the other hand, is very hopeful that he’ll find some poor fool
who’ll take that on as a wonderful opportunity. And if he does, that would
be great, because not having a standard widget set is one of the biggest
complaints people have. Because FontForge doesn’t look like anything else.
And people say Well the grey background is very scary. 15
I thought it was normal to have a grey background, but uh ... that’s why we
now have a white background. A white background may be equally scary,
but no one has complained about it yet.
Try red.
I tried light blue and cream. One of them I was told gave people migraines
— I don’t remember specifically what the comment was about the light
blue, but
(someone from inkscape): Make it configurable.
Oh, it is configurable, but no one configures it.
(someone from inkscape): Yeah, I know.
So ...
So, you talked about spending a lot of time on this project, how does that
work, you get up in the morning and start working on FontForge? Or ...
Well, I do many things. Some mornings, yes, I get up in the morning and I
start working on FontForge and I cook breakfast in the background and eat
breakfast and work on FontForge. Some mornings I get up at four in the
morning and go out running for a couple of hours and come back home and
sort of collapse and eat a little bit and go off to yoga class and do a pilates
class and do another yoga class and then go to my pottery class, and go to
the farmers’ market and come home and I haven’t worked on FontForge at
all. So it varies according to the day. But yes I ...
15 It used to have a grey background, now it has a white background.
There was a period where I was spending 40, 50 hours a week working
on FontForge, I don’t spend that much time on it now, it’s more like 20
hours, though the last month I got all excited about the release that I put
out last Tuesday — today is Sunday. And so I was working really hard —
probably got up to — oh — 30 hours some of that time. I was really excited
about the change. All kinds of things were different — I put in Python
scripting, which people had been asking for — well, I’m glad I’ve done it,
but it was actually kind of boring, that bit — the stuff that came before was
— fascinating.
Like?
I — are you familiar with the OpenType spec? No. OK. The way you ...
the way you specify ligatures and kerning in OpenType can be looked at at
several different levels. And the way OpenType wants you to look at it, I
felt, was unnecessarily complicated. So I didn’t look at it at that level. And
then after about 5 years of looking at it that way I discovered that the reason
I thought it was unnecessarily complicated was because I was only used to
Latin or Cyrillic or Greek text, and for Latin, Cyrillic or Greek, it probably
is unnecessarily complicated. But for Indic scripts it is not unnecessarily
complicated, and you need all those things. So I ripped out all of the code
for specifying strange glyph conversions. You know in Arabic a character
looks different at the beginning of a word and so on? So that’s also handled
in this area. And I ripped all that stuff out and redid it in the way that
OpenType wanted it to be done and not the somewhat simplified but not
sufficiently powerful method that I’d been using up until then.
And that I found, quite fascinating.
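(Again only a sketch, not the code being described: seen from FontForge's Python scripting, the OpenType way of doing things means declaring a lookup for a feature, giving it a subtable, and attaching substitutions to glyphs. The lookup, subtable and file names below are invented, and the ligature glyph would still need an actual outline.)

    import fontforge

    font = fontforge.open("myfont.sfd")
    # A GSUB ligature lookup registered for the 'liga' feature,
    # Latin script, default language.
    font.addLookup("ligatures", "gsub_ligature", (),
                   (("liga", (("latn", ("dflt",)),)),))
    font.addLookupSubtable("ligatures", "ligatures-1")
    # An unencoded ligature glyph substituted for the sequence f + i.
    fi = font.createChar(-1, "f_i")
    fi.addPosSub("ligatures-1", ("f", "i"))
    font.generate("myfont.otf")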
And once I’d done that, it opened up all kinds of little things that I could
change that made the font editor itself bettitor. Better. Bettitor?
(laughs) That’s almost Dutch.
And so after I’d done that the display I talked about which could show a
word — I realized that I should redo that to take advantage of what I had
done. And so I redid that, and it’s now, it’s now much more usable. It now
shows — at least I hope it shows — more of what people want to see when
they are working with these transformations that apply to the font, there’s
now a list of the various transformations, that can be enabled at any time
and then it goes through and does them — whereas before it just sort of —
well it did kerning, and if you asked it to it would substitute this glyph so
you could see what it would look like — but it was all sort of — half-baked.
It wasn’t very elegant.
And — it’s much better now, and I’m quite proud of that.
It may crash — but it’s much better.
So you bring up half-baked, and when we met we talked about bread
baking.
Oh, yes.
And the pleasure of handling a material when you know it well. Maybe
make reliable bread — meaning that it comes out always the same way,
but by your connection to the material you somehow — well — it’s a
pleasure to do that. So, since you’ve said that, and we then went on
talking about pottery — how clay might be of the same — give the same
kind of pleasure. I’ve been trying to think — how does FontForge have
that? Does it have that and where would you find it or how is the ...
I like to make things. I like to make things that — in some strange
definition are beautiful. I’m not sure how that applies to making bread,
but my pots — I think I make beautiful pots. And I really like the glazing I
put onto them.
It’s harder to say that a font editor is beautiful. But I think the ideas behind
it are beautiful in my mind — and in some sense I find the user interface
beautiful. I’m not sure that anyone else in the world does, because it’s what
I want, but I think it’s beautiful.
And there’s a satisfaction in making something — in making something
that’s beautiful. And there’s a satisfaction too (as far as the bread goes) in
making something I need. I eat my own bread — that’s all the bread I eat
(except for those few days when I get lazy and don’t get to make bread that
day and have to put it off until the next day and have to eat something that
day — but that doesn’t happen very often).
So it’s just — I like making beautiful things.
OK, thank you.
Mm-hmm.
That was very nice, thank you very much.
Thank you. I have pictures of my pots if you’d like to see them?
Yes, I would very much like to see them.
This conversation with Juliane de Moerlooze was recorded in March 2009.
When you hear people talk about women having more sense
for the global, intuitive and empathic ... and men are more
logical ... even if it is true ... it seems quite a good thing to
have when you are doing math or software?
Juliane is a Brussels based computer scientist, feminist
and Linux user from the beginning. She studied math,
programming and system administration and participates in Samedies. 1 In February 2009 she was voted
president of the Brussels Linux user group (BXLug).
I will start at the end ... you have recently become president of the BXLug. Can
you explain to us what it is, the BXLug?
It is the Brussels Linux user group, a group of Linux users who meet
regularly to really work together on Linux and Free Software. It is the most
active group of Linux users in the French speaking part of Belgium.
How did you come into contact with this group?
That dates a while back. I have been trained in Linux a long time ago ...
Five years? Ten years? Twenty years?
Almost twenty years ago. I came across the beginnings of Linux in 1995 or
1996, I am not sure. I had some Slackware 2 installed, I messed around with
friends and we installed everything ... then I heard people talk about Linux
distributions 3 and decided to discover something else, notably Debian. 4
1 Femmes et Logiciels Libres, group of women maintaining their own server, http://samedi.collectifs.net
2 one of the earliest Linux distributions
3 a distribution is a specific collection of applications and a software kernel
4 one of the largest Linux distributions
It is good to know that with Linux you really have a diversity, there are
distributions specially for audio, there are distributions for the larger public
with graphical interfaces, there are distributions that are a bit more ‘geek’,
in short you find everything: there are thousands of distributions but there
are a few principal ones and I heard people talk about an interesting development, which was Debian. I wanted to install it to see, and I discovered
the BXLug meetings, and so I ended up there one Sunday.
What was your experience, the first time you went?
(laughs) Well, it was clear that there were not many women, certainly not. I
remember some sessions ...
What do you mean, not many women? One? Or five?
Usually I was there on my own. Or maybe two. There was a time that we
were three, which was great. There was a director of a school who pushed
Free Software a lot, she organised real ’Journées du Libre’ 5 at her school,
to which she would invite journalists and so on. She was the director but
when she had free time she would use it to promote Free Software, but
I haven’t seen her in a while and I don’t know what happened since. I
also met Faty, well ... I wasn’t there all the time either because I had also
other things to do. There was a friendly atmosphere, with a little bar where
people would discuss with each other, but many were cluttered together in
the middle of the room, like autists hidden behind their computers, without
much communication. There were other members of the group who like me
realised that we were humans that were only concentrating on our machines
and not much was done to make new people feel welcome. Once I realised,
I started to move to the back of the room and say hello to people arriving.
Well, I was not the only one who started to do that but I imagine it might
have felt like a closed group when you entered for the first time. I also
remember in the beginning, as a girl, that ... when people asked questions
... nobody realised that I was actually teaching informatics. It seemed there
was a prejudice even before I had a chance to answer a question. That’s a
funny thing to remember.
Could you talk about the pleasure of handling computers? You might not be the
kind of person that loses herself in front of her computer, but you have a strong
5 Journées du Libre is a yearly festival organised by the BXLug
relationship with technology which comes out when you open up the commandline
... there’s something in you that comes to life.
Oh, yes! To begin with, I am a mathematician (‘matheuse’), I was a math
teacher, and I have been programming during my studies and yes, there
was something fantastic about it ... informatics for me is all about logic, but
logic in action, dynamic logic. A machine can be imperfect, and while I’m
not specialised in hardware, there is a part on which you can work, a kind
of determinism that I find interesting, it poses challenges because you can
never know all, I mean it is not easy to be a real system administrator that
knows every detail, that understands every problem. So you are partially in
the unknown, and discovering, in a mathematical world but a world that
moves. For me a machine has a rhythm, she has a cadence, a body, and her
state changes. There might be things that do not work but it can be that
you have left in some mistakes while developing etcetera, but we will get
to know the machine and we will understand. And after, you might create
things that are maybe interesting in real life, for people that want to write
texts or edit films or want to communicate via the Internet ... these are all
layers one adds, but you start ... I don’t know how to say it ... the machine is
at your service but you have to start with discovering her. I detest the kind
of software that asks you just to click here and there and then it doesn't
work, and then you have to restart, and then you are in a situation where
you don’t have the possibility to find out where the problem is.
When it doesn’t show how it works?
For me it is important to work with Free Software, because when I have
time, I will go far, I will even look at the source code to find out what’s
wrong with the interface. Luckily, I don’t have to do this too often anymore
because software has become very complicated, twenty years later. But we
are not like persons with machines that just click ... I know many people,
even in informatics, who will say ‘this machine doesn’t work, this thing
makes a mistake’
The fact that Free Software proposes an open structure, did that have anything
to do with your decision to be a candidate for BXLug?
Well, last year I was already very active and I realised that I was at a point
in my life that I could use informatics better, and I wanted to work in this
field, so I spent much time as a volunteer. But the moment that I decided,
now this is enough, I need to put myself forward as a candidate, was after a
series of sexist incidents. There was for example a job offer on the BXLug
mailing list that really needed to be responded to ... I mean ... what was
that about? To be concrete: Someone wrote to the mailing list that his
company was looking for a developer in such and such and they would like
a Debian developer type applying, or if there weren’t any available, it would
be great if it would be a blond girl with large tits. Really, a horrible thing so
I responded immediately and then it became even worse because the person
that had posted the original message, sent out another one asking whether
the women on the list were into castration and it took a large amount of
diplomacy to find a way to respond. We discussed it with the Samediennes 6
and I thought about it ... I felt supported by many people that had well
understood that this was heavy and that the climate was getting nasty but
in the end I managed to send out an ironic message that made the other
person excuse himself and stop these kind of sexist jokes, which was good.
And after that, there was another incident, when the now ex-president of
the group did a radio interview. I think he explained Free Software relatively
well to a public that doesn’t know about it, but as an example how easy it is
to use Free Software, he said even my wife, who is zero with computers, knows
how it works, using the familiar cliché without any reservation. We discussed
this again with the Samediennes, and also internally at the BXLug and then
I thought: well, what is needed is a woman as president, so I need to present
myself. So it is thanks to the Samedies, that this idea emerged, out of the
necessity to change the image of Free Software.
In software and particularly in Free Software, there are relatively few women
participating actively. What kinds of possibilities do you see for women to enter?
It begins already at school ... all the clichés girls hear ... it starts there. We
possibly have a set of brains that is socially constructed, but when you hear
people talk about women having more sense for the global, intuitive and
empathic ... and men are more logical ... even if it is true ... it seems quite a
good thing to have when you are doing math or software? I mean, there is
no handicap we start out with, it is a social handicap ... convincing girls to
become a secretary rather than a system administrator.
6 Participants in the Samedies: Femmes et logiciels libres (http://www.samedies.be)
I am assuming there is a link between your feminism and your engagement with
Free Software ...
It is linked at the point where ... it is a political liaison which is about reappropriating tools, and an attempt to imagine a political universe where we
are ourselves implicated in the things we do and make, and where we collectively can discuss this future. You can see it as something very large, socially,
and very idealist too. You should also not idealise the Free Software community itself. There’s an anthropologist who has made a proper description 7 ...
but there are certainly relational and organisational problems, and political
problems, power struggles too. But the general idea ... we have come to the
political point of saying: we have technologies, and we want to appropriate
them and we will discuss them together. I feel I am a feminist ... but I know
there are other kinds of feminism, liberal feminism for example, that do not
want to question the political economical status quo. My feminism is a bit
different, it is linked to eco-feminism, and also to the re-appropriation of
techniques that help us organise as a group. Free Software can be ... well,
there is a direction in Free Software that is linked to ‘Free Enterprise’ and
the American Dream. Everything should be possible: start-ups or pin-ups,
it doesn’t matter. But for me, there is another branch much more ‘libertaire’
and left-wing, where there is space for collective work and where we can ask
questions about the impact of technology. It is my interest of course, and I
know well that even as president of the BXLug I sometimes find myself on
the extreme side, so I will not speak about my ‘libertaire’ ideas all the time
in public, but if anyone asks me ... I know well what is at stake but it is not
necessarily representative of the ideas within the BXLug.
Are there discussions between members about the varying interests in Free Software?
I can imagine there are people more excited about efficiency and performativity
of these tools, and others attracted by its political side.
Well, these arguments mix, and also since some years there is unfortunately
less of a fundamental discussion. At the moment I have the impression that
we are more into ‘things to do’ when we meet in person. On the mailing
list there are frictions and small provocations now and then, but the really
interesting debates are over, since a few years ... I am a bit disappointed in
7 Christophe Lazarro. La liberté logicielle. Une ethnographie des pratiques d’échange et de coopération au sein de la communauté Debian. Academia editions, 2008
that, actually. But it is not really a problem, because I know other groups
that pose more interesting questions and with whom I find it more interesting to have a debate. Last year we were working away like small busy
bees, distributing the general idea of Free Software with maybe a hint to the
societal questions behind but in fact not marking it out as a counterweight
to a commercialised society. We haven’t really deepened the problematics,
because for me ... it is clear that Free Software has won the battle, it has
been completely recuperated by the business world, and now we are in a
period where tendencies will become clear. I have the impression that with
the way society is represented right now ... where they are talking about the
economic crisis ... and that we are becoming a society of ‘gestionnaires’
and ideological questions seem not very visible.
So do you think it is more or less a war between two tendencies, or can both
currents coexist, and help each other in some way?
The current in Free Software that could think about resistance and ask
political questions and so on, does not have priority at the moment. But
what we can have is debates and discussions from person to person and we
can challenge members of the BXLug itself, who really sometimes start to
use a kind of marketing language. But it is relational ... it is from person
to person. At the moment, what happens on the level of businesses and
society, I don’t know. I am looking for a job and I see clearly that I will
need to accept the kinds of hierarchies that exist but I would like to create
something else. The small impact a group like BXLug can make ... well,
there are several small projects, such as the one to develop a distribution
specifically designed for small organisations, to which nobody could object
of course. Different directions coexist, because there is currently not any
project with enough at stake that it would shock the others.
To go once again from a large scale to a small scale ... how would you describe
your own itinerary from mathematics to working on and with software?
I did two bachelors at the Université Libre de Bruxelles, and then I studied
to become a math teacher. I had a wonderful teacher, and we were into
the pleasure of exercising our brains, and discovering theory but a large part
of our courses were concentrated on pedagogy and how to become a good
teacher, how to open up the mind of a student in the context of a course.
That’s when I discovered another pleasure, of helping a journey into a kind
of math that was a lot more concrete, or that I learned to render concrete.
One of the difficult subjects you need to teach in high schools, is scales and
plans. I came up with a rendering of a submarine and all students, boys as
well as girls, were quickly motivated, wanting to imagine themselves at the
real scale of the vessel. I like math, because it is not linked to a pre-existing
narrative structure, it is a theoretical construct we accept or not, like the
rules of a game. For me, math is an ideal way to form a critical mind.
When you are a child, math is fundamentally fiction, full stop. I remember
that when I learned modern math at school ... I had an older teacher, and
she wasn’t completely at ease with the subject. I have the impression that
because of this ... maybe it was a question of the relation between power and
knowledge ... she did not arrive with her knowledge all prepared, I mean it
was a classical form of pedagogy, but it was a new subject to her and there
was something that woke up in me, I felt at ease, I followed, we did not go
too fast ...
It was open knowledge, not already formed and closed?
Well, we discovered the subject together with the teacher. It might sound
bizarre, and she certainly did not do this on purpose, but I immediately felt
confident, which did not have too much to do with the subject of the class,
but with the fact that I felt that my brains were functioning.
I still prefer to discover the solution to a mathematical problem together
with others. But when it comes to software, I can be on my own. In
the end it is me, who wants to ask myself: why don’t I understand? Why
don’t I make any progress? In Free Software, there is the advantage of
having lots of documentation and manuals available online, although you
can almost drown in it. For me, it is always about playing with your brain,
there is at least always an objective where I want to arrive, whether it is
understanding theory or software ... and in software, it is also clear that you
want something to work. There is a constraint of efficiency that comes in
between, that of course somehow also exists in math, but in math when you
have solved a problem, you have solved it on a piece of paper. I enjoy the
game of exploring a reality, even if it is a virtual one.
In September 2013 writer, developer, freestyle rapper and
poet John Haltiwanger joined the ConTeXt user meeting in
Brejlov (Czech Republic) 1 to present his ideas on Subtext,
‘A Proposed Processual Grammar for a Multi-Output PreFormat’. The interview started as a way to record John’s
impressions fresh from the meeting, but moved into discussing the future of layout in terms of ballistics.
How did you end up going to the ConTeXt meeting? Actually, where was it?
It was in Brejlov, which apparently might not even be a town or city. It
might specifically be a hotel. But it has its own ... it’s considered a location,
I guess. But arriving was already kind of a trick, because I was under the
impression there was a train station or something. So I was asking around:
Where is Brejlov? What train do I take to Brejlov? But nobody had any clue,
that this was even something that existed. So that was tricky. But it was really a beautiful venue. How I ended up at the conference specifically? That’s
a good question. I’m not an incredibly active member on the ConTeXt
mailing list, but I pop up every now and again and just kind of express a
few things that I have going on. So initially I mentioned my thesis, back in
January or maybe March, back when it was really unformulated. Maybe it
was even in 2009. But I got really good responses from Hans. 2 Originally,
when I first got to the Netherlands in 2009 in August, the next weekend
was the third annual ConTeXt meeting. I had barely used the software at
that point, but I had this sort of impulse to go. Well anyway, I did not have
the money for it at that time. So the fact that there was another one coming
round, was like: Ok, that sounds good. But there was something ... we got
into a conversation on the mailing list. Somebody, a non-native English
speaker was asking about pronouns and gendered pronouns and the proper
way of ‘pronouning’ things. In English we don’t have a suitable gender neutral pronoun. So he asked the questions and some guy responded: The
1 http://meeting.contextgarden.net/2013/
2 Hans Hagen is the principal author and developer of ConTeXt, past president of NTG, and active in many other areas of the TeX community. Hans Hagen – Interview – TeX Users Group. http://tug.org/interviews/hagen.html, 2006. [Online; accessed 18.12.2014]
proper way to do it, is to use he. It’s an invented problem. This whole question is
an invented question and there is no such thing as a need for considering any other
options besides this. 3 So I wrote back and said: That’s not up to you to decide,
because if somebody has a problem, then there is a problem. So I kind of naively
suggested that we could make a Unicode character, that can stand in, like a
typographical element, that does not necessarily have a pronunciation yet.
So something that, when you are reading it, you could either say he or she
or they and it would be sort of [emergent|dialogic|personalized].
Like delayed political correctness or delayed embraciveness. But, little did I
know, that Unicode was not the answer.
Did they tell you that? That Unicode is not the answer?
Well, Arthur actually wrote back 4, and he knows a lot about Unicode and
he said: With Unicode you have to prove that it’s in use already. In my sense,
Unicode was a playground where I could just map whatever values I wanted
to be whatever glyph I wanted. Somewhere, in some corner of unused
namespace or something. But that’s not the way it works. But TeX works
like this. So I could always just define a macro that would do this. Hans
actually wrote a macro 5 that would basically flip a coin at the beginning of
your paper. So whenever you wanted to use the gender neutral, you would
just use the macro and then it wouldn’t be up to you. It’s another way of
obfuscating, or pushing the responsibility away from you as an author. It’s
like ok, well, on this one it was she, the next it was he, or whatever.
So in a way gender doesn’t matter anymore?
Right. And then I was just like, that’s something we should talk about at the
meeting. I guess I sent out something about my thesis and Hans or Taco,
they know me, they said that it would be great for you to do a presentation of
this at the meeting. So that’s very much how I ended up there.
You had never met anyone from ConTeXt before?
No. You and Pierre were the only people I knew, that have been using it,
besides me, at the time. It was interesting in that way, it was really ... I mean
I felt a little bit ... nervous isn’t exactly the word, but I sort of didn’t know
what exactly my position was meant to be. Because these guys ... it's a users'
meeting, right? But the way that tends to work out for Open Source projects
is developers talking to developers. So ... my presentation was saturated ...
I think, I didn’t realise how quickly time goes in presentations, at the time.
So I spent like 20 minutes just going through my attack on media theory in
the thesis. And there was a guy, falling asleep on the right side of the room,
just head back. So, that was entertaining. To be the black sheep. That’s
always a fun position. It was entertaining for me, to meet these people
and to be at the same time sort of an outsider. Not a really well known
user contrasted with other people, who are more like cornerstones of the
community. They were meeting everybody in person for the first time. And
somehow I could connect. So now, a month and a half later we’re starting
this ConTeXt group, an international ConTeXt users’ group and I’m on the
board, I’m editing the journal. So it’s like, it ...
... that went fast!
It went fast indeed!
What is this ‘ConTeXt User Group’?
To a certain extent the NTG, which is the Netherlands TeX Group, had sort
of been consumed from the inside by the heaviness of ConTeXt, specifically
in the Netherlands. The discussion started to shift to be more ConTeXt.
Now the journal, the MAPS journal, there are maybe 8 or 10 articles, two of
which are not written by either Hans or Taco, who are the main developers
of ConTeXt. And there is zero on anything besides ConTeXt. So the NTG
is almost presented as ok, if you like ConTeXt or if you wanna be in a ConTeXt
user group, you join the NTG. Apparently the journal used to be quite thick
and there are lots of LaTeX users, who are involved. So partially the attempt
is sort of to ease that situation a little bit.
It allowed the two communities to separate?
Yeah, and not in any way like fast or abrupt fashion. We’re trying to be
very conscious about it. I mean, it’s not ConTeXt’s fault that LaTeX users
are not submitting any articles for the journal. That user group will always have the capacity, those people could step up. The idea is to setup a
more international forum, something that has more of the sense of support
for ... because the software is getting bigger and right now we’re really reliant on this mailing list and if you have your stupid question either Hans,
Taco or Wolfgang will shoot something back. And they become reliant on
Wolfgang to be able to answer questions, because there are more users coming. Arthur was really concerned, among other people, with the scalability
of our approach right now. And how to set up this infrastructure to support
the software as it grows bigger. I should forward you this e-mail that I
wrote, that is a response to their name choices. They were contemplating
becoming a group called ‘cows’. Which is clearly an inside joke because they
loved to do figure demonstrations with cows. And seeing ConTeXt as I do,
as a platform, a serious platform, for the future, something that ... it’s almost like it hasn’t gotten to its ... I mean it’s in such rapid development ...
it’s so undocumented ... it’s so ... like ... it’s like rushing water or something.
But at some point ... it’s gonna fill up the location. Maybe we’re still building this platform, but when it’s solid and all the pieces are ... everything
is being converted to metric, no more inches and miles and stuff. At that
point, when we have this platform, it will turn into a loadable Lua library.
It won’t even be an executable at that point.
It is interesting how quickly you have become part of this community. From being
complete outsider not knowing where to go, to now speaking about a communal
future.
To begin with, I guess I have to confront my own seemingly boundless
propensity for picking obscure projects ... as sort of my ... like the things
that I champion. And ... it often boils down to flexibility.
You think that obscurity has anything to do with the future compatibility of
ConTeXt?
Well, no. I think the obscurity is something that I don’t see this actually
lasting for too long in the situation of ConTeXt. As it gets more stable it’s
basically destined to become more of a standard platform. But this is all
tied into to stuff that I’m planning to do with the software. If my generative
typesetting platform ... you know ... works and is actually feasible, which is
maybe an 80% job.
Wait a second. You are busy developing another platform in parallel?
Yes, although I'm kind of hovering over it or sort of superseding it as
an interface. You have LaTeX, which has been at version 2e since the
mid-nineties, LaTeX 3 is sort of this dim point on the horizon. Whereas
ConTeXt is changing every week. It’s converting the entire structure of this
macro package from being written in TeX to being written in Lua. And
so there is this transition from what could be best described as an archaic
approach to programming, to this shiny new piece of software. I see it as
being competitive strictly because it has so much configurability. But that’s
sort of ... and that’s the double edged sword of it, that the configuration
is useless without the documentation. Donald Knuth is famous for saying
that he realises he would have to write the software and the manual for the
software himself. And I remember in our first conversation about the sort
of paternalistic culture these typographic projects seem to have. Or at least
in the sense of TeX, they seem to sort of coagulate around a central wizard
kind of guy.
You think ConTeXt has potential for the future, while TeX and LaTeX belong
... to the past?
I guess that’s sort of the way it sounds, doesn’t it?
I guess I share some of your excitement, but also have doubts about how far the
project actually is away from the past. Maybe you can describe how you think it
will develop, what will be that future? How you see that?
Right. That’s a good way to start untangling all the stuff I was just talking
about, when I was sort of putting the cart before the horse. I see it developing in some ways ... the way that it’s used today and the way that current,
heavy users use it. I think that they will continue to use in it in a similar
way. But you already have people who are utilising LuaTeX ... and maybe
this is an important thing to distinguish between ConTeXt and LuaTeX.
Right now they’re sort of very tied together. Their development is intrinsic,
they drive each other. But to some extent some of the more interesting
stuff that is being done with these tools is ... like ... XML processing.
Where you throw XML into Lua code and run LuaTeX kerning operations
and line breaking and all this kind of stuff. Things that, to a certain extent,
you needed to engage TeX on its own terms in the past. That’s why macro
packages develop as some sort of sustainable way to handle your workflow.
This introduction of LuaTeX I think is sort of ... You can imagine it being
loaded as a library just as a way to typeset the documentation for code. It
could be like this holy grail of literate programming. Not saying this is the
answer, but that at least it will come out as a nice looking .pdf.
LuaTeX allows the connection to TeX to widen?
Yeah. It takes sort of the essence of TeX. And this is, I guess, the crucial
thing about LuaTeX that up until now TeX is both a typesetting engine and
a programming language. And not a very good one. So now that TeX can
be the engine, the Tschicholdian algorithms, the modernist principles, that,
for whatever reason, do look really good, can be utilised and connected to
without having to deal with this 32 year old macro programming language.
On top of that and part of how directly engaging with that kind of movement forward is ... not that I am switching over to LuaTeX entirely at this
point ... but that this generative typesetting platform that was sort of the
foundation of this journal proposal we did. Where you could imagine actual
humanities scholars using something that is akin to markdown or a wiki formatting kind of system. And I have a nice little buzzword for that: ‘visually
semantic markup’. XML, HTML, TeX, ... none of those are visually semantic. Because it’s all based around these primitives ‘ok, between the angle
brackets’. Everything is between angle brackets. You have to look what’s
inside the angle brackets to know what is happening to what’s between the
angle brackets. Whereas a visually semantic markup ... OK headers! OK
so it’s between two hashmarks or it’s between two whatever ... The whole
design of those preformatting languages, maybe not wiki markup, but at
least markdown was that it could be printed as a plaintext document and
you could still get a sense of the structure. I think that’s a really crucial
development. So ... in a web browser, on one half of the browser you have
your text input, on the other half you have a real-time rendering of it into
HTML. In the meantime, the way that the interface works, the way that
the visually semantic markup works, is that it is a mutable interface. It
could be tailored to your sense of what it should look like. It can be tailored
specifically to different workflows. And because there is such a diversity
within typographic workflows, typesetting workflows ... that is akin to the
separation of form and content in HTML and CSS, but it’s not meant to be
... as problematic as that. I’m not sure if that is a real goal, or if that goal
is feasible or not. But it’s not meant to be drawing an artificial line, it’s just
meant to make things easier.
So by pulling apart historically grown elements, it becomes ... possibly modern?
Hypermodern?
Something for now and later.
Yes. Part of this idea, the trick ... This software is called ‘Subtext’ and at
this point it’s a conceptual project, but that will change pretty soon. Its
trick is this idea of separation instead of form and content, it’s translation
and effect. The parser itself has to be mutable, has to be able to pull in
the interface, print like decorations basically from a YAML configuration
file or some sort of equivalent. One of those configuration mechanisms that
was designed to be human readable and not machine readable. Like, well
both, striking that balance. Maybe we can get to that kind of ... talking
about agency a little bit. Its trick to really pull that out so that if you want
to ... for instance now in markdown if you have quotes it will be translated
in ConTeXt into \quotation. In ConTeXt that’s a very simple switch
to turn it into German quotes. Or I guess that’s more like international
quotes, everything not English. For the purposes of markdown there is
no, like really easy way, to change that part of the interface. So that when
I’m writing, when I use the angle brackets as a quote it would turn into
a \quotation in the output. Whereas with ‘Subtext’ you would just go
into the interface type like configuration and say: These are converted into
a quote basically. And then the effects are listed in other configuration files
so that the effects of quotes in HTML can be ...
... different.
Yes. Maybe have specific CSS properties for spacing, that kind of stuff. And
then in ConTeXt the same sort of ... both the environmental setup as well
as the raw ‘what is put into the document when it’s translated’. This kind of
separation ... you know at that point if both those effects are already the way
that you want them, then all you have to do is change the interface. And
then later on a typesetting system, maybe iTeX, comes out, you know, Knuth's
joke, anyway. 6 That kind of separation seems to imply a future proofing
that I find very elegant. That you can just add later on the effects that you
need for a different system. Or a different version of a system, not that you
have to learn ‘mark 6’, or something like that ...
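(Subtext is described above as a conceptual project, so the following is only a toy sketch of the idea of separating 'translation' from 'effect', with invented markers and names; the \quotation string simply echoes the ConTeXt command mentioned earlier.)

    # Which marker means which role: the author-facing, configurable part.
    translation = {">>": "quote"}
    # What each backend emits for that role: kept apart from the markup.
    effects = {
        "quote": {
            "context": ("\\quotation{", "}"),
            "html": ("<q>", "</q>"),
        },
    }

    def render(line, backend):
        for marker, role in translation.items():
            if line.startswith(marker):
                start, stop = effects[role][backend]
                return start + line[len(marker):].strip() + stop
        return line

    print(render(">> a quoted line", "context"))   # \quotation{a quoted line}
    print(render(">> a quoted line", "html"))      # <q>a quoted line</q>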
Back to the future ... I wonder about ConTeXt being bound to a particular
practise located with two specific people. Those two are actually the ones that
produce the most complete use cases and thereby define the kind of practise that
ConTeXt allows. Do you think this is a temporary stage or do you think that by
inviting someone like you on the board, as an outsider, it is a sign that things
are going to change?
Right. Well, yeah, this is another one of those put-up or shut-up kind of
things because for instance at the NTG meeting on Wednesday my presentation was very much a user presentation in a room of developers. Because I
basically was saying: Look like this is gonna be a presentation – most presentations are about what you know – and this presentation is really about
what I don’t know ... but what I do know is that there is a lot of room for
teaching ConTeXt in a more practical fashion, you could say. So my idea is
to basically write this documentation on how to typeset poetry, which gets
6 http://en.wikipedia.org/wiki/Donald_Knuth#Humor
into a lot of interesting questions, just a lot of interesting things. Like you
gonna need to write your own macros just at the start ... to make sure you
don't have to go in and change every width value at some point. You know,
this kind of thing, like ... really baby steps. How to make a cover page. These
kinds of things are not documented.
Documentation is let’s say an interesting challenge for ConTeXt. How do you
think the ConTeXt community could enable different kinds of use, beyond the
ones that are envisioned right now? I guess you have a plan?
Yeah ... that’s a good question. Part of it is just to do stuff, like to get you
more involved in the ConTeXt group for instance, because I was talking to
Arthur and he hadn’t even read the article from V/J10 7 . I think that kind
of stuff is really important. It’s like the whole Blender Foundation kind
of impulse. We have some developers who are paid to do this and that’s
kind of rare already in an Open Source/Free Software project. But then to
kind of have users pushing the boundaries and hitting limits. It’s rare that
Hans will encounter some kind of use case that he didn’t think of and react
in a negative way. Or react in a way like I’m not gonna even entertain that
possibility. Part of it is moving beyond this ... even the sort of centralisation
as you call it ... how to do that directly ... I see it more as baby steps for
me personally at this point. Just getting a tutorial on how to typeset a cd
booklet. Just basically what I’m writing. That at the same time, you know,
gets you familiar with ConTeXt and TeX in general. Before my presentation
I was wondering, I was like: how do you set a variable in TeX. Well, it’s a
macro programming language so you just make a macro that returns a value.
Like that kind of stuff is not initially obvious if you're used to a different
paradigm or you know ... So these baby steps of kind of opening the field up
a little bit and then using it in my own practise of guerilla typesetting and kind
of putting it out there. And you know ... And people gonna start being like:
oh yeah, beautiful documents are possible or at least better looking documents
are possible. And then once we have them at that, like, then how do we
7 Constant, Clementine Delahaut, Laurence Rassel, and Emma Sidgwick. Verbindingen/Jonctions: Tracks in electr(on)ic fields. Constant Verlag, 2009. http://ospublish.constantvzw.org/sources/vj10
take it to the next level. How do I turn a lyric sheet from something that
is sort of static to ... you know ... two pages that are like put directly on the
screen next to each other. Like a screen based system where it’s animated
to the point ... and this is what we actually started to karaoke last night ...
so you have an English version and a Spanish version – for instance in the
case of the music that I’ve been doing. And we can animate. We can have
timed transitions so you can have a ‘current lyric indicator’ move down the
page. That kind of use case is not something that Pragma 8 is ever going
to run into. But as soon as it is done and documented then what’s the next
thing, what kind of animations are gonna be ... or what kind of ... once that
possibility is made real or concrete ... you know, so I kind of see it as a very
iterative process at this point. I don’t have any kind of grand scheme other
than ‘Subtext’ kind of replacing Microsoft Word as the dominant academic
publishing platform, I think. (laughs)
Just take over the world.
That’s one way to do it, I think.
You talked about manuals for things that you would maybe not do in another
kind of software ...
Right.
Manuals that not just explain ‘this is how you do it’ but also ‘this is the kind of
user you could be’.
Right.
I’m not sure if instructions for how to produce a cd cover would draw me in, but
if it helped me understand how to set a variable, it would.
Right.
8 Hans Hagen's company for Advanced Document Engineering
You want the complete manual of course?
Yeah!
You were saying that ConTeXt should replace Microsoft Word as the standard
typesetting tool for academic publishing. You are thinking about the future for
ConTeXt more in the context of academic publishing than in traditional design
practise?
Yes. In terms of ‘Subtext’, I mean the origins of that project, very much
... It’s an interesting mix because it’s really a hybridity of many different
processes. Some, much come directly from this obscure art project ‘the abstraction’. So I have stuff like the track changes using Git version control
and everything being placed on plaintext as a necessity. That’s a holdover
from that project as well as the idea of gradiated presence. Like software
enabling a more real-time peer review, anonymous peer review system. And
even a collaborative platform where you don’t know who you’re writing with,
until the article comes out. Something like that. So these interesting
tweaks that you can kind of make, those all are holdovers from this very,
very much maybe not traditional design practise but certainly like ... twisted
artistic project that was based around hacking a hole from signified to signifier and back again. So ... In terms of its current envisionment and the
use case for which we were developing it at the beginning, or I’m developing
it, whatever ... I’ll say it the royal way, is an academic thing. But I think
that ... doesn’t have to stop there and ...
At some point at OSP we decided to try ConTeXt because we were stuck with
Scribus for page layout as the only option in Free Software. We wanted to escape
that kind of stiffness of the page, or of the canvas in a way. But ConTeXt
was not the dream solution either. For us it had a lot to do, of course, with
issues of documentation ... of not understanding, not coming from that kind of
automatism of treating it as another programming language. So I think we could
have had much more fun if we had understood the culture of the project better.
I think the most frustrating experience was to find out how much the model of
typesetting is linked to the Tschichold universe, that at the moment you try to
break out, the system completely loses all flexibility. And it is almost as if you
can hear it freeze. So if we blame half of our troubles with ConTeXt on our
inability to actually understand what we could do with ConTeXt, I think there is
a lot also in its assumption what a legible text would look like, how it’s structured,
how it’s done. Do you think a modern version of ConTeXt will keep that kind
of inflexibility? How can it become more flexible in its understanding of what a
page or a book could be?
That’s an interesting question, because I’m not into the development side
of LuaTeX at all, but I would be surprised if the way that it was being
implemented was not significantly more modular than for instance when
it was written in Pascal, you know, how that was. Yeah, that’s a really
interesting question of how swappable is the backend. How much can we
go in and kind of ... you know. And it is an inspirational question to me,
because now I’m trying to envision a different page. And I’m really curious
about that. But I think that ConTeXt itself will likely be pretty stable in its
scope ... in that way of being ... sort of ... deterministic in its expectations.
But where that leaves us as users ... first I’d be really surprised if the engine
itself, if LuaTeX was not being some way written to ... I feel really ignorant
about this, I wish I just knew. But, yeah, there must be ... There is no way
to translate this into a modern programming language without somehow
thinking about this in terms of the design. I guess to certain extent the
answer to your question is dependent on the conscientiousness of Taco and
the other LuaTeX developers for this kind of modularity. But I don't ... you
know ... I’m actually feeling very imaginatively lacking in terms of trying to
understand what your award-winning book did not accomplish for you ...
Yeah, what’s wrong with that?
I think it would be good to talk with Pierre, not Pierre Marchand but Pierre ...
... Huggybear.
Yeah. We have been talking about ‘rivers’ as a metaphor for layout ... like where
you could have things that are ... let’s say fluid and other things that could be
placed and force things around it. Layout is often a combination of those two
things. And this is what is frustrating in canvas based layout that it is all fixed
and you have to make it look like it’s fluid. And here it’s all fluid and sometimes
you want it to be fixed. And at the moment you fix something everything breaks.
Then it’s up to you. You’re on your own.
Right.
The experience of working with ConTeXt is that it is very much elastic, but there
is very little imagination about what this elasticity could bring.
Right.
It’s all about creating universally beautiful pages, in a way it is using flexibility
to arrive at something that is already fixed.
Right.
Well, there is a lot more possible than we ever tried, but ... again ... this goes
back to the sort of centralist question: If those possibilities are mainly details in
the head of the main developers then how will I ever start to fantasize about the
book I would want to make with it?
Right.
I don’t even need access to all the details. Because once I have a sort of sense of
what I want to do, I can figure it out. Right now you’re sort of in the dark about
the endless possibilities ...
Its existence is very opaque in some ways. The way that it’s implemented,
like everything about it is sort of ... looking at the macros that they wrote,
the macros that you invoke ... like ... that takes ... flow control in TeX is like
... I mean you might as well write it in Bash or ... I mean I think Bash would
even be more sensible to figuring out what’s going on. So, the switch to Lua
there is kind of I think a useful step just in being more transparent. To allow
you to get into becoming more intimate with the source or the operation
of the system ... you know ... without having to go ... I mean I guess ... the
TeX Book would still be useful in some ways but that’s ... I mean ... to go
back and learn TeX when you’re just trying to use ConTeXt is sort of ...
it’s not ... I’m not saying it’s, you know ... it’s a proper assumption to say oh
yeah, don’t worry about the rules and the way TeX is organised but you’re not
writing your documents in ConTeXt the way you would write them if you’re
using plain TeX. I mean that’s just ... it’s just not ... It’s a different workflow
... it has a completely different set of processes that you need to arrange. So
it has a very distinct organisational logic ... that I think that ... yeah ... like
being able to go into the source and be like oh OK, like I can see clearly this
is ... you know. And then you can write in your own way, you can write back
in Lua.
This kind of documentation would be the killer feature of ConTeXt ...
Yeah.
It’s a kind of strange paradox in the TeX community. On the one hand you’re sort of
supposed to be able to do all of it. But at the same time on every page you’re told
not to do it, because it’s not for you to worry about this.
Right. That’s why the macro packages exist.
With ConTeXt there is this strange sense of very much wanting to understand the
way the logic works, or ... what the material is you’re dealing with. And at the
same time being completely lost in the labyrinth between the old stuff from TeX
and LaTeX, the newer stuff from LuaTeX, Mark 4, 3, 5, 6 ...
So that was sort of my idea with the cd typesetting project: not to say
that that is something that is immediately interesting to anybody who is
not trying to do that specifically, right? But at the same time if I’m ... if it’s
broken down into ‘How to do a bitmap cover page’ (= Lesson 1).
Lesson 2: ‘How to start defining your own macros’. And so you know, it’s
this thing that could be at one point a very ... because the documentation as
it stands right now is ... I think it’s almost ... fixing that documentation, I’m
not sure is even possible. I think that it has to be completely approached
differently. I mean, like a real ConTeXt manual, that documents ... you
know ... command by command exactly what those things do. I mean our
reference manual now just shows you where the arguments go, but
doesn’t even list the available arguments. It’s just like: These are the positions
of the arguments. And it’s interesting.
So expecting writers of the program to write the manual fails?
Right.
What is the difference between your plans for ‘Subtext’ and a page layout program
like Scribus?
You mentioned ‘Subtext’ coming from a more academic publishing rather
than a design background. I think that this belies where I have come into
typesetting and my understanding of typography. Because in reality DTP
has never kind of drawn me in in that way. The principal differences are
really based on this distribution of agency, in my mind. That when you’re
demanding the software to be ‘what you see is what you get’ or when you
place that metaphor between you and your process. Or you and your engagement, you’re gaining the usefulness of that metaphor, which is ... it’s
almost ... I hope I don’t sound offensive ... but it’s almost like child’s play.
It’s almost like point, click, place. To me it just seems so redundant or ...
time-consuming maybe ... to really deal with it that way. There are advantages to that metaphor. For instance I don’t plan on designing covers in
ConTeXt. Or even a poster or something like that. Because it doesn’t really
give affordances for that kind of creativity. I mean you can do generative
stuff with the MetaFun package. You can sort of play around with that. But
I haven’t seen a ConTeXt generated cover that I liked, to be honest.
OK.
OK. Principal differences. I’m trying to ... I’m struggling a little bit. I think
that’s partially because I’m not super comfortable with the layout mechanism
and stuff yet. And you have things like \blank in order to move down the
page. Because it has this sort of literal sense of a page and movement on
a page. Obviously Scribus has a literal idea of a page as well, but because
it’s WYSIWYG it has that benefit where you don’t have to think OK, well,
maybe it should be 1.6 ems down or maybe it should be 1.2 ems down. You
move it until it looks right. And then you can measure it and you’re like
ok, I’m gonna use this measurement further on in my document. So it’s
that whole top-down vs. bottom-up approach. It really breaks down into
the core organisational logics of those softwares.
I think it’s too easy to make the difference based on the fact that there is a
metaphorical layer or not. I think there is a metaphorical layer in ConTeXt too
...
Right. Yeah for sure.
And they come at a different moment and they speak a different language. But I
think that we can agree that they’re both there. So I don’t think it’s about the one
being without and the other being with. Of course there is another sense of placing
something in a canvas-based software than in a ... how would you call this?
So I guess it is either ‘declarative’ or ‘sequence’ based. You could say generative in a way ... or compiled or ... I don’t even know. That’s a cool question.
What is the difference really and why would you choose the one or the other? Or
what would you gain from one to the other? Because it’s clear that posters are not
easily made in ConTeXt. And that it’s much easier to typeset a book in ConTeXt
than it is in Scribus, for example.
Declarative maybe ...
So, there’s hierarchy. There’s direction. There’s an assumption about structure
being good or bad.
Yeah. Boxes, Glue. 9
What is exciting in something like this is that placement is relative always.
Relative to a page, relative to a chapter, relative to itself, relative to what’s next
to it. Whereas in a canvas-based software your page is fixed.
Right.
This is very different from a system where you make a change, then you compile
and then you look at it and then you go back into your code. So where there is a
larger distinction between output and action. It’s almost gestural ...
It’s like two different ways of having a conversation. Larry Wall has this really great metaphor. He talks about ‘ballistic design’. So when you’re doing
code, maybe he’s talking more about software design at this point, basically
it’s a ‘ballistic practice’ to write code. Ballistics comes from artillery. So you
shoot at a thing. If you hit it, you hit it. If you miss it, you change the
amount of gunpowder, the angle. So code is very much a ‘ballistic practice’.
I think that filters into this difference in how the conversation works. And
this goes back to the agencies, where you have to wait for the computer to
figure it out, to come with its part into the conversation. You’re putting the code
in and then the computer is like ok, this is what the code means,
and then: is this what you wanted? Whereas with the WYSIWYG
kind of interface the agency is distributed in a different way. The computer is just like ok, I’m a canvas; I’m just here to hold what
you’re putting on and I’m not going to change it in any way or
affect it in any way that you don’t tell me to. I mean it’s
the same way but I ... is it just a matter of the compilation time? In one
you’re sort of running an experiment, in another you’re just sort of painting.
If that’s a real enough distinction or if that’s ... you know ... it’s sort of ... I
mean I kind of see that it is like this. There is ballistics vs. maybe fencing
or something.
9 Boxes, which are things that can be drawn on a page, and glue, which is invisible stretchy stuff that sticks boxes together. Mark C. Chu-Carroll. The Genius of Donald Knuth: Typesetting with Boxes and Glue, 2008
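To make the boxes-and-glue idea in the footnote a little more concrete, here is a minimal sketch (in Python rather than TeX, with invented widths) of how glue with a natural width and some stretch gets distributed to fill a line of boxes to a target width:

```python
# A toy version of TeX's boxes and glue: boxes have fixed widths,
# glue has a natural width plus some stretchability. To justify a
# line, the leftover space is shared out in proportion to each glue
# item's stretch. (Shrinking and penalties are left out of this sketch.)

def justify(box_widths, glue, line_width):
    """glue is a list of (natural_width, stretch) tuples.
    Returns the width each glue item should get on this line."""
    natural = sum(box_widths) + sum(g[0] for g in glue)
    leftover = line_width - natural
    total_stretch = sum(g[1] for g in glue)
    if total_stretch == 0:
        return [g[0] for g in glue]  # nothing can stretch
    return [g[0] + leftover * g[1] / total_stretch for g in glue]

# Three boxes and two equally stretchy glue items on a 60-unit line:
print(justify([10, 20, 15], [(5, 1), (5, 1)], 60))  # [7.5, 7.5]
```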
Fencing?
Fencing. Like more of a ...
Or wrestling?
Or wrestling.
When you said just sort of painting I felt offended. (laughs)
I’m sorry. I didn’t mean it like that.
Maybe back to wrestling vs. ballistics. Where am I and where is the machine?
Right.
I understand that there are lots of childish ways of solving this need to make the
computer disappear. Because if you are not wrestling ... you’re dancing, you know.
Yeah.
But I think it’s interesting to see that ballistics, that the military term of shooting
at something, is the kind of metaphor to be used. Which is quite different than a
creative process where there is a direct feedback between something placed and the
responses you have.
Right.
And it’s not always about aiming, but also sometimes about trying and about
kind of subtle movements that spark off something else. Which is very immediate.
And needs an immediate connection to ... let’s say ... what you do and what you
get. It would be interesting to think about ways of talking about ‘what you see
is what you get’ away from this assumption that it is always about those poor users
that are not able to do it in code.
Right.
Because I think there is essential stuff that you can not do in a tool like this –
that you can do in canvas-based tools. And so ... I think it’s really a pity when
... yeah ... It’s often overlooked and very strange to see. There is not a lot of good
thinking about that kind of interaction. Like literal interaction. Which is also
about agency with the painter. With the one that makes the movement. Where
here the agency is very much in this confrontational relation between me aiming
and ...
So yeah, when we put it in those metaphors. I’m on the side with the
painting, because ...
But I mean it’s difficult to do a book while wrestling. And I think that’s why a
poster is very difficult to do in this sort of aiming sense. I mean it’s fun to do but
it’s a strange kind of posters you get.
You can’t fit it all in your head at once. It’s not possible.
No. So it’s okay to have a bit of delay.
I wondered to what extent, if it were updated in real time, all the changes
you’re making in the code, if compilation was instantaneous, how that would
affect the experience. I guess it would still have this ballistic aspect, because
what you are doing is ... and that’s really the side of the metaphor ... or
a metaphorical difference between the two. One is like a translation. The
metaphor of ok this code means this effect ... That’s very different from picking
a brush and choosing the width of the stroke. It’s like when you initialise
a brush in code, set the brush width and then move it in a circle with a
radius of x. It’s different than taking the brush in Scribus or in whatever
WYSIWYG tool you are gonna use. There is something intrinsically different about a translation from primitives to visual effect than this kind of
metaphorical translation of an interaction between a human and a canvas ...
kind of put into software terms.
But there is a translation from me, the human, to the machine, to my human eye
again, which is hard to grasp. Without wanting it to be made invisible somehow.
Or to assume that it is not there. This would be my dream tool that would
allow you to sense that kind of translation without losing the ... canvasness of the
canvas. Because it’s frustrating that the canvas has to not speak of itself to be able
to work. That’s a very sad future for the canvas, I think.
I agree.
But when it speaks of itself it’s usually seen as buggy or it doesn’t work. So that’s
also not fair to the canvas. But there is something in drawing digitally, which
is such a weird thing to do actually, and this is interesting in this sort of cyborgs
we’re becoming, which is all about forgetting about the machine and not feeling
what you do. And it’s completely a different world in a way than the ballistics of
ConTeXt, LaTeX or whatever typesetting platform.
Yeah, that’s true. And it’s something that my students were forced to confront, and it was really interesting because of that supposed invisibility, or almost
necessitated invisibility, of the software. As soon as they’re in Inkscape instead of Illustrator they go crazy. Because it’s like they know what they want
to do, but it’s a different mechanism. It’s the same underlying process which
itself is only just meant to give you a digital version of what you could easily
do on a piece of paper. Provided you have the right paints and stuff. So
perhaps it’s like the difference between moving from a brush to an air brush.
It’s a different ... interface. It’s a different engagement. There is a different
thing between the human and the canvas. You engage in this creative process where it’s like ok, we’ll now have an airbrush and I can play around to
see what the capacities are without being stuck in well I can’t get it to do
my fine lines the same way I can when I have my brush. It’s like when you
switch the software out from between the person and the canvas. It’s that
sort of invisibility of the interface and it’s intense for people. They actually
react quite negatively. They’re not gonna bother to learn this other software
because in the end they’re doing less. The reappearance of this software
... of software between them and their ideas is kinda too much. Whereas
people who don’t have any preconceived notions are following the tutorials
and they’re learning and they’re like ok, I’m gonna continue to play with this.
Because this software is starting to become more invisible.
But on a sort of theoretical level the necessitated invisibility, as you said it nicely, is
something I would always speak against. Because that means you hide something
that’s there. Which seems a stupid thing to do, especially when you want to find
a kind of more flexible relation to your tools. I want to find a better word for
describing that sort of quick feedback. Because if it’s too much in the way, then
the process stops. The drawing can not be made if I’m worried too much about
the point of my pencil that might break ... or the ... I don’t know ... the nozzle
being blocked.
Dismissing the other tools is ... I was kinda joking, but ... there is something sort of blocklike: Point. Move. This. But at the same time, like I
said, I wouldn’t do a cover in ConTeXt. Just like I probably wouldn’t try to
do something like a recreation of a Pre-Raphaelite painting in Processing or
something like that. There is just points where our metaphors break down.
And so ... It sounded sort of, ok, bottom-up über alles like always.
Ok, there’s still painters and there’s still people doing Pre-Raphaelite paintings
with Pre-Raphaelite tools, but most of us are using computers. So there should be
more clever ways of thinking about this.
Yeah. To borrow a quote from my old buddy Donald Rumsfeld: There are
the known knowns, the known unknowns and the unknown unknowns. That
actually popped into my head earlier because when we were talking about
the potentials of the software and the way that we interact and stuff, it’s like
we know that we don’t know ... other ways of organizing. We know that
there are, like there has to be, another way, whether it is a middle path between these two or some sort of ... Maybe it’s just tenth dimensional, maybe
it’s fourth dimensional, maybe it’s completely hypermodern or something.
Anyway. But the unknown unknowns ... It’s like the stuff that we can’t
even tell we don’t know about. The questions that we don’t know about
that would come up once we figure out these other ways of organising it.
That’s when I start to get really interested in this sort of thing. How do you
even conceive of a practice that you don’t know? And once you get there,
there’s going to be other things that you know you don’t know and have to
keep finding them. And then there’s gonna be things that you don’t know
you don’t know and they just appear from nowhere and ... it’s fun.
We discovered the work of Tom Lechner for the first time at
the Libre Graphics Meeting 2010 in Brussels. Tom traveled
from Portland to present Laidout, an amazing tool that he
made to produce his own comic books and also to work on
three dimensional mathematical objects. We were excited
about how his software represents the gesture of folding,
loved his bold interface decisions plus were impressed by the
fact that Tom decided to write his own programming framework for it. A year later, we met again in Montreal, Canada
for the Libre Graphics Meeting 2011 where he presented a
follow-up. With Ludivine Loiseau 1 and Pierre Marchand 2 ,
we finally found time to sit down and talk.
What is Laidout?
Well, Laidout is software that I wrote to lay out my cartoon books in an
easy fashion. Nothing else fit my needs at the time, so I just wrote it.
It does a lot more than laying out cartoons?
It works for any image, basically, and gradients. It does not currently do
text. It is on my todo list. I usually write my own text, so it does not really
need to do text. I just make an image of it.
It can lay out T-shirts?
But that’s all images too. I guess it’s two forms of laying out. It’s laying
out pieces of paper that remain whole in themselves, or you can take an
image and lay it out on smaller pieces of paper. Tiling, I guess you could
call it.
Can you talk us through the process of doing the T-shirt?
1 amateur bookbinder and graphic designer
2 artist/developer, contributing amongst others to PodofoImpose and Scribus
OK. So, you need a pattern. I had just a shirt that sort of fit and I
approximated it on a big piece of paper, to figure out what the pieces were
shaped like, and took a photograph of that. I used a perspective tool to
remove the distortion. I had placed rulers on the ground so that I could
remember the actual scale of it. Then once it was in the computer, I traced
over it in Inkscape, to get just the basic outline so that I could manipulate
further. Blender didn’t want to import it so I had to retrace it. I had to
use Blender to do it because that lets me shape the pattern, take it from
flat into something that actually makes 3D shapes so whatever errors were
in the original pattern that I had on the paper, I could now correct, make
the sides actually meet. In Blender you have to be extremely careful with any shape, any manipulation that
you do, to make sure your surface is still unfoldable into something flat. It is
very easy to get away from flat surfaces in Blender. Once I have the molded
shape, I can export that into an .off file which my unwrapper can import
and that I can then unwrap into the sleeves and the front and the back as
well as project a panoramic image onto those pieces. Once I have that, it
becomes a pattern laid out on a giant flat surface. Then I can use Laidout
once again to tile pages across that. I can export into a .pdf with all the
individual pieces of the image that were just pieces of the larger image that
I can print on transfer paper. It took forty iron-on transfer papers, which I ironed
with an iron provided to me by the people sitting in front of me, so that
took a while, but finally I got it all done, cut it all out, sewed it up and there
you go.
Could you say something about your interest in moving from 2D to 3D
and back again? It seems everything you do is related to that?
I don’t know. I’ve been making sculpture of various kinds for quite a
long time. I’ve always drawn. Since I was about eighteen, I started making
sculptures, mainly mathematical woodwork. I don’t quite have access to a
full woodwork workshop anymore, so I cannot make as much woodwork as
I used to. It’s kind of an instance of being defined by what tools you have
available to you, like you were saying in your talk. I don’t have a woodshop,
but I can do other stuff. I can still make various shapes, but mainly out of
paper. Since I had been doing woodwork, I picked up photography I guess
and I made a ton of panoramic images. It’s kind of fun to figure out how
to project these images out of the computer into something that you can
physically create, for instance a T-shirt or a ball, or other paper shapes.
Is there ever any work that stays in the computer, or does it always need
to become physical?
Usually, for me, it is important to make something that I can actually
physically interact with. The computer I usually find quite limiting. You
can do amazing things with computers, you can pan around an image, that
in itself is pretty amazing but in the end I get more out of interacting with
things physically than just in the computer.
But with Laidout, you have moved folding into the computer! Do you
enjoy that kind of reverse transformation?
It is a challenge to do and I enjoy figuring out how to do that. In making
computer tools, I always try to make something that I can not do nearly as
quickly by hand. It’s just much easier to do in a computer. Or in the case
of spherical images, it’s practically impossible to do it outside the computer.
I could paint it with airbrushes and stuff like that but that in itself would
take a hundred times longer than just pressing a couple of commands and
having the computer do it all automatically.
My feeling about your work is that the time you spent working on the
program is in itself the most intriguing part of your work. There is of course a
challenge and I can imagine that when you are doing it like the first time you
see a rectangle, and you see it mimic a perspective you think wow I am folding
a paper, I have really done something. I worked on imposition too but more
to figure out how to work with .pdf files and I didn’t go this way of the gesture
like you did. There is something in your work which is really the way you wrote
your own framework for example and did not use any existing frameworks. You
didn’t use existing GUIs and toolboxes. It would be nice to listen to you about
how you worked, how you worked on the programming.
I think like a lot of artists, or creative people in general, you have to
enjoy the little nuts and bolts of what you’re doing in order to produce any
final work, that is if you actually do produce any final work. Part of that is
making the tools. When I first started making computer tools to help me
in my artwork, I did not have a lot of experience programming computers.
I had some. I did little projects here and there. So I looked around at the
various toolkits, but everything seemed really rigid. If you wanted to edit
some text, you had this little box and you write things in this little box and
if you want to change numbers, you have to erase it and change tiny things
with other tiny things. It’s just very restrictive. I figured I could either
figure out how to adapt those to my own purposes, or I could just figure
out my own, so I figured either way would probably take about that same
amount of time I guessed, in my ignorance. In the process, that’s not quite
been true. But it is much more flexible, in my opinion, what I’ve developed,
compared to a lot of other toolkits. Other people have other goals, so I’m
sure they would have a completely different opinion. For what I’m doing,
it’s much more adaptable.
You said you had no experience in programming? You studied in art school?
I don’t think I ever actually took computer programming classes. I grew
up with a Commodore 64, so I was always making letters fly around the
screen and stuff like that, and follow various curves. So I was always doing
little programming tricks. I guess I grew up in a household where that
sort of thing was pretty normal. I had two brothers, and they both became
computer programmers. And I’m the youngest, so I could learn from their
mistakes, too. I hope.
You’re looking for good excuses to program.
(laughs) That could be.
We can discuss at length how actual toolkits don’t match your needs,
but in the end, you want to input certain things. With any recent toolkit, you
can do that. It’s not that difficult or time consuming. The way you do it, you
really enjoy it, by itself. I can see it as a real creative work, to come up with new
digital shapes.
Do you think that for you, the program itself is part of the work?
I think it’s definitely part of the work. That’s kind of the nuts and bolts
that you have to enjoy to get somewhere else. But if I look back on it, I
spend a huge amount of time just programming and not actually making
the artwork itself. It’s more just making the tools and all the programming
for the tools. I think there’s a lot of truth to that. When it comes time to
actually make artwork, I do like to have the tool that’s just right for the job,
that works just the way that seems efficient.
I think the program itself is an artwork, very much. To me it is also
a reflection on moving between 2D and 3D, about physical computation.
Maybe this is the actual work. Would you agree?
I don’t know. To an extent. In my mind, I kind of class it differently.
I’ve certainly been drawing more than I’ve been doing technical stuff like
programming. In my mind, the artwork is things that get produced, or a
performance or something like that. And the programming or the tools
are in service to those things. That’s how I think of it. I can see that ...
I’ve distributed Laidout as something in itself. It’s not just some secret tool
that I’ve put aside and presented only the artwork. I do enjoy the tools
themselves.
I have a question about how the 2D imagines 3D. I’ve seen Pierre and
Ludi write imposition plans. I really enjoy reading this, almost as a sort of
poetry, about what it would be to be folded, to be bound like a book. Why is
it so interesting for you, this tension between the two dimensions?
I don’t know. Perhaps it’s just the transformation of materials from
something more amorphous into something that’s more meaningful, somehow. Like in a book, you start out with wood pulp, and you can lay it out in
pages and you have to do something to that in order to instil more meaning
to it.
Is binding in any way important to you?
Somewhat. I’ve bound a few things by hand. Most of my cartoon books
ended up being just stapled, like a stack of paper, staple in the middle and
fold. Very simple. I’ve done some where you cut down the middle and lay
the sides on top and they’re perfect bound. I’ve done just a couple where
it’s an actual hand bound, hard cover. I do enjoy that. It’s quite a time
consuming thing. There’s quite a lot of craft in that. I enjoy a lot of hand
made, do-it-yourself activities.
Do you look at classic imposition plans?
I guess that’s kind of my goal. I did look up classic book binding
techniques and how people do it and what sort of problems they encounter.
I’m not sure if I’ve encompassed everything in that, certainly. But just the
basics of folding and trimming, I’ve done my best to be able to do the same
sort of techniques that in the past have been done only manually. The
computer can remember things much more easily.
Imposition plans are quite fixed, you have this paper size and it works with
specific imposition plans. I like the way your tool is very organic, you can play
with it. But in the end, something very classic comes out, an imposition plan you
can use over and over, which gives a sort of continuity.
What’s impressive is the attention you put into the visualization. There are
some technical programs which do really big imposition stuff, but it’s always at the
printer. Here, you can see the shape being peeled. It’s really impressive. I agree
with Femke that the program is an artwork too, because it’s not only technical,
it’s much more.
How is the material imagined in the tool?
So far, not really completely. When you fold, you introduce slight twists
and things like that. And that depends on the stiffness of the paper and
the thickness of the paper and I’ve not adequately dealt with that so much.
If you just have one fold, it’s pretty easy to figure out what the creep is for
that. You can do tests and you can actually measure it. That’s pretty easy
to compensate for. But if you have many more folds than that, it becomes
much more difficult.
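For the single-fold case described here, the compensation is simple arithmetic. The following is a rough sketch of one common approximation (my own assumption, not Laidout’s actual model): each sheet wrapped around the others pushes the inner ones outwards by roughly one paper thickness, so inner pages get nudged towards the spine by a growing offset.

```python
# Rough creep estimate for a single-fold, saddle-stitched booklet.
# Assumption (not Laidout's model): every sheet wrapped around another
# pushes the inner one out by roughly the paper thickness, so the
# innermost sheet sticks out most at the fore edge and its content
# should be nudged towards the spine by the same amount.

def creep_offsets(num_sheets, paper_thickness_mm):
    """Offset in mm for each sheet, outermost (0) to innermost."""
    return [round(i * paper_thickness_mm, 3) for i in range(num_sheets)]

# A 32-page booklet folded from 8 sheets of 0.1 mm paper:
print(creep_offsets(8, 0.1))
# [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]
```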
Are you thinking about how to do that?
I am.
That would be very interesting. To imagine paper in digital space, to give
an idea of what might come out in the end. Then you really have to work
your metaphors, I think?
A long time ago, I did a lot of T-shirt printing. Something that I did not
particularly have was a way to visualize your final image on some kind of shirt
and the same thing applies for book binding, too. You might have a strange
texture. It would be nice to be able to visualize that beforehand, as well
as the thickness of the paper that actually controls physical characteristics.
These are things I would like to incorporate somehow but haven’t gotten
around to.
You talked about working with physical input, having touchpads ... Can
you talk a bit more about why you’re interested in this?
You can do a lot of things with just a mouse and a keyboard. But it’s
still very limiting. You have to be sitting there, and you have to just control
those two things. Here’s your whole body, with which you can do amazing
things, but you’re restricted to just moving and clicking and you only have a
single point up on the screen that you have to direct very specifically. It just
seems very limiting. It’s largely an unexplored field, just to accept a wider
variety of inputs to control things. A lot of the multitouch stuff that’s been
done is just gestures for little tiny phones. It’s mainly for browsing, not
necessarily for actual work. That’s something I would like to explore quite a
lot more.
Do you have any fantasies about how these gestures could work for real?
There’s tons of sci fi movies, like ‘Minority Report’, where you wear these
gloves and you can do various things. Even that is still just mainly browsing.
I saw one, it was a research project by this guy at Caltech. He had made
this table and he wore polarized glasses so he could look down at this table
and see a 3D image. And then he had gloves on, and he could sculpt things
right in the air. The computer would keep track of where his hand is going.
Instead of sculpting clay, you’re sculpting this 3D mesh. That seemed quite
impressive to me.
You’re thinking about 3D printers, actually?
It’s something that’s on my mind. I just got something called the
Eggbot. You can hold spheres in this thing and it’s basically a plotter that
can print on spherical surfaces or round surfaces. That’s something I’d like
to explore some more. I’ve made various balls with just my photographic
panoramas glued onto them. But that could be used to trace an outline for
something and then you could go in with pens or paints and add more detail.
If you’re trying to paint on a sphere, just paint and no photograph, laying out
an outline is perhaps the hardest part. If you simplify it, it becomes much
easier to make actual images on spheres. That would be fun to explore.
I’d like to come back to the folding. Following your existing aesthetic, the
stiffness and the angles of the drawing are very beautiful. Is it important to you,
preserving the aesthetic of your programs, the widgets, the lines, the arrows ...
I think the specific widgets, in the end, are not really important to me
at all. It’s more just producing an actual effect. So if there is some better
way, more efficient way, more adaptable way to produce some effect, then it’s
better to just completely abandon what doesn’t work and make something
that’s new, that actually does work. Especially with multitouch stuff, a lot of
old widgets make no more sense. You have to deal with a lot of other kinds
of things, so you need different controls.
It makes sense, but I was thinking about the visual effect. Maybe it’s not
Laidout if it’s done in Qt.
Your visuals and drawings are very aesthetically precise. We’re wondering
about the aesthetics of the program, if it’s something that might change in the
future.
You mean would the quality of the work produced be changed by the
tools?
That’s an interesting question as well. But particularly the interface, it’s
very related to your drawings. There’s a distinct quality. I was wondering
how you feel about that, how the interaction with the program relates to the
drawings themselves.
I think it just comes back to being very visually oriented. If you have to
enter a lot of values in a bunch of slots in a table, that’s not really a visual
way to do it. Especially in my artwork, it’s totally visual. There’s no other
component to it. You draw things on the page and it shows up immediately.
It’s just very visual. Or if you make a sculpture, you start with this chunk
of stuff and you have to transform it in some way and chop off this or sand
that. It’s still all very visual. When you sit down at a computer, computers
are very powerful, but what I want to do is still very visually oriented. The
question then becomes: how do you make an interface that retains the visual
inputs, but that is restricted to the types of inputs computers need to have
to talk to them?
The way someone sets up his workshop says a lot about his work. The way
you made Laidout and how you set up its screen, it’s important to define a spot
in the space of the possible.
What is nice is that you made the visualisation so important. The windows
and the rest of the interface is really simple, the attention is really focused on
what’s happening. It is not like shiny windows with shadows everywhere, you feel
like you are not bothered by the machine.
At the same time, the way you draw the thickness of the line to define the
page is a bit large. For me, these are choices, and I am very impressed because I
never manage to make choices for my own programs. The programs you wrote,
or George Williams wrote, make a strong aesthetic assertion like: This is good. I can’t
do this. I think that is really interesting.
Heavy page borders, that still comes down to the visual thing you end
up with, is still the piece of paper so it is very important to find out where
that page outline actually is. The more obvious it is, the better.
Yes, I think it makes sense. For a while now, I have paid more attention than
others in Scribus to these details like the shape of the button, the thickness of the
lines, what pattern do you chose for the selection, etcetera. I had a lot of feedback
from users like: I want this, this is too big and at some point you want to please
everybody and you don’t make choices. I don’t think that you are so busy with
what others think.
Are there many other users of the program?
Not that I know of (laughter). I know that there is at least one other
person that actually used it to produce a booklet. So I know that it is
possible for someone other than myself to make things with it. I’ve gotten
a couple of patches from people to not make it crash at various places but
since Laidout is quite small, I can just not pay any attention to criticism.
Partially because there isn’t any, and I have particular motivations to make
it work in a certain way and so it is easier to just go forward.
I think people that want to use your program are probably happy with this
kind of visualisation. Because you wrote it alone, there is also a consistency across
the program. It is not like Scribus, that has parts written by a lot of people so you
can really recognize: this is Craig (Bradney), this is Andreas (Vox), this is Jean
(Ghali), this is myself. There is nothing to follow.
I remember Donald Knuth talking about TeX and he was saying that
the entire program was written from scratch three times before its current
incarnation. I am sympathetic to that style of programming.
Start again.
I think it is a good idea, to start again. To come back to a little detail. Is
there a file format for your imposition tool, to store the imposition plan? Is it a
text or a binary format?
It is text-based, an indented file format, sort of like Python. I did
not want to use XML, every time I try to use XML there are all these
greater thans and less thans. It is better than binary, but it is still a huge
mess. When everything is indented like a tree, it is very easy to find things.
The only problem is to always input tabs, not spaces. I have two different
imposition types, basically, the flat-folding sheets and the three dimensional
ones. The three dimensional one is a little more complicated.
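To give an idea of what reading such a tab-indented, tree-like plan could look like, here is a generic sketch; it is not Laidout’s actual parser, and the keys in the sample plan are only loose echoes of the directives mentioned in this conversation:

```python
# A generic sketch of parsing a tab-indented, tree-like file, in the
# spirit of the format described above (not Laidout's real parser).
# Each line is "key value"; one extra tab of indentation nests a line
# under the previous, shallower one.

def parse_indented(text):
    root = {"children": []}
    stack = [(-1, root)]            # (indent level, node)
    for raw in text.splitlines():
        if not raw.strip():
            continue
        indent = len(raw) - len(raw.lstrip("\t"))
        key, _, value = raw.strip().partition(" ")
        node = {"key": key, "value": value, "children": []}
        while stack and stack[-1][0] >= indent:
            stack.pop()             # climb back up to the parent level
        stack[-1][1]["children"].append(node)
        stack.append((indent, node))
    return root

# Invented sample plan, tabs only (as the format requires):
plan = "imposition signature\n\tfold 1 left\n\tfold 2 up\n\ttrimright 3mm\n"
print(parse_indented(plan))
```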
If you read the file, do you know what you are folding?
Not exactly. It lists what folds exist. If you have a five by five grid, it
will say Fold along this line, over in such and such direction. What it actually
translates to in the end, is not currently stored in the file. Once you are in
Laidout you can export into a PodofoImpose plan file.
Is this file just values, or are there keywords, is it like a text?
I try to make it pretty readable, like trimright or trimleft.
Does it talk about turning pages? This I find beautiful in PodofoImpose
plans, you can almost follow the paper through the hands of the program.
Turn now, flip backwards, turn again. It is an instruction for a dance.
Pretty much.
The text you can read in the PodofoImpose plans was taken from what Ludi
and me did by hand. One of us was folding the paper, and the other was writing
it into the plan. I think a lot of the things we talk about, are putting things from
the real world into the computer. But you are putting things from the computer
into the real world.
Can you describe again these two types of imposition, the first one being
very familiar to us. It must be the most frequently asked question on the
Scribus mailing list: How to do imposition. Even the most popular search
term on the OSP website is ‘Bookletprinting’. But what is the difference with
the plan for a 3D object? A classic imposition plan is also somehow about
turning a flat surface into a three dimensional object?
It is almost translatable. I’m reworking the 3D version to be able to
incorporate the flat folding. It is not quite there yet, the problem is the
connection between the pages. Currently, in the 3D version, you have a
shape that has a definitive form and that controls how things bleed across
the edges. When you have a piece of paper for a normal imposition, the
pages that are next to each other in the physical form are not necessarily
related to each other at all in the actual piece of paper. Right now, the piece
of paper you use for the 3D model is very defined, there is no flexibility.
Give me a few months!
So it is very different actually.
It is a different approach. One person wanted to do flexagons, it is sort
of like origami I guess, but it is not quite as complicated. You take a piece
of paper, cut out a square and another square, and then you can fold it and
you end up with a square that is actually made up of four different sections.
Then you can take the middle section, and you get another page and you can
keep folding in strange ways and you get different pages. Now the question
becomes: how do you define that page, that is a collection of four different
chunks of paper? I’m working on that!
We talk about the move from 2D to 3D as if these pages are empty. But
you actually project images on them and I keep thinking about maps, transitional objects where physical space is projected on paper which then becomes a
second real space and so on. Are you at all interested in maps?
A little bit. I don’t really want to because it is such a well-explored
field already. Already for many hundreds of years the problem is how do
you represent a globe onto a more or less two dimensional surface. You
have to figure out a way to make globe gores or other ways to project it and
then glue it onto a ball for example. There is a lot of work done with that
particular sort of imagery, but I don’t know.
Too many people in the field!
Yes. One thing that might be interesting to do though is when you have
a ball that is a projection surface, then you can do more things, like overlays
onto a map. If you want to simulate earthquakes for example. That would
be entertaining.
And the panoramic images you make, do you use special equipment for
this?
For the first couple that I made, I made this 30-sided polyhedron that
you could mount a camera inside and it sat on a base in a particular way so
you could get thirty chunks of images from a really cheap point and shoot
camera. You do all that, and you have your thirty images and it is extremely
laborious to take all these thirty images and line them up. That is why I
made the 3D portion of Laidout, it was to help me do that in an easier
fashion. Since then I’ve got a fish-eye lens which simplifies things quite
considerably. Instead of spending ten hours on something, I can do it in ten
minutes. I can take 6 shots, and one shot up, one shot down. In Hugin you
can stitch them all together.
And the kinds of things you photograph? We saw the largest rodent on
earth? How do you pick a spot for your images?
I am not really sure. I wander around and then photograph whatever
stands out. Frequently I guess it is some unusual configuration of architecture,
or sometimes a really odd event or a political protest. The trick
with panoramas is to find an area where something is happening all over
the globe. Normally, on sunny days, you take a picture and all your image
is blank. As pretty as the blue sky is, there is not a lot going on there
particularly.
Panoramic images are usually spherical or circular. Do you take certain
images with a specific projection surface in mind?
To an extent. I take enough images. Once I have a whole bunch of
images, the task is to select a particular image that goes with a particular
shape. Like cubes: there are few lines and it is convenient to line them up to
an actual rectangular space like a room. The tetrahedron made out of cones,
I made one of Mount St. Helens, because I thought it was an interesting
way to put the two cones together. You mentioned 3D printers earlier, and
one thing I would like to do is to extend the panoramic image to be more
like a progression. For most panoramic images, the focal point is a single
point in space. But when you walk along a trail, you might have a series of
photographs all along. I think it could be an interesting work to produce,
some kind of ellipsoidal shape with a panoramic image that flows along the
trail.
Back to Laidout, and keeping with the physical and the digital. Would
there be something like a digital papercut?
Not really. Maybe you can have an Arduino and a knife?
I was more imagining a well placed crash?
In a sense there is. In the imposition view, right now I just have a green
bar to tell where the binding is. However when you do a lot of folds, you
usually want to do a staple. But if you are stapling and there is not an actual
fold there, then you are screwed.
The following statements were recorded by Urantsetseg
Ulziikhuu (Urana) in 2014. She studied communication in
Istanbul and Leuven and joined Constant for a few months
to document the various working practices at Constant
Variable. Between 2011 and 2014, Variable housed studios
for Artists, Designers, Techno Inventors, Data Activists,
Cyber Feminists, Interactive Geeks, Textile Hackers, Video
Makers, Sound Lovers, Beat Makers and other digital creators who were interested in using F/LOSS software for
their creative experiments.
Why do you think people should use and/or practice
Open Source software? What is in it for you?
Urantsetseg Ulziikhuu
Claire Williams The knitting machine that I am using normally has a
computer from the eighties. Some have these scanners that are really old
and usually do not work anymore. They became obsolete. If it wasn’t for
Open Source, we couldn’t use these technologies anymore. Open Source
developers decided that they should do something about these machines and
found that it was not that complicated to connect these knitting machines
directly to computers. I think it is a really good example of how Open Source
is important, because these machines are no longer produced and industry
is no longer interested in producing them again, and they would have died
without further use.
The idea that Open Source is about sharing is also important. If you try to
do everything from zero, you just never advance. Now with Open Source, if
somebody does something and you have access to what they do, you can
take it further and take it in a different direction.
Michael Murtaugh I haven’t always used Open Source software. It started
at the Piet Zwart Institute where there was a decision made by Matthew
Fuller and Femke Snelting who designed the program. They brought a
bunch of people together that asked questions about how our tools influence
practice, how they are used. And so, part of my process is then teaching in
that program, and starting to use Free Software more and more. I should
say, I had already been using one particular piece of Free Software which
is FFmpeg, a program that lets you work with video. So there again there
was a kind of connection. It was just by virtue of the fact that it was
one of the only tools available that could take a video, pull out frames,
work with lots of different formats, just an amazing tool. So it started with
convenience. But the more that I learned about the whole kind of approach
of Open Source, the more Open Source I started to use. I first switched from
MacOSX to maybe Dual Booting and now indeed I am pretty much only
using Open Source. Not exclusively Open Source, because I occasionally use
platforms online that are not free, and some applications.
I am absolutely convinced that when you use these tools, you are learning
much more about the inner workings of things, about the design decisions that
go into a piece of software so that you are actually understanding at a very
deep level, and this then lets you move between different tools. When
tools change, or new things are offered, I think it is really a deep learning
that helps you for the future. Whereas if you just focus on the specific
particularities of one platform or piece of software, that is a bit fragile and
will inevitably become obsolete when a piece of software stops being developed or some
new way of working comes about.
Eleanor Greenhalgh I use Open Source software every day, as I have
Debian on my laptop. I came to it through anarchism – I don’t have a tech
background – so it’s a political thing mainly. Not that F/LOSS represents
a Utopian model of production by any means! As an artist it fits in with
my interest in collaborative production. I think the tools we use should be
malleable by the people who use them. Unfortunately, IT education needs
to improve quite a lot before that ideal becomes reality.
Politically, I believe in building a culture which is democratic and malleable
by its inhabitants, and F/LOSS makes this possible in the realm of software.
The benefits as a user are not so great unless you are tech-savvy enough to
really make use of that freedom. The software does tend to be more secure
and so on, though I think we’re on shaky ground if we try to defend F/LOSS
in terms of its benefits to the end user. Using F/LOSS has a learning curve,
challenges which I put up with because I believe in it socially. This would
probably be a different answer from say, a sysadmin, someone who could see
really concrete benefits of using F/LOSS.
Christoph Haag Actually I came from Open Content and alternative licensing to the technical side of using GNU/Linux. My main motivation
right now is the possibility to develop a deeper relationship with my tools.
For me it is interesting to create my own tools for my work, rather than
to use something predefined. Something everyone else uses. With Free
Software this is easier – to invent tools. Another important point is that
with Free Software and open standards it’s more likely that you will be able
to keep track of your work. With proprietary software and formats, you are
pretty much dependent on decisions of a software company. If the company
decides that it will not continue an application or format, there is not much
you can do about it. This happened to users of FreeHand. When Adobe
acquired their competitor Macromedia they decided to discontinue the development of FreeHand in favour of their own product Illustrator. You can
sign a petition, but if there is no commercial interest, most probably nothing
will happen. Let’s see what happens to Flash.
Christina Clar I studied sculpture, which is a very solitary way of working. Already through my studies, this idea of an artist sitting around in a
studio somewhere, being by himself, just doing his work by himself, didn’t
make sense to me. It is maybe true for certain people, but it is definitely
not true to me today, the person I am. I always integrated other people into
my work, or do collaborative work. I don’t really care about this ‘it is my
work’ or ‘it is your work’, if you do something together, at some point the
work exists by itself. For me, that is the greatest moment, it is just independent. It actually rejoins the authorship question, because I don’t think
you can own ideas. You can kind of put them out there and share them.
It is organic, like things that can grow and become bigger
and bigger, become something else that you could never have thought of. It
makes the horizon much bigger. It is a different way of working I guess.
The obvious reason is that it is free, but the sharing philosophy is really at
the core of it. I have always thought that when you share things, you do not
get things back instantly, but you do get so many things in another way,
not in the way you expect. But if you put an idea out, use tools that are
open and change them, put them out again, then there is a lot of back and
forth of communication. I think that is super important. It is the idea of
evolving together, not just by ourselves. I really do believe that we do evolve
much quicker if we are together than with everybody trying to do things by him
or herself. I think it is a very European idea to get into this individualism,
this idea of doing things by myself, my thing. But I think we
can learn a lot from Asia, just ways of doing, because there, community is
much more important.
John Colenbrander I don’t necessarily develop, like, software or code, because I am not a software developer. But I would say I am involved in an
analog way. I do use Open Source software, although I have to say I do not do
much with computers. Most of my work is analog. But I do my research
on the web. I am a user.
I started to develop an antipathy against large corporations, operating systems or softwares, and started to look for alternatives. Then you come to the
Linux system and Ubuntu which has a very user-friendly interface. I like the
fact that behind the software that I am using, there is a whole community,
who are until now without major financial interests and who develop tools
for people like me. So now I am totally into Open Source software, and I
try to use as much as I can. So my motivation would be I want to get off
the track of big corporates who will always kind of lead you into consuming
more of their products.
What does Free Culture mean to you? Are you taking
part in a ‘Free Culture Movement’?
Urantsetseg Ulziikhuu
Michael Murtaugh I’d like to think so, but I realised that it is quite
hard. Only now, I am seriously trying to really contribute back to projects
and I wouldn’t even say that I am an active contributor to Free Software
projects. I am much more of a user and part of the system. I am using it in
my teaching and my work, but now I try to maybe release software myself in
some way or I try to create projects that people could actually use. I think
it is another kind of dimension of engagement. I haven’t really fully realised
it. So yes, for that question of whether I am contributing to Free Culture: yes, but I
could go a lot deeper.
John Colenbrander I am a big supporter of the idea of Free Culture. I
think information should be available for people, especially for those who
have little access to information. I mean we live in the West and we have
access to information more or less with physical libraries and institutions
where we can go. Especially in Asia, South America, Africa this is very
important. There is a big gap between those who have access to knowledge
and those who don’t have access to knowledge.
That’s a big field to explore to be able to open up information to people who
have very poor access to information. Maybe they are not even able to write
or read. That’s already a big handicap. So I think it is a big mission in
that sense.
Could Free Culture be seen as an opposition to commercialism?
Urantsetseg Ulziikhuu
Michael Murtaugh It is a tricky question. I think no matter what, if you
go down the stack, in terms of software and hardware, if you get down to
the deepest level of a computer then there is little free CPU design. So I
think it is really important to be able to work in these kinds of hybrid spaces
and to be aware then of how free Free is, and always look for alternatives
when they are available. But to a certain degree, I think it is really hard to
go for a total absolute. Or it is a decision, you can go absolute but that may
mean that you are really isolated from other communities. So that’s always
a bit of a balancing act: how independent can you be, how independent do you
want to be, how big does your audience need to be, or your community need
to be. So that’s a lot of different decisions. Certainly, when I am working
in the context of an art school with design practitioners, you know it is not
always possible to really go completely independent and there are lots of
implications in terms of how you work and whom you can work with, and
the printers you can work with. So it is always a little bit of trade-off, but it
is important to understand what the decisions are.
Eleanor Greenhalgh I think the idea of a Free Culture movement is very
exciting and important. It has always gone on, but stating it in copyright-aware terms issues an important challenge to the ‘all rights reserved’ status quo. At the same time I think it has limitations, at least in its current form.
I’m not sure that rich white kids playing with their laptops is necessarily a
radical act. The idea and the intention are very powerful though, because
it does have the potential to challenge the way that power – in the form of
‘intellectual property’ – is distributed.
Christoph Haag Copyright has become much more enforced over the last
years than it was ever before. In a way, culture is being absorbed by companies trying to make money out of it. And Free Culture developed as a
counter movement against this. When it comes to mainstream culture, you
are most often reduced to a consumer of culture. Free Culture then is an
obvious reaction. The idea of a culture where you have the possibility to engage again, to become active and create your version, not just to consume
content.
How could Open Source software be economically sustainable, in a way that is beneficial for both developers/creators and users?
Urantsetseg Ulziikhuu
Eleanor Greenhalgh That’s a good question! A very hard one. I’m not
involved enough in that community to really comment on its economic future. But it does, to me, highlight what is missing from the analysis in
Free Culture discourse, the economic reality. It depends on where they (developers) work. A lot of them are employed by companies so they get a
salary. Others do it for a hobby. I’d be interested to get accurate data on
what percentage of F/LOSS developers are getting paid, etc. In the absence
of that data, I think it’s fair to say it is an unsolved problem. If we think
that developers ‘should’ be compensated for their work, then we need to talk
about capitalism. Or at least, about statutory funding models.
Michael Murtaugh It is interesting that you used both ‘sustainability’ and
‘economic viability’. And I think those are two things very often in opposition. I am doing a project now about publishing workflows and future electronic publishing forums. And that was the one thing we looked at. There
were several solutions on the market. One was a platform called ‘Editorial’
which was a very nice website that you could use to mark down texts collaboratively and and then it could produce ePub format books. After about
six months of running, it closed down as many platforms do. Interestingly,
in their sign-off message it said: You have a month to get your stuff out of the
website, and sorry we have decided not to Open Source the project. As much as
we loved making it, it was just too much work for us to keep this running. In
terms of real sustainability, Open Source of course would have allowed them
to work with anybody, even if it is just a hobby.
Michael Murtaugh
It is very much related to the passion of doing these things.
Embroidery machines have copyrighted software installed. The software
itself is very expensive, around 1000, and the software for professionals costs
6000. Embroidery machines are very expensive themselves too.
This software is very tight and closed; you even have to have a special USB
key for patterns. And there are these two guys, software developers, who
are trying to come up with a format which all embroidery machines
could read. They take their time to do this and I think in the end, if the
project works out, they will probably get attention and probably get paid
too. Because instead of giving 1000 for copyrighted software, maybe you
would be happy to give 50 to these people.
Claire Williams
For a long time I have wanted to organise a conversation with you
about the place and meaning of distributed version control in OSP
design work. First of all because after three years of working with
Git intensely, it is a good moment to take stock. It seems that many
OSP methods, ideas and politics converge around it and a conversation discussing OSP practice linked to this concrete (digital) object
could produce an interesting document; some kind of update on what
OSP has been up to over the last three years and maybe will be in
the future. Second: Our last year in Variable has begun. Under the
header Etat des Lieux, Constant started gathering reflections and documents to archive this three year working period. One of the things
I would like to talk about is the parallels and differences between a
physical studio space and a distributed workflow. And of course I am
personally interested in the idea of ‘versions’ linked to digital collaboration. This connects to old projects and ideas and is sparked again
by new ones revived through the Libre Graphics Research Unit and
of course Relearn.
I hope you are also interested in this, and able to make time for it. I
would imagine a more or less structured session of around two hours
with at least four of you participating, and I will prepare questions
(and cake).
Speak soon!
xF
How do you usually explain Git to design students?
Before using Git, I would work on a document. Let’s say a layout, and to
keep a trace of the different versions of the layout, I would append _01, _02
to the files. That's in a way already versioning. What Git does is that it
makes that process somehow transparent, in the sense that it takes care of
it for you. Or better: you have to make it take care of it for you. So instead of
having all files visible in your working directory, you put them in a database,
so you can go back to them later on. And then you have some commands to
manipulate this history. To show, to comment, to revert to specific versions.
More than versioning your own files, it is a tool to synchronize your work
with others. It allows you to work on the same projects together, to drive
parallel projects.
It really is a tool to make collaboration easier. It allows you to see differences.
When somebody proposes a new version of a file to you, it highlights what has
changed. Of course this mainly works on the level of programming code.
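A rough sketch of that workflow on the command line, with a hypothetical poster.sla file standing in for the layout:

    # instead of saving poster_01.sla, poster_02.sla ..., let Git keep the versions
    git init
    git add poster.sla
    git commit -m "first version of the poster"

    # later: commands to show, compare and revert
    git log --oneline                     # list the recorded versions
    git diff HEAD~1 -- poster.sla         # what changed since the previous commit
    git checkout HEAD~1 -- poster.sla     # bring back the previous version of the file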
Did you have any experience with Git before working with OSP?
Well, not long before I joined OSP, we had a little introduction to Mercurial,
another versioning software, at school in 2009. Shortly after I switched to
Git. I was working with someone else who was working with Git, and it was
so much better.
Alex was interested in using Git to make Brainch. 1 We wanted to make a web
application to fork texts that are not code. That was our first use of Git.
I met OSP through Git in a way. An intern taught me the program and he
said: Eric, once you get it, you'll get so excited! We were in the cafeteria of
the art school. I thought it was really special, like someone was letting me
in on a secret and we were the only ones in the art school who knew about
it. He taught me how to push and pull. I saw quickly how Git really
is modeled on how culture works. And so I felt it was a really interesting,
promising system. And then I talked about it at the Libre Graphics Meeting
in 2010, and so I met OSP.
1 A distributed text editing platform based on Django and Git. http://code.dyne.org/brainch
I started to work on collaborative, graphic design related stuff when I was
developing a font manager. I’ve been connected to two versioning systems
and mainly used SVN. Git came well after, it was really connected to web
culture, compared to Subversion, which is more software related.
What does it mean that Git is referred to as ‘distributed versioning’?
The first command you learn in Git is the clone command. It means that
you make a copy of a project that is somehow autonomous. Contrary to
Subversion you don’t have this server-client architecture. Every repository
is in itself a potential server and client. Meaning you can keep track of your
changes offline.
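A minimal sketch of that clone-and-work-offline cycle; the repository address and file name are invented for the example:

    # every clone is a complete, autonomous copy, history included
    git clone git@osp.example.org:balsamine.git
    cd balsamine

    # record changes offline, no server needed
    git add programme.sla
    git commit -m "adjust grid for the season programme"

    # synchronize with the others whenever a connection is available
    git pull
    git push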
At some point, you decided to use ‘distributed versioning’ rather than a
centralized system such as Subversion. I remember there was quite some
discussion ...
I was not hard to convince. I had no experience with other versioning
systems. I was just excited by the experience that others had with this new
tool. In fact there was this discussion, but I don’t remember exactly the
arguments between SVN and Git. From what I remember, Git was easier.
The discussion was not really on the nature of this tool. It was just: who
would keep Git running for OSP? Because the problem is not the system in
itself, it’s the hosting platform. We didn’t find any hosted platform which
fitted our taste. The question was: do we set up our own server, and who is
going to take care of it. At this time Alex, Steph and Ivan were quite excited
about working with Git. And I was excited to use Subversion instead, but I
didn't have the time to take care of setting it up and everything.
You decided not to use a hosted platform such as Gitorious or GitHub?
I guess we already had our own server and were hosting our own projects. But
Pierre you used online platforms to share code?
When I started developing my own projects it was kind of the end of
SourceForge. 2 I was looking for a tool more in the Free Software tradition.
2 SourceForge is a web-based source code repository. It was the first platform to offer this service for free to Open Source projects.
There was Gna!, and even though the platform was crashing all the time, I
felt it was in line with this purpose.
If I remember correctly, when we decided between Git and Subversion,
Pierre, you were also not really for it because of the personality of its main
developer, Linus Torvalds. I believe it was the community aspect of Git that
bothered you.
Well Git has been written to help Linus Torvalds receive patches for the
Linux kernel; it is not aimed at collaborative writing. It was more about
making it convenient for Linus. And I didn’t see a point in making my
practice convenient for Linus. I was already using Subversion for a while
and it was really working great at providing an environment to work together with a lot of people and check out different versions. Anything you
expect from a versioning system was there, all elements for collaborative
work were there. I didn't see the point in changing to something that I didn't
feel as comfortable with, culturally. This question of checking out different
directories of repositories was really important to me. At that time (Git has
evolved a lot since) it was not possible to do that. There were other technical
aspects I was quite keen on. I didn't see why I should go for Git, which was not
offering the same amount of good stuff.
But then there is this aspect of distribution, and that’s not in Subversion.
If some day somebody decides to want a complete copy of an OSP project,
including all it’s history, they would need to ask us or do something complicated to give it to them.
I was not really interested in this ‘spreading the whole repository’. I was
more concerned about working together on a specific project.
It feels like your habit of keeping things online has shifted. From making
an effort afterwards to something that happens naturally, as an integral
part of your practice.
It happened progressively. There is this idea that the Git repository is linked
to the website, which came after. The logic is to keep it all together and
linked, online and alive.
That’s not really true ... it was the dream we had: once we have Git, we
share our files while working on them. We don’t need to have this effort
afterwards of cleaning up the sources and it will be shareable. But it is not
true. If we do not put in an effort to make it shareable, it remains completely
opaque. It still requires an investment of time. I think it takes about 10%
of the time of a project to make it readable from the outside afterwards.
Now, with the connection to our public website, you’re more conscious that all
the files we use are directly published. Before we had a Git web application that
allowed someone to just browse repositories, but it was not visual, so it was hard
to get into it. The Cosic project is a good example. Every time I want to show
the project to someone, I feel lost. There are so many files and you really don’t
know which ones to open.
Maybe, Eric, you can talk about ‘Visual Culture’?
Basically ‘Visual Culture’ is born out of this dream I talked about just now.
That turns out not to be true, but shapes our practice and helps us think
about licensing and structuring and all those interesting questions. I was
browsing through this Git interface that Stéphanie described, and thought
it was a missed opportunity, because here is this graphic design studio,
which publishes all its work while it is working. That has all kinds
of consequences, but if you can't see it, if you don't know anything about
computer programming, you have no clue about what's going on. And also,
because it's completely textual. A .sla file, for example, if you don't know
about Open Source, if you don't know about Scribus, could as well be
salad. It is clear that Git was made for text. The idea was to show all the
information that is already there in a visual form. But an image is an image,
and type is a typeface, and it changes in a visual way. I thought it made
sense for us to do it. We didn't have anyone writing posts on our blog. But
we had all this activity in the Git repository.
It started to give some schematic view on our practice, and renders the current
activity visible, very exciting. But it is also very frustrating because we have lots
of ideas and very little time to implement them. So the ‘Visual Culture’ project
is terribly late on the ball compared to our imagination.
Take for example the foundry. Or the future potential of the 'Iceberg' folders. Or
our blog that is sometimes cruelly missing. We have ways to fill all these functions
with ‘Visual Culture’ but still no time to do it!
In a way you follow established protocols on how Open Source code is
usually published. There should be a license, a README file ... But OSP
also decided to add a special folder, which you called ‘Iceberg’. This is a
trick to make your repository more visual?
Yeah, because even if something is straightforward to visualise, it helps if
you can make a small render of it. But most of the files are an accumulation
of files, like a webpage. The idea is that in the ‘Iceberg’ folder, we can put a
screenshot, or other images ...
We wanted the files that are visible, to be not only the last files added. We wanted
to be able to show the process. We didn’t want it to be a portfolio and just show
the final output. But we wanted to show errors and try-outs. I think it’s not only
related to Git, but also to visual layout. When you want to share software, we
say release early, release often, which is really nice. But it’s not enough to just
release, because you need to make it accessible to other people to understand what
they are reading. It’s like commenting your code, making it ... I don’t want to
say ‘clean’ ... legible, using variable names that people can understand. Because,
sometimes when we code just for ourselves I use French variable names, so that I'm sure
they are not reserved words in the programming language. But then it is not
accessible to many people. So stuff like that.
You have decided to use a tool that’s deeply embedded in the world of
F/LOSS. So I’ve always seen your choice for Git both as a pragmatic
choice as well as a fan choice?
Like as fans of the world of Open Source?
Yes. By using this tool you align yourself, as designers, with people that
develop software.
I’m not sure, I join Pierre on his feelings towards Linus Torvalds, even
though I have less anger at him. But let’s say he is not someone I especially
like in his way of thinking. What I like very much about Git is the distributed aspect. With it you can collaborate without being aligned together.
While I think Linus Torvalds' idea is very liberal and in a way a bit sad, this
idea that you can collaborate without being aligned, without going through
this permission system, is interesting. With Scribus for example, I never
collaborated on it, it's such a pain to go through the process. It's good and
bad. I like the idea of a community which is making decisions together; at
the same time it is so hard to enter this community that you just don't want
to, and give up.
How does it feel, as a group of designer-developers, to adopt workflows,
ways of working, and also a vocabulary that comes from software development?
On the one hand it’s maybe a fan act. We like this movement of F/LOSS
development which is not always given the importance it has in the cultural
world. It’s like saying hey I find you culturally relevant and important. But
there’s another side to it. It’s not just a distant appropriation, it’s also the fact
that software development is such a pervasive force. It’s so much shaping
the world, that I feel I also want to take part in defining what are these
procedures, what are these ways of sharing, what are these ways of doing
things. Because I also feel that if I, as someone from another field, as
a cultural actor, take and appropriate these mechanisms and ways of
doing, I will be able to influence what they are. So there is the fan act, and
there’s also the act of trying to be aware of all the logic contained in these
actions.
And from another side, in the world of graphic design it is also a way to
affirm that we are different. And that we’re really engaged in doing this
and not only about designing nice pictures. That we really develop our own
tools.
It is a way to say: hey, we're not the kind of politically engaged designers with
a different political goal each half month, and then we do a project
about it. It really impacts our ecosystem; we're serious about it.
It’s true that, before we started to use Git, people asked: So you’re called
Open Source Publishing, but where are your sources? For some projects you
could download a .zip file but it was always a lot of trouble, because you needed
to do it afterwards, while you were already doing other projects.
Collaboration started to become a prominent part of the work; working
together on a project. Rather than, oh you do that and when you are finished
you send the file over and I will continue. It’s really about working together on
a project. Even if you work together in the same space, if you don’t have a
system to share files, it’s a pain in the ass.
After using it for a few years, would you say there are parts of Git
where you do not feel at home?
In Git, and in versioning systems in general, there is that feeling that the
latest version is the best. There is an idea of linearity, even though you can
have branches, you still have an idea of linearity in the process.
Yes, that’s true. We did this workshop Please computer let me design, the first
time was in a French school, in French, and the second time for a more European
audience, in English. We made a branch, but then you have the default branch - the English one - and you only see that one, while they are actually on the same level.
So the convention is to always show the main branch, the ‘master’?
In a way there is no real requirement in Git to have a branch called ‘master’.
You can have a branch called ‘English’ and a branch called ‘French’. But
it’s true that all the visualization software we know (GitHub or Gitorious
are ways to visualize the content of a Git repository), you’ll need to specify
which is the branch that is shown by default. And by default, if you don’t
define it, it is ‘master’.
For certain types of things such as code and text it works really well; for
others, like when you're making a visual design, it's still very hard to compare
differences. If I make a poster for example I still make several files instead of
branches, so I can see them together at once, without having to check-out
another branch. Even in websites, if I want to make a layout, I’ll simply make
a copy of the HTML and CSS, because I want to be able to test out and
compare them. It might be possible with branches, it's just too complicated.
Maybe the tools to visualize it are not there ... But it’s still easier to make
copies and pick the one you like.
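For what it is worth, Git can extract a file from another branch without switching the working copy, which is one possible workaround for seeing versions side by side; the speakers do not describe using it, and the file and branch names here are hypothetical:

    # pull the same file out of two branches without checking either out
    git show English:poster.svg > /tmp/poster-english.svg
    git show French:poster.svg  > /tmp/poster-french.svg

    # open both copies next to each other, or list what differs
    inkscape /tmp/poster-english.svg /tmp/poster-french.svg &
    git diff English French -- poster.svg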
It’s quite heavy to go back to another version. Also working collaboratively is
actually quite heavy. For example in workshops, or the ‘Balsamine’ project ... we
were working together on the same files at the same time, and if you want to share
your file with Git you’ll have to first add your file, then commit and pull and
push, which is four commands. And every time you commit you have to write
a message. So it is quite long. So while we were working on the .css for ‘Visual
Culture’, we tried it in Etherpad, and one of us was copying the whole text file
and committing.
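The 'four commands' spelled out, with a file name and message invented for the example:

    git add visualculture.css
    git commit -m "narrower columns"
    git pull        # merge what the others pushed in the meantime
    git push        # publish your own commit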
So you centralized in the end.
It’s more about third-party visual software. Let’s say Etherpad for example,
it’s a versioning system in itself. You could hook into Git through Etherpad
and each letter you type could be a commit. And it would make nonsense
messages but at the same time it would speed up the process to work together. We can imagine the same thing with Git (or any other collaborative
working system) integrated into Inkscape. You draw and every time you save
... At some point Subversion was also a WebDAV server, which means that for
any application it was possible to plug things together. Each time you would
save your file it would make a commit on the server. It worked pretty well
to bring new people into this system because it was just exactly the same in
OpenOffice, which was an open WebDAV client. So it was possible to tell
OpenOffice that where you save is a disk. It was just like saving, and it
was committing.
I really agree. From the experience of working on a typeface together in
Git with students, it was really painful. That’s because you are trying to
do something that generates source code, a type design program generates
source code. You’re not writing it by hand, and if you then have two versions
of the type design program, it already starts to create conflicts that are quite
hard. It’s interesting to bring to models together. Git is just an architecture
on how to start your version, so things could hook into it.
For example with Etherpad, I’ve looked into this API the other day, and
working together with Git, I'm not sure if having every Etherpad revision
directly mapped to a Git revision would make sense if you work on a project
... but at the same time you could have every saved revision mapped to a
Git revision. It's clear Git is made for an asynchronous collaboration process.
So there is Linus in his office, there are patches coming in from different
people. He has the time also to figure out which patch needs to go where.
This doesn’t really work for the Etherpad-style-direct-collaboration. For
me it’s cool to think about how you could make these things work together.
Now I’m working on this collaborative font editor which does that in some
sort of database. How would that work? It would not work if every revision
were in Git. I was thinking you could save, or sort of commit, and
that would put it in a Git repository, which you can pull and push. But if
you want to have four people working together and they start pulling, that
doesn't work in Git.
I never really tried SparkleShare, that could maybe work? SparkleShare makes
a commit every time you save a document. In a way it works more like
Dropbox. Every time you save, it's synchronized with the server directly.
So you need to find a balance between the very conscious commits you
make with Git and the fluidness of Etherpad, where the granularity is
much finer. Sparkleshare would be in between?
I think it would be interesting to have this kind of Sparkleshare behaviour, but
only when you want to work synchronously.
So you could switch in and out of different modes?
Usually SparkleShare is used by people who don't want to get too much involved
in Git and its commands. So it is really transparent: I send my files, it’s synchronized. I think it was really made for this kind of Dropbox behaviour. I think
it would make sense only when you want to have your hands on the process. To
have this available only when you decide, OK I go synchronous. Like you say,
if you have a commit for every letter it doesn’t make sense.
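A rough shell sketch of such an on-demand synchronous mode, assuming inotify-tools is installed; it mimics the SparkleShare/Dropbox behaviour discussed here rather than reproducing how SparkleShare itself works:

    #!/bin/sh
    # naive 'go synchronous' loop: commit and push every saved file,
    # ignoring Git's own writes; stop it to return to deliberate commits
    inotifywait -m -r -e close_write --exclude '\.git/' --format '%w%f' . |
    while read file; do
        git add "$file"
        git commit -m "autosave: $file"
        git push
    done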
It makes sense. A lot of things related to versioning in software development
are meant to track bugs, to track programming choices.
I don’t know for you ... but the way I interact with our Git repository since we
started to work with it ... I almost never went into the history of a project. It’s
just, it really never happened to go back into this history, to check out an old
version.
I do!
A neat feature of Git is the bisect command, to find where it broke.
You can start from an old revision that you know works and then, by
checking out revisions in between, track down the bug.
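For reference, that dialogue with bisect looks roughly like this; the tag v1.0 is a hypothetical 'known good' revision:

    git bisect start
    git bisect bad                 # the version you have now is broken
    git bisect good v1.0           # this older revision still worked
    # Git checks out a revision halfway in between; test it and answer
    git bisect good                # ... or: git bisect bad, until the first bad commit is named
    git bisect reset               # return to where you were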
Can you give a concrete example, where that would be useful, I mean,
not in code.
Not code, okay. That I don’t know.
In a design, like visual design, I think it never happens. It happens on websites,
on tools. Because there is a bug, so you need to come back to see where it broke.
But for a visual design I’m not sure.
It’s true, also because as you said before, with .svg files or .sla files we often
have several duplicates. I sometimes checkout those. But it’s true it’s often
related to merge problems. Or something, you don’t know what to do, so
you’ll just check-out, to go back to an earlier version.
It would be interesting for me to really look at our use of Git and map some
kind of tool on top of a versioning system. Because it’s not even versioning,
it is also a collaborative workflow, and to see what we mean. Just to use
maybe some feature of Git or whatever to provide the services we need and
really see what we exactly work with. And, this kind of thing where we
want to see many versions at the same time, to compare seems important.
Well it’s the kind of thing that could take advantage of a versioning system,
to build.
It is of course a bit strange that if you want to see different versions next
to each other you have to go back in time. It’s a kind of paradox, no?
But then you can’t see them at the same time
Exactly, no.
Because there is no way to visualize your trip back in history.
Well I think, something you could all have some interesting discussion
about, is the question of exchange. Because now we are talking about the
individual. We’ve talked how it’s easier to contribute to Git based projects
but to be accepted into an existing repository someone needs to say okay,
I want it, which is like SVN. What is easier is to publish your whole
Git repository online, with the only difference from the first version
being that you added your change; but it means that in proposing a change
you are already making a new cultural artifact. You're already putting a new
something there. I find this to be a really fascinating phenomenon because
it has all kinds of interesting consequences. Of course we can look at it
the way of, it’s the cold and the liberal way of doing things. Because the
individual is at the center of this, because you are on your own. It’s your
thing in the first place, and then you can see if it maybe becomes someone
else’s thing too. So that has all kinds of coldness about it and it leads to
many abandoned projects and maybe it leads to a decrease of social activity
around specific projects. But there’s also an interesting part of it, where it
actually resembles quite well how culture works in the first place. Because
culture deals with a lot of redundancy, in the sense that we can deal with many
kinds of very similar things. We can have Akzidenz Grotesk, Helvetica and
the Akkurat all at the same time, and they have some kind of weird cultural
lineage thing going on in between them.
Are there any pull requests for OSP?
We did have one.
Eric is right to ask about collaboration with others, not only how to work
internally in a group.
That’s why GitHub is really useful. Because it has the architecture to exchange
changes. Because we have our own server it’s quite private, it’s really hard to
allow anyone to contribute to fonts for example. So we had e-mails: Hey here’s
a new version of the font, I did some glyphs, but also changed the shape of
the A. There we have two different things, new glyphs is one thing, we could say
we take any new glyph. But changing the A, how do you deal with this? There’s
a technical problem, well not technical ...
An architectural problem?
Yeah, we won’t add everyone’s SSH-key to the server because it will be endless
to maintain. But at the same time, how do you accept changes? And then, who
decides what changes will be accepted?
For the foundry we decided to have a maintainer for each font project.
It’s the kind of thing we didn’t do well. We have this kind of administrative
way of managing the server. Well it’s a lot of small elements that all together
make it difficult. Let’s say at some point we start to think maybe we need to
manage our repositories, something a bit more sophisticated then Gitolite. So we
could install something like Gitorious. We didn’t do it but we could imagine
to rebuild a kind of ecosystem where people have their own repositories and
do anything we can imagine on this kind of hosting service. Gitorious is a
Free Software so you can deploy it on your own server. But it is not trivial
to do.
Can you explain the difference between Gitorious and GitHub?
Gitorious is first of all a free version - not a free version of Git, but of GitHub. One
is free and one is not.
Meaning you cannot install GitHub on your own server.
Git is a storage back-end, and Gitorious or GitHub are a kind of web application to interact with repositories and to manage them. And GitHub
is a program and a company deploying these programs to offer both a commercial service and a free-of-charge service. They have a lot of success with
the free service, in a sense. And they make a lot of money providing
the same service, exactly the same, except that it means you can have private
space on the server. It's quite convenient, because the tools are really good
for managing repositories. And Gitorious, I don't exactly know what their
business model is; they made all the source code to run the platform Free
Software. It means they offer slightly less fancy features.
A bit less shiny?
Yeah, because they have less success and so less money to dedicate to the development of the platform. But still it's some kind of easy-to-grasp web interface,
a repository manager. Which is quite cool. We could do that,
install this kind of interface, to allow more people to have their repositories on the OSP server. But here comes the difficult thing: we would need
a bit more resources to run the server to host a lot of repositories. Even at this
moment we sometimes have problems with the server, because it's not
a large server. Nobody at OSP is really a sysadmin, or has time to install
and set up everything nicely, etc. And we would also have to work on the
Gitorious web application to make it a bit more in line with our visual universe. Because now it's really the kind of thing we cannot really associate
ourselves with.
Do you think ‘Visual Culture’ can leverage some of the success of GitHub?
People seem to understand and like working this way.
Well, it depends. We also meet a lot of people who come to GitHub and say,
I don’t understand, I don’t understand anything of this! Because of it’s huge
success GitHub can put some extra effort in visualization, and they started
to run some small projects. So they can do more than ‘Visual Culture’ can
do.
And is this code available?
Some of their projects are Open Source.
Some of their projects are free. Even if we have some things going on in
‘Visual Culture’, we don’t have enough manpower to finalize this project.
The GitHub interface is really specific, really oriented, they manage to do
things like show fonts, show pictures, but I don’t think they can display
.pdf. ‘Visual Culture’ is really a good direction, but it can become obsolete
by the fact that we don’t have enough resource to work on it. GitHub starts
to cover a lot of needs, but always in their way of doing things, so it’s a
problem.
I’m very surprised ... the quality of Git is that it isn’t centralized, and nowadays everything is becoming centralized in GitHub. I’m also wondering
whether ... I don’t think we should start to host other repositories, or maybe
we should, I don’t know.
Yeah, I think we should
You do or you don’t want to become a hosting platform?
No. What I think is nice about GitHub is of course the social aspect around
sharing code. That they provide comments. Which is an extra layer on top
of Git. I’m having fantasies about another group like OSP who would use
Git and have their own server, instead of having this big centralized system.
But still have ways to interact with each other. But I don’t know how.
It would be interesting if it’s distributed without being disconnected.
If it were really easy to set up Git, or a versioning server, that would be
fantastic. But I can remember, as a software developer, when I started to
look for somewhere to host my code, setting up my own server was out of the
question. Because of not having time, no time to maintain, no time to deploy,
etc. At some point we need hosting platforms for ourselves. We have
almost enough to run our own platform. But think of all the people who
can’t afford it.
But in a way you are already hosting other people’s projects. Because
there are quite a few repositories for workshops that actually do not belong
to you.
Yeah, but we moved some of them to GitHub just to get rid of the pain of
maintaining these repositories.
We wanted the students to be independent. To really have them manage
their own projects.
GitHub is easier to manage than our own repository, which is still based on
a lot of files.
For me, if we ever make this hosting platform, it should be something other than
our own website. Because, like you say, it’s kind of centralized in the way we use
it now. It’s all on the Constant server.
Not anymore?
No, the Git repositories are still on the Constant server.
Ah, the Git is still. But they are synced with the OSP server. But still, I can
imagine it would be really nice to have many instances of ‘Visual Culture’
for groups of people running their own repositories.
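Syncing repositories between two servers can be done with plain Git; a sketch with invented remote names and addresses, not OSP's actual setup:

    # push everything from the main repository to a second server
    git remote add osp-mirror git@osp.example.org:visualculture.git
    git push --mirror osp-mirror        # copies all branches and tags

    # or, on the mirror machine, pull everything periodically
    git clone --mirror git@constant.example.org:visualculture.git
    cd visualculture.git && git remote update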
It feels a bit like early days of blogging.
It would be really, really nice for us to allow other people to use our services.
I was also thinking of this, because of this branching stuff. For two reasons,
first to make it easier for people to take advantage of our repository. Just
like branching our repository would be one click, just like in Gitorious or
GitHub. So I have an account and I like this project and I want to change
something, I just click on it. You’re branched into your own account and
you can start to work with it. That’s it, and it would be really convenient
for people who would like to work with our font files etc. And once we
have all these things running on our server we can think of a lot of ideas to
promote our own dynamic over versioning systems. But now we’re really a
bit stuck because we don’t have the tools we would like to have. With the
repositories, it’s something really rigid.
It is interesting to see the limits of what actually can happen. But it is
still better than the usual (In)design practices?
We would like to test GitMX. We don’t know much about it, but we would
like to use it for the pictures in high-resolution, .pdfs. We thought about it
when we were in Seoul, because we were putting pictures on a gallery, and
we were like ah, this gallery. We were wondering, perhaps if GitMX works
well, perhaps it can be separated into different types of content. And then
we can branch them into websites. And perhaps pictures of the finalized
work. In the end we have the ‘Iceberg’ with a lot of ‘in-progress’-pictures,
but we don’t have any portfolio or book. Again because we don’t care much
about this, but in the end we feel we miss it a bit.
A narration ...
... to have something to present. Each time we prepare a presentation, we
need to start again to find the tools and files, and to choose what we
want to send for the exhibition.
It’s really important because at some point, working with Git, I can remember telling people ...
Don’t push images!
I remember.
The repository is there to share the resources. And that’s really where it
shines. And don’t try to put all your active files in it. At some point we miss
this space to share those files.
But an image can be a recipe. And code can be an artifact. For me the
difference is not so obvious.
It is not always so clear. Sometimes the cut-off point is decided by the weight of
the file, so if it is too heavy, we avoid Git. Another is: if it is easy to compile, leave
it out of Git. Sometimes the logic is reversed: if we need it to be online, even if
it is not a source but we simply need to share it, we put it in the Git. Some commits
are also errors. The distinction has been quite organic until now, in my experience. The
closer the practice gets to code, the cleaner the versioning process is.
There is also a kind of performative part of the repository. Where a
commit counts as a proof of something ...
When I presented the OSP’s website, we had some remarks like, ah it’s good we
can see what everybody has done, who has worked.
But strangely so far there were not many reactions from partners or clients
regarding the fact that all the projects could be followed at any stage. Even budget
wise ... Mostly, I think, because they do not really understand how it works.
And sometimes it’s true, it came to my mind, should we really show our website
to clients? Because they can check whether we are working hard, or this week
we didn’t do shit ... And it’s, I think it’s really based on trust and the type of
collaboration you want with your client. Actually collaboration and not a hierarchical relationship. So I think in the end it’s something that we have to work
on. On building a healthy relationship, that you show the process but it’s not
about control. The meritocracy of commits is well known, I think, in platforms
like GitHub. I don’t think in OSP this is really considered at all actually.
It supports some self-time tracking that is nuanced and enriched by e-mail,
calendar events, writing in Etherpads. It gives a feeling of where the activity is,
without following it too closely. A feeling rather than surveillance or meritocracy.
I know that Eric ... because he doesn’t really keep track of his working hours. He
made a script to look into his commit messages to know when he worked on a
project. Which is not always truthful. Because sometimes you make a commit on
some files that you made last week, but forgot to commit. And a commit is a
text message at a certain time. So it doesn’t tell you how much time you spent on
the file.
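Such a script can be little more than a single git log call; a reconstruction under those assumptions, with the author name as a placeholder:

    # dates and messages of one author's commits in this repository
    git log --author="Eric" --date=iso --pretty=format:'%ad  %s'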
Although in the way you decided to visualize the commits, there is a sense
of duration between the last and the commit before. So you have a sense
of how much time passed in between. Are there ways you sometimes
trick the system, to make things visible that might otherwise go missing?
In the messages sometimes, we talk about things we tried and didn’t work.
But it’s quite rare.
I kind of regret that I don’t write so much on the commits. At the beginning
when we decided to publish the messages on the homepage we talked about
this theater dialogue and I was really excited. But in the end I see that I
don’t write as much as I would like.
I think it’s really a question of the third-party programs we use. Our
messages are like a dialogue on the website. But when you write
a commit message you’re not at all in this interface. So you don’t answer
to something. If we would have the same kind of interface we have on the
website, you would realize you can answer to the previous commit message.
You have this sort of narrative thread and it would work. We are in the
commit
126
middle, we have this feeling of a dialogue on one side, but because when
you work, you’re not on the website to check the history. It’s just basically, it
would be about to make things really in line with what we want to achieve.
I commit just when I need to share the files with someone else. So I wait
until the last moment.
To push you mean?
No, to commit. And then I’ve lost track of what I’ve done and then I just
write ...
But it would be interesting to look at the different speeds of collaboration. They might each need another type of commit message.
But it’s true, I must admit that when I start working on a project I don’t read the
last messages. And so, then you lose this dialogue as you said. Because sometimes
I say, Ludi is going to work on it. So I say, OK Ludi it’s your turn now,
but the thing is, if she says that to me I would not know because I don’t read the
commit messages.
I suppose that is something really missing from the Git client. When you
pull, you update your working copy to synchronize with the server, and it just
says which files changed and how many changes there were. But it doesn't give you the
story.
That's what is missing when you pull. Instead of just showing which files
have changed, it should show all the logs from the last time you pulled.
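Git can in fact be asked for that story after a pull, though not by default; a small sketch:

    git pull
    # after a pull that brought in changes, ORIG_HEAD points at where you
    # were before it, so this prints what just came in, oldest first
    git log --reverse ORIG_HEAD..HEAD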
Your earlier point, about recipes versus artifacts. I have something to add
that I forgot. I would reverse the question: what the versioning system
considers to be a recipe is a recipe. I mean, in this context 'a
recipe' is something that works well within the versioning system, such as
the description of your process to get somewhere. And I can imagine it's
something, I would say, that the Git community is trying to achieve:
make it something that you can share easily.
But we had a bit of this discussion with Alex for a reader we made. It is going to
be published, so we have the website with all the texts, and the texts are all under
a free license. But the publisher doesn’t want us to put the .pdfs online. I’m quite
okay with that, because for me it’s a condition that we put the sources online. But
if you really want the .pdf then you can clone the repository and make them
yourself in Scribus. It’s just an example of not putting the .pdf, but you have
everything you need to make the .pdf yourself. For me it’s quite interesting to say
our sources are there. You can buy the book but if you want the .pdf you have
to make a small effort to generate it, and then you can distribute it freely. I
find it quite interesting. Of course the easiest way would be to put up the .pdf, but in this
case we can't, because the publisher doesn't want us to.
But that distinction somehow undervalues the fact that layout for example
is not just an executed recipe, no? I mean, so there is this kind of grey
area in design that is ... maybe not the final result, but also not a sort of
executable code.
We see it with ‘Visual Culture’, for instance, because Git doesn’t make it easy
to work with binaries. And the point of ‘Visual Culture’ is to make .jpegs
visible and all the kind of graphical files we work with. So it’s like we don’t
know how to decide whether we should put for instance .pdfs in the Git
repository online. Because on the one hand it makes it less manageable with
Git to work with. But on the other hand we want to make things visible on
the website.
But it’s also storage-space. If you want to clone it, if you want people to clone
it also you don’t want a 8 gigabyte repository.
I don’t know because it’s not really what OSP is for, but you can imagine, like
Dropbox has been made to easily share large files, or even files in general.
We can imagine that another company will set up something, especially for
graphic designers or the graphic industry. The way GitHub did something
for the development industry. They will come up with solutions for this
very problem.
I just want to say that I think because we’re not a developer group, at the start the
commit messages were a space where you would throw all your anger, frustration.
And we first published a Git log in the Balsamine program, because we saw that.
This was the first program we designed with ConTeXt. So we were manipulating
code for layout. The commit messages were all really funny, because Pierre and
Ludi come from a non-coding world and it was really inspiring and we decided
to put it in the publication. Then we kind of checked: Ludi said two kind of bad
things about the client, but it was okay. Now I think we are more aware that it's
public, we kind of pay attention not to say stuff we don’t mean to ...
It’s not such an exciting space anymore as in the first half year?
It is often very formal and not very exciting, I think. But sometimes I put in
quite some effort just to make clear what I'm trying to share.
And there are also commits that you make for yourself. Because sometimes, even
if you work on a project alone, you still do a Git project to keep track, to have a
history to come back to. Then you write to yourself. I think it’s also something
else. I’ve never tried it.
It’s a lot to ask in a way, to write about what you are doing while you are
doing it.
I think we should pay more attention to the first commit of a project, and
the last. Because it’s really important to start the story and to end it. I speak
about this ‘end’ because I feel overflowed by all these not-ended projects, I’m
quite tired of it. I would like us to find a way to archive projects which are
not alive any more. To find a good way to do it. Because the list of folders
is still growing, and in a way it is okay but a lot of projects are not active.
But it’s hard to know when is the last commit. With the Balsamine project it’s
quite clear, because it’s season per season. But still, we never know when it is the
last one. The last one could be solved by the ‘Iceberg’, to make the last snapshots
and say okay now we make the screenshots of the latest version. And then you close
it ... We wanted that the last one was Hey, we sent the .pdfs to the printer.
But actually we had to send it back another time because there was a mistake.
And then the log didn’t fit on the page anymore.
At the Libre Graphics Meeting 2008, OSP sat down with
Chris Lilley on a small patch of grass in front of the
Technical University in Wroclaw, Poland. Warmed up by
the early May sun, we talked about the way standards are
made, how ‘specs’ influence the work of designers, programmers and managers and how this process is opening up to voices from outside the W3C. Chris Lilley is
trained as a biochemist, and specialised in the application
of biological computing. He has been involved with the
World Wide Web Consortium since the 1990s, headed the
Scalable Vector Graphics (SVG) working group and currently looks after two W3C activity areas: graphics, including PNG, CGM, graphical quality, and fonts, including font formats, delivery, and availability of font software.
I would like to ask you about the way standards are made ... I think there’s a
relation between the way Free, Libre and Open Source software works, and
how standards work. But I am particularly interested in your announcement
in your talk today that you want to make the process of defining the SVG
standard a public process?
Right. So, there’s a famous quote that says that standards are like sausages.
Your enjoyment of them is improved by not knowing how they’re made. 1
And to some extent, depending on the standards body and depending on
what you’re trying to standardize, the process can be very messy. If you
were to describe W3C as a business proposition, it has got to fail. You’re
taking companies who all have commercial interests, who are competing and
you’re putting them in the same room and getting them to talk together and
agree on something. Oddly, sometimes that works! You can sell them the
idea that growing the market is more important and is going to get them
more money. The other way ... is that you just make sure that you get the
managers to sign, so that their engineers can come and discuss standards,
1 Laws are like sausages. It's better not to see them being made. Otto von Bismarck, 1815–1898
and then you get the engineers to talk and the managers are out of the way.
Engineers are much more forthcoming, because they are more interested in
sharing stuff because engineers like to share what they’re doing, and talk
on a technical level. The worst thing is to get the managers involved, and
even worse is to get lawyers involved. W3C does actually have all those
three in the process. Shall we do this work or not is a managerial level that’s
handled by the W3C advisory committee, and that’s where some people
say No, don’t work on that area or We have patents or This is a bad idea or
whatever. But often it goes through and then the engineers basically talk
about it. Occasionally there will be patents disclosed, so the W3C also has
a process for that. The first things done are the 'charters'. The charter
says what the group is going to work on, in broad scope. As soon as you've got
your first draft, that further defines the scope, but it also triggers what is
called an exclusion opportunity, which basically gives the companies I think
ninety days to either declare that they have a specific patent, say what its
number is and say that they exclude it, or not. And if they don't, they've just
given a royalty-free licence to whatever is needed to implement that spec.
The interesting thing is that if they give the royalty-free licence they don’t
have to say which patents they’re licencing. Other standards organizations
build up a patent portfolio, and they list all these patents and they say what
you have to licence. W3C doesn’t do that, unless they’ve excluded it which
means you have to work around it or something like that. Based on what
the spec says, all the patents that have been given, are given. The engineers
don’t have to care. That’s the nice thing. The engineers can just work away,
and unless someone waves a red flag, you just get on with it, and at the end
of the day, it’s a royalty-free specification.
But if you look at the SVG standard, you could say that it’s been quite a
bumpy road 2 ... What kind of work do you need to do to make a successful
standard?
Firstly, you need to agree on what you’re building, which isn’t always firm
and sometimes it can change. For example, when SVG was started the idea
was that it would be just static graphics. And also that it would be animated
using scripts, because with dynamic HTML and whatever, this was ’98, we
were like: OK, we’re going to use scripting to do this. But when we put it
out for a first round of feedback, people were like No! No, this is not good
enough. We want to have something declarative. We don’t want to have to write
a script every time we want something to move or change color. Some of the
feedback, from Macromedia for example was like No, we don’t think it should
have this facility, but it quickly became clear why they were saying that and
what technology they would rather use instead for anything that moved or
did anything useful ... We basically said That’s not a technical comment, that’s
a marketing comment, and thank you very much.
Wait a second. How do you make a clear distinction between marketing and
technical comments?
People can make proposals that say We shouldn’t work on this, we shouldn’t
work on that, but they’re evaluated at a technical level. If it’s Don’t do it
like that because it’s going to break as follows, here I demonstrate it then that’s
fine. If they’re like Don’t do it because that competes with my proprietary
product then it’s like Thanks for the information, but we don’t actually care.
It’s not our problem to care about that. It’s your problem to care about
that. Part of it is sharing with the working group and getting the group
to work together, which requires constant effort, but it’s no different from
any sort of managerial or trust company type thing. There’s this sort of
encouragement in it that at the end of the day you’re making the world a
better place. You’re building a new thing and people will use it and whatever.
And that is quite motivating. You need the motivation because it takes a lot
longer than you think. You build the first spec and it looks pretty good and
you publish it and you smooth it out a bit, put it out for comments and you
get a ton of comments back. People say If you combine this with this with this
then that’s not going to work. And you go Is anyone really going to do that? But
you still have to say what happens. The computer still has to know what
happens even if they do that. Ninety percent of the work is after the first
draft, and it’s really polishing it down. In the W3C process, once you get
to a certain level, you take it to what is euphemistically called the ‘last call’.
This is a term we got from the IETF. 3 It actually means ‘first call’ because
3 The Internet Engineering Task Force, http://www.ietf.org/
you never have just one. It’s basically a formal round of comments. You log
every single comment that’s been made, you respond to them all, people can
make an official objection if you haven’t responded to the comment correctly
etcetera. Then you publish a list of what changes you’ve made as a basis of
that.
What part of the SVG standardization process would you like to make public?
The part that I just said has always been public. W3C publishes specifications on a regular basis, and these are always public and freely available.
The comments are made in public and responded to in public. What hasn’t
been public has been the internal discussions of the group. Sometimes it
can take a long time if you’ve got a lot of comments to process or if there’s a
lot of argumentation in the group: people not agreeing on the direction to
go, it can take a while. From the outside it looks like nothing is happening.
Some people like to follow this at a very detailed level, and blog about it,
and blablabla. Over time, more and more working groups have become public. The SVG group just recently got rechartered and it's now a public group.
All of its minutes are public. We meet for ninety minutes twice a week on
a telephone call. There’s an IRC log of that and the minutes are published
from that, and that’s all public now. 4
Could you describe such a ninety minute meeting for us?
There are two chairs. I used to be the chair for eight years or so, and then
I stepped down. We’ve got two new chairs. One of them is Erik Dahlström
from Opera, and one of them is Andrew Emmons from Bitflash. Both
are SVG implementing companies. Opera on the desktop and mobile, and
Bitflash is just on mobile. They will set out an agenda ahead of time and
say We will talk about the following issues. We have an issue tracker, we have
an action tracker which is also now public. They will be going through the
actions of people saying I’m done and discussing whether they’re actually
done or not. Particular issues will be listed on the agenda to talk about
and to have to agree on, and then if we agree on it and you have to change
the spec as a result, someone will get an action to change that back to the
spec. The spec is held in CVS, so anyone in the working group can edit
it and there is a commit log of changes. When anyone accidentally broke
something or trampled onto someone else’s edit, or whatever - which does
happen - or if it came as the result of a public comment, then there will be
a response back saying we have changed the spec in the following way ... Is
this acceptable? Does this answer your comment?
How many people do take part in such a meeting?
In the working group itself there are about 20 members and about 8 or
so who regularly turn up, every week for years. You know, you lose some
people over time. They get all enthusiastic and after two years, when you
are not done, they go off and do something else, which is human nature.
But there have been people who have been going forever. That’s what you
need actually in a spec, you need a lot of stamina to see it through. It is a
long term process. Even when you are done, you are not done because you’ve
got errata, you’ve got revisions, you’ve got requests for new functionalities
to make it into the next version and so on.
On the one hand you could say every setting of a standard is a violent process,
some organisation forcing a standard upon others, but the process you describe
is entirely based on consensus.
There’s another good quote. Tim Berners Lee was asked why W3C works
by consensus, rather than by voting and he said: W3C is a consensus-based
organisation because I say so, damn it. 5 That’s the Inventor of the Web,
you know ... (laughs) If you have something in a spec because 51% of the
people thought it was a good idea, you don’t end up with a design, you end
up with a bureaucratic type decision thing. So yes, the idea is to work by
consensus. But consensus is defined as: ‘no articulated dissent’ so someone
can say ‘abstain’ or whatever and that’s fine. But we don’t really do it on
a voting basis, because if you do it like that, then you get people trying to
5 Consensus is a core value of W3C. To promote consensus, the W3C process requires Chairs to ensure
that groups consider all legitimate views and objections, and endeavor to resolve them, whether these
views and objections are expressed by the active participants of the group or by others (e.g., another
W3C group, a group in another organization, or the general public). World Wide Web Consortium,
General Policies for W3C Groups, 2005. [Online; accessed 30.12.2014]
make voting blocks and convince other people to vote their way ... it is much
better when it is done on the basis of a technical discussion, I mean ... you
either convince people or you don’t.
If you read about why this kind of work is done ... you find different arguments. From enhancing global markets to: ‘in this way, we will create a
better world for everyone’. In Tim Berners-Lee’s statements, these two are
often mixed. If you for example look at the DIN standards, they are unambiguously put into the world to help and support business. With Web
Standards and SVG, what is your position?
Yes. So, basically ... the story we tell depends on who we are telling it to and
who is listening and why we want to convince them. Which I hope is not as
duplicitous as it may sound. Basically, if you try to convince a manager that
you want 20% time of an engineer for the coming two years, you are telling
them things to convince them. Which is not untrue necessarily, but that is
the focus they want. If you are talking to designers, you are telling them how
that is going to help them when this thing becomes a spec, and the fact that
they can use this on multiple platforms, and whatever. Remember: when
the web came out, to exchange any document other than plain text was extremely difficult. It meant exchanging word processor formats, and you had
to know on what platform you were on and in what version. The idea that
you might get interoperability, and that the Mac and the PC could exchange
characters that were outside ASCII was just pie in the sky stuff. When we
started, the whole interoperability and cross-platform thing was pretty novel
and an untested idea essentially. Now it has become pretty much solid. We
have got a lot of focus on disabled accessibility, and also internationalization
which is if you like another type of accessibility. It would be very easy for
an organisation like W3C, which is essentially funded by companies joining it, and therefore they come from technological countries ... it would be
very easy to focus on only those countries and then produce specifications
that are completely unusable in other areas of the world. Which still does
sometimes happen. This is one of the useful things of the W3C. There is
the internationalization review, and an accessibility review and nowadays also
a mobile accessible review to make sure it does not just work on desktops.
Some organisations make standards basically so they can make money. Some
of the ISO 6 standards, in particular the MPEG group, their business model
is that you contribute an engineer for a couple of years, you make a patent
portfolio and you make a killing off licencing it. That is pretty much to keep
out the people who were not involved in the standards process. Now, W3C
takes quite an opposite view. The Royalty-Free License 7 for example, explicitly says: royalty-free to all. Not just the companies who were involved
in making it, not just companies, but anyone. Individuals. Open Source
projects. So, the funding model of the W3C is that members pay money,
and that pays our salaries, basically. We have a staff of 60 odd or so, and
that’s where our salaries come from, which actually makes us quite different
from a lot of other organisations. IETF is completely volunteer based so
you don’t know how long something is going to take. It might be quick, it
might be 20 years, you don’t know. ISO is a national body largely, but the
national bodies are in practice companies who represent that nation. But in
W3C, it’s companies who are paying to be members. And therefore, when
it started there was this idea of secrecy. Basically, giving them something
for their money. That’s the trick, to make them believe they are getting
something for their money. A lot of the ideas for W3C came from the
X Consortium 8 actually, it is the same people who did it originally. And
there, what the meat was ... was the code. They would develop the code and
give it to the members of the X Consortium three months before the public
got it and that was their business benefit. So that is actually where our ‘three
month rule’ comes from. Each working group can work for three months
but then they have to go public, have to publish. ‘The heartbeat rule’, we
call it now. If you miss several heartbeats then you’re dead. But at the same
time if you’re making a spec and you’re growing the market then there’s a
need for it to be implemented. There’s an implementation page where you
encourage people to implement, you report back on the implementations,
you make a test suite, you show that every feature in the spec that there's a test for ... at least two implementations pass it. You're not showing that everyone can use it at that stage. You're showing that someone can read the spec and implement it. If you've been talking to a group of people for four years, you have a shared understanding with them and it could be that the spec isn't understandable without that. The implementation phase lets you find out that people can actually implement it just by reading the spec. And often there are changes and clarifications made at that point. Obviously one of the good ways to get something implemented is to have Open Source people do it and often they're much more motivated to do it. For them it's cool when it is new: If you give me this new feature, it's great, we'll do it, rather than: Well, that doesn't quite fit into our product plans until the next quarter, and all that sort of stuff. Up until now, there hasn't really been a good way for the Open Source people to get involved. They can comment on specs but they're not involved in the discussions. That's something we're trying to change by opening up the groups, to make it easier for an Open Source group to contribute on an ongoing basis if they want to. Right from the beginning part, to the end where you're polishing the tiny details in the corner.
6  International Organization for Standardization (ISO): International Standards for Business, Government and Society. http://www.iso.org
7  Overview and Summary of W3C Patent Policy: http://www.w3.org/2004/02/05-patentsummary.html
8  The purpose of the X Consortium was to foster the development, evolution, and maintenance of the X Window System, a comprehensive set of vendor-neutral, system-architecture neutral, network-transparent windowing and user interface standards. http://www.x.org/wiki/XConsortium
I think the story of web fonts shows how an involvement of the Open Source
people could have made a difference.
When web fonts were first designed, essentially you had Adobe and Apple
pushing one way, Bitstream pushing the other way, both wanting W3C to
make their format the one and only official web format, which is why you
ended up with a mechanism to point to fonts without saying what format
was required. And then you had Netscape 4, which pointed off to a
Bitstream format, and you had IE4 which pointed off to this Embedded
Open Type (EOT) format. If you were a web designer, you had to have two
different tools, one of which only worked on a Mac, and one of which only
worked on PC, and make two different fonts for the same thing. Basically
people wouldn't bother. As Håkon mentioned, the only people who actually use that right now are in countries where the local language is not well provided for by the operating systems. Even now, things like Windows XP and Mac OS X don't fully support some of the Indian languages.
But they can get it into web pages by using these embedded fonts. Actually
the other case where it has been used a lot, is SVG, not so much on the
desktop though it does get used there but on mobiles. On the desktop
you’ve typically got 10 or 20 fonts and you got a reasonable coverage. On a
mobile phone, depending on how high or low ended it is, you might have
a single font, and no bold, and it might even be a pixel-based font. And
if you want to start doing text that skews and swirls, you just can’t do that
with a pixel-based font. So you need to download the font with the content,
or even put the font right there in the content just so that they can see
something.
I don’t know how to talk about this, but ... envisioning a standard before
having any concrete sense of how it could be used and how it could change the
way people work ... means you also need to imagine how a standard might
change, once people start implementing it?
I wouldn’t say that we have no idea of how it’s going to work. It’s more a
case that there are obvious choices you can make, and then not so obvious
choices. When work is started, there’s always an idea of how it would fit in
with a lot of things and what it could be used for. It’s more the case that
you later find that there are other things that you didn’t think of that you
can also use it for. Usually it is defined for a particular purpose and then you find that it can also do these other things.
Isn’t it so that sometimes, in that way, something that is completely marginal,
becomes the most important?
It can happen, yes.
For me, SVG is a good example of that. As I understood it, it was planned
to be a format for the web. And as I see it today, it’s more used on the
desktop. I see that on the Linux desktop, for theming, most internals are
using SVG. We are using Inkscape for SVG to make prints. On the other
hand, browsers are really behind.
Browsers are getting there. Safari has got reasonably good support. Opera
has got very good support. It really has increased a lot in the last couple
of years. Mozilla Firefox less so. It’s getting there. They’ve been at it
for longer, but it also seems to be going slower. The browsers are getting
there. The implementations which I showed a couple of days ago, those
were mobile implementations. I was showing them on a PC, but they were
specially built demos. Because they’re mobile, it tends to move faster.
But you still have this problem that Internet Explorer is a slow adopter.
Yes, Internet Explorer has not adopted a lot of things. It’s been very slow
to do CSS. It hasn’t yet done XHTML, although it has shipped with an
XML parser since IE4. It hasn’t done SVG. Now they’ve got their own
thing ... Silverlight. It has been very hard to get Microsoft on board and
getting them doing things. Microsoft were involved in the early part of
SVG but getting things into IE has always been difficult. What amazes me
to some extent, is the fact that it’s still used by about 60-70% of people.
You look at what IE can do, and you look at what all the other browsers
can do, and you wonder why. The thing is ... it is still a brake and some
technologies don’t get used because people want to make sure that everyone
can see them. So they go down to the lowest common denominator. Or
they double-implement. Implement something for all the other browsers,
and implement something separate for IE, and then have to maintain two
different things in parallel, and tracking revisions and whatever. It’s a nightmare. It’s a huge economic cost because one browser doesn’t implement the
right web stuff. (laughing, sighing)
My question would be: what could you give us as a kind of advice? How
could we push this adoption where we are working? Even if it only is the
people of Firefox to adopt SVG?
Bear in mind that Firefox has this thing of Trunk builds and Branch builds
and so on. For example when Firefox 3 came out, well the Beta is there.
Suddenly there’s a big jump in the SVG stuff because all the Firefox 2 was
on the same branch as 1.5, and the SVG was basically frozen at that point.
The development was ongoing but you only saw it when 3 came out. There
were a bunch of improvements there. The main missing features are the
animation and the web fonts and both of those are being worked on. It’s
interesting because both of those were on Acid 3. Often I see an acceleration
of interest in getting something done because there’s a good test. The Acid
Test 10 is interesting because it’s a single test for a huge slew of things all at
once. One person can look at it, and it’s either right or it’s wrong, whereas
the tests that W3C normally produces are very much like unit tests. You
test one thing and there’s like five hundred of them. And you have to go
through, one after another. There’s a certain type of person who can sit
through five hundred tests on four browsers without getting bored but most
people don’t. There’s a need for this sort of aggregative test. The whole
thing is all one. If anything is wrong, it breaks. That’s what Acid is designed
to do. If you get one thing wrong, everything is all over the place. Acid 3
was a submission-based process and like a competition, the SVG working
group was there, and put in several proposals for what should be in Acid 3,
many of which were actually adopted. So there’s SVG stuff in Acid 3.
So ... who started the Acid Test?
Todd Fahrner designed the original Acid 1 test, which was meant to exercise
the tricky bits of the box-model in CSS. It ended up like a sort of Mondrian diagram, 11 with red squares and blue lines and stuff. But there was a big scope
for the whole thing to fall apart into a train wreck if you got anything
wrong. The thing is, a lot of web documents are pretty simple. They got
paragraphs, and headings and stuff. They weren’t exercising very much the
model. Once you got tables in there, they were doing it a little bit more. But
it was really when you had stuff floated to one side, and things going around
or whatever, and that had something floated as well. It was in that sort of
case where it was all breaking, where people wouldn’t get interoperability.
It was ... the Web Standards Project 12 who proposed this?
Yes, that’s right.
10  The Acid 3 test (http://acid3.acidtests.org) is comprehensive in comparison to more detailed, but fragmented SVG tests: http://www.w3.org/Graphics/SVG/WG/wiki/Test_Suite_Overview#W3C_Scalable_Vector_Graphics_.28SVG.29_Test
11  Acid Test Gallery: http://moonbase.rydia.net/mental/writings/box-acid-test/
12  The Web Standards Project is a grassroots coalition fighting for standards which ensure simple, affordable access to web technologies for all: http://www.webstandards.org/
It didn’t come from a standards body.
No, it didn’t come from W3C. The same for Acid 2, Håkon Wium Lie was
involved in that one. He didn’t blow his own trumpet this morning, but
he was very much involved there. Acid 3 was Ian Hickson, who put that
together. It’s a bit different because a lot of it is DOM scripting stuff. It
does something, and then it inquires in the DOM to see if it has been done
correctly, and it puts that value back as a visual representation so you can
see. It’s all very good because apparently it motivates the implementors to
do something. It’s also marketable. You can have a blog posting saying we
do 80% of Acid Test. The public can understand that. The people who are
interested can go Oh, that’s good.
It becomes a mark of quality.
Yes, it's marketing. It's like processor speed in PCs and things. There is so much technology in computers, so then what do you market it on? Well
it’s got that clock speed and it’s got this much memory. OK, great, cool.
This one is better than that one because this one’s got 4 gigs and that one’s
got 2 gigs. It’s a lot of other things as well, but that’s something that the
public can in general look at and say That one is better. When I mentioned
the W3C process, I was talking about the engineers, managers. I didn’t talk
about the lawyers, but we do have a process for that as well. We have a patent
advisory group that can be formed. If someone has made a claim and it's disputed
then we can have lawyers talking among themselves. What we really don’t
have in that is designers, end-users, artists. The trick is to find out how to
represent them. The CSS working group tried to do that. They brought in
a number of designers, Jeff Veen 13 and these sort of people were involved
early on. The trouble is that you’re speaking a different language, you’re
not speaking their language. When you’re having weekly calls ... Reading a
spec is not bedtime reading, and if you’re arguing over the fine details of a
sentence ... (laughing) well, it will put you to sleep straight away. Some of
the designers are like: I don’t care about this. I only want to use it. Here’s what
I want to be able to do. Make it that I can do that, but get back to me when it’s
done.
13  Jeff Veen was a designer at Wired magazine in those days: http://adaptivepath.com/aboutus/veen.php
That’s why the idea of the Acid Test is a nice breed between the spec and
the designer. When I was seeing the test this morning, I was thinking
that it could be a really interesting work to do, not to really implement it
but to think about with the students. How would you conceive a visual
test? I think that this could be a really nice workshop to do in a university
or in a design academy ...
It’s the kind of reverse-reverse engineering of a standard which could help
you understand it on different levels. You have to imagine how wild you
can go with something. I talk about standards, and read them - not before
going to bed - because I think that it's interesting to see that while they're quite pragmatic in how they're put together, they have an effect on the practice of, for example, designers. Something that I have been following with interest is how the concept of separating form and content has become extremely
influential in design, especially in web design. Trained as a pre-web designer,
I’m sometimes a bit shocked by the ease with which this separation is made.
That’s interesting. Usually people say that it’s hard or impossible, that you
can’t ever do it. The fact that you’re saying that it’s easy or that it comes
naturally is interesting to me.
It has been appropriated by designers as something they want. That’s why it’s
interesting to look at the Web Standards Project where designers really fight
for a separation of content and form. I think that this is somehow making
the work of designers quite ... boring. Could you talk a bit about how this is
done?
It’s a continuum. You can’t say that something is exactly form or exactly
presentation because there are gradations. If you take a table, you’ve already
decided that you want to display the material in a tabular way. If it’s a real
table, you should be able to transpose it. If you take the rows and columns,
and the numbers in the middle then it should still work. If you’ve got
‘sales’ here and if you’ve got ‘regions’ there, then you should still be able to
transpose that table. If you’re just flipping it 90 degrees then you are using
it as a layout grid, and not as a table. That’s one obvious thing. Even then,
deciding to display it as a tabular thing means that it probably came from a
much bigger dataset, and you’ve just chosen to sum all of the sales data over
one year. Another one: you have again the sales data, you could have it as pie
chart, but you could also have it as a bar chart, you could have it in various
other ways. You can imagine that what you would do is ship some XML
that has that data, and then you would have a script or something which
would turn it into an SVG pie chart. And you could have a bar chart, or you
could also say show me only February. That interaction is one of the things
that one can do, and arguably you’re giving it a different presentational form.
It’s still very much a gradation. It’s how much re-styleability remains. You
can't ever have complete separation. If I'm describing a company and [1] I want to do a marketing brochure, [2] I want to do an annual report for the shareholders, and [3] I want to do an internal document for the engineering team, I can't have the same content across all three and just put styling on it. The type of thing I'm doing is going to vary for those
audiences, as will the presentation. There’s a limit. You can’t say: here’s the
überdocument, and it can be styled to be anything. It can’t be. The trick is
to not mingle the style of the presentation when you don’t need to. When
you do need to, you’re already halfway down the gradient. Keep them as far
apart as you can, delay it as late as possible. At some point they have to be
combined. A design will have to go into the crafting of the wording, how
much wording, what voice is used, how it’s going to fit with the graphics
and so on. You can’t just slap random things together and call it design,
it looks like a train wreck. It’s a case of deferment. It’s not ever a case of
complete separation. It’s a case of deferring it and not tripping yourself up.
Just simple things like bolds and italics and whatever. Putting those in as
emphasis and whatever because you might choose to have your emphasized
words done differently. You might have a different font, you might have a
different way of doing it, you might use letter-spacing, etc. Whereas if you
tag that in as italics then you’ve only got italics, right? It’s a simple example
but at the end of the day you’re going to have to decide how that is displayed.
You mentioned print. In print no one sees the intermediate result. You see
ink on paper. If I have some Greek in there and if I’ve done that by actually
typing in Latin letters on the keyboard and putting a Greek font on it and
out comes Greek, nobody knows. If it’s a book that’s being translated, there
might be some problems. The more you’re shipping the electronic version
around, the more it actually matters that you put in the Greek letters as
Greek because you will want to revise it. It matters that you have flowing
text rather than text that has been hand-ragged because when you put in
the revisions you’re going to have to re-rag the entire thing or you can just
say re-flow and fix it up later. Things like that.
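The XML-to-SVG idea mentioned above could look roughly like the following Python sketch. Nothing here comes from the interview or from W3C: the <sales>/<region> element names and the figures are invented, and the script writes the SVG by hand simply to show the same data being given one presentational form among several.

    # A minimal sketch of "ship the data as XML, give it a presentation with a
    # script". The element names and the numbers are hypothetical.
    import xml.etree.ElementTree as ET

    DATA = """<sales year="2008">
      <region name="North" total="120"/>
      <region name="South" total="80"/>
      <region name="East" total="150"/>
    </sales>"""

    def bar_chart(xml_text, bar_width=40, gap=20, height=200):
        root = ET.fromstring(xml_text)
        regions = [(r.get("name"), float(r.get("total"))) for r in root.findall("region")]
        top = max(total for _, total in regions)
        parts = []
        for i, (name, total) in enumerate(regions):
            h = height * total / top              # scale each bar to the tallest value
            x = i * (bar_width + gap)
            parts.append('<rect x="%d" y="%.1f" width="%d" height="%.1f" fill="grey"/>'
                         % (x, height - h, bar_width, h))
            parts.append('<text x="%d" y="%d" font-size="12">%s</text>'
                         % (x, height + 15, name))
        width = len(regions) * (bar_width + gap)
        return ('<svg xmlns="http://www.w3.org/2000/svg" width="%d" height="%d">'
                % (width, height + 20)) + "".join(parts) + '</svg>'

    print(bar_chart(DATA))  # the same data could as easily feed a pie chart or a table

Swapping bar_chart for a pie-chart function, or for a routine that writes an HTML table, is the kind of late re-styling the answer above is pointing at.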
The idea of time, and the question of delay is interesting. Not how, but when you
enter to fine-tune things manually. As a designer of books, you’re always facing
the question: when to edit, what, and on what level. For example, we saw this
morning 14 that the idea of having multiple skins is really entering the publishing
business, as an idea of creativity. But that’s not the point, or not the complete
point. When is it possible to enter the process? That’s something that I think we
have to develop, to think about.
The other day there was a presentation by Michael Dominic Kostrzewa 15
that shocked me. He is now working for Nokia, after working for Novell
and he was explaining how designers and programmers were fighting each
other instead of fighting the 'real villain', as he said, who were the managers. What was really interesting was how this division between content and style was also overlapping with a kind of political or socio-organizational divide within companies where you need to assign roles, borders, responsibilities to different people. What was really frightening from the talk was
that you understood that this division was encouraging people not to try
and learn from each other’s practice. At some point, the designer would
come to the programmer and say: In the spec, this is supposed to be like this
and I don’t want to hear anything about what kind of technical problems you
face.
Designers as lawyers!
Yes ... and the programmer would say: OK, we respect the spec, but then don't expect anything else from us. This kind of behaviour, in the end,
blocks a lot of exchange, instead of making a more creative approach
possible.
14  Andy Fitzsimon: Publican, the new Open Source publishing tool-chain (LGM 2008) http://media.river-valley.tv/conferences/lgm2008/quicktime/0201-Andy_Fitzsimon.html
15  Michael Dominic Kostrzewa: Programmers hell: working with the UI designer (LGM 2008)
I read about (and this is before skinning became more common) designers
doing some multimedia things at Microsoft. You had designers and then
there were coders. Each of them hated the other ones. The coders thought
the designers were idiots who lived in lofts and had found objects in their
ears. The designers thought that the programmers were a bunch of socially
inept nerds who had no clue and never got out in sunlight and slept in their
offices. And since they had that dynamic, they would never explain to each
other ( ... )
(policeman arrives)
POLICEMAN:
Do you speak English?
Yes.
POLICEMAN:
You must go from this place because there’s a conference.
Yes, we know. We are part of this conference (shows LGM badge).
POLICEMAN:
We had a phone call that here’s a picnic. I don’t really see a picnic ...
We’re doing an interview.
POLICEMAN:
It looks like a picnic, and professors are getting nervous. You must go sit
somewhere else. Sorry, it is the rules. Have a nice day!
At the Libre Graphics Meeting 2008, OSP picks up a conversation that Harrison allegedly started in a taxi in Montreal, a year
earlier. We meet font designer and developer Dave Crossland
in a noisy food court to speak about his understanding of the
intertwined histories of typography and software, and the masters course in type design at the Department of Typography at the
University of Reading. Since the interview, a lot has happened.
Dave finished his typeface Cantarell and moved on to consult
the Google Web Fonts project, commissioning new typefaces
designed for the web. He is also currently offering lectures on
typeface design with Free Software.
Harrison (H), Ludivine Loiseau (LL) and Dave Crossland (DC).
1, 2. and now all: Hello Dave.
Hellooo ...
Alright!
H
Well, thank you for taking a bit of time with us for the interview. First thing is maybe to set a kind of context of your situation, your current situation. What you've done before. Why are you setting fonts and these kind of things?
DC
Oh yes, yeah. Well, I take it quite far back, when I was a teenager. I
was planning to do computer science university studying like mathematics
and physics in highschool. I needed some work experience. I decided I
didn’t want to work with computers. So I dropped maths and physics and
I started working at ... I mean I started studying art and design, and also
socio-linguistics in highschool. I was looking at going to Fine Arts but I
wasn’t really too worried about if I could get a job at the end of it, because
I could get a job with computers, if I needed to get a job. So I studied that
at my school for like a one year course, after my school. A foundation year,
and the deal with that is that you study all the different art and design disciplines. Because in highschool you don’t really have the specialities where you
specifically study textile or photography, not every school has a darkroom,
schools are not well equipped.
You get to experience all these areas of design and in that we studied graphic
design, motion graphics and I found in this a good opportunity to bring together the computer things with fine arts and visual arts aspects. In graphic
design in my school it was more about paper, it had nothing to do with
computers. In art school, that was more the case. So I grew into graphic
design.
(Ordering coffee and change of background music.) Oh yeah, African beats!
So, yes. I was looking at graphic design that was more computer based than
in art school. I wasn’t so interested in like regular illustration as a graphic
design. Graphic design has really got three purposes: to persuade people,
that’s advertising; to entertain people, movie posters, music album covers,
illustration magazines; and there is also graphic design to inform people,
in England it’s called ‘information design’, in the US it’s called ‘information
architecture' ... structuring websites, information design. Obviously a big
part of that is typography, so that’s why I got interested in typography, via
information design. I studied at Ravensbourne college in London, what
I applied for was graphic information design. I started working at the IT
department, and that really kept me going to that college, I wasn’t so happy
with the direction of the courses. The IT department there was really really
good and I ended up switching to the interaction design course, because that
had more freedom to do the kind of typographic work I was interested in.
So I ended up looking at Free Software design tools because I became frustrated by the limitations of the Adobe software which the college was using, just what everybody used. And at that point I realized what 'software freedom' meant. I've been using Debian since I was like a teenager,
but I hadn’t really looked to the depth of what Free Software was about. I
mean back in the nineties Windows wasn’t very good but probably at that
time 2003-2004, MacOSX came out and it was getting pretty nice to use.
I bought a Mac laptop without really thinking about it and because it was
a Unix I could use the software like I was used to do. And I didn’t really
think about the issues with Free Software, MacOSX was Unix so it was the
same I figured. But when I started to do my work I really stood against the
limitations of Adobe software, specifically in parallel publishing which is
when you have the same basic informations that you want to communicate
in different mediums. You might want to publish something in .pdf, on the
web, maybe also on your mobile phone, etc. And doing that with Adobe
software back then was basically impossible. I was aware of Free Software
design tools and it was kind of obvious that even if they weren’t very pushed
by then they at least had the potential to be able to do this in a powerful
way. So that’s what I figured out. What that issue with Free Software really
meant. Who’s in control of the software, who decides what it does, who
decides when it’s going to support this feature or that feature, because the
features that I wanted, Adobe wasn’t planning to add them. So that’s how I
got interested in Free Software.
When I graduated I was looking for something that I could contribute in
this area. And one of the Scribus guys, Peter Linnell, made an important
post on the Scribus blog. Saying, you know, the number one problem
with Free Software design is fonts, like it’s dodgy fonts with incorrect this,
incorrect that, have problems when printed as well ... and so yeah, I felt
woa, I have a background in typography and I know about Free Software,
I could make contributions in fonts. Looking into that area, I found that
there were some postgraduate courses you can study in Europe. There's
two, there is one at The Hague in The Netherlands and one at Reading.
They’re quite different courses in their character and in how much they cost
and how long they last for and what level of qualification they are. But
they’re both postgraduate courses which focus on typeface design and font
software development. So if you're interested in that area, you can really
concentrate for about a year and bring your skills up to a high professional
level. So I applied to the course at Reading and I was accepted there and
I’m currently studying there part time. I’m studying there to work on Free
Software fonts. So that’s the full story of how I ended up in this area.
H
Excellent! Last time we met, you summarized in a very relevant way the history of font design software, which is a proof by itself that everything is related with fonts and this kind of small networks, and I would like you to summarize it again.
(laughing)
DC
Alright. In that whole journey of getting into this area of parallel publishing and automated design, I was asking around for people who
worked in that area because at that time not many people had worked in
parallel publishing. It’s a lot of a bigger deal now, especially in the Free
Software community where we have Free Software manuals translated into
many languages, written in .doc and .xml and then transformed into print
and web versions and other versions. But back then this was kind of a new
concept, not all people worked on it. And so, asking around, I heard about
the department of typography at the university of Reading. One of the lecturers there, actually the lecturer of the typeface design course put me on
to a designer in Holland, Petr van Blokland. He’s a really nice guy, really
friendly. And I dropped him an e-mail as I was in Holland that year – just
dropped by to see him and it turned out he’s not only involved in parallel
publishing and automated design, but also in typedesign. For him there is
really no distinctions between type design and typography. It’s kind of like a
big building – you have the architecture of the building but you can also go
down into the bricks. It’s kind of like that with typography, the type design
is all these little pieces you assemble to create the typography out of. He's
an award-winning typeface designer and typographer and he was involved
in the early days of typography very actively. He kind of explained me the
whole story of type design technology.
(Coffee delivery and jazz music)
So, the history of typography actually starts with Free Software, with Donald
Knuth and his TeX. The TeX typesetting system has its own font software
or font system called Metafont. Metafont is a font programming language,
an algebraic programming language describing letter forms. It really gets
into the internal structure of the shapes. This is a very non-visual programming approach to it where you basically use this programming language to
describe with algebra how the shapes make up the letters. If you have a
capital H, you got essentially 3 lines, two vertical stems and a horizontal crossbar and so, in algebra you can say that you've got one ratio which is
the height of the vertical lines and another ratio which is the width between
them and another ratio which is the distance between the top point and the
middle point of the crossbar and the bottom point. By describing all of that
in algebra, you really describe the structure of that shape and that gives you
a lot of power because it means you can trace a pen nib object over that skeleton to generate the final typeform and so you can apply variations, you can rotate the pen nib – you can have different pen nib shapes. And you can
have a lot of different typefaces out of that kind of source code. But that
approach is not a visual approach, you have to take it with a mathematical
mind and that isn’t something which graphic designers typically have as a
strong part of their skill set.
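To make the algebra concrete, here is a rough sketch in Python – not Metafont code, and not anything from the interview – of how a capital H could be described by ratios rather than by drawn outlines; the ratio names and values are invented for illustration.

    # Not Metafont: a hypothetical Python sketch of describing a capital H
    # with ratios, in the spirit of what is explained above.
    def h_skeleton(em=1000, cap_ratio=0.7, width_ratio=0.6, bar_ratio=0.5):
        """Return the three strokes of a capital H as lists of (x, y) points."""
        cap = em * cap_ratio          # height of the two vertical stems
        width = cap * width_ratio     # distance between the stems
        bar_y = cap * bar_ratio       # vertical position of the crossbar
        left = [(0, 0), (0, cap)]
        right = [(width, 0), (width, cap)]
        bar = [(0, bar_y), (width, bar_y)]
        return [left, right, bar]

    # Changing one ratio regenerates a whole family of related skeletons;
    # a pen-nib shape would then be traced along these lines to get an outline.
    for stroke in h_skeleton(cap_ratio=0.75, width_ratio=0.5):
        print(stroke)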
The next step was describing the outline of a typeface, and the guy who
did this was working, I believe, at URW. He invented a digital typography
system or type design program called Ikarus. The rumor is it's called Ikarus
because it crashed too much. Peter Karow is this guy. He was the absolute
unknown real pioneer in this area. They were selling this proprietary software powered by a tablet, with a drawing pen for entering the points and it
used its own kind of spline-curve technology.
This was very expensive – it ran on DMS computers and URW was making
a lot of money selling those mini computers in well I guess late 70s and
early 80s. And there was a new small home computer that came out called
the Apple Macintosh. This was quite important because not only was it a
personal computer. It had a graphical user interface and also a printer, a laser
writer which was based on the Adobe PostScript technology. This was what
made desktop publishing happen. I believe it was a Samsung printer revised
by Apple and Adobe’s PostScript technology. Those three companies, those
three technologies was what made desktop publishing happen. Petr van
Blokland was involved in it, using the Ikarus software, developing it. And
so he ported the program to the Mac. So Ikarus M was the first font
editor for personal computers and this was taken on by URW but never
really promoted because the ... Mac cost not a lot of money compared to those
big expensive computers. So, Ikarus M was not widely distributed. It’s
kind of an obvious idea – you know you have those innovative computers
doing graphic interfaces and laser printing and several different people had
several different ideas about how to employ that. Obviously you had John
Warnock within Adobe and at that point Adobe was a systems company,
they made this PostScript system and these components, they didn’t make
any user applications. But John Warnock – and this is documented in the
book on the Adobe story – he really pushed within the company to develop
Adobe Illustrator, which allowed you to interactively edit PostScript code and do vector drawings. That was the kind of illustration and graphic design which we mentioned earlier. Then there was the ... page layout sort of thing, and that was taken care of by a guy called Paul Brainerd, whose company Aldus made PageMaker. That did similar kinds of things to Illustrator, but focused on page layout and typography, text layout
rather than making illustrations. So you had Illustrator and PageMaker and
this was the beginning of the desktop publishing tool-chain.
H
When was it?
DC
This is in the mid-eighties. The Mac came out in 1984.
Pierre Huyghebaert (PH)
Illustrator in 1986 I think.
DC
Yeah. And then the Apple LaserWriter, which is I believe a Samsung printer, came out in 1985, and I believe the first edition of Illustrator was in 1988 ...
PH
No, I think Illustrator 1 was in 1986.
DC
OK, if you read the official Adobe story book, it's fully documented. 1
H
It's interesting that it follows so quickly after the Macintosh.
DC
Yes! That's right. It all happened very quickly because Adobe and Apple, with PostScript and the MacOS, had the infrastructure there that they could build on top of. And that's a common thing we see played out over and over ... Things are developed quite slowly when they
are getting the infrastructure right, and then when the infrastructure is in
place you see this burst of activity where people can slot it together very
quickly to make some interesting things. So, you had this other guy called
Jim von Ehr and he saw the need for a graphical user interface to develop
fonts with and so he founded a small company called Altsys and he made a
program called Fontographer. So that became the kind of de-facto standard
font editing program.
PH
And before that, do you know what font design software Adobe designers used?
DC
I don't know. Basically when Adobe made PostScript for the Apple
LaserWriter then they had the core 35 PostScript fonts, which is about
a thousand families, 35 different weights or variants of the fonts. And I
believe that those were from Linotype. Linotype developed that in collaboration with Adobe, I have no idea about what software they used, they
may have had their own internal software. I know that before they had
Illustrator they were making PostScript documents by hand like TeX, programming PostScript source code. It might have been in a very low tech way. Because those were the core fonts that have been used in PostScript.
1  Pamela Pfiffner. Inside the Publishing Revolution: The Adobe Story. Adobe Press, 2008
So you had Fontographer and this is yeah I mean a GUI application for
home computers to make fonts with. Fontographer made the early-90s David Carson graphic design posters possible. Because it meant that anybody could start
making fonts not only people that were in the type design guild. That all
David Carson kind of punk graphic design, it’s really because of Desktop
publishing and specifically because of Fontographer. Because that allowed
people to make these fonts. Previous printing technologies wouldn’t allow
you to make these kinds of fonts without extreme efforts. I mean a lot of the
effects you can do with digital graphics you can’t do without digital graphics
– air brushing sophisticated effects like that can be achieved but it’s really a
lot of effort.
So going back to the guys from Holland, Petr has a younger brother called
Erik and he went to the college at the Royal Academy of Art, the KABK, in The Hague with a guy who is Just van Rossum and he's the younger
brother of Guido van Rossum who is now quite famous because he’s the guy
who developed and invented Python. In the early 90s Jim von Ehr is developing Fontographer, and Fontographer 4 comes out and Petr and Just and
Erik managed to get a copy of the source code of Fontographer 3 which is the
golden version that we used, like Quark, that was what we used throughout
most of the 90s and so they started adding things to that to do scripting on
Fontographer with Python and this was called Robofog, and that was still
used until quite recently, because it had features no one has ever seen anywhere else. The deal was you had to get a Fontographer 4 license, and then you could get a Robofog license, for Fontographer 3. Then Apple changed the system architecture and that meant Fontographer 3 would no longer run on Apple computers. Obviously that was a bit of a damper on Robofog.
Pretty soon after that Jim sold Fontographer to Macromedia. He and his
employees continued to develop Fontographer into Freehand, it went from a
font drawing application into a more general purpose illustration tool. So
Macromedia bought Altsys for Freehand because they were competing with
Adobe at that time. And they didn’t really have any interest in continuing
to develop Fontographer. Fonts is a really obscure kind of area. As a proprietary software company, what you are doing things to make a profit and if
161
the market is too small to justify your investment then you’ll just not keep
developing the software. Fontographer shut at that point.
PH
I think they paid one guy to maintain it and answer questions.
DC
Yeah. I think they even stopped actively selling it, you had to ask them to sell you a license. Fontographer had stopped at that point and there was no actively developed font editor. There were a few Windows programs, which were kind of shareware for developing fonts, because at this time Apple and Microsoft got fed up with paying Adobe's extortionate PostScript licensing fees. They developed their own font format called TrueType. When
Fontographer stopped there was the question of which one would become the
predominant font editor and so there was Fontlab. This was developed by
a guy Yuri Yarmola, Russian originally I believe, and it became the primary
proprietary type design tool.
The Python guys from Holland started using Fontlab. They managed to
convince the Fontlab guys to include Python scripting support in Fontlab.
Python had become a major language, for doing this kind of scripting. So
Fontlab added in Python scripting. And then different type designers, font
developers started to use Python scripts to help them develop their fonts,
and a few of the guys doing that decided to join up and they created the
RoboFab project which took the ideas that had been developed for Robofog and reimplemented them with Fontlab – so RoboFab. This is now a Free
Software package, under the MIT Python style licence. So it is a Free
Software licence but without copyleft. It has been developed as a collaborative project. If you're interested in the development you can just join the
mailing list. It’s a very mature project and the really beautiful thing about
it is that they developed a font object model and so in Python you have a very clean and easily understandable object-oriented model of what a font is. It makes it very easy to script things. This is quite exciting because that means you can start to do things which are just not really feasible with the graphic
design interface. The thing with those fonts is like there is a scale, it is like
architecture. You’ve got the designer of the building and the designer of
the bricks. With a font it is the same. You have the designer who shapes
each letter and then you’ve got the character-spacing which makes what a
paragraph will look like. A really good example of this is if you want to do
interpolation, if you have a very narrow version of a font and a very wide one,
and you want to interpolate in different versions between those two masters
– you really want to do that in a script, and RoboFab makes it really easy to do this within Fontlab. The other important thing about RoboFab was that they developed UFO, I think it's the Universal Font Object – I'm not sure what the exact name is – but it's an XML font format which means that
you can interchange font source data with different programs and specifically
that means that you have a really good font interpolation program that can
read and write that UFO XML format and then you can have your regular
type design format font editor that will generate bitmap font formats that
you actually use in a system. You can write your own tool for a specific
task and push and pull the data back and forth. Some of these Dutch guys,
especially Erik has written a really good interpolation tool. So, as a kind
of thread in the story of fonts: remember that time when Fontographer was not developed actively; then you have George Williams from California who was interested in digital typography and fonts, and Fontographer was not being actively developed and he found that quite frustrating so he said
like Well, I’ll write my own font editor. He wrote it from scratch. I mean
this is a great project.
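A rough sketch of the interpolation idea described above, in Python. This is not the RoboFab or UFO API; the Glyph class, its interpolate method and the point data are invented purely for illustration.

    # Toy illustration of interpolating between a narrow and a wide master.
    # Not the RoboFab/UFO API: the Glyph class and the data are made up.
    class Glyph:
        def __init__(self, name, points):
            self.name = name
            self.points = points      # list of (x, y) outline points

        def interpolate(self, other, t):
            """Blend two compatible glyphs; t=0 gives self, t=1 gives other."""
            if len(self.points) != len(other.points):
                raise ValueError("masters must have compatible outlines")
            blended = [((1 - t) * x0 + t * x1, (1 - t) * y0 + t * y1)
                       for (x0, y0), (x1, y1) in zip(self.points, other.points)]
            return Glyph(self.name, blended)

    narrow = Glyph("I", [(0, 0), (80, 0), (80, 700), (0, 700)])
    wide = Glyph("I", [(0, 0), (200, 0), (200, 700), (0, 700)])

    # Five instances between the two masters.
    for i in range(5):
        t = i / 4
        print(t, narrow.interpolate(wide, t).points)

In practice the masters would come from a font editor or a UFO file and the interpolated instances would be written back out; the point of a font object model, as described above, is that such a script can stay this small.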
LL
Can you tell us some details about your course?
DC
There are four main deliverables in the course, that you normally
do in one year, twelve months. The big thing is that you do a professional quality OpenType font, with an extended pan-european latin coverage in regular and italic, maybe bold. You also do a complex non-latin
in Arabic, Indic, maybe Cyrillic ... well not really Cyrillic because there are problems getting Cyrillic type experts from Russia to Britain ... or Greek, or any script which you have a particular background in. And so,
they didn’t mandate which software students can use, and I was already
used to FontForge, while pretty much all the other students were using
FontLab. This font development is the main thing. The second thing is
the dissertation, that goes up to 8,000 words, an academic master in typography dissertation. Then there is a smaller essay, that will be published
on http://www.typeculture.com/academic_resource/articles_essays/, and it’s
a kind of a practice for writing the dissertation. Then you have to document
your working process throughout the year, you have to submit your working
files, source files. Every single step is documented and you have to write
a small essay describing your process. And also, of course, apart from the
type design, you make a font specimen, so you make a very nice piece of
design that show up your font in use, as commercial companies do. All that
takes a full intense year. For British people, the course costs about £3,000,
for people in the EU, it costs about £5,000 and about £10,000 for non-EU.
Have a look at the website for details, but yes, it’s very expensive.
LL
And did you also design a font?
DC
Yes. But I do it part-time. Normally, you could do the typeface,
and the year after you do the dissertation. For personal reasons, I do the
dissertation first, in the summer, and next year I’ll do the typeface, I think
in July next year.
LL
You have an idea on which font you’ll work?
DC
Yes. The course doesn't specify which kind of typeface you have to work on. But they really prefer a text face, a serif one, because it's the most complicated and demanding work. If you can do a high quality serif text typeface design, you can do almost any typeface design! Of course, lots of students also do a sans serif typeface to be read at 8 or 9 points, or even, for example, for dictionaries at 6 or 7 points. Other students design display typefaces that can be used for paragraphs but probably not at 9 points ...
Femke Snelting (FS)
It looks like you are asked to produce quite a lot of documents. Are these documents published anywhere, are they available for other designers?
DC
Yes, the website is http://www.typefacedesign.net and the teaching team encourages students to publish their essays, and some people have published their dissertation on the web, but it varies. Of course, being an academic dissertation, you can request it from the university.
FS
I'm asking because in various presentations the figure of the 'expert typographer' came up, and the role Open Source software could have, to open up this guild.
DC
Yeah, the course in The Hague is cheaper, the pound was quite high so
it’s expensive to live in Britain during the last year, and the number of people
able to produce high quality fonts is pretty small ... And these courses are
quite inaccessible for most of the people because of being so expensive, you
have to be quite committed to follow them. The proprietary font editing
software, even with a student discount, is also a bit expensive. So yes, Free
and Open Source software could be an enabler. FontForge allows anybody
to grab it on the Internet and start making fonts. But having the tools
is just the beginning. You have to know what you're doing to design a typeface, and this is separate from font software techniques. And books
on the subject, there are quite a few, but none are really a full solution.
There is www.typophile.org, a type design forum on the web, where you can post preliminary designs. But of course you do not get the kind of critical feedback that you can get on a masters course ...
FS
We talked to Denis Jacquerye from the DéjàVu project, and most of the
people who collaborate on the project are not type designers but people who are
interested in having certain glyphs added to a typeface. And we asked him if
there is some kind of teaching going on, to be sure that the people contributing
understand what they are doing. Do you see any way of, let’s say, a more open
way of teaching typography starting to happen?
DC
Yeah, I mean, that's part of why the Free Software movement is going to branch out into the Free Culture movement. There is that website Freedom Defined 2 that states that the principles of Free Software can apply to all other kinds of works. This isn't shared by everybody in the Free Software movement. Richard Stallman makes a clear difference between three kinds of works: the ones that function like software, encyclopedias, dictionaries, text books that tell you how to make things, and text typefaces; art works like music and films; and works of opinion like scientific papers or political manifestos. He believes that different kinds of rights should apply to those different kinds of works. There is also a different view in which anything in a computer that can be edited ought to be free like Free Software. That is certainly a position that many people take in the Free Software community. In the WikiMedia Foundation text books project, you can see that when more and more people are involved in typeface design from the Free Culture community, we will see more and more education material. There will be a snowball effect.
2  http://freedomdefined.org
PH
Dave, we are running out of time ...
DC
So just to finish about the FontForge Python scripting ... There is Python embedded in FontForge so you can run scripts to control FontForge,
you can add new features that maybe would be specific to your font and then
in FontForge there is also a Python module which means that you can type
into a Python interpreter. You type import fontforge and if it doesn't
give you an error then you can start to do FontForge functions, just like in
the RoboFab environment. And in the process of adding that George kind
of re-architected the FontForge source code so instead of being one large
program, there is now a large C library, libfontforge, and then a small C
program for rendering and also the Python module, a binding or interface
to that C library. This means if you are an application programmer it is very
straightforward to make a new font editor in whatever language you want,
using whatever graphic toolkit you want. So if you’re a JDK guy or a GTK
guy or even if you’re on Windows or Mac OS X, you can make a font editor
that has all the functionality of FontForge. FontForge is a kind of engine to
make font editors.
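As a minimal sketch of what that looks like in practice – assuming a FontForge build with the Python module available, and a hypothetical source file MyFont.sfd – a script along these lines opens a font, inspects it and generates a binary; treat it as illustrative rather than as authoritative FontForge documentation.

    # Illustrative only: driving FontForge through its Python module,
    # as described above. "MyFont.sfd" is a hypothetical source file.
    import fontforge

    font = fontforge.open("MyFont.sfd")   # load a FontForge source file
    print(font.fontname)                  # the font is now a scriptable object
    font.generate("MyFont.otf")           # write out an OpenType binary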